Source-receptor matrix calculation with a Lagrangian particle dispersion model in backward mode
Directory of Open Access Journals (Sweden)
P. Seibert
2004-01-01
The possibility to calculate linear source-receptor relationships for the transport of atmospheric trace substances with a Lagrangian particle dispersion model (LPDM) running in backward mode is shown and illustrated with many tests and examples. This mode requires only minor modifications of the forward LPDM. The derivation includes the action of sources and of any first-order processes (transformation with prescribed rates, dry and wet deposition, radioactive decay, etc.). The backward mode is computationally advantageous if the number of receptors is less than the number of sources considered. The combination of an LPDM with the backward (adjoint) methodology is especially attractive for the application to point measurements, which can be handled without artificial numerical diffusion. Practical hints are provided for source-receptor calculations with different settings, both in forward and backward mode. The equivalence of forward and backward calculations is shown in simple tests for release and sampling of particles, pure wet deposition, pure convective redistribution, and realistic transport over a short distance. Furthermore, an application example explaining measurements of Cs-137 in Stockholm as transport from areas heavily contaminated in the Chernobyl disaster is included.
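The linearity that underpins the backward mode can be sketched in a few lines: receptor concentrations are a matrix-vector product of the source-receptor matrix with the emission vector, and the adjoint advantage is a matter of whether rows or columns of that matrix are filled per model run. All numbers below are illustrative stand-ins, not output of an actual LPDM.

```python
import numpy as np

rng = np.random.default_rng(0)
n_sources, n_receptors = 50, 3  # few receptors -> backward mode is cheaper

# Hypothetical source-receptor matrix M: M[i, j] = concentration at
# receptor i per unit emission from source j (assumed linear transport).
M = rng.random((n_receptors, n_sources))

emissions = rng.random(n_sources)   # source strengths
concentrations = M @ emissions      # receptor concentrations

# Forward mode fills M one COLUMN per model run (one run per source);
# backward (adjoint) mode fills M one ROW per run (one run per receptor).
forward_runs, backward_runs = n_sources, n_receptors
```

With 3 receptors and 50 sources, the backward mode needs 50/3 times fewer model runs, which is the computational advantage the abstract describes.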
Operational source receptor calculations for large agglomerations
Gauss, Michael; Shamsudheen, Semeena V.; Valdebenito, Alvaro; Pommier, Matthieu; Schulz, Michael
2016-04-01
For air quality policy, an important question is how much of the air pollution within an urbanized region can be attributed to local sources and how much of it is imported through long-range transport. This is critical information for a correct assessment of the effectiveness of potential emission measures. The ratio between indigenous and long-range-transported air pollution for a given region depends on its geographic location, the size of its area, the strength and spatial distribution of emission sources, and the time of the year, but also, very strongly, on the current meteorological conditions, which change from day to day and thus make it important to provide such calculations in near-real-time to support short-term legislation. Similarly, analysis over longer periods (e.g. one year), or of specific air quality episodes in the past, can help to scientifically underpin multi-regional agreements and long-term legislation. Within the European MACC projects (Monitoring Atmospheric Composition and Climate) and the transition to the operational CAMS service (Copernicus Atmosphere Monitoring Service), the computationally efficient EMEP MSC-W air quality model has been applied with detailed emission data and comprehensive calculations of chemistry and microphysics, driven by high-quality meteorological forecast data (up to 96-hour forecasts), to provide source-receptor calculations on a regular basis in forecast mode. In its current state, the product allows the user to choose among different regions and regulatory pollutants (e.g. ozone and PM) to assess the effectiveness of hypothetical emission reductions that are implemented immediately, either within the agglomeration or outside it. The effects are visualized as bar charts showing the resulting changes in air pollution levels within the agglomeration as a function of time (hourly resolution, 0 to 4 days into the future). The bar charts not only allow assessing the effects of emission
Overview of receptor-based source apportionment studies for speciated atmospheric mercury
Cheng, I.; Xu, X.; Zhang, L.
2015-01-01
Receptor-based source apportionment studies of speciated atmospheric mercury are concerned not only with source contributions but also with the influence of transport, transformation, and deposition processes on speciated atmospheric mercury concentrations at receptor locations. Previous studies applied multivariate receptor models, including principal components analysis and positive matrix factorization, and back-trajectory receptor models, including potential source contribution function analysis.
Direct calculation of off-diagonal matrix elements
International Nuclear Information System (INIS)
Killingbeck, J P; Jolicard, G
2011-01-01
Gauss elimination is used in a sequence of calculations which give the squares of the off-diagonal matrix elements of x between quartic oscillator eigenstates, in a modification of the original sum-rule approach of Tipping et al to the problem. New and more flexible methods are then devised and tested, and are shown to permit the isolation and calculation of individual squared matrix elements of x and x^2.
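As an independent numerical cross-check of such results, the squared matrix elements of x for a quartic oscillator can be estimated by brute-force diagonalization in a harmonic-oscillator basis. This is not the Gauss-elimination scheme of the paper; the basis size and the normalization H = p^2 + x^4 are assumptions of the sketch.

```python
import numpy as np

def quartic_x_elements(nbasis=120, nkeep=4):
    """Squared matrix elements |<i|x|j>|^2 between low-lying eigenstates
    of H = p^2 + x^4, via diagonalization in a harmonic-oscillator basis."""
    n = np.arange(1, nbasis)
    a = np.diag(np.sqrt(n), k=1)         # annihilation operator
    x = (a + a.T) / np.sqrt(2.0)         # x = (a + a†)/sqrt(2)
    p2 = -0.5 * (a - a.T) @ (a - a.T)    # p^2; real since (a - a†) is real
    H = p2 + np.linalg.matrix_power(x, 4)
    E, V = np.linalg.eigh(H)
    X = V.T @ x @ V                      # x in the eigenbasis
    return E[:nkeep], X[:nkeep, :nkeep] ** 2

E, x2 = quartic_x_elements()
```

By parity, diagonal elements of x vanish exactly, so only the off-diagonal squares are nonzero, which matches the structure the sum-rule approach exploits.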
Convergent J-matrix calculation of electron-helium resonances
International Nuclear Information System (INIS)
Konovalov, D.A.; McCarthy, I.E.
1994-12-01
Resonance structures in n=2 and n=3 electron-helium excitation cross sections are calculated using the J-matrix method. The number of close-coupled helium bound and continuum states is taken to convergence; about 100 channels are coupled for each total spin and angular momentum. It is found that the present J-matrix results are in good shape agreement with recent 29-state R-matrix calculations. However, the J-matrix absolute cross sections are slightly lower due to the influence of continuum channels included in the present method. Experiment and theory agree on the positions of the n=2 and n=3 resonances.
pyRMSD: a Python package for efficient pairwise RMSD matrix calculation and handling.
Gil, Víctor A; Guallar, Víctor
2013-09-15
We introduce pyRMSD, an open-source standalone Python package that aims at offering an integrative and efficient way of performing Root Mean Square Deviation (RMSD)-related calculations on large sets of structures. It is specially tuned for fast collective RMSD calculations, such as pairwise RMSD matrices, implementing up to three well-known superposition algorithms. pyRMSD provides its own symmetric distance matrix class that, besides being usable as a regular matrix, helps to save memory and increases memory access speed. This last feature can dramatically improve the overall performance of any Python algorithm using it. In addition, its extensibility, test suites and documentation make it a good choice for those in need of a workbench for developing or testing new algorithms. The source code (under the MIT license), installer, test suites and benchmarks can be found at https://pele.bsc.es/ under the tools section. Contact: victor.guallar@bsc.es. Supplementary data are available at Bioinformatics online.
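A minimal pairwise-RMSD matrix in plain NumPy illustrates the kind of collective calculation pyRMSD accelerates. This sketch uses the Kabsch (SVD) superposition; it is not pyRMSD's API, and the coordinates are random stand-ins.

```python
import numpy as np

def kabsch_rmsd(P, Q):
    """Minimum RMSD between two (n_atoms, 3) coordinate sets after
    optimal superposition (Kabsch algorithm)."""
    P = P - P.mean(axis=0)
    Q = Q - Q.mean(axis=0)
    U, S, Vt = np.linalg.svd(P.T @ Q)
    S[-1] *= np.sign(np.linalg.det(U @ Vt))   # guard against reflections
    e0 = (P ** 2).sum() + (Q ** 2).sum()
    return np.sqrt(max((e0 - 2.0 * S.sum()) / len(P), 0.0))

def pairwise_rmsd_matrix(coords):
    """Symmetric pairwise RMSD matrix for a (n_structs, n_atoms, 3) array."""
    n = len(coords)
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            D[i, j] = D[j, i] = kabsch_rmsd(coords[i], coords[j])
    return D

rng = np.random.default_rng(1)
coords = rng.normal(size=(5, 20, 3))
D = pairwise_rmsd_matrix(coords)
```

A dedicated symmetric-matrix class, as in pyRMSD, would store only the upper triangle of D, halving memory for large trajectories.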
Syndecans as receptors and organizers of the extracellular matrix
DEFF Research Database (Denmark)
Xian, Xiaojie; Gopal, Sandeep; Couchman, John
2009-01-01
Syndecans are type I transmembrane proteins having a core protein modified with glycosaminoglycan chains, most commonly heparan sulphate. Among the plethora of molecules that can interact with heparan sulphate, the collagens and glycoproteins of the extracellular matrix are prominent. Frequently, they do so in conjunction with other receptors, most notably the integrins. For this reason, they are often referred to as "co-receptors". However, just as with integrins, syndecans can interact with actin-associated proteins and signalling molecules, such as protein kinases. Some aspects of syndecan signalling are understood but much remains to be learned. The functions of syndecans in regulating cell adhesion and extracellular matrix assembly are described here. Evidence from null mice suggests that syndecans have roles in postnatal tissue repair, inflammation and tumour progression.
Massively parallel sparse matrix function calculations with NTPoly
Dawson, William; Nakajima, Takahito
2018-04-01
We present NTPoly, a massively parallel library for computing the functions of sparse, symmetric matrices. The theory of matrix functions is a well-developed framework with a wide range of applications including differential equations, graph theory, and electronic structure calculations. One particularly important application area is diagonalization-free methods in quantum chemistry. When the input and output of the matrix function are sparse, methods based on polynomial expansions can be used to compute matrix functions in linear time. We present a library based on these methods that can compute a variety of matrix functions. Distributed memory parallelization is based on a communication-avoiding sparse matrix multiplication algorithm. OpenMP task parallelization is utilized to implement hybrid parallelization. We describe NTPoly's interface and show how it can be integrated with programs written in many different programming languages. We demonstrate the merits of NTPoly by performing large scale calculations on the K computer.
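The core idea, evaluating f(A) by a polynomial in A rather than by diagonalization, can be sketched densely in a few lines. NTPoly itself works on distributed sparse matrices; the Taylor coefficients for exp and the matrix size here are illustrative only.

```python
import numpy as np
from math import factorial

def matrix_function_taylor(A, coeffs):
    """f(A) = sum_k c_k A^k for symmetric A, evaluated with a Horner
    scheme: the diagonalization-free pattern NTPoly builds on."""
    F = np.zeros_like(A)
    for c in reversed(coeffs):
        F = F @ A + c * np.eye(len(A))
    return F

rng = np.random.default_rng(2)
A = rng.normal(size=(6, 6))
A = 0.1 * (A + A.T)                      # small, symmetric -> fast convergence

# Matrix exponential via truncated Taylor series, coefficients 1/k!
coeffs = [1.0 / factorial(k) for k in range(20)]
expA = matrix_function_taylor(A, coeffs)

# Reference via eigendecomposition (what polynomial methods avoid)
w, V = np.linalg.eigh(A)
expA_ref = V @ np.diag(np.exp(w)) @ V.T
```

When A is sparse and the polynomial degree is bounded, each step is a sparse-sparse multiply, which is where the communication-avoiding algorithm pays off.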
A J matrix engine for density functional theory calculations
International Nuclear Information System (INIS)
White, C.A.; Head-Gordon, M.
1996-01-01
We introduce a new method for the formation of the J matrix (Coulomb interaction matrix) within a basis of Cartesian Gaussian functions, as needed in density functional theory and Hartree-Fock calculations. By summing the density matrix into the underlying Gaussian integral formulas, we have developed a J matrix "engine" which forms the exact J matrix without explicitly forming the full set of two-electron integral intermediates. Several precomputable quantities have been identified, substantially reducing the number of floating point operations and memory accesses needed in a J matrix calculation. Initial timings indicate a speedup of greater than four times for the (pp|pp) class of integrals, with speedups increasing to over ten times for (ff|ff) integrals. Copyright 1996 American Institute of Physics.
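The contraction a J engine performs can be written down directly. The sketch below forms J from a density matrix and a small random integral tensor with the correct permutational symmetry, purely to show the index structure; a real engine never materializes this tensor.

```python
import numpy as np

# J_mn = sum_ls P_ls (mn|ls): the Coulomb-matrix contraction.
rng = np.random.default_rng(3)
n = 4
eri = rng.random((n, n, n, n))
eri = eri + eri.transpose(1, 0, 2, 3)    # (mn|ls) = (nm|ls)
eri = eri + eri.transpose(0, 1, 3, 2)    # (mn|ls) = (mn|sl)
eri = eri + eri.transpose(2, 3, 0, 1)    # (mn|ls) = (ls|mn)

P = rng.random((n, n))
P = P + P.T                               # symmetric density matrix

J = np.einsum("mnls,ls->mn", eri, P)
```

The engine idea is to push P into the Gaussian integral recurrences so that the (mn|ls) intermediates are never formed individually, which is what yields the reported speedups.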
Extracellular matrix and its receptors in Drosophila neural development
Broadie, Kendal; Baumgartner, Stefan; Prokop, Andreas
2011-01-01
Extracellular matrix (ECM) and matrix receptors are intimately involved in most biological processes. The ECM plays fundamental developmental and physiological roles in health and disease, including processes underlying the development, maintenance and regeneration of the nervous system. To understand the principles of ECM-mediated functions in the nervous system, genetic model organisms like Drosophila provide simple, malleable and powerful experimental platforms. This article provides an overview of ECM proteins and receptors in Drosophila. It then focuses on their roles during three progressive phases of neural development: 1) neural progenitor proliferation, 2) axonal growth and pathfinding and 3) synapse formation and function. Each section highlights known ECM and ECM-receptor components and recent studies done in mutant conditions to reveal their in vivo functions, all illustrating the enormous opportunities provided when merging work on the nervous system with systematic research into ECM-related gene functions. PMID:21688401
Numericware i: Identical by State Matrix Calculator
Directory of Open Access Journals (Sweden)
Bongsong Kim
2017-02-01
We introduce software, Numericware i, to compute the identical-by-state (IBS) matrix from genotypic data. Calculating an IBS matrix with a large dataset requires large computer memory and lengthy processing time. Numericware i addresses these challenges with two algorithmic methods: multithreading and forward chopping. Multithreading allows computational routines to run concurrently on multiple central processing unit (CPU) processors. Forward chopping addresses memory limitations by dividing a dataset into appropriately sized subsets. Numericware i allows calculation of the IBS matrix for a large genotypic dataset on a laptop or desktop computer. For comparison with other software, we calculated genetic relationship matrices using Numericware i, SPAGeDi, and TASSEL with the same genotypic dataset. Numericware i calculates IBS coefficients between 0 and 2, whereas SPAGeDi and TASSEL produce different ranges of values, including negative values. The Pearson correlation coefficient between the matrices from Numericware i and TASSEL was high at .9972, whereas SPAGeDi showed low correlation with both Numericware i (.0505) and TASSEL (.0587). With a high-dimensional dataset of 500 entities by 10,000,000 SNPs, Numericware i took 382 minutes using 19 CPU threads and 64 GB memory, dividing the dataset into 3 pieces, whereas SPAGeDi and TASSEL failed with the same dataset. Numericware i is freely available for Windows and Linux under the CC-BY 4.0 license at https://figshare.com/s/f100f33a8857131eb2db.
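The IBS matrix itself is simple to state. Below is a NumPy sketch for 0/1/2-coded genotypes using the 0-to-2 convention mentioned in the abstract; computing one row at a time keeps memory bounded, loosely mirroring the forward-chopping idea (the coding and formula are common conventions, not necessarily Numericware i's exact implementation).

```python
import numpy as np

def ibs_matrix(G):
    """Identical-by-state matrix for a (n_individuals, n_snps) genotype
    matrix coded 0/1/2 (minor-allele counts). Values range from 0 to 2;
    the diagonal is exactly 2 (an individual is fully IBS with itself)."""
    n = len(G)
    M = np.zeros((n, n))
    for i in range(n):
        # 2 minus the mean absolute allele-count difference per SNP
        M[i] = 2.0 - np.abs(G - G[i]).mean(axis=1)
    return M

rng = np.random.default_rng(4)
G = rng.integers(0, 3, size=(6, 1000))
M = ibs_matrix(G)
```

For a real 500 x 10,000,000 dataset, the SNP axis would additionally be processed in chunks so that `G - G[i]` never exceeds available memory.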
Hardware matrix multiplier/accumulator for lattice gauge theory calculations
International Nuclear Information System (INIS)
Christ, N.H.; Terrano, A.E.
1984-01-01
The design and operating characteristics of a special-purpose matrix multiplier/accumulator are described. The device is connected through a standard interface to a host PDP11 computer. It provides a set of high-speed, matrix-oriented instructions which can be called from a program running on the host. The resulting operations accelerate the complex matrix arithmetic required for a class of Monte Carlo calculations currently of interest in high energy particle physics. A working version of the device is presently being used to carry out a pure SU(3) lattice gauge theory calculation using a PDP11/23 with a performance twice that obtainable on a VAX11/780. (orig.)
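The primitive such hardware accelerates is a complex matrix multiply-accumulate applied over many lattice links. A pure-software sketch of the same operation (shapes and data are illustrative stand-ins, not SU(3) group elements):

```python
import numpy as np

def matmul_accumulate(A, B, C):
    """A <- A + B @ C: the complex matrix multiply-accumulate primitive
    that the special-purpose hardware provides as a single instruction."""
    A += B @ C
    return A

rng = np.random.default_rng(5)
# 100 random complex 3x3 "link" matrices, paired up for products
links = rng.normal(size=(100, 3, 3)) + 1j * rng.normal(size=(100, 3, 3))

acc = np.zeros((3, 3), dtype=complex)
for U, V in zip(links[::2], links[1::2]):
    matmul_accumulate(acc, U, V)
```

In a lattice gauge Monte Carlo sweep this 3x3 complex multiply-accumulate dominates the arithmetic, which is why a dedicated unit outperformed a general-purpose VAX.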
Energy Technology Data Exchange (ETDEWEB)
Mansur, Ralph S.; Moura, Carlos A., E-mail: ralph@ime.uerj.br, E-mail: demoura@ime.uerj.br [Universidade do Estado do Rio de Janeiro (UERJ), RJ (Brazil). Departamento de Engenharia Mecanica; Barros, Ricardo C., E-mail: rcbarros@pq.cnpq.br [Universidade do Estado do Rio de Janeiro (UERJ), Nova Friburgo, RJ (Brazil). Departamento de Modelagem Computacional
2017-07-01
Presented here is an application of the Response Matrix (RM) method for adjoint discrete ordinates (S_N) problems in slab geometry applied to energy-dependent source-detector problems. The adjoint RM method is free from spatial truncation errors, as it generates numerical results for the adjoint angular fluxes in multilayer slabs that agree with the numerical values obtained from the analytical solution of the energy multigroup adjoint S_N equations. Numerical results are given for two typical source-detector problems to illustrate the accuracy and the efficiency of the offered RM computer code. (author)
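The duality that makes adjoint source-detector calculations attractive can be checked in a toy linear-algebra setting: the detector functional of the forward solution equals the source functional of the adjoint solution. The operator below is a random well-conditioned stand-in, not a discretized S_N operator.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 20
A = 5.0 * np.eye(n) + 0.1 * rng.normal(size=(n, n))  # stand-in transport operator
q = rng.random(n)        # source distribution
sigma = rng.random(n)    # detector response function (cross section)

phi = np.linalg.solve(A, q)             # forward flux:  A phi = q
phi_dag = np.linalg.solve(A.T, sigma)   # adjoint flux:  A^T phi† = sigma

reading_forward = sigma @ phi           # detector reading, forward route
reading_adjoint = q @ phi_dag           # same reading, adjoint route
```

One adjoint solve per detector then serves any number of candidate sources, which is the economy source-detector problems exploit.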
Source and replica calculations
International Nuclear Information System (INIS)
Whalen, P.P.
1994-01-01
The starting point of the Hiroshima-Nagasaki Dose Reevaluation Program is the energy and directional distributions of the prompt neutron and gamma-ray radiation emitted from the exploding bombs. A brief introduction to the neutron source calculations is presented. The development of our current understanding of the source problem is outlined. It is recommended that adjoint calculations be used to modify source spectra to resolve the neutron discrepancy problem
Random matrix theory with an external source
Brézin, Edouard
2016-01-01
This is the first book to show that the theory of the Gaussian random matrix is essential to understand the universal correlations with random fluctuations and to demonstrate that it is useful to evaluate topological universal quantities. We consider Gaussian random matrix models in the presence of a deterministic matrix source. In such models the correlation functions are known exactly for an arbitrary source and for any size of the matrices. The freedom given by the external source allows for various tunings to different classes of universality. The main interest is to use this freedom to compute various topological invariants for surfaces such as the intersection numbers for curves drawn on a surface of given genus with marked points, Euler characteristics, and the Gromov–Witten invariants. A remarkable duality for the average of characteristic polynomials is essential for obtaining such topological invariants. The analysis is extended to nonorientable surfaces and to surfaces with boundaries.
Syndecans as receptors and organizers of the extracellular matrix.
Xian, Xiaojie; Gopal, Sandeep; Couchman, John R
2010-01-01
Syndecans are type I transmembrane proteins having a core protein modified with glycosaminoglycan chains, most commonly heparan sulphate. They are an ancient group of molecules, present in invertebrates and vertebrates. Among the plethora of molecules that can interact with heparan sulphate, the collagens and glycoproteins of the extracellular matrix are prominent. Frequently, they do so in conjunction with other receptors, most notably the integrins. For this reason, they are often referred to as "co-receptors". However, just as with integrins, syndecans can interact with actin-associated proteins and signalling molecules, such as protein kinases. Some aspects of syndecan signalling are understood but much remains to be learned. The functions of syndecans in regulating cell adhesion and extracellular matrix assembly are described here. Evidence from null mice suggests that syndecans have roles in postnatal tissue repair, inflammation and tumour progression. Developmental deficits in lower vertebrates in which syndecans are eliminated are also informative and suggest that, in mammals, redundancy is a key issue.
Response matrix calculation of a Bonner Sphere Spectrometer using ENDF/B-VII libraries
Energy Technology Data Exchange (ETDEWEB)
Morató, Sergio; Juste, Belén; Miró, Rafael; Verdú, Gumersindo [Instituto de Seguridad Industrial, Radiofísica y Medioambiental (ISIRYM), Universitat Politècnica de València (Spain); Guardia, Vicent, E-mail: bejusvi@iqn.upv.es [GD Energy Services, Valencia (Spain). Grupo dominguis
2017-07-01
The present work is focused on the reconstruction of a neutron spectrum using a multisphere spectrometer, also called a Bonner Sphere System (BSS). For that, the detector response curves must be determined; we have therefore obtained the response matrix of a neutron detector by Monte Carlo (MC) simulation with MCNP6, where the use of unstructured mesh geometries is introduced as a novelty. The aim of these curves was to study the theoretical response of a widespread neutron spectrometer exposed to neutron radiation. The neutron detector device used in this work is a multisphere spectrometer (BSS) consisting of a set of 6 high-density (0.95 g/cm3) polyethylene spheres with different diameters. The detector is composed of a 4 mm x 4 mm cylindrical lithium iodide (6LiI) scintillator crystal (LUDLUM Model 42) coupled to a photomultiplier tube. Thermal scattering tables are required to include the polyethylene cross sections in the simulation; these data are essential to obtain correct and accurate results in problems involving neutron thermalization. The currently available literature presents the response matrix calculated with ENDF/B-V cross-section libraries (V. Mares et al., 1993) or with ENDF/B-VI (R. Vega-Carrillo et al., 2007). This work uses two novelties to calculate the response matrix: on the one hand, the use of unstructured meshes to simulate the geometry of the detector and the Bonner spheres, and on the other hand, the use of the updated ENDF/B-VII cross-section libraries. A set of simulations has been performed to obtain the detector response matrix: 29 monoenergetic neutron beams between 10 keV and 20 MeV were used as sources for each moderator sphere, for a total of 174 simulations. Each monoenergetic source was defined with the same diameter as the moderating sphere used in its corresponding simulation, and the spheres were uniformly irradiated from the top of the photomultiplier tube. Some
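Once a response matrix is available, readings from the sphere set can be unfolded into a spectrum. A common few-channel approach is an iterative MLEM-type update, sketched below on synthetic data; the response matrix here is a random stand-in, not the MCNP6-computed one, and real unfolding adds regularization and a priori spectra.

```python
import numpy as np

def mlem_unfold(R, counts, n_iter=5000):
    """Unfold a spectrum phi from Bonner-sphere counts, with response
    matrix R of shape (n_spheres, n_energy_bins), via MLEM iterations:
    phi_j <- phi_j * sum_i R_ij * counts_i / (R phi)_i / sum_i R_ij."""
    phi = np.ones(R.shape[1])
    colsum = R.sum(axis=0)
    for _ in range(n_iter):
        ratio = counts / (R @ phi)
        phi *= (R.T @ ratio) / colsum
    return phi

rng = np.random.default_rng(7)
R = rng.random((7, 15)) + 0.1        # stand-in response matrix
true_phi = rng.random(15)            # synthetic "true" spectrum
counts = R @ true_phi                # noiseless synthetic readings
phi_est = mlem_unfold(R, counts)
```

With only 7 spheres and 15 energy bins the problem is under-determined, so MLEM converges to one non-negative spectrum consistent with the counts rather than a unique answer.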
Deng, Junjun; Zhang, Yanru; Qiu, Yuqing; Zhang, Hongliang; Du, Wenjiao; Xu, Lingling; Hong, Youwei; Chen, Yanting; Chen, Jinsheng
2018-04-01
Source apportionment of fine particulate matter (PM2.5) were conducted at the Lin'an Regional Atmospheric Background Station (LA) in the Yangtze River Delta (YRD) region in China from July 2014 to April 2015 with three receptor models including principal component analysis combining multiple linear regression (PCA-MLR), UNMIX and Positive Matrix Factorization (PMF). The model performance, source identification and source contribution of the three models were analyzed and inter-compared. Source apportionment of PM2.5 was also conducted with the receptor models. Good correlations between the reconstructed and measured concentrations of PM2.5 and its major chemical species were obtained for all models. PMF resolved almost all masses of PM2.5, while PCA-MLR and UNMIX explained about 80%. Five, four and seven sources were identified by PCA-MLR, UNMIX and PMF, respectively. Combustion, secondary source, marine source, dust and industrial activities were identified by all the three receptor models. Combustion source and secondary source were the major sources, and totally contributed over 60% to PM2.5. The PMF model had a better performance on separating the different combustion sources. These findings improve the understanding of PM2.5 sources in background region.
International Nuclear Information System (INIS)
Dufek, Jan; Holst, Gustaf
2016-01-01
Highlights: • Errors in the fission matrix eigenvector and fission source are correlated. • The error correlations depend on the coarseness of the spatial mesh. • The error correlations are negligible when the mesh is very fine. - Abstract: Previous studies raised a question about the level of a possible correlation of errors in the cumulative Monte Carlo fission source and the fundamental-mode eigenvector of the fission matrix. A number of new methods tally the fission matrix during the actual Monte Carlo criticality calculation and use its fundamental-mode eigenvector for various tasks. These methods assume the fission matrix eigenvector is a better representation of the fission source distribution than the actual Monte Carlo fission source, although the fission matrix and its eigenvectors do contain statistical and other errors. A recent study showed that the eigenvector could be used for an unbiased estimation of errors in the cumulative fission source if the errors in the eigenvector and the cumulative fission source were not correlated. Here we present new numerical study results that answer the question about the level of the possible error correlation. The results may be of importance to all methods that use the fission matrix. New numerical tests show that the error correlation is present at a level which strongly depends on properties of the spatial mesh used for tallying the fission matrix. The error correlation is relatively strong when the mesh is coarse, while the correlation weakens as the mesh gets finer. We suggest that the coarseness of the mesh is measured in terms of the value of the largest element in the tallied fission matrix, as that accounts for the mesh as well as system properties. In our test simulations, we observe only negligible error correlations when the value of the largest element in the fission matrix is about 0.1. Relatively strong error correlations appear when the value of the largest element in the fission matrix rises
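The fundamental-mode eigenvector these methods rely on is typically extracted by power iteration on the tallied fission matrix. A minimal sketch, with a random non-negative matrix standing in for a tallied fission matrix:

```python
import numpy as np

def fundamental_mode(F, n_iter=1000, tol=1e-12):
    """Fundamental-mode eigenvector (normalized to sum to 1) and the
    dominant eigenvalue (k-eff analogue) of a non-negative fission
    matrix F, via power iteration."""
    s = np.ones(len(F)) / len(F)
    k = 1.0
    for _ in range(n_iter):
        s_new = F @ s
        k = s_new.sum()        # dominant eigenvalue estimate
        s_new /= k
        if np.abs(s_new - s).max() < tol:
            s = s_new
            break
        s = s_new
    return k, s

rng = np.random.default_rng(9)
F = rng.random((10, 10))       # stand-in non-negative fission matrix
k_eff, source = fundamental_mode(F)
```

By the Perron-Frobenius theorem the iteration converges for a positive matrix; statistical noise in the tallied F then propagates into both k and the source vector, which is where the error-correlation question arises.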
Single-channel source separation using non-negative matrix factorization
DEFF Research Database (Denmark)
Schmidt, Mikkel Nørgaard
The single-channel source separation problem is under-determined, and its solution relies on making appropriate assumptions concerning the sources. This dissertation is concerned with model-based probabilistic single-channel source separation based on non-negative matrix factorization, and consists of two parts: i) three introductory chapters and ii) five published papers. The first part introduces the single-channel source separation problem as well as non-negative matrix factorization and provides a comprehensive review of existing approaches, applications, and practical algorithms. This serves to provide context for the second part, the published papers, in which a number of methods for single-channel source separation based on non-negative matrix factorization are presented. In the papers, the methods are applied to separating audio signals such as speech and musical instruments and separating different types of tissue in chemical shift imaging.
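A bare-bones NMF with multiplicative updates shows the factorization at the heart of these methods. In audio separation V would be a magnitude spectrogram whose dictionary columns in W belong to different sources; here V is synthetic exact rank-2 data, and the Euclidean-cost updates are only one of several variants used in the literature.

```python
import numpy as np

def nmf(V, rank, n_iter=1000, eps=1e-9, seed=0):
    """Non-negative factorization V ~ W @ H via Lee-Seung multiplicative
    updates for the Euclidean cost; W, H stay elementwise non-negative."""
    rng = np.random.default_rng(seed)
    W = rng.random((V.shape[0], rank))
    H = rng.random((rank, V.shape[1]))
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Two synthetic "sources": fixed spectra (columns of true_W) with
# time-varying gains (rows of true_H)
rng = np.random.default_rng(8)
true_W = rng.random((30, 2))
true_H = rng.random((2, 100))
V = true_W @ true_H
W, H = nmf(V, rank=2)
```

Separation then amounts to reconstructing each source as W[:, k:k+1] @ H[k:k+1, :] and mapping it back to the time domain.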
Matrix model calculations beyond the spherical limit
International Nuclear Information System (INIS)
Ambjoern, J.; Chekhov, L.; Kristjansen, C.F.; Makeenko, Yu.
1993-01-01
We propose an improved iterative scheme for calculating higher genus contributions to the multi-loop (or multi-point) correlators and the partition function of the hermitian one matrix model. We present explicit results up to genus two. We develop a version which gives directly the result in the double scaling limit and present explicit results up to genus four. Using the latter version we prove that the hermitian and the complex matrix model are equivalent in the double scaling limit and that in this limit they are both equivalent to the Kontsevich model. We discuss how our results away from the double scaling limit are related to the structure of moduli space. (orig.)
Bayne, E K; Anderson, M J; Fambrough, D M
1984-10-01
Monoclonal antibodies recognizing laminin, heparan sulfate proteoglycan, fibronectin, and two apparently novel connective tissue components have been used to examine the organization of extracellular matrix of skeletal muscle in vivo and in vitro. Four of the five monoclonal antibodies are described for the first time here. Immunocytochemical experiments with frozen-sectioned muscle demonstrated that both the heparan sulfate proteoglycan and laminin exhibited staining patterns identical to that expected for components of the basal lamina. In contrast, the remaining matrix constituents were detected in all regions of muscle connective tissue: the endomysium, perimysium, and epimysium. Embryonic muscle cells developing in culture elaborated an extracellular matrix, each antigen exhibiting a unique distribution. Of particular interest was the organization of extracellular matrix on myotubes: the build-up of matrix components was most apparent in plaques overlying clusters of an integral membrane protein, the acetylcholine receptor (AChR). The heparan sulfate proteoglycan was concentrated at virtually all AChR clusters and showed a remarkable level of congruence with receptor organization; laminin was detected at 70-95% of AChR clusters but often was not completely co-distributed with AChR within the cluster; fibronectin and the two other extracellular matrix antigens occurred at approximately 20, 8, and 2% of the AChR clusters, respectively, and showed little or no congruence with AChR. From observations on the distribution of extracellular matrix components in tissue cultured fibroblasts and myogenic cells, several ideas about the organization of extracellular matrix are suggested. (a) Congruence between AChR clusters and heparan sulfate proteoglycan suggests the existence of some linkage between the two molecules, possibly important for regulation of AChR distribution within the muscle membrane. (b) The qualitatively different patterns of extracellular matrix
R-matrix calculations for few-quark bound states
International Nuclear Information System (INIS)
Shalchi, M.A.; Hadizadeh, M.R.
2016-01-01
The R-matrix method is implemented to study the heavy charm and bottom diquark, triquark, tetraquark, and pentaquarks in configuration space, as the bound states of quark-antiquark, diquark-quark, diquark-antidiquark, and diquark-antitriquark systems, respectively. The mass spectrum and the size of these systems are calculated for different partial wave channels. The calculated masses are compared with recent theoretical results obtained by other methods in momentum and configuration spaces and also by available experimental data. (orig.)
Inter-comparison of receptor models for PM source apportionment: Case study in an industrial area
Viana, M.; Pandolfi, M.; Minguillón, M. C.; Querol, X.; Alastuey, A.; Monfort, E.; Celades, I.
2008-05-01
Receptor modelling techniques are used to identify and quantify the contributions from emission sources to the levels and major and trace components of ambient particulate matter (PM). A wide variety of receptor models are currently available, and consequently the comparability between models should be evaluated if source apportionment data are to be used as input in health effects studies or mitigation plans. Three of the most widespread receptor models (principal component analysis, PCA; positive matrix factorization, PMF; chemical mass balance, CMB) were applied to a single PM10 data set (n=328 samples, 2002-2005) obtained from an industrial area in NE Spain, dedicated to ceramic production. Sensitivity and temporal trend analyses (using the Mann-Kendall test) were applied. Results evidenced the good overall performance of the three models (r2 > 0.83 and slope α > 0.91 between modelled and measured PM10 mass), with good agreement regarding source identification and high correlations between input (CMB) and output (PCA, PMF) source profiles. Larger differences were obtained regarding the quantification of source contributions (up to a factor of 4 in some cases). The combined application of different types of receptor models would solve the limitations of each of the models, by constructing a more robust solution based on their strengths. The authors suggest the combined use of factor analysis techniques (PCA, PMF) to identify and interpret emission sources, and to obtain a first quantification of their contributions to the PM mass, and the subsequent application of CMB. Further research is needed to ensure that source apportionment methods are robust enough for application to PM health effects assessments.
A systematic examination of a random sampling strategy for source apportionment calculations.
Andersson, August
2011-12-15
Estimating the relative contributions from multiple potential sources of a specific component in a mixed environmental matrix is a general challenge in diverse fields such as atmospheric, environmental and earth sciences. Perhaps the most common strategy for tackling such problems is by setting up a system of linear equations for the fractional influence of different sources. Even though an algebraic solution of this approach is possible for the common situation with N+1 sources and N source markers, such methodology introduces a bias, since it is implicitly assumed that the calculated fractions and the corresponding uncertainties are independent of the variability of the source distributions. Here, a random sampling (RS) strategy for accounting for such statistical bias is examined by investigating rationally designed synthetic data sets. This random sampling methodology is found to be robust and accurate with respect to reproducibility and predictability. This method is also compared to a numerical integration solution for a two-source situation where source variability also is included. A general observation from this examination is that the variability of the source profiles not only affects the calculated precision but also the mean/median source contributions. Copyright © 2011 Elsevier B.V. All rights reserved.
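A tiny version of the random-sampling strategy for a two-source, one-marker problem illustrates the paper's observation that end-member variability affects not just the spread but also the central estimate. All signature values below are made up for illustration (e.g. a carbon-isotope-like marker).

```python
import numpy as np

# Mixing model: f1*d1 + (1 - f1)*d2 = d_mix, with uncertain end-member
# signatures d1 and d2. Random sampling propagates their variability
# into a full distribution for the source fraction f1.
rng = np.random.default_rng(10)
n_draws = 100_000
d1 = rng.normal(-27.0, 1.0, n_draws)   # source 1 signature (assumed)
d2 = rng.normal(-12.0, 1.0, n_draws)   # source 2 signature (assumed)
d_mix = -20.0                          # measured mixture signature

f1 = (d_mix - d2) / (d1 - d2)          # algebraic solution per draw
f1 = f1[(f1 >= 0) & (f1 <= 1)]         # keep physically meaningful draws
f1_median = np.median(f1)
```

Reporting the median and percentiles of `f1`, rather than the single algebraic value from mean signatures, is exactly the bias correction the random-sampling strategy provides.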
Energy Technology Data Exchange (ETDEWEB)
Lucarelli, F. [Department of Physics and Astronomy, University of Florence, Via G. Sansone 1, 50019 Sesto Fiorentino (Italy); National Institute of Nuclear Physics (INFN)-Florence, Via G. Sansone 1, 50019 Sesto Fiorentino (Italy); Nava, S., E-mail: nava@fi.infn.it [National Institute of Nuclear Physics (INFN)-Florence, Via G. Sansone 1, 50019 Sesto Fiorentino (Italy); Calzolai, G. [Department of Physics and Astronomy, University of Florence, Via G. Sansone 1, 50019 Sesto Fiorentino (Italy); Chiari, M. [National Institute of Nuclear Physics (INFN)-Florence, Via G. Sansone 1, 50019 Sesto Fiorentino (Italy); Giannoni, M.; Traversi, R.; Udisti, R. [Department of Chemistry, University of Florence, Via della Lastruccia 3, 50019 Sesto Fiorentino (Italy)
2015-11-15
Particle Induced X-ray Emission (PIXE) analysis of aerosol samples allows simultaneous detection of several elements, including important tracers of many particulate matter sources. This capability, together with the possibility of analyzing a large number of samples in very short times, makes PIXE a very effective tool for source apportionment studies by receptor modeling. However, important aerosol components, like nitrates, OC and EC, cannot be assessed by PIXE: this limitation may strongly compromise the results of a source apportionment study based on PIXE data alone. In this work, an experimental dataset characterised by an extended chemical speciation (elements, EC-OC, ions) is used to test the effect of reducing the input species in the application of one of the most widely used receptor models, namely Positive Matrix Factorization (PMF). The main effect of using only PIXE data is that the secondary nitrate source is not identified and the contribution of biomass burning is overestimated, probably due to the similar seasonal patterns of these two sources.
Weighted Geometric Dilution of Precision Calculations with Matrix Multiplication
Directory of Open Access Journals (Sweden)
Chien-Sheng Chen
2015-01-01
Full Text Available To enhance the performance of location estimation in wireless positioning systems, the geometric dilution of precision (GDOP) is widely used as a criterion for selecting measurement units. Since GDOP represents the geometric effect on the relationship between measurement error and positioning error, the measurement-unit subset with the smallest GDOP is usually chosen for positioning. The conventional GDOP calculation using the matrix-inversion method requires many operations. Because more and more measurement units can be chosen nowadays, an efficient calculation is needed to reduce the complexity. Since the performance of each measurement unit differs, the weighted GDOP (WGDOP), instead of GDOP, is used to select the measurement units and improve the accuracy of location. To calculate WGDOP effectively and efficiently, a closed-form solution for the WGDOP calculation is proposed for the case when more than four measurements are available. In this paper, an efficient WGDOP calculation method applying matrix multiplication, which is easy to implement in hardware, is proposed. In addition, the proposed method can also be used when exactly four measurements are available. Even when the all-in-view method is used for positioning, the proposed method still reduces the computational overhead. The proposed WGDOP methods with less computation are compatible with the global positioning system (GPS), wireless sensor networks (WSN) and cellular communication systems.
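For reference, the conventional trace-of-inverse form of WGDOP that the paper's closed-form method is designed to accelerate can be written down directly. The geometry matrix and weights below are illustrative assumptions; each row of H is a unit line-of-sight vector plus a 1 for the receiver clock bias, as in GPS positioning.

```python
import numpy as np

def wgdop(H, weights):
    """Weighted GDOP: sqrt(trace((H^T W H)^-1)) for geometry matrix H,
    with W a diagonal matrix of per-measurement weights."""
    W = np.diag(weights)
    return float(np.sqrt(np.trace(np.linalg.inv(H.T @ W @ H))))

# Illustrative GPS-style geometry with five satellites.
H = np.array([
    [ 0.577,  0.577,  0.577, 1.0],
    [-0.577,  0.577,  0.577, 1.0],
    [ 0.577, -0.577,  0.577, 1.0],
    [ 0.577,  0.577, -0.577, 1.0],
    [-0.577, -0.577,  0.577, 1.0],
])
equal = wgdop(H, [1.0] * 5)                       # reduces to ordinary GDOP
weighted = wgdop(H, [2.0, 1.0, 1.0, 1.0, 0.5])    # unequal measurement quality
```

Uniformly scaling all weights by a factor w scales WGDOP by 1/sqrt(w), which is why only the relative weights matter for unit selection.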
Subsurface Shielding Source Term Specification Calculation
International Nuclear Information System (INIS)
S.Su
2001-01-01
The purpose of this calculation is to establish appropriate and defensible waste-package radiation source terms for use in repository subsurface shielding design. This calculation supports the shielding design for the waste emplacement and retrieval system, and subsurface facility system. The objective is to identify the limiting waste package and specify its associated source terms including source strengths and energy spectra. Consistent with the Technical Work Plan for Subsurface Design Section FY 01 Work Activities (CRWMS M and O 2001, p. 15), the scope of work includes the following: (1) Review source terms generated by the Waste Package Department (WPD) for various waste forms and waste package types, and compile them for shielding-specific applications. (2) Determine acceptable waste package specific source terms for use in subsurface shielding design, using a reasonable and defensible methodology that is not unduly conservative. This calculation is associated with the engineering and design activity for the waste emplacement and retrieval system, and subsurface facility system. The technical work plan for this calculation is provided in CRWMS M and O 2001. Development and performance of this calculation conforms to the procedure, AP-3.12Q, Calculations
International Nuclear Information System (INIS)
Raufman, Jean-Pierre; Cheng, Kunrong; Saxena, Neeraj; Chahdi, Ahmed; Belo, Angelica; Khurana, Sandeep; Xie, Guofeng
2011-01-01
Highlights: ► Muscarinic receptor agonists stimulated robust human colon cancer cell invasion. ► Anti-matrix metalloproteinase1 antibody pre-treatment blocks cell invasion. ► Bile acids stimulate MMP1 expression, cell migration and MMP1-dependent invasion. -- Abstract: Mammalian matrix metalloproteinases (MMPs) which degrade extracellular matrix facilitate colon cancer cell invasion into the bloodstream and extra-colonic tissues; in particular, MMP1 expression correlates strongly with advanced colon cancer stage, hematogenous metastasis and poor prognosis. Likewise, muscarinic receptor signaling plays an important role in colon cancer; muscarinic receptors are over-expressed in colon cancer compared to normal colon epithelial cells. Muscarinic receptor activation stimulates proliferation, migration and invasion of human colon cancer cells. In mouse intestinal neoplasia models genetic ablation of muscarinic receptors attenuates carcinogenesis. In the present work, we sought to link these observations by showing that MMP1 expression and activation plays a mechanistic role in muscarinic receptor agonist-induced colon cancer cell invasion. We show that acetylcholine, which robustly increases MMP1 expression, stimulates invasion of HT29 and H508 human colon cancer cells into human umbilical vein endothelial cell monolayers – this was abolished by pre-incubation with atropine, a non-selective muscarinic receptor inhibitor, and by pre-incubation with anti-MMP1 neutralizing antibody. Similar results were obtained using a Matrigel chamber assay and deoxycholyltaurine (DCT), an amidated dihydroxy bile acid associated with colon neoplasia in animal models and humans, and previously shown to interact functionally with muscarinic receptors. DCT treatment of human colon cancer cells resulted in time-dependent, 10-fold increased MMP1 expression, and DCT-induced cell invasion was also blocked by pre-treatment with anti-MMP1 antibody. This study contributes to understanding
Energy Technology Data Exchange (ETDEWEB)
Raufman, Jean-Pierre, E-mail: jraufman@medicine.umaryland.edu [Division of Gastroenterology and Hepatology, University of Maryland School of Medicine, Baltimore, MD (United States); Cheng, Kunrong; Saxena, Neeraj; Chahdi, Ahmed; Belo, Angelica; Khurana, Sandeep; Xie, Guofeng [Division of Gastroenterology and Hepatology, University of Maryland School of Medicine, Baltimore, MD (United States)
2011-11-18
Highlights: ► Muscarinic receptor agonists stimulated robust human colon cancer cell invasion. ► Anti-matrix metalloproteinase1 antibody pre-treatment blocks cell invasion. ► Bile acids stimulate MMP1 expression, cell migration and MMP1-dependent invasion. -- Abstract: Mammalian matrix metalloproteinases (MMPs) which degrade extracellular matrix facilitate colon cancer cell invasion into the bloodstream and extra-colonic tissues; in particular, MMP1 expression correlates strongly with advanced colon cancer stage, hematogenous metastasis and poor prognosis. Likewise, muscarinic receptor signaling plays an important role in colon cancer; muscarinic receptors are over-expressed in colon cancer compared to normal colon epithelial cells. Muscarinic receptor activation stimulates proliferation, migration and invasion of human colon cancer cells. In mouse intestinal neoplasia models genetic ablation of muscarinic receptors attenuates carcinogenesis. In the present work, we sought to link these observations by showing that MMP1 expression and activation plays a mechanistic role in muscarinic receptor agonist-induced colon cancer cell invasion. We show that acetylcholine, which robustly increases MMP1 expression, stimulates invasion of HT29 and H508 human colon cancer cells into human umbilical vein endothelial cell monolayers - this was abolished by pre-incubation with atropine, a non-selective muscarinic receptor inhibitor, and by pre-incubation with anti-MMP1 neutralizing antibody. Similar results were obtained using a Matrigel chamber assay and deoxycholyltaurine (DCT), an amidated dihydroxy bile acid associated with colon neoplasia in animal models and humans, and previously shown to interact functionally with muscarinic receptors. DCT treatment of human colon cancer cells resulted in time-dependent, 10-fold increased MMP1 expression, and DCT-induced cell invasion was also blocked by pre
Calculations of hadronic weak matrix elements: A status report
International Nuclear Information System (INIS)
Sharpe, S.R.
1988-01-01
I review the calculations of hadronic matrix elements of the weak Hamiltonian. My major emphasis is on lattice calculations. I discuss the application to weak decay constants (f_K, f_D, f_B), K⁰–K̄⁰ and B⁰–B̄⁰ mixing, K → ππ decays, and the CP violation parameters ε and ε′. I close with speculations on future progress. 57 refs., 4 figs., 2 tabs
International Nuclear Information System (INIS)
Gregersen, A.W.
1977-01-01
A comparison is made between matrix elements calculated using the uncoupled-channel Sussex approach to second order in DWBA and matrix elements calculated using a square-well potential. The square-well potential illustrates the problem of determining parameter independence, balanced against the concept of phase-shift difference. The super-soft-core potential was used to discuss the systematics of the Sussex approach as a function of angular momentum, as well as the relation between Sussex-generated and effective-interaction matrix elements. In the uncoupled channels, the original Sussex method of extracting effective-interaction matrix elements was found to be satisfactory. In the coupled channels, emphasis was placed upon the ³S₁–³D₁ coupled-channel matrix elements. Comparison is made between exactly calculated matrix elements and matrix elements derived using an extended formulation of the coupled-channel Sussex method. For simplicity, the potential used is a nonseparable cut-off oscillator. The eigenphases of this potential can be made to approximate the realistic nucleon–nucleon phase shifts at low energies. By using the cut-off oscillator test potential, the original coupled-channel Sussex method of determining parameter independence was shown to be incapable of accurately reproducing the exact cut-off oscillator matrix elements. The extended Sussex method was found to be accurate to within 10 percent. The extended method is based upon a more general coupled-channel DWBA and a noninfinite oscillator wave-function solution to the cut-off oscillator auxiliary potential. A comparison is made in the coupled channels between matrix elements generated using the original Sussex method and the extended method. Tables of matrix elements generated using the original uncoupled-channel Sussex method and the extended coupled-channel Sussex method are presented for all necessary angular momentum channels
International Nuclear Information System (INIS)
Pinnera, I.; Perez, G.; Ramos, M.; Guibert, R.; Aldape, F.; Flores M, J.; Martinez, M.; Molina, E.; Fernandez, A.
2011-01-01
In a previous study, a set of samples of fine and coarse airborne particulate matter collected in an urban area of Havana City was analyzed by the Particle-Induced X-ray Emission (PIXE) technique. The concentrations of 14 elements (S, Cl, K, Ca, Ti, V, Cr, Mn, Fe, Ni, Cu, Zn, Br and Pb) were consistently determined in both particle sizes. The analytical database provided by PIXE was statistically analyzed in order to determine the local pollution sources. The Positive Matrix Factorization (PMF) technique was applied to the fine particle data in order to identify possible pollution sources. These sources were further verified by enrichment factor (EF) calculation. A general discussion of these results is presented in this work. (Author)
International Nuclear Information System (INIS)
Song Yu; Dai Wei; Shao Min; Liu Ying; Lu Sihua; Kuster, William; Goldan, Paul
2008-01-01
Identifying the sources of volatile organic compounds (VOCs) is key to reducing ground-level ozone and secondary organic aerosols (SOAs). Several receptor models have been developed to apportion sources, but an intercomparison of these models had not been performed for VOCs in China. In the present study, we compared VOC sources based on chemical mass balance (CMB), UNMIX, and positive matrix factorization (PMF) models. Gasoline-related sources, petrochemical production, and liquefied petroleum gas (LPG) were identified by all three models as the major contributors, with UNMIX and PMF producing quite similar results. The contributions of gasoline-related sources and LPG estimated by the CMB model were higher, and petrochemical emissions were lower than in the UNMIX and PMF results, possibly because the VOC profiles used in the CMB model were for fresh emissions and the profiles extracted from ambient measurements by the two-factor analysis models were 'aged'. - VOC sources were similar for the three models, with CMB showing a higher estimate for vehicles
Energy Technology Data Exchange (ETDEWEB)
Song Yu; Dai Wei [Department of Environmental Sciences, Peking University, Beijing 100871 (China); Shao Min [State Joint Key Laboratory of Environmental Simulation and Pollution Control, Peking University, Beijing 100871 (China)], E-mail: mshao@pku.edu.cn; Liu Ying; Lu Sihua [State Joint Key Laboratory of Environmental Simulation and Pollution Control, Peking University, Beijing 100871 (China); Kuster, William; Goldan, Paul [Chemical Sciences Division, NOAA Earth System Research Laboratory, Boulder, CO 80305 (United States)
2008-11-15
Identifying the sources of volatile organic compounds (VOCs) is key to reducing ground-level ozone and secondary organic aerosols (SOAs). Several receptor models have been developed to apportion sources, but an intercomparison of these models had not been performed for VOCs in China. In the present study, we compared VOC sources based on chemical mass balance (CMB), UNMIX, and positive matrix factorization (PMF) models. Gasoline-related sources, petrochemical production, and liquefied petroleum gas (LPG) were identified by all three models as the major contributors, with UNMIX and PMF producing quite similar results. The contributions of gasoline-related sources and LPG estimated by the CMB model were higher, and petrochemical emissions were lower than in the UNMIX and PMF results, possibly because the VOC profiles used in the CMB model were for fresh emissions and the profiles extracted from ambient measurements by the two-factor analysis models were 'aged'. - VOC sources were similar for the three models, with CMB showing a higher estimate for vehicles.
PET imaging for receptor occupancy: meditations on calculation and simplification.
Zhang, Yumin; Fox, Gerard B
2012-03-01
This invited mini-review briefly summarizes procedures and challenges of measuring receptor occupancy with positron emission tomography. Instead of describing the detailed analytic procedures of in vivo ligand-receptor imaging, the authors provide a pragmatic approach, along with personal perspectives, for conducting positron emission tomography imaging for receptor occupancy, and systematically elucidate the mathematics of receptor occupancy calculations in practical ways that can be understood with elementary algebra. The authors also share insights regarding positron emission tomography imaging for receptor occupancy to facilitate applications for the development of drugs targeting receptors in the central nervous system.
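As a minimal sketch of the elementary algebra the review refers to: occupancy is usually derived from the drop in binding potential (BP_ND) between baseline and post-dose scans, and a single-site model relates plasma concentration to occupancy. The function names and the single-site model below are generic assumptions, not taken from the review.

```python
def occupancy(bp_baseline, bp_drug):
    """Fractional receptor occupancy from baseline and post-dose
    binding potentials: Occ = 1 - BP_drug / BP_baseline."""
    if bp_baseline <= 0:
        raise ValueError("baseline binding potential must be positive")
    return 1.0 - bp_drug / bp_baseline

def occupancy_from_conc(conc, ec50):
    """Single-site (Emax-type) model relating plasma drug concentration
    to receptor occupancy: Occ = C / (C + EC50)."""
    return conc / (conc + ec50)

# A drug that reduces BP_ND from 4.0 to 1.0 occupies 75% of receptors;
# at C = EC50 the single-site model gives 50% occupancy by construction.
occ = occupancy(4.0, 1.0)
```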
Neutron-deuteron scattering calculations with W-matrix representation of the two-body input
International Nuclear Information System (INIS)
Bartnik, E.A.; Haberzettl, H.; Januschke, T.; Kerwath, U.; Sandhas, W.
1987-05-01
Employing the W-matrix representation of the partial-wave T matrix introduced by Bartnik, Haberzettl, and Sandhas, we show for the example of the Malfliet-Tjon potentials I and III that the single-term separable part of the W-matrix representation, when used as input in three-nucleon neutron-deuteron scattering calculations, is fully capable of reproducing the exact results obtained by Kloet and Tjon. This approximate two-body input not only satisfies the two-body off-shell unitarity relation but, moreover, it also contains a parameter which may be used in optimizing the three-body data. We present numerical evidence that there exists a variational (minimum) principle for the determination of the three-body binding energy which allows one to choose this parameter also in the absence of an exact reference calculation. Our results for neutron-deuteron scattering show that it is precisely this choice of the parameter which provides optimal scattering data. We conclude that the W-matrix approach, despite its simplicity, is a remarkably efficient tool for high-quality three-nucleon calculations. (orig.)
Treatment of pauli exclusion operator in G-matrix calculations for hypernuclei
International Nuclear Information System (INIS)
Kuo, T.T.S.; Hao, Jifa
1995-01-01
We discuss a matrix-inversion method for treating the Pauli exclusion operator Q in the hyperon-nucleon G-matrix equation for hypernuclei such as Λ¹⁶O. A model space consisting of shell-model wave functions is employed. We argue that it is preferable to employ a free-particle spectrum for the intermediate states of the G matrix. This leads to the difficulty that the G-matrix intermediate states are plane waves, in which representation the Pauli operator Q has a rather complicated structure. A matrix-inversion method for overcoming this difficulty is examined. To implement this method it is necessary to employ a so-called n 3Λ truncation approximation. Numerical calculations using the Jülich B̃ and Ã potentials have been performed to study the accuracy of this approximation. (author)
International Nuclear Information System (INIS)
Noh, Si Wan; Sol, Jeong; Lee, Jai Ki; Lee, Jong Il; Kim, Jang Lyul
2012-01-01
Calculation of the total number of disintegrations after intake of radioactive nuclides is indispensable for calculating a dose coefficient, i.e. the committed effective dose per unit activity (Sv/Bq). In order to calculate the total number of disintegrations analytically, Birchall's algorithm has been commonly used. As described below, an inverse matrix must be calculated in this algorithm. As biokinetic models have become more complicated, however, the inverse matrix sometimes does not exist and the total number of disintegrations cannot be calculated. For this reason, a numerical method has been applied in the DCAL code, used to calculate the dose coefficients in the ICRP publications, and in the IMBA code. In this study, however, we applied the pseudo-inverse matrix to handle the cases in which the inverse matrix does not exist. In order to validate our method, it was applied to two examples and the results were compared to the tabulated data in the ICRP publications. MATLAB 2012a was used to calculate the total number of disintegrations, employing the expm and pinv MATLAB built-in functions
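A minimal sketch of the pseudo-inverse idea, assuming a hypothetical two-compartment biokinetic model with illustrative rate constants (the study used MATLAB's pinv; NumPy's pinv plays the same role here). For dq/dt = A q with q(0) = q0, the total number of disintegrations per compartment is U = ∫₀^∞ q(t) dt = -A⁻¹ q0 when A is invertible, and the Moore-Penrose pseudo-inverse extends this to the singular case.

```python
import numpy as np

lam = 0.05   # radioactive decay constant (1/d), illustrative
k12 = 0.20   # transfer rate, compartment 1 -> 2 (1/d), illustrative

# Compartment 1 loses activity by decay and transfer; compartment 2
# receives the transfer and loses activity by decay only.
A = np.array([
    [-(lam + k12), 0.0],
    [k12,         -lam],
])
q0 = np.array([1.0, 0.0])   # unit intake into compartment 1

# Total disintegrations per compartment via the pseudo-inverse.
U = -np.linalg.pinv(A) @ q0
```

For this invertible example the analytic answer is U₁ = 1/(λ+k₁₂) = 4.0 and U₂ = k₁₂/(λ(λ+k₁₂)) = 16.0; pinv reproduces the ordinary inverse here, while remaining defined when A is singular.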
Tasić, M.; Mijić, Z.; Rajšić, S.; Stojić, A.; Radenković, M.; Joksić, J.
2009-04-01
The primary objective of the present study was to assess anthropogenic impacts of heavy metals to the environment by determination of total atmospheric deposition of heavy metals. Atmospheric depositions (wet + dry) were collected monthly, from June 2002 to December 2006, at three urban locations in Belgrade, using bulk deposition samplers. Concentrations of Fe, Al, Pb, Zn, Cu, Ni, Mn, Cr, V, As and Cd were analyzed using atomic absorption spectrometry. Based upon these results, the study attempted to examine elemental associations in atmospheric deposition and to elucidate the potential sources of heavy metal contaminants in the region by the use of multivariate receptor model Positive Matrix Factorization (PMF).
International Nuclear Information System (INIS)
Tasic, M; Mijic, Z; Rajsic, S; Stojic, A; Radenkovic, M; Joksic, J
2009-01-01
The primary objective of the present study was to assess anthropogenic impacts of heavy metals to the environment by determination of total atmospheric deposition of heavy metals. Atmospheric depositions (wet + dry) were collected monthly, from June 2002 to December 2006, at three urban locations in Belgrade, using bulk deposition samplers. Concentrations of Fe, Al, Pb, Zn, Cu, Ni, Mn, Cr, V, As and Cd were analyzed using atomic absorption spectrometry. Based upon these results, the study attempted to examine elemental associations in atmospheric deposition and to elucidate the potential sources of heavy metal contaminants in the region by the use of multivariate receptor model Positive Matrix Factorization (PMF).
Use of shell model calculations in R-matrix studies of neutron-induced reactions
International Nuclear Information System (INIS)
Knox, H.D.
1986-01-01
R-matrix analyses of neutron-induced reactions for many of the lightest p-shell nuclei are difficult due to a lack of distinct resonance structure in the reaction cross sections. Initial values for the required R-matrix parameters, E_λ and γ_λc, for states in the compound system can be obtained from shell model calculations. In the present work, the results of recent shell model calculations for the lithium isotopes have been used in R-matrix analyses of the ⁶Li+n and ⁷Li+n reactions. The effects of states of ⁷Li and ⁸Li on the ⁶Li+n and ⁷Li+n reaction mechanisms and cross sections are discussed. (author)
Calculating Relativistic Transition Matrix Elements for Hydrogenic Atoms Using Monte Carlo Methods
Alexander, Steven; Coldwell, R. L.
2015-03-01
The nonrelativistic transition matrix elements for hydrogen atoms can be computed exactly and these expressions are given in a number of classic textbooks. The relativistic counterparts of these equations can also be computed exactly but these expressions have been described in only a few places in the literature. In part, this is because the relativistic equations lack the elegant simplicity of the nonrelativistic equations. In this poster I will describe how variational Monte Carlo methods can be used to calculate the energy and properties of relativistic hydrogen atoms and how the wavefunctions for these systems can be used to calculate transition matrix elements.
Kulkarni, Sarika
This dissertation presents a scientific framework that facilitates enhanced understanding of aerosol source-receptor (S/R) relationships and their impact on local, regional and global air quality by employing a complementary suite of modeling methods. The receptor-oriented Positive Matrix Factorization (PMF) technique is combined with the Potential Source Contribution Function (PSCF), a trajectory ensemble model, to characterize sources influencing the aerosols measured at Gosan, Korea during spring 2001. It is found that the episodic dust events originating from desert regions in East Asia (EA), which mix with pollution along the transit path, have a significant and pervasive impact on the air quality of Gosan. The intercontinental and hemispheric transport of aerosols is analyzed by a series of emission perturbation simulations with the Sulfur Transport and dEposition Model (STEM), a regional-scale Chemical Transport Model (CTM), evaluated with observations from the 2008 NASA ARCTAS field campaign. This modeling study shows that pollution transport from regions outside North America (NA) contributed ~30% and ~20% to NA sulfate and BC surface concentrations, respectively. This study also identifies aerosols transported from Europe, NA and EA as significant contributors to springtime Arctic sulfate and BC. Trajectory ensemble models are combined with source-region-tagged tracer model output to identify the source regions and possible instances of quasi-Lagrangian sampled air masses during the 2006 NASA INTEX-B field campaign. The impact of specific emission sectors from Asia during the INTEX-B period is studied with the STEM model, identifying the residential sector as a potential target for emission reductions to combat global warming. The output from the STEM model, constrained with satellite-derived aerosol optical depth and ground-based measurements of single scattering albedo via an optimal interpolation assimilation scheme, is combined with the PMF technique to
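The PSCF technique mentioned above reduces to a ratio of gridded trajectory-endpoint counts: PSCF(cell) = m/n, where n counts all back-trajectory endpoints falling in a grid cell and m counts those belonging to high-concentration days at the receptor. A minimal sketch with a hypothetical 1-degree grid and synthetic endpoints:

```python
from collections import defaultdict

def pscf(endpoints, high_days):
    """Potential Source Contribution Function on a 1-degree grid.

    endpoints: iterable of (day, lat, lon) back-trajectory endpoints.
    high_days: set of days whose receptor concentration exceeded the
    chosen criterion (e.g. the 75th percentile).
    """
    n = defaultdict(int)   # all endpoints per cell
    m = defaultdict(int)   # high-concentration endpoints per cell
    for day, lat, lon in endpoints:
        cell = (int(lat), int(lon))
        n[cell] += 1
        if day in high_days:
            m[cell] += 1
    return {c: m[c] / n[c] for c in n}

# Synthetic endpoints: day 1 was a high-concentration day at the receptor.
pts = [(1, 33.2, 126.5), (1, 33.8, 126.1), (2, 33.4, 126.9), (3, 40.1, 116.3)]
field = pscf(pts, high_days={1})
# cell (33, 126): 3 endpoints, 2 from high days -> PSCF = 2/3
```

Operational PSCF studies typically also down-weight cells with few endpoints; that refinement is omitted here.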
International Nuclear Information System (INIS)
Berrington, K.A.
1991-01-01
A progress report on R-matrix calculations of electron impact excitation and opacity data for ionized Fe is given. This paper discusses aspects of modern calculations of the electron excitation process in atoms and ions. The Belfast Atomic Data Bank holds much data in this area, including data recommended in regular Atomic Data Workshops held to evaluate atomic data for the applications community: electron excitation data for Fe ions recommended at recent Workshops is summarised. The main R-matrix programs currently in use are described, and some recent R-matrix calculations on electron excitation in Fe ions are highlighted. Photoabsorption data for all elements up to Fe are also calculated using the R-matrix programs in the international Opacity Project, and a summary is given of the atomic data expected from the Project. Finally some possible future directions are outlined. (orig.)
General-transformation matrix for Dirac spinors and the calculation of spinorial amplitudes
International Nuclear Information System (INIS)
Nam, K.; Moravcsik, M.J.
1983-01-01
A general transformation matrix T(p′s′; p,s) is constructed which transforms a Dirac spinor ψ(p,s) into another Dirac spinor ψ(p′,s′) with arbitrarily given momenta and polarization states, by exploiting the so-called Stech operator as one of the generators of these transformations. This transformation matrix is then used to calculate the spinorial matrix element M = ψ̄(p′,s′)Γψ(p,s) for any spin polarization state. The final expressions of these matrix elements show the explicit structure of the spin dependence for the process described by these spinorial amplitudes. Kinematical limiting cases, such as very low or very high energy, of the various matrix elements can also be easily displayed. Our method is superior to the existing one in the following respects. Since we have a well-defined transformation operator between two Dirac spinor states, we can evaluate the necessary phase factor of the matrix elements unambiguously, without introducing a coordinate system. This enables us to write down the Feynman amplitudes of complicated processes in any spin basis very easily, in terms of previously calculated matrix elements of ψ̄Γψ, which are the building blocks of those Feynman amplitudes. The usefulness of the results is illustrated for Compton scattering and for the elastic scattering of two identical massive leptons, where the phase factor is important. It is also shown that the Stech operator as a polarization operator is simply related to the operator K = β(σ·L + 1), which is often used in bound-state problems
Energy Technology Data Exchange (ETDEWEB)
Park, Ho Jin [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Shim, Hyung Jin [Seoul National University, Seoul (Korea, Republic of)
2015-05-15
In a Monte Carlo (MC) eigenvalue calculation, it is well known that the apparent variance of a local tally, such as pin power, can differ considerably from the real variance. The MC method in eigenvalue calculations uses a power iteration method, in which the fission matrix (FM) and the fission source density (FSD) are the operator and the solution. The FM is useful for estimating a variance and covariance because the FM can be calculated from a few cycles, even inactive ones. Recently, S. Carney et al. implemented higher order fission matrix (HOFM) capabilities into the MCNP6 MC code in order to extend the perturbation theory to second order. In this study, the HOFM capability based on the Hotelling deflation method was implemented into McCARD and used to predict the behavior of the real-to-apparent standard deviation (SD) ratio. In simple 1D slab problems, Endo's theoretical model predicts the real-to-apparent SD ratio well. It was noted that Endo's theoretical model with the McCARD higher-mode fission source solutions obtained by the HOFM yields a much better real-to-apparent SD ratio than that with the analytic solutions. In the near future, the application to a high-dominance-ratio problem such as the BEAVRS benchmark will be conducted.
M. Alarcón; M. Àvila; J. Belmonte; C. Stefanescu; R. Izquierdo
2010-01-01
Source-receptor models allow relationships to be established between a receptor point (sampling point) and the probable source areas (regions of emission) by associating concentration values at the receptor point with the corresponding atmospheric back-trajectories and, together with other techniques, allow transport phenomena to be interpreted on a synoptic scale. These models are generally used in air pollution studies to determine the areas of origin of chemical compounds measured...
Lattice calculation of hadronic weak matrix elements: the ΔI = 1/2 rule
International Nuclear Information System (INIS)
Bernard, C.
1984-01-01
A lattice Monte Carlo technique for calculating the matrix elements of weak operators is described. Emphasis is placed on the ΔI = 1/2 rule, which is such a large effect that the significant errors associated with current lattice methods (statistics, finite size, finite lattice spacing, extrapolations in quark mass, etc.) should not disguise the important qualitative features. A detailed exposition of the analytic bases for the calculation is given, and an attempt is made to avoid the questionable phenomenological assumptions (such as some of those inherent in the Penguin approach) which were necessary when matrix elements could not be calculated. The current state of the calculation-in-progress is described. This work is being done in collaboration with A. Soni, T. Draper, G. Hockney, and M. Rushton
Reaction matrix calculation of 4He including Δ degrees of freedom
International Nuclear Information System (INIS)
Wakamatsu, Masashi.
1979-06-01
The effects of the Δ(3-3 resonance) components on the binding energy of ⁴He are studied within the framework of reaction matrix theory. In this approach, the Δ configurations in ⁴He are introduced in terms of the NΔ transition potential by solving the reaction matrix equation, and thus the treatment goes beyond perturbation theory in the NΔ transition potential. Not only the two-body cluster energy but also the three-body cluster energy containing Δ configurations is calculated. (author)
Solution to the inversely stated transient source-receptor problem
International Nuclear Information System (INIS)
Sajo, E.; Sheff, J.R.
1995-01-01
Transient source-receptor problems are traditionally handled via the Boltzmann equation or one of its variants. In the atmospheric transport of pollutants, meteorological uncertainties in the planetary boundary layer render only a few approximations to the Boltzmann equation useful. Often, due to the high number of unknowns, the atmospheric source-receptor problem is ill-posed. Moreover, models to estimate downwind concentration invariably assume that the source term is known. In this paper, an inverse methodology is developed, based on downwind measurements of concentration and of meteorological parameters, to estimate the source term
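Under a linear source-receptor relation c = T s (column j of T holding the modelled concentration at each receptor/time per unit emission from source j), the inverse problem reduces to a least-squares estimate of s from measured c. The transfer matrix and source strengths below are synthetic illustrations, and this noise-free sketch omits the regularization a real ill-posed atmospheric problem would require.

```python
import numpy as np

# Synthetic transfer matrix: 4 receptor measurements, 2 unknown sources.
T = np.array([
    [0.8, 0.1],
    [0.4, 0.3],
    [0.1, 0.7],
    [0.2, 0.5],
])
s_true = np.array([5.0, 2.0])   # "unknown" source strengths
c_meas = T @ s_true             # noise-free synthetic measurements

# Least-squares (pseudo-inverse) inversion of the source term.
s_est = np.linalg.lstsq(T, c_meas, rcond=None)[0]
```

With measurement noise and near-collinear columns of T, a Tikhonov-regularized solve would replace the plain least squares here.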
Convergent J-matrix calculation of the Poet-Temkin model of electron-hydrogen scattering
International Nuclear Information System (INIS)
Konovalov, D.A.; McCarthy, I.E.
1994-01-01
It is shown that the Poet-Temkin model of electron-hydrogen scattering can be solved to any required accuracy using the J-matrix method. Convergence in the basis size is achieved to an accuracy of better than 2% with the inclusion of 37 L² basis functions. Previously observed pseudoresonances in the J-matrix calculation naturally disappear with an increase in basis size. No averaging technique is necessary to smooth the convergent J-matrix results. (Author)
Al-Refaie, Ahmed F.; Tennyson, Jonathan
2017-12-01
Construction and diagonalization of the Hamiltonian matrix is the rate-limiting step in most low-energy electron-molecule collision calculations. Tennyson (1996) implemented a novel algorithm for Hamiltonian construction which took advantage of the structure of the wavefunction in such calculations. This algorithm is re-engineered to make use of modern computer architectures and the use of appropriate diagonalizers is considered. Test calculations demonstrate that significant speed-ups can be gained using multiple CPUs. This opens the way to calculations which consider higher collision energies, larger molecules and/or more target states. The methodology, which is implemented as part of the UK molecular R-matrix codes (UKRMol and UKRMol+), can also be used for studies of bound molecular Rydberg states, photoionization and positron-molecule collisions.
Identification of the sources of PM10 in a subway tunnel using positive matrix factorization.
Park, Duckshin; Lee, Taejeong; Hwang, Doyeon; Jung, Wonseok; Lee, Yongil; Cho, KiChul; Kim, Dongsool; Lees, Kiyoung
2014-12-01
The level of particulate matter of less than 10 μm diameter (PM10) at subway platforms can be significantly reduced by installing a platform screen-door system. However, both workers and passengers might be exposed to higher PM10 levels while the cars are within the tunnel because it is a more confined environment. This study determined the PM10 levels in a subway tunnel, and identified the sources of PM10 using elemental analysis and receptor modeling. Forty-four PM10 samples were collected in the tunnel between the Gireum and Mia stations on Line 4 in metropolitan Seoul and analyzed using inductively coupled plasma-atomic emission spectrometry and ion chromatography. The major PM10 sources were identified using positive matrix factorization (PMF). The average PM10 concentration in the tunnels was 200.8 ± 22.0 μg/m3. Elemental analysis indicated that the PM10 consisted of 40.4% inorganic species, 9.1% anions, 4.9% cations, and 45.6% other materials. Iron was the most abundant element, with an average concentration of 72.5 ± 10.4 μg/m3. The PM10 sources characterized by PMF included rail, wheel, and brake wear (59.6%), soil combustion (17.0%), secondary aerosols (10.0%), electric cable wear (8.1%), and soil and road dust (5.4%). Internal sources comprising rail, wheel, brake, and electric cable wear made the greatest contribution to the PM10 (67.7%) in tunnel air. Implications: With the installation of platform screen doors, PM10 levels in subway tunnels were higher than those on platforms. Tunnel PM10 levels exceeded the Korean standard of 150 μg/m3 for subway platforms. Elemental analysis of PM10 in the tunnel showed that Fe was the most abundant element. Five PM10 sources in the tunnel were identified by positive matrix factorization. Railroad-related sources contributed 68% of PM10 in the subway tunnel.
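The receptor-modeling step can be illustrated with a minimal non-negative factorization. The sketch below uses plain Lee-Seung multiplicative updates on synthetic data; PMF proper additionally weights each residual by its measurement uncertainty, which is omitted here:

```python
import numpy as np

# Minimal non-negative factorization X ~ G @ F with G, F >= 0: rows of F are
# source profiles (species signatures), G holds per-sample source
# contributions. Synthetic rank-3 data stands in for the 44 tunnel samples.
rng = np.random.default_rng(1)
n_samples, n_species, n_sources = 44, 10, 3
G_true = rng.uniform(0, 5, (n_samples, n_sources))
F_true = rng.uniform(0, 1, (n_sources, n_species))
X = G_true @ F_true                      # samples x species concentration matrix

G = rng.uniform(0.1, 1, (n_samples, n_sources))
F = rng.uniform(0.1, 1, (n_sources, n_species))
for _ in range(1000):                    # Lee-Seung multiplicative updates
    G *= (X @ F.T) / (G @ F @ F.T + 1e-12)
    F *= (G.T @ X) / (G.T @ G @ F + 1e-12)

rel_err = np.linalg.norm(X - G @ F) / np.linalg.norm(X)
print(f"relative reconstruction error: {rel_err:.3g}")
```

The multiplicative form of the updates guarantees that both factors stay non-negative, which is what makes the recovered profiles interpretable as physical source signatures.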
Matrix-operator method for calculation of dynamics of intense beams of charged particles
International Nuclear Information System (INIS)
Kapchinskij, M.I.; Korenev, I.L.; Rinskij, L.A.
1989-01-01
A calculation algorithm for particle dynamics in high-current cyclic and linear accelerators is suggested. Particle motion in six-dimensional phase space is divided into coherent and incoherent components. Incoherent motion is described by the envelope method; the particle bunch is treated as a uniformly charged triaxial ellipsoid. Coherent motion is described in the paraxial approximation; each structural element of the accelerator transport channel is characterized by a six-dimensional matrix transforming the phase coordinates of the bunch centre, and by a shift vector resulting from deviations of the focusing-element parameters from their design values. The effect of space-charge reflected forces is taken into account in the element matrix. The software implementing the algorithm is realized using the well-known TRANSPORT program
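The matrix-operator idea for the coherent motion can be sketched in a reduced 2D (x, x') phase-space slice; the element matrices and the misalignment kick below are textbook thin-lens forms, not the program's actual matrices:

```python
import numpy as np

# Each element maps centroid phase-space coordinates as x_out = M @ x_in + d;
# a channel is the composition of such maps. drift/thin_quad are standard
# thin-lens matrices; the misalignment kick is an illustrative shift vector.
def drift(L):
    return np.array([[1.0, L], [0.0, 1.0]]), np.zeros(2)

def thin_quad(f, misalign=0.0):
    # a transversely displaced lens adds an angular kick misalign / f
    return np.array([[1.0, 0.0], [-1.0 / f, 1.0]]), np.array([0.0, misalign / f])

def propagate(elements, x):
    for M, d in elements:
        x = M @ x + d
    return x

line = [drift(1.0), thin_quad(0.5), drift(0.5)]   # drift, lens f = 0.5 m, drift
x_out = propagate(line, np.array([1e-3, 0.0]))    # 1 mm offset, zero slope
print(x_out)  # parallel ray crosses the axis one focal length after the lens
```

Composing (matrix, shift) pairs element by element is exactly what makes the method fast: the whole channel reduces to one accumulated matrix and one accumulated shift vector.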
Calculation of hadronic matrix elements using lattice QCD
International Nuclear Information System (INIS)
Gupta, R.
1993-01-01
The author gives a brief introduction to the scope of lattice QCD calculations in his effort to extract the fundamental parameters of the standard model. This goal is illustrated by two examples. First the author discusses the extraction of CKM matrix elements from measurements of form factors for semileptonic decays of heavy-light pseudoscalar mesons such as D → Keν. Second, he presents the status of results for the kaon B parameter relevant to CP violation. He concludes the talk with a short outline of his experiences with optimizing QCD codes on the CM5
Direct calculation of resonance energies and widths using an R-matrix approach
International Nuclear Information System (INIS)
Schneider, B.I.
1981-01-01
A modified R-matrix technique is presented which determines the eigenvalues and widths of resonant states by the direct diagonalization of a complex, non-Hermitian matrix. The method utilizes only real basis sets and requires a minimum of complex arithmetic. The method is applied to two problems, a set of coupled square wells and the Πg resonance of N2 in the static-exchange approximation. The results of the calculation are in good agreement with other methods and converge very quickly with basis-set size.
Integrins and extracellular matrix in mechanotransduction
Directory of Open Access Journals (Sweden)
Ramage L
2011-12-01
Full Text Available Lindsay Ramage, Queen's Medical Research Institute, University of Edinburgh, Edinburgh, UK. Abstract: Integrins are a family of cell surface receptors which mediate cell–matrix and cell–cell adhesions. Among other functions they provide an important mechanical link between the cell's external and intracellular environments, while the adhesions that they form also have critical roles in cellular signal transduction. Cell–matrix contacts occur at zones in the cell surface where adhesion receptors cluster and, when activated, the receptors bind to ligands in the extracellular matrix. The extracellular matrix surrounds the cells of tissues and forms the structural support of tissue, which is particularly important in connective tissues. Cells attach to the extracellular matrix through specific cell-surface receptors and molecules including integrins and transmembrane proteoglycans. Integrins work alongside other proteins such as cadherins, immunoglobulin superfamily cell adhesion molecules, selectins, and syndecans to mediate cell–cell and cell–matrix interactions and communication. Activation of adhesion receptors triggers the formation of matrix contacts in which bound matrix components, adhesion receptors, and associated intracellular cytoskeletal and signaling molecules form large functional, localized multiprotein complexes. Cell–matrix contacts are important in a variety of different cell and tissue properties including embryonic development, inflammatory responses, wound healing, and adult tissue homeostasis. This review summarizes the roles and functions of integrins and extracellular matrix proteins in mechanotransduction. Keywords: ligand binding, α subunit, β subunit, focal adhesion, cell differentiation, mechanical loading, cell–matrix interaction
NLTE steady-state response matrix method.
Faussurier, G.; More, R. M.
2000-05-01
A connection between atomic kinetics and non-equilibrium thermodynamics has recently been established by using a collisional-radiative model modified to include line absorption. The calculated net emission can be expressed as a non-local thermodynamic equilibrium (NLTE) symmetric response matrix. In this paper, the connection is extended to both the average-atom model and Busquet's model (RAdiative-Dependent IOnization Model, RADIOM). The main properties of the response matrix remain valid. The RADIOM source function found in the literature leads to a diagonal response matrix, stressing the absence of any frequency redistribution among the frequency groups at this order of calculation.
Energy Technology Data Exchange (ETDEWEB)
Roemelt, Michael, E-mail: michael.roemelt@theochem.rub.de [Lehrstuhl für Theoretische Chemie, Ruhr-Universität Bochum, D-44780 Bochum, Germany and Max-Planck Institut für Kohlenforschung, Kaiser-Wilhelm-Platz 1, 45470 Mülheim an der Ruhr (Germany)
2015-07-28
Spin Orbit Coupling (SOC) is introduced to molecular ab initio density matrix renormalization group (DMRG) calculations. In the presented scheme, one first approximates the electronic ground state and a number of excited states of the Born-Oppenheimer (BO) Hamiltonian with the aid of the DMRG algorithm. Owing to the spin-adaptation of the algorithm, the total spin S is a good quantum number for these states. After the non-relativistic DMRG calculation is finished, all magnetic sublevels of the calculated states are constructed explicitly, and the SOC operator is expanded in the resulting basis. To this end, spin orbit coupled energies and wavefunctions are obtained as eigenvalues and eigenfunctions of the full Hamiltonian matrix which is composed of the SOC operator matrix and the BO Hamiltonian matrix. This treatment corresponds to a quasi-degenerate perturbation theory approach and can be regarded as the molecular equivalent to atomic Russell-Saunders coupling. For the evaluation of SOC matrix elements, the full Breit-Pauli SOC Hamiltonian is approximated by the widely used spin-orbit mean field operator. This operator allows for an efficient use of the second quantized triplet replacement operators that are readily generated during the non-relativistic DMRG algorithm, together with the Wigner-Eckart theorem. With a set of spin-orbit coupled wavefunctions at hand, the molecular g-tensors are calculated following the scheme proposed by Gerloch and McMeeking. It interprets the effective molecular g-values as the slope of the energy difference between the lowest Kramers pair with respect to the strength of the applied magnetic field. Test calculations on a chemically relevant Mo complex demonstrate the capabilities of the presented method.
International Nuclear Information System (INIS)
Yan Guanghua; Liu, Chihray; Lu Bo; Palta, Jatinder R; Li, Jonathan G
2008-01-01
The purpose of this study was to choose an appropriate head scatter source model for the fast and accurate independent planar dose calculation for intensity-modulated radiation therapy (IMRT) with MLC. The performance of three different head scatter source models regarding their ability to model head scatter and facilitate planar dose calculation was evaluated. A three-source model, a two-source model and a single-source model were compared in this study. In the planar dose calculation algorithm, in-air fluence distribution was derived from each of the head scatter source models while considering the combination of Jaw and MLC opening. Fluence perturbations due to tongue-and-groove effect, rounded leaf end and leaf transmission were taken into account explicitly. The dose distribution was calculated by convolving the in-air fluence distribution with an experimentally determined pencil-beam kernel. The results were compared with measurements using a diode array and passing rates with 2%/2 mm and 3%/3 mm criteria were reported. It was found that the two-source model achieved the best agreement on head scatter factor calculation. The three-source model and single-source model underestimated head scatter factors for certain symmetric rectangular fields and asymmetric fields, but similar good agreement could be achieved when monitor back scatter effect was incorporated explicitly. All the three source models resulted in comparable average passing rates (>97%) when the 3%/3 mm criterion was selected. The calculation with the single-source model and two-source model was slightly faster than the three-source model due to their simplicity
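The convolution step described above can be sketched on a synthetic grid. The Gaussian kernel and the idealized open field below are assumptions for illustration; the study used an experimentally determined pencil-beam kernel and a fluence perturbed by MLC effects:

```python
import numpy as np

# Dose plane as the 2D convolution of in-air fluence with a pencil-beam
# kernel, done with FFTs on a periodic grid (adequate here because the
# kernel is compact). Field size, grid and Gaussian width are illustrative.
n, dx = 128, 0.2                                  # grid points, cm per pixel
coords = np.arange(n) * dx - n * dx / 2           # zero at index n // 2
y, x = np.meshgrid(coords, coords, indexing="ij")
fluence = ((np.abs(x) < 5) & (np.abs(y) < 5)).astype(float)  # open 10x10 cm field
sigma = 0.5                                       # cm, kernel width (assumed)
kernel = np.exp(-(x**2 + y**2) / (2 * sigma**2))
kernel /= kernel.sum()                            # unit-normalized kernel

# shift the kernel's center to the origin so the FFT product is a convolution
dose = np.real(np.fft.ifft2(np.fft.fft2(fluence)
                            * np.fft.fft2(np.fft.ifftshift(kernel))))
print(round(float(dose[n // 2, n // 2]), 4))      # ~1.0 at the field center
```

With a normalized kernel, the dose deep inside the field approaches the open fluence value, while the kernel width controls how the penumbra at the field edge is smeared.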
Directory of Open Access Journals (Sweden)
Thomas Gomez
2018-04-01
Full Text Available Atomic structure of N-electron atoms is often determined by solving the Hartree-Fock equations, which are a set of integro-differential equations. The integral part of the Hartree-Fock equations treats electron exchange, but the Hartree-Fock equations are not often treated as integro-differential equations. The exchange term is often approximated as an inhomogeneous or an effective potential so that the Hartree-Fock equations become a set of ordinary differential equations (which can be solved using the usual shooting methods). Because the Hartree-Fock equations are solved by iterative refinement, the inhomogeneous term relies on the previous guess of the wavefunction. In addition, there are numerical complications associated with solving inhomogeneous differential equations. This work uses matrix methods to solve the Hartree-Fock equations as an integro-differential equation. It is well known that a derivative operator can be expressed as a matrix made of finite-difference coefficients; energy eigenvalues and eigenvectors can be obtained by using linear-algebra packages. The integral (exchange) part of the Hartree-Fock equation can be approximated as a sum and written as a matrix. The Hartree-Fock equations can then be solved as a matrix equation that is the sum of the differential and integral matrices. We compare calculations using this method against experiment and standard atomic structure calculations. This matrix method can also be used to solve for free-electron wavefunctions, thus improving how the atoms and free electrons interact. This technique is important for spectral line broadening in two ways: it improves the atomic structure calculations, and it improves the motion of the plasma electrons that collide with the atom.
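The core matrix idea, a finite-difference derivative matrix plus a diagonal potential fed to a linear-algebra eigensolver, can be sketched on a one-dimensional model problem (the harmonic oscillator, not the Hartree-Fock equations themselves):

```python
import numpy as np

# Second-derivative operator as a finite-difference matrix, potential on the
# diagonal, eigenvalues from a linear-algebra package. Test case: the 1D
# harmonic oscillator, whose exact levels are n + 1/2 in atomic units.
n, L = 1000, 20.0
x = np.linspace(-L / 2, L / 2, n)
h = x[1] - x[0]
D2 = (np.diag(np.ones(n - 1), -1) - 2.0 * np.eye(n)
      + np.diag(np.ones(n - 1), 1)) / h**2        # 3-point stencil for d2/dx2
H = -0.5 * D2 + np.diag(0.5 * x**2)               # H = -(1/2) d2/dx2 + V(x)
E = np.linalg.eigvalsh(H)[:3]
print(np.round(E, 3))                             # close to [0.5, 1.5, 2.5]
```

Adding the exchange integral as one more matrix, as the paper does, changes nothing structurally: the sum of the finite-difference, potential, and exchange matrices still goes to the same eigensolver.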
Mohr, Stephan; Dawson, William; Wagner, Michael; Caliste, Damien; Nakajima, Takahito; Genovese, Luigi
2017-10-10
We present CheSS, the "Chebyshev Sparse Solvers" library, which has been designed to solve typical problems arising in large-scale electronic structure calculations using localized basis sets. The library is based on a flexible and efficient expansion in terms of Chebyshev polynomials and presently features the calculation of the density matrix, the calculation of arbitrary matrix powers, and the extraction of eigenvalues in a selected interval. CheSS is able to exploit the sparsity of the matrices and scales linearly with respect to the number of nonzero entries, making it well-suited for large-scale calculations. The approach is particularly adapted for setups leading to small spectral widths of the involved matrices and outperforms alternative methods in this regime. By coupling CheSS to the DFT code BigDFT, we show that such a favorable setup is indeed possible in practice. In addition, the approach based on Chebyshev polynomials can be massively parallelized, and CheSS exhibits excellent scaling up to thousands of cores even for relatively small matrix sizes.
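The Chebyshev machinery that CheSS builds on can be sketched as follows; this is a generic dense-matrix illustration of expanding a function of a Hamiltonian in Chebyshev polynomials, not the library's API, and the Fermi-factor parameters are arbitrary:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Build f(H) from the Chebyshev three-term recurrence using only
# matrix-matrix products (the operation that preserves sparsity and
# parallelizes well). f here is a smooth Fermi factor, as used to obtain a
# density matrix from a Hamiltonian; matrix and parameters are arbitrary.
def cheb_matrix_function(H, f, order, lo, hi):
    n = len(H)
    Hs = (2.0 * H - (hi + lo) * np.eye(n)) / (hi - lo)  # map spectrum to [-1, 1]
    c = C.Chebyshev.interpolate(
        lambda t: f(0.5 * (hi - lo) * t + 0.5 * (hi + lo)), order).coef
    Tprev, Tcur = np.eye(n), Hs
    out = c[0] * Tprev + c[1] * Tcur
    for k in range(2, order + 1):
        Tprev, Tcur = Tcur, 2.0 * Hs @ Tcur - Tprev     # T_k recurrence
        out += c[k] * Tcur
    return out

rng = np.random.default_rng(2)
B = rng.normal(size=(6, 6))
H = (B + B.T) / 2
lo, hi = np.linalg.eigvalsh(H)[[0, -1]]
fermi = lambda e: 1.0 / (1.0 + np.exp(e / 0.5))         # mu = 0, kT = 0.5 (assumed)
D = cheb_matrix_function(H, fermi, 60, lo - 0.1, hi + 0.1)

w, V = np.linalg.eigh(H)                                # dense reference
D_ref = V @ np.diag(fermi(w)) @ V.T
print(np.max(np.abs(D - D_ref)))
```

The expansion order needed grows with the spectral width of `H` relative to the smoothness scale of `f`, which is why the paper stresses that small spectral widths are the regime where this approach wins.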
Calculation of source terms for NUREG-1150
International Nuclear Information System (INIS)
Breeding, R.J.; Williams, D.C.; Murfin, W.B.; Amos, C.N.; Helton, J.C.
1987-10-01
The source terms estimated for NUREG-1150 are generally based on the Source Term Code Package (STCP), but the actual source term calculations used in computing risk are performed by much smaller codes which are specific to each plant. This was done because the method of estimating the uncertainty in risk for NUREG-1150 requires hundreds of source term calculations for each accident sequence. This is clearly impossible with a large, detailed code like the STCP. The small plant-specific codes are based on simple algorithms and utilize adjustable parameters. The values of the parameters appearing in these codes are derived from the available STCP results. To determine the uncertainty in the estimation of the source terms, these parameters were varied as specified by an expert review group. This method was used to account for the uncertainties in the STCP results and the uncertainties in phenomena not considered by the STCP
Effective source approach to self-force calculations
International Nuclear Information System (INIS)
Vega, Ian; Wardell, Barry; Diener, Peter
2011-01-01
Numerical evaluation of the self-force on a point particle is made difficult by the use of delta functions as sources. Recent methods for self-force calculations avoid delta functions altogether, using instead a finite and extended 'effective source' for a point particle. We provide a review of the general principles underlying this strategy, using the specific example of a scalar point charge moving in a black hole spacetime. We also report on two new developments: (i) the construction and evaluation of an effective source for a scalar charge moving along a generic orbit of an arbitrary spacetime, and (ii) the successful implementation of hyperboloidal slicing that significantly improves on previous treatments of boundary conditions used for effective-source-based self-force calculations. Finally, we identify some of the key issues related to the effective source approach that will need to be addressed by future work.
Positron collisions with acetylene calculated using the R-matrix with pseudo-states method
Zhang, Rui; Galiatsatos, Pavlos G.; Tennyson, Jonathan
2011-10-01
Eigenphase sums, total cross sections and differential cross sections are calculated for low-energy collisions of positrons with C2H2. The calculations demonstrate that the use of appropriate pseudo-state expansions very significantly improves the representation of this process giving both realistic eigenphases and cross sections. Differential cross sections are strongly forward peaked in agreement with the measurements. These calculations are computationally very demanding; even with improved procedures for matrix diagonalization, fully converged calculations are too expensive with current computer resources. Nonetheless, the calculations show clear evidence for the formation of a virtual state but no indication that acetylene actually binds a positron at its equilibrium geometry.
Evaluation of the streaming-matrix method for discrete-ordinates duct-streaming calculations
International Nuclear Information System (INIS)
Clark, B.A.; Urban, W.T.; Dudziak, D.J.
1983-01-01
A new deterministic streaming technique called the Streaming Matrix Hybrid Method (SMHM) is applied to two realistic duct-shielding problems. The results are compared to standard discrete-ordinates and Monte Carlo calculations. The SMHM shows promise as an alternative deterministic streaming method to standard discrete-ordinates
International Nuclear Information System (INIS)
Cheng Lan; Huang Weizhi; Zhou Baosen
1996-01-01
Using the matrix elements of the M3Y force as equivalent G-matrix elements, the spectra of 210Pb, 206Pb, 206Hg and 210Po are calculated in the framework of the Folded Diagram Method. The results show that such equivalent matrix elements are suitable for microscopic calculations of nuclear structure in the heavy-mass region.
Source Apportionment of PM10 by Positive Matrix Factorization in Urban Area of Mumbai, India
Directory of Open Access Journals (Sweden)
Indrani Gupta
2012-01-01
Full Text Available Particulate Matter (PM10) has been one of the main air pollutants exceeding the ambient standards in most of the major cities in India. During the last few years, receptor models such as Chemical Mass Balance, Positive Matrix Factorization (PMF), PCA-APCS and UNMIX have been used to identify sources and their contributions, which is accepted for developing effective and efficient air quality management plans. Each site poses different complexities while resolving PM10 contributions. This paper reports the variability of four sites within Mumbai city using PMF. The industrial area of Mahul showed sources such as residual oil combustion and paved road dust (27%), traffic (20%), coal-fired boilers (17%) and nitrate (15%). The residential area of Khar showed sources such as residual oil combustion and construction (25%), motor vehicles (23%), marine aerosol and nitrate (19%) and paved road dust (18%), compared to construction and natural dust (27%), motor vehicles and smelting work (25%), nitrate (16%) and biomass burning and paved road dust (15%) in Dharavi, a low-income slum residential area. The major contributors of PM10 at Colaba were marine aerosol, wood burning and ammonium sulphate (24%), motor vehicles and smelting work (22%), natural soil (19%), and nitrate and oil burning (18%).
Jaschke, Daniel; Wall, Michael L.; Carr, Lincoln D.
2018-04-01
Numerical simulations are a powerful tool for studying quantum systems that lack an exact analytic solution. For one-dimensional entangled quantum systems, tensor network methods, amongst them Matrix Product States (MPSs), have attracted interest from different fields of quantum physics ranging from solid state systems to quantum simulators and quantum computing. Our open source MPS code provides the community with a toolset to analyze the statics and dynamics of one-dimensional quantum systems. Here, we present our open source library, Open Source Matrix Product States (OSMPS), of MPS methods implemented in Python and Fortran2003. The library includes tools for ground state calculation and excited states via the variational ansatz. We also support ground states for infinite systems with translational invariance. Dynamics are simulated with different algorithms, including three algorithms with support for long-range interactions. Convenient features include built-in support for fermionic systems and number conservation with rotational U(1) and discrete Z2 symmetries for finite systems, as well as data parallelism with MPI. We explain the principles and techniques used in this library along with examples of how to efficiently use the general interfaces to analyze the Ising and Bose-Hubbard models. This description includes the preparation of simulations as well as dispatching and post-processing of them.
RADSHI: shielding calculation program for different geometries sources
International Nuclear Information System (INIS)
Gelen, A.; Alvarez, I.; Lopez, H.; Manso, M.
1996-01-01
A computer code written in Pascal for the IBM PC is described. The program calculates the optimum slab-shield thickness for sources of different geometries. The point kernel method is employed, which yields the ionizing radiation flux density. The calculation takes into account the possibility of self-absorption in the source. The air kerma rate for gamma radiation is determined, and the shield is obtained through the concepts of attenuation length and equivalent attenuation length. Scattering and exponential attenuation inside the shield material are considered in the program. The shield materials can be concrete, water, iron or lead. The code also calculates the shield for an isotropic point neutron source, using paraffin, concrete or water as shield materials. (authors). 13 refs
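The point kernel step can be sketched for a single photon line; the source strength and attenuation coefficient below are round illustrative numbers, not evaluated data, and buildup is left as an optional factor:

```python
import math

# Uncollided flux from an isotropic point source behind a slab:
#   phi = B * S * exp(-mu * t) / (4 * pi * r^2)
# with S the source strength, mu the attenuation coefficient, t the slab
# thickness, r the distance and B a buildup factor. All numbers illustrative.
def uncollided_flux(S, r_cm, mu_cm, t_cm, buildup=1.0):
    return buildup * S * math.exp(-mu_cm * t_cm) / (4.0 * math.pi * r_cm**2)

def slab_thickness_for(target_flux, S, r_cm, mu_cm):
    # invert the exponential for the thickness meeting a target flux
    phi0 = uncollided_flux(S, r_cm, mu_cm, 0.0)
    return math.log(phi0 / target_flux) / mu_cm

S = 3.7e10        # photons/s (illustrative, about a 1 Ci source)
mu = 0.7          # 1/cm (illustrative, order of lead at ~1 MeV)
r = 100.0         # cm
t = slab_thickness_for(1.0e3, S, r, mu)  # thickness for 1e3 photons/cm2/s
print(round(t, 2), "cm")
```

Folding a buildup factor into the target inequality, as a full code must, makes the required thickness slightly larger than this uncollided estimate.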
Normalization Of Thermal-Radiation Form-Factor Matrix
Tsuyuki, Glenn T.
1994-01-01
Report describes algorithm that adjusts form-factor matrix in TRASYS computer program, which calculates intraspacecraft radiative interchange among various surfaces and environmental heat loading from sources such as sun.
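One simple fix-up in the spirit of such an adjustment (not necessarily the TRASYS algorithm) alternates between enforcing the reciprocity relation A_i F_ij = A_j F_ji and unit row sums:

```python
import numpy as np

# Alternate two projections: reciprocity (symmetrize the area-weighted
# matrix A_i F_ij) and closure (each row sums to 1 in a closed enclosure).
# Inputs are noisy synthetic form factors, not TRASYS output.
def normalize_form_factors(F, areas, iters=100):
    F = np.array(F, dtype=float)
    A = np.asarray(areas, dtype=float)
    for _ in range(iters):
        G = F * A[:, None]                 # G_ij = A_i * F_ij
        G = 0.5 * (G + G.T)                # enforce reciprocity
        F = G / A[:, None]
        F /= F.sum(axis=1, keepdims=True)  # enforce unit row sums
    return F

rng = np.random.default_rng(3)
F_raw = rng.uniform(0.1, 1.0, (4, 4))      # synthetic "computed" form factors
areas = np.array([1.0, 2.0, 1.5, 0.5])
F = normalize_form_factors(F_raw, areas)
print(F.sum(axis=1))                       # rows sum to 1
```

Enforcing both constraints matters physically: row sums of 1 express energy conservation in a closed enclosure, while reciprocity keeps the radiative exchange between any two surfaces consistent in both directions.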
Calculations of accelerator-based neutron sources characteristics
International Nuclear Information System (INIS)
Tertytchnyi, R.G.; Shorin, V.S.
2000-01-01
Accelerator-based quasi-monoenergetic neutron sources (the T(p,n), D(d,n), T(d,n) and 7Li(p,n) reactions) are widely used in experiments measuring the interaction cross sections of fast neutrons with nuclei. The present work presents a code for calculating the yields and spectra of neutrons generated in (p,n) and (d,n) reactions on targets of light nuclei (D, T, 7Li). The peculiarities of the stopping processes of charged particles (with incident energies up to 15 MeV) in multilayer and multicomponent targets are taken into account. The code is implemented as 'SOURCE', a subroutine for the well-known MCNP code. Some calculation results for the most popular accelerator-based neutron sources are given. (authors)
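The thick-target yield calculation at the heart of such a code can be sketched with toy inputs; the cross section and stopping power below are invented placeholder functions, and only the Li(p,n) threshold near 1.88 MeV is physical:

```python
import numpy as np

# Thick-target neutron yield per incident particle:
#   Y = integral from E_thr to E0 of n * sigma(E) / S(E) dE,
# with n the target atom density, sigma(E) the reaction cross section and
# S(E) = -dE/dx the stopping power. sigma and S below are toy placeholders.
def thick_target_yield(E0, E_thr, sigma, stopping, n_density, steps=2000):
    E = np.linspace(E_thr, E0, steps)
    g = n_density * sigma(E) / stopping(E)
    return float(np.sum((g[:-1] + g[1:]) * np.diff(E)) / 2)  # trapezoid rule

sigma = lambda E: 1e-25 * np.clip(E - 1.88, 0.0, None)  # cm^2 (toy ramp)
stopping = lambda E: 150.0 / np.sqrt(E)                 # MeV/cm (toy)
n_Li = 4.6e22                                           # atoms/cm^3 (approx. Li metal)
Y = thick_target_yield(2.5, 1.88, sigma, stopping, n_Li)  # 2.5 MeV protons
print(f"{Y:.2e} neutrons per incident proton")
```

A production code like the one described would replace the toy `sigma` and `stopping` with evaluated data, and handle layer boundaries and mixtures by switching stopping powers along the slowing-down path.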
Calculation of the fast multiplication factor by the fission matrix method
International Nuclear Information System (INIS)
Naumov, V.A.; Rozin, S.G.; Ehl'perin, T.I.
1976-01-01
A variant of the Monte Carlo method to calculate the effective multiplication factor of a nuclear reactor is described, together with a procedure for evaluating reactivity perturbations by the Monte Carlo method in first-order perturbation theory. The method consists in reducing the integral neutron transport equation to a set of linear algebraic equations whose coefficients are the elements of a fission matrix. The fission matrix, being a Green's function of the neutron transport equation, is evaluated by the Monte Carlo method. In the program realizing the suggested algorithm, the initial neutron energy is sampled from the fission spectrum, and the region of neutron birth ΔV_f^i is then sampled in proportion to the product Σ_f^i ΔV_f^i, where Σ_f^i is the macroscopic fission cross section of region i at the birth energy. Further iterations of the spatial distribution of neutrons in the system are performed by the generation method. In the adopted scheme for simulating neutron histories, the emission of secondary neutrons is controlled by weights; it occurs at every collision and not only at the end of the history. The multiplication factor is calculated simultaneously with the spatial distributions of neutron worth (with respect to the fission process) and neutron flux. The efficiency of the described procedure has been demonstrated by calculating the multiplication factor of the Godiva assembly, which simulates a fast reactor with a hard spectrum; high accuracy is obtained with a moderate number of zones in the core and reasonable statistics
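The fission-matrix eigenvalue problem can be sketched with an illustrative 3-region matrix; the generation-by-generation power iteration below mirrors the source iteration described in the abstract:

```python
import numpy as np

# F[i, j]: expected next-generation fission neutrons born in region i per
# fission neutron born in region j (illustrative 3-region values). k_eff is
# the dominant eigenvalue, obtained by iterating fission generations.
F = np.array([[0.60, 0.25, 0.05],
              [0.25, 0.60, 0.25],
              [0.05, 0.25, 0.60]])

s = np.ones(3) / 3.0              # initial fission-source guess
for _ in range(200):              # generation (power) iteration
    s_next = F @ s
    k = s_next.sum() / s.sum()    # multiplication per generation
    s = s_next / s_next.sum()     # renormalized source shape
print(round(k, 4), np.round(s, 3))
```

In the Monte Carlo setting the matrix elements themselves are tallied from simulated histories; the cheap eigenvalue iteration shown here then replaces many additional transport generations.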
Comparison of matrix exponential methods for fuel burnup calculations
International Nuclear Information System (INIS)
Oh, Hyung Suk; Yang, Won Sik
1999-01-01
Series expansion methods to compute the exponential of a matrix have been compared by applying them to fuel depletion calculations. Specifically, Taylor, Pade, Chebyshev, and rational Chebyshev approximations have been investigated by approximating the exponentials of burnup matrices with truncated series of each method combined with the scaling and squaring algorithm. The accuracy and efficiency of these methods have been tested by performing various numerical tests using one thermal reactor and two fast reactor depletion problems. The results indicate that all four series methods are accurate enough to be used for fuel depletion calculations, although the rational Chebyshev approximation is relatively less accurate. They also show that the rational approximations are more efficient than the polynomial approximations. Considering computational accuracy and efficiency, the Pade approximation appears to be better than the other methods. Its accuracy is better than the rational Chebyshev approximation, while being comparable to the polynomial approximations. On the other hand, its efficiency is better than the polynomial approximations and is similar to the rational Chebyshev approximation. In particular, for fast reactor depletion calculations, it is faster than the polynomial approximations by a factor of ∼ 1.7. (author). 11 refs., 4 figs., 2 tabs
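The comparison can be sketched for two of the series: a truncated Taylor series and the diagonal (2,2) Pade approximant, each combined with scaling and squaring. The test matrix is a small random symmetric matrix, not a burnup matrix:

```python
import numpy as np

# exp(A) via (a) truncated Taylor series and (b) the diagonal (2,2) Pade
# approximant, both with scaling and squaring; compared to a spectral
# reference for a small random symmetric matrix.
def expm_taylor(A, terms=12, squarings=6):
    As = A / 2.0**squarings
    X = np.eye(len(A))
    term = np.eye(len(A))
    for k in range(1, terms + 1):
        term = term @ As / k          # accumulates As^k / k!
        X = X + term
    for _ in range(squarings):
        X = X @ X                     # undo the scaling
    return X

def expm_pade22(A, squarings=6):
    As = A / 2.0**squarings
    I = np.eye(len(A))
    As2 = As @ As
    N = I + As / 2 + As2 / 12         # (2,2) Pade numerator
    D = I - As / 2 + As2 / 12         # (2,2) Pade denominator
    X = np.linalg.solve(D, N)
    for _ in range(squarings):
        X = X @ X
    return X

rng = np.random.default_rng(4)
B = rng.normal(size=(5, 5))
A = (B + B.T) / 2
w, V = np.linalg.eigh(A)
E_ref = V @ np.diag(np.exp(w)) @ V.T  # exact exp(A) for symmetric A
err_taylor = np.max(np.abs(expm_taylor(A) - E_ref))
err_pade = np.max(np.abs(expm_pade22(A) - E_ref))
print(err_taylor, err_pade)
```

The Pade variant buys its accuracy per term with one linear solve, which is the accuracy/efficiency trade-off the paper quantifies on realistic burnup matrices.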
Energy Technology Data Exchange (ETDEWEB)
Aguiar, Julio C. [Autoridad Regulatoria Nuclear, Laboratorio de Espectrometria Gamma-CTBTO, Av. Del Libertador 8250, C1429BNP Buenos Aires (Argentina)], E-mail: jaguiar@sede.arn.gov.ar
2008-08-15
An analytical expression for the so-called full-energy peak efficiency ε(E) of an HPGe detector for a cylindrical source whose axis is perpendicular to the detector is derived, using point-source measurements. The formula covers different measuring distances, matrix compositions, densities and gamma-ray energies; the only assumption is that the radioactivity is homogeneously distributed within the source. The term for photon self-attenuation is included in the calculation. Measurements were made using three different sized cylindrical sources of ²⁴¹Am, ⁵⁷Co, ¹³⁷Cs, ⁵⁴Mn, and ⁶⁰Co with corresponding peaks at 59.5, 122, 662, 835, 1173, and 1332 keV, respectively, and one measurement of a radioactive waste drum for 662, 1173, and 1332 keV.
Nahar, S. N.
2003-01-01
Most astrophysical plasmas entail a balance between ionization and recombination. We present new results from a unified method for self-consistent and ab initio calculations of the inverse processes of photoionization and (e + ion) recombination. The treatment of (e + ion) recombination subsumes the non-resonant radiative recombination and the resonant dielectronic recombination processes in a unified scheme (S.N. Nahar and A.K. Pradhan, Phys. Rev. A 49, 1816 (1994); H.L. Zhang, S.N. Nahar, and A.K. Pradhan, J. Phys. B 32, 1459 (1999)). Calculations are carried out with the R-matrix method in the close coupling approximation, using an identical wavefunction expansion for both processes to ensure self-consistency. The results for photoionization and recombination cross sections may also be compared with state-of-the-art experiments on synchrotron radiation sources for photoionization, and on heavy ion storage rings for recombination. The new experiments display heretofore unprecedented detail in terms of resonances and background cross sections and thereby calibrate the theoretical data precisely. We find a level of agreement between theory and experiment at about 10% not only for the ground state but also for the metastable states. The recent experiments therefore verify the estimated accuracy of the vast amount of photoionization data computed under the OP, IP and related works. The present work also reports photoionization cross sections including relativistic effects in the Breit-Pauli R-matrix (BPRM) approximation. Detailed features in the calculated cross sections exhibit the resonances due to fine structure that are otherwise missing. Self-consistent datasets for photoionization and recombination have so far been computed for approximately 45 atoms and ions. These are being reported in a continuing series of publications in the Astrophysical J. Supplements (e.g. references below).
These data will also be available from the electronic database TIPTOPBASE (http://heasarc.gsfc.nasa.gov)
Receptor modeling for source apportionment of polycyclic aromatic hydrocarbons in urban atmosphere.
Singh, Kunwar P; Malik, Amrita; Kumar, Ranjan; Saxena, Puneet; Sinha, Sarita
2008-01-01
This study reports source apportionment of polycyclic aromatic hydrocarbons (PAHs) in particulate depositions on vegetation foliage near a highway in the urban environment of Lucknow city (India), using the principal components analysis/absolute principal components scores (PCA/APCS) receptor modeling approach. The multivariate method enables identification of the major PAH sources along with their quantitative contributions to individual PAHs. The PCA identified three major sources of PAHs, viz. combustion, vehicular emissions, and diesel-based activities. The PCA/APCS receptor modeling revealed that combustion sources (natural gas, wood, coal/coke, biomass) contributed 19-97% of various PAHs, vehicular emissions 0-70%, diesel-based sources 0-81% and other miscellaneous sources 0-20%. The contributions of the major pyrolytic and petrogenic sources to the total PAHs were 56 and 42%, respectively. Further, the combustion-related sources contribute the major fraction of the carcinogenic PAHs in the study area. The high correlation coefficient (R² > 0.75 for most PAHs) between the measured and predicted concentrations of PAHs suggests the applicability of the PCA/APCS receptor modeling approach for estimating source contributions to PAHs in particulates.
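The APCS regression step, in which a species' measured concentrations are regressed on absolute principal component scores to obtain per-source contributions, reduces to ordinary least squares. A minimal sketch with invented score and concentration values (the PCA step that would produce the scores is omitted):

```python
# Hypothetical APCS regression: measured concentrations of one PAH (y)
# are regressed on absolute principal component scores (Z) for two
# retained factors; the fitted slopes give each source's contribution
# per unit score, and the intercept absorbs unapportioned mass.

def solve(A, b):
    """Gaussian elimination with partial pivoting for A x = b."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def apcs_regression(Z, y):
    """Ordinary least squares of y on the columns of Z plus an intercept."""
    X = [[1.0] + row for row in Z]          # prepend intercept column
    p = len(X[0])
    XtX = [[sum(X[i][a] * X[i][b] for i in range(len(X))) for b in range(p)] for a in range(p)]
    Xty = [sum(X[i][a] * y[i] for i in range(len(X))) for a in range(p)]
    return solve(XtX, Xty)                  # normal equations

# 5 samples, 2 factor-score columns; y built from known coefficients
# (intercept 0.5, slopes 2.0 and 3.0) so recovery can be checked.
Z = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [2.0, 1.0], [0.5, 2.0]]
y = [0.5 + 2.0 * a + 3.0 * b for a, b in Z]
coef = apcs_regression(Z, y)
```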
International Nuclear Information System (INIS)
Song Hong-qiu; Wang Zixing; Cai Yanhuang; Huang Weizhi
1987-01-01
The matrix elements of the M3Y force are adopted as the equivalent G-matrix elements, and the folded diagram method is used to calculate the spectra of ¹⁸O and ¹⁸F. The results show that the matrix elements of the M3Y force as the equivalent G-matrix elements are suitable for microscopic calculations of nuclei in the s-d shell
Dolenc, Jožica; Riniker, Sereina; Gaspari, Roberto; Daura, Xavier; van Gunsteren, Wilfred F
2011-08-01
Docking algorithms for computer-aided drug discovery and design often ignore or restrain the flexibility of the receptor, which may lead to a loss of accuracy of the relative free enthalpies of binding. In order to evaluate the contribution of receptor flexibility to relative binding free enthalpies, two host-guest systems have been examined: inclusion complexes of α-cyclodextrin (αCD) with 1-chlorobenzene (ClBn), 1-bromobenzene (BrBn) and toluene (MeBn), and complexes of DNA with the minor-groove binding ligands netropsin (Net) and distamycin (Dist). Molecular dynamics simulations and free energy calculations reveal that restraining of the flexibility of the receptor can have a significant influence on the estimated relative ligand-receptor binding affinities as well as on the predicted structures of the biomolecular complexes. The influence is particularly pronounced in the case of flexible receptors such as DNA, where a 50% contribution of DNA flexibility towards the relative ligand-DNA binding affinities is observed. The differences in the free enthalpy of binding do not arise only from the changes in ligand-DNA interactions but also from changes in ligand-solvent interactions as well as from the loss of DNA configurational entropy upon restraining.
Source-receptor relationships for atmospheric mercury in urban Detroit, Michigan
Lynam, Mary M.; Keeler, Gerald J.
Speciated hourly mercury measurements were made in Detroit, Michigan during four sampling campaigns from 2000 to 2002. In addition, other chemical and meteorological parameters were measured concurrently. These data were analyzed using principal components analysis (PCA) in order to develop source-receptor relationships for mercury species in urban Detroit. Reactive gaseous mercury (RGM) was found to cluster on two main factors: a photochemistry factor and a coal combustion factor. Particulate phase mercury, Hg(p), tended to cluster with RGM on the same factor. The photochemistry factor corroborates previous observations of the presence of RGM in highly oxidizing atmospheres and does not point to a specific source emission type. Instead, it likely represents local emissions and regional transport of photochemically processed air masses. The coal combustion factor is indicative of emissions from coal-fired power plants near the receptor site. Elemental mercury was found on a factor for combustion from automobiles and points to the influence these emissions have on the receptor site, which was located proximate to two major interstate highways and the largest border crossing in the United States. This analysis reveals that the receptor site, located in an industrialized sector of the city of Detroit, experienced impacts from both stationary and mobile sources of mercury that are both local and regional in nature.
Linear cascade calculations of matrix damage due to neutron-induced nuclear reactions
International Nuclear Information System (INIS)
Avila, Ricardo E
2000-01-01
A method is developed to calculate the total number of displacements created by energetic particles resulting from neutron-induced nuclear reactions. The method is specifically conceived to calculate the damage in lithium ceramics from the ⁶Li(n,α)T reaction. The damage created by any particle is related to that caused by atoms from the matrix recoiling after collision with the primary particle. An integral equation for that self-damage is solved by iteration, using the 'magic' stopping powers of Ziegler, Biersack and Littmark. A projectile-substrate dependent Kinchin-Pease model is proposed, giving an analytic approximation to the total damage as a function of the initial particle energy (author)
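The standard Kinchin-Pease estimate that such projectile-substrate dependent models refine can be stated in a few lines; the displacement threshold below is an illustrative value, not one from the paper.

```python
def kinchin_pease_displacements(T, E_d):
    """Number of displaced atoms produced by a recoil of damage
    energy T (eV) in the standard Kinchin-Pease model, with
    displacement threshold energy E_d (eV)."""
    if T < E_d:
        return 0.0             # below threshold: no stable displacement
    if T < 2.0 * E_d:
        return 1.0             # exactly one Frenkel pair
    return T / (2.0 * E_d)     # linear cascade regime

# Illustrative threshold for a ceramic lattice (hypothetical value)
E_d = 25.0
nd = kinchin_pease_displacements(10_000.0, E_d)
```

A 10 keV recoil with a 25 eV threshold thus produces 200 displacements in this simple model; the paper's projectile-substrate dependence would modify the cascade-regime branch.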
High performance shape annealing matrix (HPSAM) methodology for core protection calculators
International Nuclear Information System (INIS)
Cha, K. H.; Kim, Y. H.; Lee, K. H.
1999-01-01
In the CPC (Core Protection Calculator) of CE-type nuclear power plants, the core axial power distribution is calculated to evaluate the safety-related parameters. The accuracy of the CPC axial power distribution depends strongly on the quality of the so-called shape annealing matrix (SAM). Currently, SAM is determined using data measured during the startup test and is then used throughout the entire cycle. One concern with SAM is that it is fairly sensitive to the measurements, so its fidelity is not guaranteed for all cycles. In this paper, a novel method to determine a high-performance SAM (HPSAM) is proposed, in which both measured and simulated data are used in determining SAM
International Nuclear Information System (INIS)
Jowzani-Moghaddam, A.
1981-01-01
An integral transport method of calculating the geometrical shadowing factor in multiregion annular cells for infinite closely packed lattices in cylindrical geometry is developed. This analytical method has been programmed in the TPGS code. The method is based upon a consideration of the properties of the integral transport method for a nonuniform body, which together with Bonalumi's approximations allows the determination of the approximate multiregion collision probability matrix for infinite closely packed lattices with sufficient accuracy. The multiregion geometrical shadowing factors have been calculated for variations in fuel pin annular segment rings in a geometry of annular cells. These shadowing factors can then be used in the calculation of neutron transport from one annulus to another in an infinite lattice. The results of this new geometrical shadowing and collision probability matrix are compared with the Dancoff-Ginsburg correction and with the probability matrix using constant shadowing on Yankee fuel elements in an infinite lattice. In these cases the Dancoff-Ginsburg correction factor and the collision probability matrix using constant shadowing differ by at most 6.2% and 6%, respectively
Roundtrip matrix method for calculating the leaky resonant modes of open nanophotonic structures
DEFF Research Database (Denmark)
de Lasson, Jakob Rosenkrantz; Kristensen, Philip Trøst; Mørk, Jesper
2014-01-01
We present a numerical method for calculating quasi-normal modes of open nanophotonic structures. The method is based on scattering matrices and a unity eigenvalue of the roundtrip matrix of an internal cavity, and we develop it in detail with electromagnetic fields expanded on Bloch modes...
Iterative acceleration methods for Monte Carlo and deterministic criticality calculations
Energy Technology Data Exchange (ETDEWEB)
Urbatsch, T.J.
1995-11-01
If you have ever given up on a nuclear criticality calculation and terminated it because it took so long to converge, you might find this thesis of interest. The author develops three methods for improving the fission source convergence in nuclear criticality calculations for physical systems with high dominance ratios for which convergence is slow. The Fission Matrix Acceleration Method and the Fission Diffusion Synthetic Acceleration (FDSA) Method are acceleration methods that speed fission source convergence for both Monte Carlo and deterministic methods. The third method is a hybrid Monte Carlo method that also converges for difficult problems where the unaccelerated Monte Carlo method fails. The author tested the feasibility of all three methods in a test bed consisting of idealized problems. He has successfully accelerated fission source convergence in both deterministic and Monte Carlo criticality calculations. By filtering statistical noise, he has incorporated deterministic attributes into the Monte Carlo calculations in order to speed their source convergence. He has used both the fission matrix and a diffusion approximation to perform unbiased accelerations. The Fission Matrix Acceleration method has been implemented in the production code MCNP and successfully applied to a real problem. When the unaccelerated calculations are unable to converge to the correct solution, they cannot be accelerated in an unbiased fashion. A Hybrid Monte Carlo method weds Monte Carlo and a modified diffusion calculation to overcome these deficiencies. The Hybrid method additionally possesses reduced statistical errors.
Effects of sample size on estimates of population growth rates calculated with matrix models.
Directory of Open Access Journals (Sweden)
Ian J Fiske
Full Text Available BACKGROUND: Matrix models are widely used to study the dynamics and demography of populations. An important but overlooked issue is how the number of individuals sampled influences estimates of the population growth rate (lambda) calculated with matrix models. Even unbiased estimates of vital rates do not ensure unbiased estimates of lambda: Jensen's Inequality implies that even when the estimates of the vital rates are accurate, small sample sizes lead to biased estimates of lambda due to increased sampling variance. We investigated whether sampling variability and the distribution of sampling effort among size classes lead to biases in estimates of lambda. METHODOLOGY/PRINCIPAL FINDINGS: Using data from a long-term field study of plant demography, we simulated the effects of sampling variance by drawing vital rates and calculating lambda for increasingly larger populations drawn from a total population of 3842 plants. We then compared these estimates of lambda with those based on the entire population and calculated the resulting bias. Finally, we conducted a review of the literature to determine the sample sizes typically used when parameterizing matrix models used to study plant demography. CONCLUSIONS/SIGNIFICANCE: We found significant bias at small sample sizes when survival was low (survival = 0.5), and that sampling with a more realistic inverse J-shaped population structure exacerbated this bias. However, our simulations also demonstrate that these biases rapidly become negligible with increasing sample sizes or as survival increases. For many of the sample sizes used in demographic studies, matrix models are probably robust to the biases resulting from sampling variance of vital rates. However, this conclusion may depend on the structure of populations or the distribution of sampling effort in ways that are unexplored. We suggest more intensive sampling of populations when individual survival is low and greater sampling of stages with high elasticities.
Effects of sample size on estimates of population growth rates calculated with matrix models.
Fiske, Ian J; Bruna, Emilio M; Bolker, Benjamin M
2008-08-28
Matrix models are widely used to study the dynamics and demography of populations. An important but overlooked issue is how the number of individuals sampled influences estimates of the population growth rate (lambda) calculated with matrix models. Even unbiased estimates of vital rates do not ensure unbiased estimates of lambda-Jensen's Inequality implies that even when the estimates of the vital rates are accurate, small sample sizes lead to biased estimates of lambda due to increased sampling variance. We investigated if sampling variability and the distribution of sampling effort among size classes lead to biases in estimates of lambda. Using data from a long-term field study of plant demography, we simulated the effects of sampling variance by drawing vital rates and calculating lambda for increasingly larger populations drawn from a total population of 3842 plants. We then compared these estimates of lambda with those based on the entire population and calculated the resulting bias. Finally, we conducted a review of the literature to determine the sample sizes typically used when parameterizing matrix models used to study plant demography. We found significant bias at small sample sizes when survival was low (survival = 0.5), and that sampling with a more-realistic inverse J-shaped population structure exacerbated this bias. However our simulations also demonstrate that these biases rapidly become negligible with increasing sample sizes or as survival increases. For many of the sample sizes used in demographic studies, matrix models are probably robust to the biases resulting from sampling variance of vital rates. However, this conclusion may depend on the structure of populations or the distribution of sampling effort in ways that are unexplored. We suggest more intensive sampling of populations when individual survival is low and greater sampling of stages with high elasticities.
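The basic quantity under study, lambda as the dominant eigenvalue of a stage-projection matrix, can be computed by power iteration. A minimal sketch with an invented two-stage matrix (the fecundity and survival values are hypothetical, not from the study):

```python
# Lambda is the dominant eigenvalue of the projection matrix A:
# n(t+1) = A n(t), and the stage vector converges to the stable
# stage distribution while total abundance grows by lambda per step.

def project(A, n):
    return [sum(A[i][j] * n[j] for j in range(len(n))) for i in range(len(A))]

def population_lambda(A, iterations=500):
    n = [1.0] * len(A)
    lam = 1.0
    for _ in range(iterations):
        n_new = project(A, n)
        lam = sum(n_new) / sum(n)     # growth of total abundance
        n = [x / lam for x in n_new]  # renormalize to avoid overflow
    return lam

# Hypothetical 2-stage matrix: adult fecundity 2.0, adult self-survival
# 0.5, juvenile-to-adult transition 0.3.
A = [[0.5, 2.0],
     [0.3, 0.0]]
lam = population_lambda(A)
```

For this matrix lambda solves the characteristic equation λ² − 0.5λ − 0.6 = 0, so the estimate can be checked against the closed form. Sampling-variance bias would enter by estimating the entries of A from finite samples before this step.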
International Nuclear Information System (INIS)
Pabroa, Preciosa Corazon B.; Bautista VII, Angel T.; Santos, Flora L.; Racho, Joseph Michael D.
2011-01-01
Ambient fine particulate matter (PM2.5) levels at the Metro Manila air sampling stations of the Philippine Nuclear Research Institute were found to be above the WHO guideline value of 10 μg/m³, indicating, in general, very poor air quality in the area. The elemental components of the fine particulate matter were obtained using energy-dispersive x-ray fluorescence spectrometry. Positive matrix factorization, a receptor modelling tool, was used to identify and apportion air pollution sources. Locations of probable transboundary air pollutants were evaluated using HYSPLIT (Hybrid Single Particle Lagrangian Integrated Trajectory Model), while locations of probable local air pollutant sources were determined using the conditional probability function (CPF). Air pollutant sources can be either natural or anthropogenic. This study has shown natural air pollutant sources such as the volcanic eruptions of Bulusan volcano in 2006 and Anatahan volcano in 2005 to have impacted the region. Fine soil was shown to have originated from China's Mu Us Desert some time in 2004. Smoke in the fine fraction in 2006 showed indications of coming from forest fires in Sumatra and Borneo. Fine particulate Pb in Valenzuela was shown to come from the surrounding area. Many more significant air pollution impacts can be evaluated by identifying probable air pollutant sources from elemental fingerprints and locating these sources with HYSPLIT and CPF. (author)
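The factorization at the core of positive matrix factorization can be sketched with the classic multiplicative-update rules for nonnegative matrix factorization. This is a simplified stand-in: real PMF also weights residuals by measurement uncertainty, which is omitted here, and all matrix values below are invented.

```python
# X (samples x species) is factored into nonnegative G (source
# contributions per sample) and F (source chemical profiles),
# X ~ G F, using Lee-Seung multiplicative updates, which keep all
# entries nonnegative and monotonically reduce the squared error.

def mm(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

def frob_err(X, G, F):
    R = mm(G, F)
    return sum((x - r) ** 2 for xr, rr in zip(X, R) for x, r in zip(xr, rr))

def nmf(X, k, iterations=200, eps=1e-12):
    n, m = len(X), len(X[0])
    G = [[0.5 + 0.1 * (i + j) for j in range(k)] for i in range(n)]   # fixed init
    F = [[0.5 + 0.05 * (i + j) for j in range(m)] for i in range(k)]
    for _ in range(iterations):
        GtX, GtG = mm(transpose(G), X), mm(transpose(G), G)
        den = mm(GtG, F)
        F = [[F[i][j] * GtX[i][j] / (den[i][j] + eps) for j in range(m)] for i in range(k)]
        XFt, FFt = mm(X, transpose(F)), mm(F, transpose(F))
        den = mm(G, FFt)
        G = [[G[i][j] * XFt[i][j] / (den[i][j] + eps) for j in range(k)] for i in range(n)]
    return G, F

# Hypothetical 2-source mixture: 4 samples x 3 species
G_true = [[1.0, 0.2], [0.3, 1.0], [0.8, 0.5], [0.1, 0.9]]
F_true = [[2.0, 0.5, 1.0], [0.2, 1.5, 0.8]]
X = mm(G_true, F_true)
err0 = frob_err(X, *nmf(X, 2, iterations=0))   # error at initialization
G, F = nmf(X, 2)
err = frob_err(X, G, F)
```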
Calculation of beam source geometry of electron accelerator for radiation technologies
International Nuclear Information System (INIS)
Balalykin, N.I.; Derendyaev, Yu.S.; Dolbilov, G.V.; Karlov, A.A.; Korenev, S.A.; Petrov, V.A.; Smolyakova, T.F.
1994-01-01
The ELLIPT and GRAFOR programmes, written in FORTRAN, were developed to calculate the geometry of an electron source. The programmes enable calculation of the electromagnetic field of the source and of the electron trajectories in the source under preset boundary and initial conditions. The GRAFOR programme can display electric field curves and the calculated trajectories of large particles. 4 refs., 1 fig
Multisample matrix-assisted laser desorption source for molecular beams of neutral peptides
International Nuclear Information System (INIS)
Lupulescu, C.; Abd El Rahim, M.; Antoine, R.; Barbaire, M.; Broyer, M.; Dagany, X.; Maurelli, J.; Rayane, D.; Dugourd, Ph.
2006-01-01
We developed and tested a multisample laser desorption source for producing stable molecular beams of neutral peptides. Our apparatus is based on matrix-assisted laser desorption technique. The source consists of 96 different targets which may be scanned by a software control procedure. Examples of molecular beams of neutral peptides are presented, as well as the influence of the different source parameters on the jet
Efficient GW calculations using eigenvalue-eigenvector decomposition of the dielectric matrix
Nguyen, Huy-Viet; Pham, T. Anh; Rocca, Dario; Galli, Giulia
2011-03-01
During the past 25 years, the GW method has been successfully used to compute electronic quasi-particle excitation spectra of a variety of materials. It is however a computationally intensive technique, as it involves summations over occupied and empty electronic states, to evaluate both the Green function (G) and the dielectric matrix (DM) entering the expression of the screened Coulomb interaction (W). Recent developments have shown that eigenpotentials of DMs can be efficiently calculated without any explicit evaluation of empty states. In this work, we will present a computationally efficient approach to the calculation of GW spectra by combining a representation of DMs in terms of their eigenpotentials and a recently developed iterative algorithm. As a demonstration of the efficiency of the method, we will present calculations of the vertical ionization potentials of several systems. Work was funded by SciDAC-e DE-FC02-06ER25777.
The finite element response matrix method
International Nuclear Information System (INIS)
Nakata, H.; Martin, W.R.
1983-02-01
A new technique is developed with an alternative formulation of the response matrix method implemented with the finite element scheme. Two types of response matrices are generated from the Galerkin solution to the weak form of the diffusion equation subject to an arbitrary current and source. The piecewise polynomials are defined on two levels, the first for the local (assembly) calculations and the second for the global (core) response matrix calculations. This finite element response matrix technique was tested on two 2-dimensional test problems, the 2D-IAEA benchmark problem and the Biblis benchmark problem, with satisfactory results. Although the current code is not extensively optimized, the computational time is of the same order as that of the well-established coarse mesh codes. Furthermore, the application of the finite element technique in an alternative formulation of the response matrix method permits the method to easily incorporate additional capabilities such as the treatment of spatially dependent cross-sections, arbitrary geometrical configurations, and highly heterogeneous assemblies. (Author) [pt
Method of computer algebraic calculation of the matrix elements in the second quantization language
International Nuclear Information System (INIS)
Gotoh, Masashi; Mori, Kazuhide; Itoh, Reikichi
1995-01-01
An automated method, implemented in the algebraic programming language REDUCE3, for specifying the matrix elements expressed in second quantization language is presented and then applied to the case of the matrix elements in TDHF theory. The program works in a very straightforward way by commuting the electron creation and annihilation operators (a† and a) until these operators have completely vanished from the expression of the matrix element under the appropriate elimination conditions. An improved method using singlet generators of unitary transformations in place of the electron creation and annihilation operators is also presented. This improvement reduces the time and memory required for the calculation. These methods will make programming in the field of quantum chemistry much easier. 11 refs., 1 tab
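The reduction such a program automates, commuting a and a† until the annihilators hit the vacuum, is equivalent to summing Wick contractions with {a_p, a_q†} = δ_pq. A minimal sketch for vacuum expectation values of fermionic operator strings (a toy re-implementation in Python, not the REDUCE3 code of the paper):

```python
# Operators are (index, is_creation) pairs, read left to right.
# <0| ... |0> is evaluated recursively: the leading annihilator is
# contracted with each later creator of the same index, picking up
# a sign (-1)^(number of operators jumped over); a leading creator
# gives zero because <0| a_q^dagger = 0.

def vev(ops):
    """Vacuum expectation value <0| ops[0] ops[1] ... |0>."""
    if not ops:
        return 1
    idx, dag = ops[0]
    if dag:
        return 0
    total = 0
    for j in range(1, len(ops)):
        jdx, jdag = ops[j]
        if jdag and jdx == idx:
            sign = (-1) ** (j - 1)           # anticommutation swaps
            rest = ops[1:j] + ops[j + 1:]
            total += sign * vev(rest)
    return total

def a(p):         # annihilation operator a_p
    return (p, False)

def ad(p):        # creation operator a_p^dagger
    return (p, True)

x1 = vev([a(1), ad(1)])                  # <0| a_1 a_1† |0>
x2 = vev([a(1), a(2), ad(2), ad(1)])     # properly nested pair
x3 = vev([a(1), a(2), ad(1), ad(2)])     # crossed pair: sign flip
```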
On characteristic polynomials for a generalized chiral random matrix ensemble with a source
Fyodorov, Yan V.; Grela, Jacek; Strahov, Eugene
2018-04-01
We evaluate averages involving characteristic polynomials, inverse characteristic polynomials and ratios of characteristic polynomials for an N×N random matrix taken from an L-deformed chiral Gaussian Unitary Ensemble with an external source Ω. The relation to a recently studied statistics of bi-orthogonal eigenvectors in the complex Ginibre ensemble, see Fyodorov (2017 arXiv:1710.04699), is briefly discussed as a motivation to study asymptotics of these objects in the case of an external source proportional to the identity matrix. In particular, for an associated complex bulk/chiral edge scaling regime we retrieve the kernel related to Bessel/Macdonald functions.
Influence of external source location in the reactivity calculation
International Nuclear Information System (INIS)
Silva, Adilson Costa da; Silva, Fernando Carvalho da; Martinez, Aquilino Senra
2011-01-01
We used the neutron diffusion equation with external neutron sources, in Cartesian geometry and with two energy groups, to verify the influence of the external neutron source location on the reactivity calculation. For this, a coarse mesh finite difference method was developed for the adjoint flux calculation, simplifying the reactivity calculation in a PWR-type reactor; the method uses the output of the nodal expansion method. Results were obtained for different locations on the two-dimensional plane, as well as for different types of fuel elements in the reactor core. (author)
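A much-simplified analogue of such a calculation, with one energy group instead of two, a 1-D slab, and no external source, illustrates how a finite-difference diffusion eigenproblem yields k and hence the static reactivity ρ = (k − 1)/k. The cross sections below are illustrative, not from the paper.

```python
# One-group, 1-D slab diffusion eigenproblem with zero-flux boundaries,
# discretized by finite differences and solved by power (source)
# iteration; each outer iteration solves a tridiagonal system.

def thomas(sub, diag, sup, d):
    """Thomas algorithm for a tridiagonal system."""
    n = len(diag)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = sup[0] / diag[0], d[0] / diag[0]
    for i in range(1, n):
        m = diag[i] - sub[i] * cp[i - 1]
        cp[i] = sup[i] / m
        dp[i] = (d[i] - sub[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def slab_k_eff(D, sig_a, nu_sig_f, width, nodes=200, iters=300):
    h = width / (nodes + 1)                  # interior mesh points
    sub = [-D / h ** 2] * nodes
    diag = [2 * D / h ** 2 + sig_a] * nodes
    sup = [-D / h ** 2] * nodes
    phi, k = [1.0] * nodes, 1.0
    for _ in range(iters):
        fsrc = [nu_sig_f * p for p in phi]   # fission source
        phi = thomas(sub, diag, sup, [s / k for s in fsrc])
        k *= sum(nu_sig_f * p for p in phi) / sum(fsrc)
    return k, phi

# Illustrative cross sections; analytically k = nu_sig_f / (sig_a +
# D * (pi / width)^2) ~ 1.0892 for this slab, rho = (k - 1)/k > 0.
k, phi = slab_k_eff(1.0, 0.1, 0.11, 100.0)
rho = (k - 1.0) / k
```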
Development of radioactive sealed sources in epoxy matrix
International Nuclear Information System (INIS)
Benega, Marcos A.G.; Nagatomi, Helio R.; Rostelato, Maria Elisa C.M.; Karan Junior, Dib; Souza, Carla D.; Tiezzi, Rodrigo; Rodrigues, Bruna T.; Peleias Junior, Fernando S.
2013-01-01
The aim of the present work is to study and develop commercial resins for manufacturing solid sealed sources. The sources are produced with the radionuclides barium-133, cesium-137 and cobalt-57 and are used in the verification of radiation detectors. For the immobilization of the radionuclides in the epoxy matrix, emulsifying agents are used to ensure miscibility between the resin and the aqueous radioactive solution, together with curing agents that control the curing and completely seal the standard radioactive solution. As a result, standard sealed sources equivalent to water are expected to be obtained. Equivalence to water is an important and necessary characteristic: the radioisotopes used in nuclear medicine are supplied in aqueous form, and the resin applied must have a density very similar to that of water. The sources must also be comparable in quality to sources produced internationally, but with low cost and materials widely available in the market. It is intended to create a national technology able to meet the demand for this product in the domestic market and to achieve excellence in quality through accreditation and certification of the product by the appropriate agencies. The study of the parameters used in the production of these sources will bring technology for the manufacture of other categories of standard sealed sources, such as those used in nuclear medicine, imaging, laboratories and industry. (author)
Calculation of Rydberg interaction potentials
DEFF Research Database (Denmark)
Weber, Sebastian; Tresp, Christoph; Menke, Henri
2017-01-01
for calculating the required electric multipole moments and the inclusion of electromagnetic fields with arbitrary direction. We focus specifically on symmetry arguments and selection rules, which greatly reduce the size of the Hamiltonian matrix, enabling the direct diagonalization of the Hamiltonian up...... to higher multipole orders on a desktop computer. Finally, we present example calculations showing the relevance of the full interaction calculation to current experiments. Our software for calculating Rydberg potentials including all features discussed in this tutorial is available as open source....
Miyata, Yasuyoshi; Sagara, Yuji; Kanda, Shigeru; Hayashi, Tomayoshi; Kanetake, Hiroshi
2009-04-01
Hepatocyte growth factor receptor/c-Met is associated with malignant aggressiveness and survival in various cancers including bladder cancer. Although phosphorylation of hepatocyte growth factor receptor/c-Met is essential for its function, the pathologic significance of phosphorylated hepatocyte growth factor receptor/c-Met in bladder cancer remains elusive. We investigated the clinical significance of its expression, and its correlation with cancer cell progression-related molecules. The expression levels of 2 tyrosine residues of hepatocyte growth factor receptor/c-Met (pY1234/1235 and pY1349) were examined immunohistochemically in 133 specimens with nonmetastatic bladder cancer. We also investigated their correlation with matrix metalloproteinase-1, -2, -7, and -14; urokinase-type plasminogen activator; E-cadherin; CD44 standard, variant 3, and variant 6; and vascular endothelial growth factor. Expression of phosphorylated hepatocyte growth factor receptor/c-Met was detected in cancer cells, but was rare in normal urothelial cells. Although hepatocyte growth factor receptor/c-Met, pY1234/1235 hepatocyte growth factor receptor/c-Met, and pY1349 hepatocyte growth factor receptor/c-Met were associated with pT stage, multivariate analysis identified pY1349 hepatocyte growth factor receptor/c-Met expression only as a significant factor for high pT stage. Expression of pY1349 hepatocyte growth factor receptor/c-Met was a marker of metastasis (P = .001) and cause-specific survival (P = .003). Expressions of matrix metalloproteinase-2, matrix metalloproteinase-7, and E-cadherin correlated with pY1349 hepatocyte growth factor receptor/c-Met expression. Our results demonstrated that pY1349 hepatocyte growth factor receptor/c-Met plays an important role in tumor development, and its expression is a significant predictor of metastasis and survival of patients with bladder cancer. The results suggest that these activities are mediated, at least in part, by matrix
A dielectric matrix calculation of the surface-plasmon energy for the silicon (100) surface
International Nuclear Information System (INIS)
Forsyth, A.J.; Smith, A.E.; Josefsson, T.W.
1996-01-01
Full text: As an extension of previous work, we present preliminary calculations of the dielectric properties of the silicon (100) surface. In particular, the |q|→0 and |q|=2π/a(1,0,0) surface loss functions and corresponding surface plasmon energies have been calculated within a simple model for the silicon surface. The results have been obtained from the Adler and Wiser dielectric matrix (DM). The band structure used for the calculation was based on the highly successful empirical pseudopotential method of Cohen and Chelikowsky. We have used a 59-plane-wave basis for the band structure and have chosen a DM size of 59 x 59. Results are compared and contrasted with volume plasmon calculations, free electron calculations and experiment
Directory of Open Access Journals (Sweden)
Abolfazl Asadian
2014-06-01
Full Text Available The helicopter-borne electromagnetic (HEM) frequency-domain exploration method is an airborne electromagnetic (AEM) technique that is widely used for resistivity imaging of vast and rough areas. The vast amount of digitized data flowing from the HEM method requires an efficient and accurate inversion algorithm. Generally, the inverse modelling of HEM data requires, as a first step, a precise and efficient forward modelling algorithm. The exact calculation of the sensitivity matrix, or Jacobian, is also of the utmost importance. As such, the main objective of this study is to design an efficient algorithm for the forward modelling of HEM frequency-domain data for the configuration of horizontal coplanar (HCP) coils using fast Hankel transforms (FHTs). An attempt is also made to use an analytical approach to derive the required equations for the Jacobian matrix. To achieve these goals, an elaborated algorithm for the simultaneous calculation of the forward response and the sensitivity matrix is provided. Finally, using two synthetic models, the accuracy of the calculations of the proposed algorithm is verified. A comparison indicates that the obtained forward-modelling results are highly consistent with those reported in Simon et al. (2009) for a four-layer model. Furthermore, a comparison of the sensitivity-matrix results for a two-layer model with those obtained from software used by the BGR Centre in Germany shows that the proposed algorithm achieves a high degree of accuracy in calculating this matrix.
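The abstract stresses that an exact sensitivity matrix matters as much as the forward model itself. A standard sanity check, independent of the HEM half-space equations (which are not reproduced here), is to compare an analytic Jacobian against central finite differences. The toy forward model, its decay constants, and the function names below are illustrative stand-ins, not the authors' formulation:

```python
import numpy as np

def forward(m, freqs):
    # toy forward model: response_j = sum_i m_i * exp(-f_j / (10 * (i + 1)))
    return np.array([sum(mi * np.exp(-f / (10.0 * (i + 1)))
                         for i, mi in enumerate(m)) for f in freqs])

def jacobian(m, freqs):
    # analytic sensitivities dF_j/dm_i of the toy model above
    return np.array([[np.exp(-f / (10.0 * (i + 1))) for i in range(len(m))]
                     for f in freqs])

def fd_jacobian(m, freqs, h=1e-6):
    # central finite-difference approximation, column by column
    J = np.zeros((len(freqs), len(m)))
    for i in range(len(m)):
        mp = m.copy(); mp[i] += h
        mm = m.copy(); mm[i] -= h
        J[:, i] = (forward(mp, freqs) - forward(mm, freqs)) / (2 * h)
    return J
```

The same pattern applies to any forward operator: a disagreement between the two matrices localizes an error either in the derivation of the analytic sensitivities or in the forward code.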
The calculation of dose rates from rectangular sources
International Nuclear Information System (INIS)
Hartley, B.M.
1998-01-01
A common problem in radiation protection is the calculation of dose rates from extended sources of irregular shape. Dose rates are proportional to the solid angle subtended by the source at the point of measurement. Simple methods of calculating solid angles would assist in estimating dose rates from large-area sources and therefore improve predictive dose estimates when planning work near such sources. The estimation of dose rates is of particular interest to producers of radioactive ores, but other users of bulk radioactive materials may have similar interests. Spherical trigonometry can assist in the determination of solid angles, and a simple equation is derived here for the dose at any distance from a rectangular surface. The solid angle subtended by complex shapes can be determined by modelling the area as a patchwork of rectangular areas and summing the solid angles from each rectangle. The dose rates from bags of thorium-bearing ores are of particular interest in Western Australia, and measured dose rates from bags and containers of monazite are compared with theoretical estimates based on calculations of solid angle. The agreement is fair, but more detailed measurements would be needed to confirm the agreement with theory. (author)
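The patchwork idea can be sketched directly. The corner expression used here, Ω = arctan(ab / (d√(a² + b² + d²))), is the standard closed form for a rectangle viewed from a point directly above one of its corners; an arbitrary point over the rectangle is handled by summing four corner sub-rectangles. This is an illustration of the decomposition, not the paper's derivation:

```python
import math

def corner_solid_angle(a, b, d):
    """Solid angle of an a x b rectangle seen from a point at
    perpendicular distance d directly above one of its corners."""
    return math.atan(a * b / (d * math.sqrt(a * a + b * b + d * d)))

def rect_solid_angle(w, l, x, y, d):
    """Solid angle of a w x l rectangle seen from height d above the
    interior point (x, y), measured from one corner (0 <= x <= w,
    0 <= y <= l): sum of the four corner sub-rectangles."""
    return (corner_solid_angle(x, y, d) +
            corner_solid_angle(w - x, y, d) +
            corner_solid_angle(x, l - y, d) +
            corner_solid_angle(w - x, l - y, d))
```

Two limiting cases make useful checks: far from a small rectangle the result tends to area/d², and close above the center of a very large one it tends to the half-space value 2π.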
Matrix kernels for MEG and EEG source localization and imaging
International Nuclear Information System (INIS)
Mosher, J.C.; Lewis, P.S.; Leahy, R.M.
1994-01-01
The most widely used model for electroencephalography (EEG) and magnetoencephalography (MEG) assumes a quasi-static approximation of Maxwell's equations and a piecewise homogeneous conductor model. Both models contain an incremental field element that linearly relates an incremental source element (current dipole) to the field or voltage at a distant point. The explicit form of the field element is dependent on the head modeling assumptions and sensor configuration. Proper characterization of this incremental element is crucial to the inverse problem. The field element can be partitioned into the product of a vector dependent on sensor characteristics and a matrix kernel dependent only on head modeling assumptions. We present here the matrix kernels for the general boundary element model (BEM) and for MEG spherical models. We show how these kernels are easily interchanged in a linear algebraic framework that includes sensor specifics such as orientation and gradiometer configuration. We then describe how this kernel is easily applied to "gain" or "transfer" matrices used in multiple dipole and source imaging models.
Astrocytes as a source for Extracellular matrix molecules and cytokines
Directory of Open Access Journals (Sweden)
Stefan eWiese
2012-06-01
Full Text Available Research of the past 25 years has shown that astrocytes do more than participate in building up the blood-brain barrier and detoxifying the active synapse by reuptake of neurotransmitters and ions. Indeed, astrocytes express neurotransmitter receptors and, as a consequence, respond to stimuli. Deeper knowledge of the differentiation processes during development of the central nervous system (CNS) might help explain and even treat neurological diseases like Alzheimer's disease, amyotrophic lateral sclerosis (ALS) and psychiatric disorders in which astrocytes have been shown to play a role. Astrocytes and oligodendrocytes develop from a multipotent stem cell that prior to this has produced primarily neuronal precursor cells. This switch towards the more astroglial differentiation is regulated by a change in receptor composition on the cell surface and in responsiveness to the respective trophic factors fibroblast growth factor (FGF) and epidermal growth factor (EGF). The glial precursor cell is driven in the astroglial direction by signaling molecules like ciliary neurotrophic factor (CNTF), bone morphogenetic proteins (BMPs), and EGF. However, the early astrocytes influence their environment not only by releasing and responding to diverse soluble factors but also by expressing a wide range of extracellular matrix (ECM) molecules, in particular proteoglycans of the lectican family and tenascins. Lately these ECM molecules have been shown to participate in glial development. In this regard, especially the matrix protein tenascin C (Tnc) proved to be an important regulator of astrocyte precursor cell proliferation and migration during spinal cord development. On the other hand, ECM molecules expressed by reactive astrocytes are also known to act mostly in an inhibitory fashion under pathophysiological conditions. In this regard, we further summarize recent data concerning the role of chondroitin sulfate proteoglycans and Tnc under pathological
Shielding calculations for the Intense Neutron Source Facility. Final report
International Nuclear Information System (INIS)
Battat, M.E.; Henninger, R.J.; Macdonald, J.L.; Dudziak, D.J.
1978-06-01
Results of shielding calculations for the Intense Neutron Source (INS) facility are presented. The INS facility is designed to house two sources, each of which will produce D-T neutrons with intensities in the range of 1 to 3 x 10^15 n/s on a continuous basis. Topics covered include the design of the biological shield, the use of two-dimensional discrete-ordinates results to specify the source terms for a Monte Carlo skyshine calculation, air activation, and dose rates in the source cell (after shutdown) due to activation of the biological shield
Dose calculation for iridium-192 sources by a personal computer
International Nuclear Information System (INIS)
Takahashi, Kenichi; Ishigaki, Hideyo; Udagawa, Kimio; Saito, Masami; Yamaguchi, Kyoko
1988-01-01
Recently Ir-192 sources have been used for interstitial radiotherapy instead of Ra-226 needles. One end of an Ir-192 single-pin source is formed into a circlet, and implanted Ir-192 sources are not always straight. The authors have therefore developed a new dose calculation system, which employs the conventional method considering oblique filtration for linear sources and a multi-point-source method for curved sources. Conventionally, the positions of sources in three dimensions are determined from projections of the implanted sources on orthogonal or stereo radiographs, but it is frequently impossible to define the ends of the sources on account of overlap. The authors have therefore devised a method to determine the positions of the sources from two radiographs taken in arbitrary directions. For tongue cancer, injuries of the mandible occur so frequently after interstitial radiotherapy that calculation of the gingival dose is necessary. The positions of the gingival line are determined from two directional radiographs as well. Furthermore, three-dimensional dose distributions can be displayed on the cathode ray tube. These calculations are performed using a personal computer because of its distinctive features, such as superior cost performance and flexibility for development and modification of programs. (author)
Dirac R-matrix calculations of electron-impact excitation of neon-like krypton
Energy Technology Data Exchange (ETDEWEB)
Griffin, D C; Ballance, C P [Department of Physics, Rollins College, Winter Park, FL 32789 (United States); Mitnik, D M [Instituto de Astronomia y Fisica del Espacio, and Departamento de Fisica, Universidad de Buenos Aires (Argentina); Berengut, J C [School of Physics, University of New South Wales, Sydney 2052 (Australia)
2008-11-14
We have employed the Dirac R-matrix method to determine electron-impact excitation cross sections and effective collision strengths in Ne-like Kr{sup 26+}. Both the configuration-interaction expansion of the target and the close-coupling expansion employed in the scattering calculation included 139 levels up through n = 5. Many of the cross sections are found to exhibit very strong resonances, yet the effects of radiation damping on the resonance contributions are relatively small. Using these collisional data along with multi-configuration Dirac-Fock radiative rates, we have performed collisional-radiative modeling calculations to determine line-intensity ratios for various radiative transitions that have been employed for diagnostics of other Ne-like ions.
Calculation of the counting efficiency for extended sources
International Nuclear Information System (INIS)
Korun, M.; Vidmar, T.
2002-01-01
A computer program for the calculation of efficiency calibration curves for extended samples counted on gamma- and X-ray spectrometers is described. The program calculates efficiency calibration curves for homogeneous cylindrical samples placed coaxially with the symmetry axis of the detector. The method of calculation is based on integration, over the sample volume, of the efficiencies for point sources measured in free space on an equidistant grid of points. The attenuation of photons within the sample is taken into account using the self-attenuation function calculated with a two-dimensional detector model. (author)
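A minimal sketch of the volume-integration idea: average a point-source efficiency over a cylindrical sample, weighting each volume element by a self-attenuation factor. The efficiency function and the simplified attenuation path (straight down through the material thickness z below the emission point) are placeholders, not the program's measured grid or its two-dimensional detector model:

```python
import numpy as np

def cylinder_efficiency(eps_point, mu, R, H, n=40):
    """Efficiency for a homogeneous cylindrical sample of radius R and
    height H: volume average of a point-source efficiency eps_point(r, z)
    weighted by a crude self-attenuation factor exp(-mu * z), evaluated
    on a midpoint grid in (r, z)."""
    rs = (np.arange(n) + 0.5) * R / n
    zs = (np.arange(n) + 0.5) * H / n
    num = 0.0
    vol = 0.0
    for r in rs:
        for z in zs:
            dv = 2 * np.pi * r * (R / n) * (H / n)  # ring volume element
            num += eps_point(r, z) * np.exp(-mu * z) * dv
            vol += dv
    return num / vol
```

With a constant point efficiency the result reduces to the analytic average of the attenuation factor, (1 - exp(-mu*H)) / (mu*H), which makes a convenient check of the quadrature.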
Calculation of dose for β point and sphere sources in soft tissue
International Nuclear Information System (INIS)
Sun Fuyin; Yuan Shuyu; Tan Jian
1999-01-01
Objective: To compare the distributions of dose rate calculated by three typical methods for point and sphere sources of a β nuclide. Methods: Calculating and comparing the distributions of dose rate from 32P β point and sphere sources in soft tissue by the three methods published in references [1], [2] and [3], respectively. Results: For a point source of 3.7 x 10^7 Bq (1 mCi), the variations among the calculation results of the three formulas are within 10% if r ≤ 0.35 g/cm^2, r being the distance from the source, and larger than 10% if r > 0.35 g/cm^2. For a sphere source whose volume is 50 μl and activity is 3.7 x 10^7 Bq (1 mCi), the variations are within 10% if z ≤ 0.15 g/cm^2, z being the distance from the surface of the sphere source to a point outside the sphere. Conclusion: The agreement of the dose rate distributions calculated by the three methods for point and sphere β sources is good if the distances from the point source or the surface of the sphere source to the points observed are small, and poor if they are large.
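A sphere-source dose rate can be sketched as a direct integration of a point kernel over the sphere volume, which is effectively what the compared methods do. The kernel shape below is purely illustrative (a simple exp(-nu*r)/r² falloff, not Loevinger's formula or any of the referenced methods):

```python
import numpy as np

def point_kernel(r, k=1.0, nu=3.0):
    """Hypothetical beta point dose-rate kernel ~ exp(-nu*r)/r**2
    (illustrative shape only; real beta dose kernels differ)."""
    return k * np.exp(-nu * r) / r**2

def sphere_dose(R, z, n=60):
    """Dose rate per unit activity concentration at distance z outside a
    uniform sphere of radius R, by midpoint integration of the point
    kernel over the sphere volume in spherical coordinates (r, theta)."""
    rs = (np.arange(n) + 0.5) * R / n
    ts = (np.arange(n) + 0.5) * np.pi / n
    d_obs = R + z                     # centre-to-observation distance
    total = 0.0
    for r in rs:
        for t in ts:
            dv = 2 * np.pi * r**2 * np.sin(t) * (R / n) * (np.pi / n)
            s = np.sqrt(r * r + d_obs * d_obs - 2 * r * d_obs * np.cos(t))
            total += point_kernel(s) * dv
    return total
```

In the limit of a very small sphere the integral collapses to volume times the point kernel, mirroring the abstract's observation that point- and sphere-source results agree at short distances.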
Matrix elements and few-body calculations within the unitary correlation operator method
International Nuclear Information System (INIS)
Roth, R.; Hergert, H.; Papakonstantinou, P.
2005-01-01
We employ the unitary correlation operator method (UCOM) to construct correlated, low-momentum matrix elements of realistic nucleon-nucleon interactions. The dominant short-range central and tensor correlations induced by the interaction are included explicitly by a unitary transformation. Using correlated momentum-space matrix elements of the Argonne V18 potential, we show that the unitary transformation eliminates the strong off-diagonal contributions caused by the short-range repulsion and the tensor interaction and leaves a correlated interaction dominated by low-momentum contributions. We use correlated harmonic oscillator matrix elements as input for no-core shell model calculations for few-nucleon systems. Compared to the bare interaction, the convergence properties are dramatically improved. The bulk of the binding energy can already be obtained in very small model spaces or even with a single Slater determinant. Residual long-range correlations, not treated explicitly by the unitary transformation, can easily be described in model spaces of moderate size allowing for fast convergence. By varying the range of the tensor correlator we are able to map out the Tjon line and can in turn constrain the optimal correlator ranges. (orig.)
Matrix elements and few-body calculations within the unitary correlation operator method
International Nuclear Information System (INIS)
Roth, R.; Hergert, H.; Papakonstantinou, P.; Neff, T.; Feldmeier, H.
2005-01-01
We employ the unitary correlation operator method (UCOM) to construct correlated, low-momentum matrix elements of realistic nucleon-nucleon interactions. The dominant short-range central and tensor correlations induced by the interaction are included explicitly by a unitary transformation. Using correlated momentum-space matrix elements of the Argonne V18 potential, we show that the unitary transformation eliminates the strong off-diagonal contributions caused by the short-range repulsion and the tensor interaction and leaves a correlated interaction dominated by low-momentum contributions. We use correlated harmonic oscillator matrix elements as input for no-core shell model calculations for few-nucleon systems. Compared to the bare interaction, the convergence properties are dramatically improved. The bulk of the binding energy can already be obtained in very small model spaces or even with a single Slater determinant. Residual long-range correlations, not treated explicitly by the unitary transformation, can easily be described in model spaces of moderate size allowing for fast convergence. By varying the range of the tensor correlator we are able to map out the Tjon line and can in turn constrain the optimal correlator ranges
Directory of Open Access Journals (Sweden)
John H. Cantrell
2015-03-01
Full Text Available The chemical treatment of carbon fibers used in carbon fiber-epoxy matrix composites greatly affects the fraction of hydrogen bonds (H-bonds) formed at the fiber-matrix interface. The H-bonds are major contributors to the fiber-matrix interfacial shear strength and play a direct role in the interlaminar shear strength (ILSS) of the composite. The H-bond contributions τ to the ILSS and the magnitudes KN of the fiber-matrix interfacial stiffness moduli of seven carbon fiber-epoxy matrix composites, subjected to different fiber surface treatments, are calculated from the Morse potential for the interactions of hydroxyl and carboxyl acid groups formed on the carbon fiber surfaces with epoxy receptors. The calculated τ values range from 7.7 MPa to 18.4 MPa in magnitude, depending on fiber treatment. The calculated KN values fall in the range (2.01-4.67) x 10^17 N m^-3. The average ratio KN/|τ| is calculated to be (2.59 ± 0.043) x 10^10 m^-1 for the seven composites, suggesting a nearly linear connection between ILSS and H-bonding at the fiber-matrix interfaces. The linear connection indicates that τ may be assessable nondestructively from measurements of KN via a technique such as angle beam ultrasonic spectroscopy.
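A per-bond stiffness derived from a Morse potential follows from the curvature at the potential minimum: for V(r) = De(1 - exp(-a(r - r0)))², the second derivative at r0 is V''(r0) = 2 De a². A minimal check of that identity (the parameter values are arbitrary, not the paper's fitted hydroxyl/carboxyl values):

```python
import math

def morse_stiffness(De, a):
    """Per-bond stiffness at equilibrium for the Morse potential
    V(r) = De*(1 - exp(-a*(r - r0)))**2, i.e. V''(r0) = 2*De*a**2."""
    return 2.0 * De * a * a

def numeric_stiffness(De, a, r0=1.0, h=1e-5):
    # central second difference of V at its minimum, for comparison
    V = lambda r: De * (1.0 - math.exp(-a * (r - r0)))**2
    return (V(r0 + h) - 2.0 * V(r0) + V(r0 - h)) / (h * h)
```

An areal bond density would then scale this per-bond value up to an interfacial stiffness modulus; that scaling factor is an assumption here, not taken from the paper.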
Hedberg, Emma; Gidhagen, Lars; Johansson, Christer
Sampling of particles (PM10) was conducted during a one-year period at two rural sites in Central Chile, Quillota and Linares. The samples were analyzed for elemental composition. The data sets underwent source-receptor analysis in order to estimate the sources and their abundances in the PM10 size fraction, using the factor-analytical method positive matrix factorization (PMF). The analysis showed that PM10 was dominated by soil resuspension at both sites during the summer months, while during winter traffic dominated the particle mass at Quillota and local wood burning dominated the particle mass at Linares. Two copper smelters impacted the Quillota station, contributing on average 10% and 16% of PM10 during summer and winter, respectively. One smelter impacted Linares, with 8% and 19% of PM10 in the summer and winter, respectively. For arsenic, the two smelters accounted for 87% of the monitored arsenic levels at Quillota, and at Linares one smelter contributed 72% of the measured mass. In comparison with PMF, the use of a dispersion model tended to overestimate the smelter contribution to arsenic levels at both sites. The robustness of the PMF model was tested by using randomly reduced data sets, in which 85%, 70%, 50% and 33% of the samples were included. In this way the ability of the model to reconstruct the sources initially found with the original data set could be tested. On average over all sources, the relative standard deviation of the variables identifying the sources increased from 7% to 25% when the data set was decreased from 85% to 33% of the samples, indicating that the solution initially found was very stable. It was also noted, however, that sources due to industrial or combustion processes were more sensitive to the size of the data set than natural sources such as local soil and sea spray.
International Nuclear Information System (INIS)
Burkitt, A.N.; Irving, A.C.
1988-01-01
Two of the methods that are widely used in lattice gauge theory calculations requiring inversion of the fermion matrix are the Lanczos and conjugate gradient algorithms. These algorithms are already known to be closely related. In fact, for matrix inversion in exact arithmetic they give identical results at each iteration and are just alternative formulations of a single algorithm. This equivalence survives rounding errors. We give the identities between the coefficients of the two formulations, enabling many of their best features to be combined. (orig.)
Response matrix of a multisphere neutron spectrometer with a 3He proportional counter
International Nuclear Information System (INIS)
Vega C, H.R.; Manzanares A, E.; Hernandez D, V.M.; Mercado S, G.A.
2005-01-01
The response matrix of a Bonner sphere spectrometer was calculated with the MCNP code. As thermal neutron counter, the spectrometer has a 3.2 cm-diameter 3He-filled proportional counter located at the center of a set of polyethylene spheres. The response was calculated for 0, 3, 5, 6, 8, 10, 12, and 16 inch-diameter polyethylene spheres for neutrons with energies from 10^-9 to 20 MeV. The response matrix was compared with a set of responses measured with several monoenergetic neutron sources, and the calculated matrix agrees with the experimental results. The matrix was also compared with the response matrix calculated for the PTB C spectrometer. Even though that calculation was carried out using a detailed model of the proportional counter, both matrices agree; small differences are observed in the bare case because of the difference in the models used during the calculations. Other differences occur for some spheres at 14.8 and 20 MeV, probably due to differences in the cross sections used in the two calculations. (Author) 28 refs., 1 tab., 6 figs
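Using such a response matrix reduces to a fold, C = R·φ, and spectrum unfolding inverts that relation. A toy sketch with made-up matrix entries (real Bonner-sphere unfolding is ill-posed and typically uses regularized or iterative methods rather than plain least squares):

```python
import numpy as np

# toy response matrix: rows = spheres, columns = energy bins (made-up numbers)
R = np.array([[0.9, 0.4, 0.1],
              [0.5, 0.8, 0.3],
              [0.2, 0.5, 0.7],
              [0.1, 0.3, 0.9]])

phi_true = np.array([2.0, 1.0, 0.5])   # fluence per bin (arbitrary units)
counts = R @ phi_true                  # folding: predicted sphere readings

# least-squares unfolding (overdetermined: 4 readings, 3 bins)
phi_est, *_ = np.linalg.lstsq(R, counts, rcond=None)
```

With noise-free, consistent counts and a full-column-rank matrix the least-squares solution recovers the spectrum exactly; measured counts with uncertainties are where the more robust unfolding machinery becomes necessary.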
Subcriticality calculation in nuclear reactors with external neutron sources
Energy Technology Data Exchange (ETDEWEB)
Silva, Adilson Costa da; Martinez, Aquilino Senra; Silva, Fernando Carvalho da [Universidade Federal do Rio de Janeiro (UFRJ), RJ (Brazil). Coordenacao dos Programas de Pos-graduacao de Engenharia (COPPE). Programa de Engenharia Nuclear]. E-mails: asilva@con.ufrj.br; aquilino@lmp.ufrj.br; fernando@con.ufrj.br
2007-07-01
The main objective of this paper is the development of a methodology to monitor subcriticality. We used the inverse point kinetics equation with 6 precursor groups and external neutron sources for the calculation of reactivity. The input data for the inverse point kinetics equation were adjusted in order to use the neutron counting rates obtained from the subcritical multiplication (1/M) in a nuclear reactor. In this paper, we assumed that the external neutron source strength is constant and defined it in terms of a known initial condition. The results obtained from the inverse point kinetics equation with external neutron sources were compared with the results of a benchmark calculation, and showed good accuracy (author)
Subcriticality calculation in nuclear reactors with external neutron sources
International Nuclear Information System (INIS)
Silva, Adilson Costa da; Martinez, Aquilino Senra; Silva, Fernando Carvalho da
2007-01-01
The main objective of this paper is the development of a methodology to monitor subcriticality. We used the inverse point kinetics equation with 6 precursor groups and external neutron sources for the calculation of reactivity. The input data for the inverse point kinetics equation were adjusted in order to use the neutron counting rates obtained from the subcritical multiplication (1/M) in a nuclear reactor. In this paper, we assumed that the external neutron source strength is constant and defined it in terms of a known initial condition. The results obtained from the inverse point kinetics equation with external neutron sources were compared with the results of a benchmark calculation, and showed good accuracy (author)
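Inverse point kinetics with an external source solves the balance dn/dt = ((ρ-β)/Λ)n + Σλ_iC_i + S for the reactivity, ρ = β + (Λ/n)(dn/dt - Σλ_iC_i - S). A minimal sketch with representative six-group delayed-neutron data (the constants, generation time, and source value are illustrative, not the paper's reactor data):

```python
import numpy as np

# six-group delayed-neutron data (typical thermal-fission values, illustrative)
beta_i = np.array([2.66e-4, 1.491e-3, 1.316e-3, 2.849e-3, 8.96e-4, 1.82e-4])
lam_i  = np.array([1.27e-2, 3.17e-2, 1.16e-1, 3.11e-1, 1.40, 3.87])
beta, LAM = beta_i.sum(), 2.0e-5       # total beta, generation time (s)

def inverse_kinetics(t, n, S):
    """Reactivity history from a neutron-density trace n(t) with a constant
    external source S: precursors advanced with an implicit Euler step,
    starting from equilibrium at t[0]."""
    C = beta_i * n[0] / (lam_i * LAM)  # equilibrium precursors at start
    rho = np.empty(len(t))
    rho[0] = np.nan                    # needs a derivative, undefined at t[0]
    for k in range(1, len(t)):
        dt = t[k] - t[k - 1]
        dndt = (n[k] - n[k - 1]) / dt
        # implicit Euler for dC_i/dt = beta_i*n/LAM - lam_i*C_i
        C = (C + dt * beta_i * n[k] / LAM) / (1.0 + dt * lam_i)
        rho[k] = beta + LAM / n[k] * (dndt - lam_i @ C - S)
    return rho
```

A useful consistency check: for a steady subcritical state (constant n, equilibrium precursors), the source balance gives ρ = -ΛS/n, so the routine should return that constant value at every step.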
Ionization efficiency calculations for cavity thermoionization ion source
International Nuclear Information System (INIS)
Turek, M.; Pyszniak, K.; Drozdziel, A.; Sielanko, J.; Maczka, D.; Yuskevich, Yu.V.; Vaganov, Yu.A.
2009-01-01
The numerical model of ionization in a thermoionization ion source is presented. A review of ion source ionization efficiency calculation results for various kinds of extraction field is given. The dependence of ionization efficiency on working parameters such as ionizer length and extraction voltage is discussed. Numerical simulation results are compared to theoretical predictions obtained from a simplified ionization model
International Nuclear Information System (INIS)
Krenciglowa, E.M.; Kung, C.L.; Kuo, T.T.S.; Osnes, E. (Department of Physics, State University of New York at Stony Brook, Stony Brook, New York 11794)
1976-01-01
Different definitions of the reaction matrix G appropriate to the calculation of nuclear structure are reviewed and discussed. Qualitative physical arguments are presented in support of a two-step calculation of the G-matrix for finite nuclei. In the first step the high-energy excitations are included using orthogonalized plane-wave intermediate states, and in the second step the low-energy excitations are added in, using harmonic oscillator intermediate states. Accurate calculations of G-matrix elements for nuclear structure calculations in the A ≈ 18 region are performed following this procedure and treating the Pauli exclusion operator Q/sub 2p/ by the method of Tsai and Kuo. The treatment of Q/sub 2p/, the effect of the intermediate-state spectrum and the energy dependence of the reaction matrix are investigated in detail. The present matrix elements are compared with various matrix elements given in the literature. In particular, close agreement is obtained with the matrix elements calculated by Kuo and Brown using approximate methods
The finite element response Matrix method
International Nuclear Information System (INIS)
Nakata, H.; Martin, W.R.
1983-01-01
A new method for global reactor core calculations is described. This method is based on a unique formulation of the response matrix method, implemented with a higher order finite element method. The unique aspects of this approach are twofold. First, there are two levels to the overall calculational scheme: the local or assembly level and the global or core level. Second, the response matrix scheme, which is formulated at both levels, consists of two separate response matrices rather than one response matrix as is generally the case. These separate response matrices are seen to be quite beneficial for the criticality eigenvalue calculation, because they are independent of k-eff. The response matrices are generated from a Galerkin finite element solution to the weak form of the diffusion equation, subject to an arbitrary incoming current and an arbitrary distributed source. Calculational results are reported for two test problems, the two-dimensional International Atomic Energy Agency benchmark problem and a two-dimensional pressurized water reactor test problem (Biblis reactor), and they compare well with standard coarse mesh methods with respect to accuracy and efficiency. Moreover, the accuracy (and capability) is comparable to fine mesh for a fraction of the computational cost. Extension of the method to treat heterogeneous assemblies and spatial depletion effects is discussed
Coupling-matrix approach to the Chern number calculation in disordered systems
International Nuclear Information System (INIS)
Zhang Yi-Fu; Ju Yan; Sheng Li; Shen Rui; Xing Ding-Yu; Yang Yun-You; Sheng Dong-Ning
2013-01-01
The Chern number is often used to distinguish different topological phases of matter in two-dimensional electron systems. A fast and efficient coupling-matrix method is designed to calculate the Chern number in finite crystalline and disordered systems. To show its effectiveness, we apply the approach to the Haldane model and the lattice Hofstadter model, and obtain the correct quantized Chern numbers. The disorder-induced topological phase transition is well reproduced, when the disorder strength is increased beyond the critical value. We expect the method to be widely applicable to the study of topological quantum numbers. (rapid communication)
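The coupling-matrix method itself targets real-space (finite and disordered) systems; for the crystalline benchmark cases mentioned, the Chern number can be sketched with the standard momentum-space plaquette (Fukui-Hatsugai-Suzuki) algorithm applied to the Haldane model. The gauge choice, lattice conventions, and parameter values below are illustrative, not taken from the paper:

```python
import numpy as np

SQ3 = np.sqrt(3.0)
# next-nearest-neighbour (same-sublattice) vectors, consistent chirality
B = [np.array([-SQ3, 0.0]),
     np.array([SQ3 / 2, -1.5]),
     np.array([SQ3 / 2, 1.5])]

def haldane_h(k, t1=1.0, t2=0.1, phi=np.pi / 2, M=0.0):
    """Haldane-model Bloch Hamiltonian in a k-periodic gauge, so that
    H(k + G) = H(k) and the Brillouin-zone grid can wrap periodically."""
    f = t1 * (1 + np.exp(-1j * k @ B[2]) + np.exp(1j * k @ B[1]))
    dz = M - 2 * t2 * np.sin(phi) * sum(np.sin(k @ b) for b in B)
    return np.array([[dz, f], [np.conj(f), -dz]])

def chern_number(N=24, **kw):
    """Lower-band Chern number via gauge-invariant plaquette fluxes
    (Fukui-Hatsugai-Suzuki) on an N x N grid over the Brillouin zone."""
    G1 = np.array([2 * np.pi / SQ3, -2 * np.pi / 3])   # reciprocal vectors
    G2 = np.array([0.0, 4 * np.pi / 3])
    u = np.empty((N, N, 2), dtype=complex)
    for i in range(N):
        for j in range(N):
            k = (i / N) * G1 + (j / N) * G2
            _, vecs = np.linalg.eigh(haldane_h(k, **kw))
            u[i, j] = vecs[:, 0]                       # lower band
    total = 0.0
    for i in range(N):
        for j in range(N):
            ip, jp = (i + 1) % N, (j + 1) % N
            # Berry flux through one plaquette from four link variables
            total += np.angle(np.vdot(u[i, j], u[ip, j]) *
                              np.vdot(u[ip, j], u[ip, jp]) *
                              np.vdot(u[ip, jp], u[i, jp]) *
                              np.vdot(u[i, jp], u[i, j]))
    return round(total / (2 * np.pi))
```

The plaquette product is gauge invariant, so the arbitrary eigenvector phases returned at each grid point do not matter; the sum is exactly 2π times an integer for a sufficiently fine grid.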
Calculations of the properties of superconducting alloys via the average T-matrix approximation
International Nuclear Information System (INIS)
Chatterjee, P.
1980-01-01
The theoretical formula of McMillan, modified via the multiple-scattering theory by Gomersall and Gyorffy, has been very successful in computing the electron-phonon coupling constant (lambda) and the transition temperature (T_c) of many superconducting elements and compounds. For disordered solids, such as substitutional alloys, however, this theory fails because of the breakdown of the translational symmetry used in the multiple-scattering theory. Under these conditions the problem can still be solved if the t-matrix is averaged in the random phase approximation (average T-matrix approximation). Gomersall and Gyorffy's expression for lambda is reformulated in the random phase approximation. This theory is applied to calculate lambda and T_c of the binary substitutional NbMo alloy system at different concentrations. The results appear to be in fair agreement with experiments. (author)
International Nuclear Information System (INIS)
Josefsson, T.W.; Smith, A.E.
1994-01-01
Inelastic scattering of electrons in a crystalline environment may be represented by a complex non-hermitian potential. Complete generalised expressions for this inelastic electron scattering potential matrix, including virtual inelastic scattering, are derived for outer-shell electron and plasmon excitations. The relationship between these expressions and the general anisotropic dielectric response matrix of the solid is discussed. These generalised expressions necessarily include the off-diagonal terms representing effects due to departure from translational invariance in the interaction. Results are presented for the diagonal band-structure-dependent inelastic and virtual inelastic scattering potentials for Si, from a calculation of the inverse dielectric matrix in the random phase approximation. Good agreement is found with experiment as a function of incident energy from 10 eV to 100 keV. Anisotropy effects, and hence the interaction delocalisation represented by the off-diagonal scattering potential terms, are found to be significant below 1 keV. 38 refs., 2 figs
SKYSHIN: A computer code for calculating radiation dose over a barrier
International Nuclear Information System (INIS)
Atwood, C.L.; Boland, J.R.; Dickman, P.T.
1986-11-01
SKYSHIN is a computer code for calculating the radiation dose (mrem) when there is a barrier between a point source and the receptor. The two geometrical configurations considered are: the source and receptor separated by a rectangular wall, and the source at the bottom of a cylindrical hole in the ground. Each gamma ray traveling over the barrier is assumed to be scattered at a single point. The dose to a receptor from such paths is numerically integrated for the total dose, with symmetry used to reduce the triple integral to a double integral. The buildup factor used along a straight line through air is based on published data and extrapolated in a stable way to low energies. This buildup factor was validated by comparing calculated and experimental line-of-sight doses. The entire code shows good agreement with limited field data. The code runs on a CDC or a Vax computer and could easily be modified for others
Source term calculations - Ringhals 2 PWR
International Nuclear Information System (INIS)
Johansson, L.L.
1998-02-01
This project was performed within the fifth and final phase of sub-project RAK-2.1 of the Nordic Co-operative Reactor Safety Program, NKS. RAK-2.1 has also included studies of reflooding of a degraded core, recriticality and late-phase melt progression. Earlier source term calculations for Swedish nuclear power plants are based on the integral code MAAP. A need was recognised to compare these calculations with calculations done with mechanistic codes. In the present work SCDAP/RELAP5 and CONTAIN were used. Only limited results could be obtained within the frame of RAK-2.1, since many problems were encountered using the SCDAP/RELAP5 code. The main obstacle was the extremely long execution times of the MOD3.1 version, but also some dubious fission product calculations. However, some interesting results were obtained for the studied sequence, a total loss of AC power. The report describes the modelling approach for SCDAP/RELAP5 and CONTAIN, and discusses results for the transient including the event of a surge line creep rupture. The study will probably be completed later, provided that an improved SCDAP/RELAP5 code version becomes available. (au)
FEMB, 2-D Homogeneous Neutron Diffusion in X-Y Geometry with Keff Calculation, Dyadic Fission Matrix
International Nuclear Information System (INIS)
Misfeldt, I.B.
1987-01-01
1 - Nature of physical problem solved: The two-dimensional neutron diffusion equation (x-y geometry) is solved in the homogeneous form (k-eff calculation). The boundary conditions specify each group current as a linear homogeneous function of the group fluxes (gamma matrix concept). For each material, the fission matrix is assumed to be dyadic. 2 - Method of solution: Finite element formulation with Lagrange-type elements. Solution technique: SOR with extrapolation. 3 - Restrictions on the complexity of the problem: The maximum order of the Lagrange elements is 6.
Miyashita, Keiko; Oyama, Tohru; Sakuta, Tetsuya; Tokuda, Masayuki; Torii, Mitsuo
2012-06-01
Anandamide (N-arachidonoylethanolamine [AEA]) is one of the main endocannabinoids. Endocannabinoids are implicated in various physiological and pathologic functions, including not only nociception but also regeneration and inflammation. The role of the endocannabinoid system in peripheral organs was recently described. The aim of this study was to investigate the effect of AEA on matrix metalloproteinase (MMP)-2 induction in human dental pulp cells (HPC). We examined AEA-induced MMP-2 production and the expression of AEA receptors (cannabinoid [CB] receptor-1, CB2, and transient receptor potential vanilloid-1 [TRPV1]) in HPC by Western blot. MMP-2 concentrations in supernatants were determined by enzyme-linked immunosorbent assay. We then investigated the role of the AEA receptors and mitogen-activated protein kinase in AEA-induced MMP-2 production in HPC. AEA significantly induced MMP-2 production in HPC. HPC expressed all 3 types of AEA receptor (CB1, CB2, and TRPV1). AEA-induced MMP-2 production was blocked by CB1 or TRPV1 antagonists and by small interfering RNA for CB1 or TRPV1. Furthermore, c-Jun N-terminal kinase inhibitor also reduced MMP-2 production. We demonstrated for the first time that AEA induced MMP-2 production via CB1 and TRPV1 in HPC. Copyright © 2012 American Association of Endodontists. Published by Elsevier Inc. All rights reserved.
Luo, D.; Pradhan, A. K.
1990-01-01
The new R-matrix package for comprehensive close-coupling calculations of electron scattering with the first three ions in the boron isoelectronic sequence, the astrophysically significant C(+), N(2+), and O(3+), is presented. The collision strengths are calculated in the LS-coupling approximation, as well as in the pair-coupling scheme, for the transitions among the fine-structure sublevels. Calculations are carried out at a large number of energies in order to study the detailed effects of autoionizing resonances.
Research on Primary Shielding Calculation Source Generation Codes
Zheng, Zheng; Mei, Qiliang; Li, Hui; Shangguan, Danhua; Zhang, Guangchun
2017-09-01
Primary Shielding Calculation (PSC) plays an important role in reactor shielding design and analysis. In order to facilitate PSC, a source generation code is developed to generate cumulative distribution functions (CDFs) for the source particle sampling code of the J Monte Carlo Transport (JMCT) code, and a source particle sampling code is developed to sample source particle directions, types, coordinates, energies and weights from the CDFs. A further source generation code is developed to transform three-dimensional (3D) power distributions in x-y-z geometry to source distributions in r-θ-z geometry for the J Discrete Ordinate Transport (JSNT) code. Validations on the PSC models of the Qinshan No.1 nuclear power plant (NPP) and the CAP1400 and CAP1700 reactors are performed. Numerical results show that the theoretical model and the codes are both correct.
Energy Technology Data Exchange (ETDEWEB)
Krebs, M.O.; Trovero, F.; Desban, M.; Gauchy, C.; Glowinski, J.; Kemel, M.L. (College de France, Paris (France))
1991-05-01
Striosome- and matrix-enriched striatal zones were defined in coronal and sagittal brain sections of the rat on the basis of ³H-naloxone binding to mu-opiate receptors (a striosome-specific marker). Then, using a new in vitro microsuperfusion device, the NMDA (50 microM)-evoked release of newly synthesized ³H-dopamine (³H-DA) was examined in these four striatal areas under Mg²⁺-free conditions. The amplitudes of the responses were lower in striosomal areas (171 +/- 6% and 161 +/- 5% of the spontaneous release) than in matrix areas (223 +/- 6% and 248 +/- 12%), even when glycine (1 or 100 microM) was coapplied (in the presence of 1 microM strychnine). In the four areas, the NMDA-evoked release of ³H-DA was blocked completely by Mg²⁺ (1 mM) or (+)-5-methyl-10,11-dihydro-5H-dibenzo(a,d)cyclohepten-5,10-imine maleate (MK-801; 1 microM) and almost totally abolished by kynurenate (100 microM). Because the tetrodotoxin (TTX)-resistant NMDA-evoked release of ³H-DA was similar in striosome- (148 +/- 5% and 152 +/- 6%) and matrix-enriched (161 +/- 5% and 156 +/- 7%) areas, the indirect (TTX-sensitive) component of the NMDA-evoked responses, which involves striatal neurons and/or afferent fibers, seems more important in the matrix- than in the striosome-enriched areas. The modulation of DA release by cortical glutamate- and/or aspartate-containing inputs through NMDA receptors in the matrix thus appears to be partly distinct from that observed in the striosomes, providing some functional basis for the histochemical striatal heterogeneity.
Multiphonon states in even-even spherical nuclei. Pt.1. Calculation of the overlap matrix
International Nuclear Information System (INIS)
Piepenbring, R.; Protasov, K.V.; Silvestre-Brac, B.
1995-01-01
The multiphonon method, previously developed for deformed nuclei, is extended to the case of even-even spherical nuclei. Recursion formulae, well suited for numerical calculations, are given for the overlap matrix elements. The method is illustrated for a single j-shell, where S-, D-, G-, ... phonons are introduced. In such an approach, the Pauli principle is fully and properly taken into account. ((orig.))
Clark, E C; Baxter, L R
2000-11-01
Serotonin (5-HT) 5-HT(2A) and 5-HT(2C) receptors are thought to play important roles in the mammalian striatum. As basal ganglia functions in general are thought highly conserved among amniotes, we decided to use in situ autoradiographic methods to determine the occurrence and distribution of pharmacologically mammal-like 5-HT(2A) and 5-HT(2C) receptors in the lizard, Anolis carolinensis, with particular attention to the striatum. We also determined the distributions of 5-HT(1A), 5-HT(1B/D), 5-HT(3), and 5-HT(uptake) receptors for comparison. All 5-HT receptors examined showed pharmacological binding specificity, and forebrain binding density distributions that resembled those reported for mammals. Anolis 5-HT(2A/C) and 5-HT(1A) site distributions were similar in both in vivo and ex vivo binding experiments. 5-HT(2A/C) receptors occur in both high and low affinity states, the former having preferential affinity for (125)I-(+/-)-2,5-dimethoxy-4-iodo-amphetamine hydrochloride ((125)I-DOI). In mammals (125)I-DOI binding shows a patchy density distribution in the striatum, being more dense in striosomes than in surrounding matrix. There was no evidence of any such patchy density of (125)I-DOI binding in the anole striatum, however. As a further indication that anoles do not possess a striosome and matrix striatal organization, neither (3)H-naloxone binding nor histochemical staining for acetylcholinesterase activity (AChE) were patchy. AChE did show a band-like striatal distribution, however, similar to that seen in birds. Copyright 2001 S. Karger AG, Basel
Receptor models for source apportionment of remote aerosols in Brazil
International Nuclear Information System (INIS)
Artaxo Netto, P.E.
1985-11-01
The PIXE (particle-induced X-ray emission) and PESA (proton elastic scattering analysis) methods were used in conjunction with receptor models for source apportionment of remote aerosols in Brazil. PIXE, used in the determination of concentrations for elements with Z ≥ 11, has a detection limit of about 1 ng/m³. The concentrations of carbon, nitrogen and oxygen in the fine fraction of Amazon Basin aerosols were measured by PESA. We sampled in Jureia (SP), Fernando de Noronha, Arembepe (BA), Firminopolis (GO), Itaberai (GO) and the Amazon Basin. For collecting the airborne particles we used cascade impactors, stacked filter units, and streaker samplers. Three receptor models were used: chemical mass balance, stepwise multiple regression analysis and principal factor analysis. The elemental and gravimetric concentrations were explained by the models within the experimental errors. Three sources of aerosol were quantitatively distinguished: marine aerosol, soil dust and aerosols related to forests. The emission of aerosols by vegetation is very clear for all the sampling sites. In the Amazon Basin and Jureia it is the major source, responsible for 60 to 80% of airborne concentrations. (Author)
International Nuclear Information System (INIS)
Ozgener, B.; Ozgener, H.A.
1991-01-01
In finite element formulations for the solution of the within-group neutron diffusion equation, two different treatments are possible for the group source term: the consistent source approximation (CSA) and the lumped source approximation (LSA). CSA results in intra-group scattering and fission matrices which have the same nondiagonal structure as the global coefficient matrix. This situation might be regarded as a disadvantage compared to the conventional (i.e. finite difference) methods, where the intra-group scattering and fission matrices are diagonal. To overcome this disadvantage, LSA can be used to diagonalize these matrices. LSA is akin to the lumped mass approximation of continuum mechanics. We concentrate on two different aspects of the source approximations. Although it has been reported that LSA does not modify the asymptotic h² convergence behaviour for linear elements, the effect of LSA on the convergence of higher-degree elements has not been investigated. Thus, we are interested in determining p, the asymptotic order of convergence, in: Δk = |k_eff(analytical) - k_eff(finite element)| = C h^p (1) for finite element approximations of varying degree (N) with both of the source approximations. Since (1) is valid in the asymptotic limit, we must use ultra-fine meshes and quadruple precision arithmetic. For our order-of-convergence study, we used infinite cylindrical geometry with azimuthal symmetry. Hence, the effects of singularities remain uninvestigated. The second aspect we dwell on is the performance of LSA in bilinear 3-D finite element calculations, compared to CSA. LSA has been used quite extensively in 1- and 2-D even-parity transport and diffusion calculations. In this work, we will try to assess the relative merits of LSA and CSA in 3-D problems. (author)
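The order of convergence p in Eq. (1) is commonly extracted from the errors on two successive mesh refinements; a minimal sketch of that extraction (the mesh sizes and error values below are invented for illustration, not results from the study):

```python
import math

def convergence_order(h_coarse, err_coarse, h_fine, err_fine):
    """Estimate p in err = C * h**p from errors at two mesh sizes:
    p = log(err_coarse / err_fine) / log(h_coarse / h_fine)."""
    return math.log(err_coarse / err_fine) / math.log(h_coarse / h_fine)

# Halving h reduces the error fourfold, consistent with p = 2
# (the asymptotic rate quoted for linear elements):
p = convergence_order(0.02, 4.0e-4, 0.01, 1.0e-4)
```

In practice the "analytical" k-eff is replaced by a Richardson-extrapolated reference when no exact solution exists, which is why the abstract stresses ultra-fine meshes and quadruple precision.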
Diagrammatic technique for calculating matrix elements of collective operators in superradiance
International Nuclear Information System (INIS)
Lee, C.T.
1975-01-01
Adopting the so-called "genealogical construction," one can express the eigenstates of collective operators corresponding to a specified mode for an N-atom system in terms of those for an (N-1)-atom system. Using these Dicke states as bases and using the Wigner-Eckart theorem, a matrix element of a collective operator of an arbitrary mode can be written as the product of an m-dependent factor and an m-independent reduced matrix element (RME). A set of recursion formulas for the RME is obtained. A graphical representation of the RME on the branching diagram for binary irreducible representations of permutation groups is then introduced. This gives a simple and systematic way of calculating the RME. This method is especially useful when the cooperation number r is close to N/2, where almost exact asymptotic expressions can be obtained easily. The result shows explicitly the geometry dependence of superradiance and the relative importance of r-conserving and r-nonconserving processes. This clears up the chief difficulty encountered in the Dicke-Schwendimann approach to the problem of N two-level atoms, spread over large regions, interacting with a multimode radiation field.
A new program for calculating matrix elements of one-particle operators in jj-coupling
International Nuclear Information System (INIS)
Pyper, N.C.; Grant, I.P.; Beatham, N.
1978-01-01
The aim of this paper is to calculate the matrix elements of one-particle tensor operators occurring in atomic and nuclear theory between configuration state functions representing states containing any number of open shells in jj-coupling. The program calculates the angular part of these matrix elements. The program is essentially a new version of RDMEJJ, written by J.J. Chang. The aims of this version are to eliminate inconsistencies from RDMEJJ, to modify its input requirements for consistency with MCP75, and to modify its output so that it can be stored in a disc file for access by other compatible programs. The program assumes that the configurational states are built from a common orthonormal set of basis orbitals. The number of electrons in a shell having j ≥ 9/2 is restricted to be not greater than 2 by the available CFP routines. The present version allows up to 40 orbitals and 50 configurational states with ≤ 10 open shells; these numbers can be changed by recompiling with modified COMMON/DIMENSION statements. The user should ensure that the CPC library subprograms AAGD and ACRI incorporate all current updates and have been converted to use double-precision floating point arithmetic. (Auth.)
Monte Carlo calculations and measurements of spectra from a C-14 source
International Nuclear Information System (INIS)
Borg, J.
1996-05-01
To perform Monte Carlo simulations it is necessary to model the physical geometries i.e., the source and detector geometry. However, a complete model of the physical geometry may not be possible or may result in a very low calculation efficiency. Substituting the complete source model with a simplified model is one way of increasing the calculation efficiency. In this report, the study of a simplified model of a ¹⁴C source is described. Results of Monte Carlo calculations with the EGS4 code are compared with measurements with a β spectrometer consisting of two coaxial Si detectors, and a low-energy photon spectrometer being a Si(Li) detector. Calculations and measurements show generally good agreement. However, the difference (a factor of 4) between calculated and measured response to electrons for the Si(Li) detector indicates that this detector has a dead layer about 12 μm thick instead of 0.2 μm as reported by the manufacturer. The efficiency of the calculations is increased by a factor of 10, when the complete source model is replaced by the simplified source model. This reduces the calculation time of detector responses to a few days instead of weeks on the NRC SGI R4400 computers. Good agreement between measured and calculated data also verifies that the MC code EGS4 is a reliable and useful tool for simulating coupled electron and photon transport for particles with energies down to a few keV. (au) 3 tabs., 15 ills., 11 refs
Calculations with off-shell matrix elements, TMD parton densities and TMD parton showers
Energy Technology Data Exchange (ETDEWEB)
Bury, Marcin; Hameren, Andreas van; Kutak, Krzysztof; Sapeta, Sebastian [Polish Academy of Sciences, Institute of Nuclear Physics, Cracow (Poland); Jung, Hannes [Polish Academy of Sciences, Institute of Nuclear Physics, Cracow (Poland); DESY, Hamburg (Germany); Serino, Mirko [Polish Academy of Sciences, Institute of Nuclear Physics, Cracow (Poland); Ben Gurion University of the Negev, Department of Physics, Beersheba (Israel)
2018-02-15
A new calculation using off-shell matrix elements with TMD parton densities supplemented with a newly developed initial state TMD parton shower is described. The calculation is based on the KaTie package for an automated calculation of the partonic process in high-energy factorization, making use of TMD parton densities implemented in TMDlib. The partonic events are stored in an LHE file, similar to the conventional LHE files, but now containing the transverse momenta of the initial partons. The LHE files are read in by the Cascade package for the full TMD parton shower, final state shower and hadronization from Pythia where events in HEPMC format are produced. We have determined a full set of TMD parton densities and developed an initial state TMD parton shower, including all flavors following the TMD distribution. As an example of application we have calculated the azimuthal de-correlation of high p_T dijets as measured at the LHC and found very good agreement with the measurement when including initial state TMD parton showers together with conventional final state parton showers and hadronization. (orig.)
Clearance kinetics and matrix binding partners of the receptor for advanced glycation end products.
Directory of Open Access Journals (Sweden)
Pavle S Milutinovic
Full Text Available Elucidating the sites and mechanisms of sRAGE action in the healthy state is vital to better understand the biological importance of the receptor for advanced glycation end products (RAGE. Previous studies in animal models of disease have demonstrated that exogenous sRAGE has an anti-inflammatory effect, which has been reasoned to arise from sequestration of pro-inflammatory ligands away from membrane-bound RAGE isoforms. We show here that sRAGE exhibits in vitro binding with high affinity and reversibly to extracellular matrix components collagen I, collagen IV, and laminin. Soluble RAGE administered intratracheally, intravenously, or intraperitoneally, does not distribute in a specific fashion to any healthy mouse tissue, suggesting against the existence of accessible sRAGE sinks and receptors in the healthy mouse. Intratracheal administration is the only effective means of delivering exogenous sRAGE to the lung, the organ in which RAGE is most highly expressed; clearance of sRAGE from lung does not differ appreciably from that of albumin.
First principles calculations using density matrix divide-and-conquer within the SIESTA methodology
International Nuclear Information System (INIS)
Cankurtaran, B O; Gale, J D; Ford, M J
2008-01-01
The density matrix divide-and-conquer technique for the solution of Kohn-Sham density functional theory has been implemented within the framework of the SIESTA methodology. Implementation details are provided where the focus is on the scaling of the computation time and memory use, in both serial and parallel versions. We demonstrate the linear-scaling capabilities of the technique by providing ground state calculations of moderately large insulating, semiconducting and (near-) metallic systems. This linear-scaling technique has made it feasible to calculate the ground state properties of quantum systems consisting of tens of thousands of atoms with relatively modest computing resources. A comparison with the existing order-N functional minimization (Kim-Mauri-Galli) method is made between the insulating and semiconducting systems
A matrix-inversion method for gamma-source mapping from gamma-count data - 59082
International Nuclear Information System (INIS)
Bull, Richard K.; Adsley, Ian; Burgess, Claire
2012-01-01
Gamma ray counting is often used to survey the distribution of active waste material in various locations. Ideally the output from such surveys would be a map of the activity of the waste. In this paper a simple matrix-inversion method is presented. This allows an array of gamma-count data to be converted to an array of source activities. For each survey area the response matrix is computed using the gamma-shielding code Microshield [1]. This matrix links the activity array to the count array. The activity array is then obtained via matrix inversion. The method was tested on artificially-created arrays of count-data onto which statistical noise had been added. The method was able to reproduce, quite faithfully, the original activity distribution used to generate the dataset. The method has been applied to a number of practical cases, including the distribution of activated objects in a hot cell and to activated Nimonic springs amongst fuel-element debris in vaults at a nuclear plant. (authors)
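The inversion step described above can be sketched in a few lines (the 3x3 response matrix below contains made-up illustrative values; a real matrix would come from a shielding code such as Microshield):

```python
import numpy as np

# Hypothetical response matrix: R[i, j] = counts registered at survey
# point i per unit activity at source location j (illustrative values).
R = np.array([[5.0, 1.0, 0.2],
              [1.0, 5.0, 1.0],
              [0.2, 1.0, 5.0]])

activity_true = np.array([10.0, 0.0, 3.0])  # source activities (arbitrary units)
counts = R @ activity_true                  # forward model: the count array

# Invert the linear system to map the count array back to activities:
activity_est = np.linalg.solve(R, counts)
```

Solving the linear system is preferable to forming an explicit inverse; with noisy count data the system can be ill-conditioned, which is consistent with the abstract's test of adding statistical noise to check how faithfully the activity distribution is recovered.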
Bayesian source term determination with unknown covariance of measurements
Belal, Alkomiet; Tichý, Ondřej; Šmídl, Václav
2017-04-01
Determination of the source term of a release of hazardous material into the atmosphere is a very important task for emergency response. We are concerned with the problem of estimation of the source term in the conventional linear inverse problem, y = Mx, where the relationship between the vector of observations y and the unknown source term x is described by the source-receptor-sensitivity (SRS) matrix M. Since the system is typically ill-conditioned, the problem is recast as the optimization problem min_x (y - Mx)^T R^-1 (y - Mx) + x^T B^-1 x. The first term minimizes the error of the measurements with covariance matrix R, and the second term is a regularization of the source term. Different types of regularization arise for different choices of the matrices R and B; for example, Tikhonov regularization takes the covariance matrix B to be the identity matrix multiplied by a scalar parameter. In this contribution, we adopt a Bayesian approach to make inference on the unknown source term x as well as the unknown R and B. We assume the prior on x to be Gaussian with zero mean and unknown diagonal covariance matrix B. The covariance matrix of the likelihood, R, is also unknown. We consider two potential choices for the structure of the matrix R. The first is a diagonal matrix and the second is a locally correlated structure using information on the topology of the measuring network. Since inference in the model is intractable, an iterative variational Bayes algorithm is used for simultaneous estimation of all model parameters. The practical usefulness of our contribution is demonstrated by an application of the resulting algorithm to real data from the European Tracer Experiment (ETEX). This research is supported by the EEA/Norwegian Financial Mechanism under project MSMT-28477/2014, Source-Term Determination of Radionuclide Releases by Inverse Atmospheric Dispersion Modelling (STRADI).
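For fixed R and B the quadratic cost above has the closed-form minimizer x_hat = (M^T R^-1 M + B^-1)^-1 M^T R^-1 y; a sketch of that single step (the SRS matrix and source term below are invented, not ETEX data, and the full method of the abstract would additionally iterate updates of R and B):

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.random((8, 5))                 # hypothetical SRS matrix (8 obs, 5 sources)
x_true = np.array([0.0, 2.0, 0.0, 1.0, 0.0])
y = M @ x_true                         # noise-free observations for this sketch

R_inv = np.eye(8)                      # measurement covariance (here: identity)
B_inv = 1e-6 * np.eye(5)               # weak Tikhonov-style prior on the source

# Minimizer of (y - Mx)^T R^-1 (y - Mx) + x^T B^-1 x:
x_hat = np.linalg.solve(M.T @ R_inv @ M + B_inv, M.T @ R_inv @ y)
```

With noise-free data and a weak prior, x_hat recovers the true source term up to a small regularization bias; the variational Bayes algorithm of the abstract alternates updates of this form with updates of the diagonal of B and the structure of R.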
Use of the Streaming Matrix Hybrid Method for discrete-ordinates fusion reactor calculations
International Nuclear Information System (INIS)
Battat, M.E.; Davidson, J.W.; Dudziak, D.J.; Thayer, G.R.
1984-01-01
The use of the discrete-ordinates method for solving two-dimensional, neutral-particle transport in fusion reactor blankets and shields is often limited by inherent inaccuracies due to the ray-effect. This effect presents a particular problem in the case of neutron streaming in the large internal void regions of a fusion reactor. A deterministic streaming technique called the Streaming Matrix Hybrid Method (SMHM) has been incorporated in the two-dimensional discrete-ordinates code TRIDENT-CTR. Calculations have been performed for an actual inertial-confinement fusion (ICF) reactor design using TRIDENT-CTR both with and without the SMHM. Comparisons of the calculated fluxes indicate that substantial mitigation of the ray effect can be achieved with the SMHM. Calculations were performed for the Los Alamos FIRST STEP hybrid ICF reactor designed for tritium production. Conventional ²³⁸U fuel rod assemblies surround the spherical steel target chamber to form an annular cylindrical blanket. An axial fuel region is included to complete the blanket
Extracellular matrix structure.
Theocharis, Achilleas D; Skandalis, Spyros S; Gialeli, Chrysostomi; Karamanos, Nikos K
2016-02-01
Extracellular matrix (ECM) is a non-cellular three-dimensional macromolecular network composed of collagens, proteoglycans/glycosaminoglycans, elastin, fibronectin, laminins, and several other glycoproteins. Matrix components bind each other as well as cell adhesion receptors, forming a complex network in which cells reside in all tissues and organs. Cell surface receptors transduce signals into cells from ECM, which regulate diverse cellular functions, such as survival, growth, migration, and differentiation, and are vital for maintaining normal homeostasis. ECM is a highly dynamic structural network that continuously undergoes remodeling mediated by several matrix-degrading enzymes during normal and pathological conditions. Deregulation of ECM composition and structure is associated with the development and progression of several pathologic conditions. This article emphasizes the complex ECM structure so as to provide a better understanding of its dynamic structural and functional multipotency. Where relevant, the implication of the various families of ECM macromolecules in health and disease is also presented. Copyright © 2015 Elsevier B.V. All rights reserved.
A single-source photon source model of a linear accelerator for Monte Carlo dose calculation.
Nwankwo, Obioma; Glatting, Gerhard; Wenz, Frederik; Fleckenstein, Jens
2017-01-01
To introduce a new method of deriving a virtual source model (VSM) of a linear accelerator photon beam from a phase space file (PSF) for Monte Carlo (MC) dose calculation. A PSF of a 6 MV photon beam was generated by simulating the interactions of primary electrons with the relevant geometries of a Synergy linear accelerator (Elekta AB, Stockholm, Sweden) and recording the particles that reach a plane 16 cm downstream of the electron source. Probability distribution functions (PDFs) for particle positions and energies were derived from the analysis of the PSF. These PDFs were implemented in the VSM using inverse transform sampling. To model particle directions, the phase space plane was divided into a regular square grid. Each element of the grid corresponds to an area of 1 mm² in the phase space plane. The average direction cosines, the Pearson correlation coefficient (PCC) between photon energies and their direction cosines, as well as the PCC between the direction cosines were calculated for each grid element. Weighted polynomial surfaces were then fitted to these 2D data. The weights are used to correct for heteroscedasticity across the phase space bins. The directions of the particles created by the VSM were calculated from these fitted functions. The VSM was validated against the PSF by comparing the doses calculated by the two methods for different square field sizes. The comparisons were performed with profile and gamma analyses. The doses calculated with the PSF and VSM agree to within 3%/1 mm (>95% pixel pass rate) for the evaluated fields. A new method of deriving a virtual photon source model of a linear accelerator from a PSF file for MC dose calculation was developed. Validation results show that the doses calculated with the VSM and the PSF agree to within 3%/1 mm.
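The inverse transform sampling step can be sketched for a tabulated PDF (the bin edges and weights below are invented for illustration, not values from the 6 MV phase space file):

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical binned energy spectrum: bin edges in MeV, relative weights.
edges = np.array([0.0, 1.0, 2.0, 4.0, 6.0])
weights = np.array([0.1, 0.4, 0.4, 0.1])

prob = weights / weights.sum()
cdf = np.cumsum(prob)                       # tabulated CDF, ending at 1

def sample(n):
    """Inverse transform sampling: map uniform deviates through the CDF."""
    u = rng.random(n)
    idx = np.minimum(np.searchsorted(cdf, u), len(prob) - 1)  # bin index
    cdf_lo = np.concatenate(([0.0], cdf))[idx]
    frac = (u - cdf_lo) / prob[idx]         # fractional position inside the bin
    return edges[idx] + frac * (edges[idx + 1] - edges[idx])

energies = sample(100_000)
```

The VSM applies the same idea to the position and energy PDFs extracted from the PSF, with the directional information handled separately via the fitted polynomial surfaces.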
Haji Gholizadeh, Mohammad; Melesse, Assefa M; Reddi, Lakshmi
2016-10-01
In this study, principal component analysis (PCA), factor analysis (FA), and the absolute principal component score-multiple linear regression (APCS-MLR) receptor modeling technique were used to assess the water quality and identify and quantify the potential pollution sources affecting the water quality of three major rivers of South Florida. For this purpose, a 15-year (2000-2014) dataset of 12 water quality variables covering 16 monitoring stations, with approximately 35,000 observations, was used. The PCA/FA method identified five and four potential pollution sources in wet and dry seasons, respectively, and the effective mechanisms, rules and causes were explained. The APCS-MLR apportioned their contributions to each water quality variable. Results showed that the point source pollution discharges from anthropogenic factors due to the discharge of agriculture waste and domestic and industrial wastewater were the major sources of river water contamination. Also, the studied variables were categorized into three groups of nutrients (total Kjeldahl nitrogen, total phosphorus, total phosphate, and ammonia-N), water murkiness conducive parameters (total suspended solids, turbidity, and chlorophyll-a), and salt ions (magnesium, chloride, and sodium), and average contributions of different potential pollution sources to these categories were considered separately. The data matrix was also subjected to the PMF receptor model using the EPA PMF-5.0 program, and the two-way model described was performed for the PMF analyses. Comparison of the obtained results of the PMF and APCS-MLR models showed that there were some significant differences in the estimated contribution for each potential pollution source, especially in the wet season. Ultimately, it was concluded that the APCS-MLR receptor modeling approach appears to be more physically plausible for the current study. It is believed that the results of apportionment could be very useful to the local authorities for the control and
Vila, Javier; Bowman, Joseph D; Figuerola, Jordi; Moriña, David; Kincl, Laurel; Richardson, Lesley; Cardis, Elisabeth
2017-07-01
To estimate occupational exposures to electromagnetic fields (EMF) for the INTEROCC study, a database of source-based measurements extracted from published and unpublished literature resources had been previously constructed. The aim of the current work was to summarize these measurements into a source-exposure matrix (SEM), accounting for their quality and relevance. A novel methodology for combining available measurements was developed, based on order statistics and log-normal distribution characteristics. Arithmetic and geometric means, and estimates of variability and maximum exposure were calculated by EMF source, frequency band and dosimetry type. The mean estimates were weighted by our confidence in the pooled measurements. The SEM contains confidence-weighted mean and maximum estimates for 312 EMF exposure sources (from 0 Hz to 300 GHz). Operator position geometric mean electric field levels for radiofrequency (RF) sources ranged between 0.8 V/m (plasma etcher) and 320 V/m (RF sealer), while magnetic fields ranged from 0.02 A/m (speed radar) to 0.6 A/m (microwave heating). For extremely low frequency sources, electric fields ranged between 0.2 V/m (electric forklift) and 11,700 V/m (high-voltage transmission line-hotsticks), whereas magnetic fields ranged between 0.14 μT (visual display terminals) and 17 μT (tungsten inert gas welding). The methodology developed allowed the construction of the first EMF-SEM and may be used to summarize similar exposure data for other physical or chemical agents.
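Under the log-normal assumption named above, the confidence-weighted pooling of source measurements reduces to weighted means on the log scale; a sketch with invented field values and weights (not entries from the INTEROCC SEM):

```python
import math

# Hypothetical operator-position electric-field measurements for one
# EMF source (V/m) and analyst-assigned confidence weights.
measurements = [0.5, 0.8, 1.2, 2.0]
weights = [1.0, 2.0, 2.0, 1.0]

# Geometric mean: exp of the confidence-weighted mean of the logs.
log_mean = sum(w * math.log(x) for w, x in zip(weights, measurements)) / sum(weights)
gm = math.exp(log_mean)

# Geometric standard deviation: same idea applied to the log-scale spread.
log_var = sum(w * (math.log(x) - log_mean) ** 2
              for w, x in zip(weights, measurements)) / sum(weights)
gsd = math.exp(math.sqrt(log_var))
```

The SEM's maximum-exposure estimates would come from order statistics of the same log-transformed data rather than from this simple spread measure.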
Elantkowska, Magdalena; Ruczkowski, Jarosław; Sikorski, Andrzej; Dembczyński, Jerzy
2017-11-01
A parametric analysis of the hyperfine structure (hfs) for the even-parity configurations of atomic terbium (Tb I) is presented in this work. We introduce the complete set of 4f^N core states in our high-performance computing (HPC) calculations. For the calculation of the huge hyperfine structure matrix, which requires approximately 5000 hours on a single CPU, we propose methods utilizing a personal computer cluster or, alternatively, a cluster of Microsoft Azure virtual machines (VMs). These methods give a factor-of-12 performance boost, enabling the calculations to complete in an acceptable time.
International Nuclear Information System (INIS)
Whitcher, Ralph
2007-01-01
1 - Description of program or function: SACALC2B calculates the average solid angle subtended by a rectangular or circular detector window to a coaxial or non-coaxial rectangular, circular or point source, including cases where the source and detector planes are not parallel. SACALC_CYL calculates the average solid angle subtended by a cylinder to a rectangular or circular source, plane or thick, at any location and orientation. This is needed, for example, in calculating the intrinsic gamma efficiency of a detector such as a GM tube. The program also calculates the number of hits on the cylinder side and on each end, and the average path length through the detector volume (assuming no scattering or absorption). Point sources can be modelled by using a circular source of zero radius. NEA-1688/03: Documentation has been updated (January 2006). 2 - Methods: The program uses a Monte Carlo method to calculate the average solid angle for source-detector geometries that are difficult to analyse by analytical methods. The values of solid angle are calculated to accuracies of typically better than 0.1%. The calculated values from the Monte Carlo method agree closely with those produced by polygon approximation and numerical integration by Gardner and Verghese, and others. 3 - Restrictions on the complexity of the problem: The program models a circular or rectangular detector in planes that are not necessarily coaxial, nor parallel. Point sources can be modelled by using a circular source of zero radius. The sources are assumed to be uniformly distributed. NEA-1688/04: In SACALC_CYL, to avoid rounding errors, differences less than 1E-12 are assumed to be zero
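A minimal Monte Carlo estimate of the solid angle subtended by a circular window at a point source, in the spirit of (but not taken from) SACALC, can be written as follows; this sketch handles only the coaxial special case:

```python
import math
import random

def solid_angle_disk(R, d, n=200_000, seed=1):
    """Monte Carlo estimate of the solid angle (sr) subtended by a coaxial
    disk of radius R in the plane z = d at a point source at the origin."""
    random.seed(seed)
    hits = 0
    for _ in range(n):
        cos_t = 2.0 * random.random() - 1.0   # isotropic: cos(theta) ~ U(-1, 1)
        if cos_t <= 0.0:
            continue                          # emitted away from the detector
        sin_t = math.sqrt(1.0 - cos_t * cos_t)
        r_plane = d * sin_t / cos_t           # radius where the ray crosses z = d
        if r_plane <= R:
            hits += 1
    return 4.0 * math.pi * hits / n           # hit fraction of 4*pi steradians
```

For this coaxial geometry the analytic result Omega = 2*pi*(1 - d/sqrt(d^2 + R^2)) provides a check; the real code's value lies in the non-coaxial and tilted configurations, for which no such closed form exists.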
Scoping calculations of power sources for nuclear electric propulsion
International Nuclear Information System (INIS)
Difilippo, F.C.
1994-05-01
This technical memorandum describes models and calculational procedures to fully characterize the nuclear island of power sources for nuclear electric propulsion. Two computer codes were written: one for the gas-cooled NERVA derivative reactor and the other for liquid metal-cooled fuel pin reactors. These codes are going to be interfaced by NASA with the balance of plant in order to make scoping calculations for mission analysis
Direct calculation of resonance energies and widths from the poles of the multichannel T matrix
International Nuclear Information System (INIS)
Watson, D.K.
1984-01-01
A numerical method is developed to search the complex momentum plane for the poles of the multichannel T matrix. No resonance or continuum wave functions are calculated and no complex basis functions are required. The appropriate Green's function is constructed and used to enforce the asymptotic behavior. Results are obtained for a three-state model problem and compared with results from other techniques
International Nuclear Information System (INIS)
Itagaki, Masafumi; Sahashi, Naoki.
1997-01-01
The multiple reciprocity boundary element method has been applied to three-dimensional two-group neutron diffusion problems. A matrix-type boundary integral equation has been derived to solve the first and the second group neutron diffusion equations simultaneously. The matrix-type fundamental solutions used here satisfy the equation which has a point source term and is adjoint to the neutron diffusion equations. A multiple reciprocity method has been employed to transform the matrix-type domain integral related to the fission source into an equivalent boundary one. The higher order fundamental solutions required for this formulation are composed of a series of two types of analytic functions. The eigenvalue itself is also calculated using only boundary integrals. Three-dimensional test calculations indicate that the present method provides stable and accurate solutions for criticality problems. (author)
International Nuclear Information System (INIS)
Tiwary, S.N.
2000-02-01
The photon-impact integral ionization cross section (σ) as well as the photoelectron asymmetry parameter (β) for the reactions hν + Ne(1s²2s²2p⁶) → Ne⁺(1s²2s²2p⁵) + e⁻, hν + Ar(1s²2s²2p⁶3s²3p⁶) → Ar⁺(1s²2s²2p⁶3s²3p⁵) + e⁻, hν + Kr(1s²2s²2p⁶3s²3p⁶3d¹⁰4s²4p⁶) → Kr⁺(1s²2s²2p⁶3s²3p⁶3d¹⁰4s²4p⁵) + e⁻ and hν + Xe(1s²2s²2p⁶3s²3p⁶3d¹⁰4s²4p⁶4d¹⁰5s²5p⁶) → Xe⁺(1s²2s²2p⁶3s²3p⁶3d¹⁰4s²4p⁶4d¹⁰5s²5p⁵) + e⁻ have been calculated in the L-S and j-j coupling schemes using Hartree-Fock (HF) wavefunctions within the reliable non-relativistic R-matrix and relativistic R-matrix (RR-matrix) methods, in both the length and velocity gauges, over the energy range for which experimental data are available. Comparison is made with all available experimental data as well as with other theoretical results. Our theoretical investigation demonstrates good agreement between the present R-matrix and RR-matrix results, and with other results, in the case of neon, reflecting that correlation and relativity are not important there in the energy range considered. In the case of xenon (Z=54), by contrast, the independent-particle approximation completely breaks down: the HF cross sections are both qualitatively and quantitatively incorrect over the entire energy range, showing that multielectron correlation and relativity are both important, with interchannel interactions more important than the intrachannel interaction and relativity for obtaining high-precision results. (author)
The calculation and experiment verification of geometry factors of disk sources and detectors
International Nuclear Information System (INIS)
Shi Zhixia; Minowa, Y.
1993-01-01
In alpha counting, the efficiency of the counting system is most frequently determined from the counter response to a calibrated source. Whenever this procedure is used, however, questions invariably arise as to the integrity of the standard source, or indeed the validity of the primary calibration. As a check, therefore, it is often helpful to be able to calculate the disintegration rate from counting rate data. The conclusions are: 1. If the source is thin enough, the error E is generally less than 5%, which is acceptable in routine measurement. When no standard source is available for an experiment, the calculated geometry factor can be used instead of the measured efficiency. 2. The calculated geometry factor can be used to correct the counter system, study the effect of each parameter, and identify those parameters needing careful control. 3. The method of the overlapping area of the source and the projection of the detector is very reliable, simple and convenient for calculating geometry. (5 tabs.)
Mizumoto, Shuji; Yamada, Shuhei; Sugahara, Kazuyuki
2015-10-01
Recent functional studies on chondroitin sulfate-dermatan sulfate (CS-DS) demonstrated its indispensable roles in various biological events including brain development and cancer. CS-DS proteoglycans exert their physiological activity through interactions with specific proteins including growth factors, cell surface receptors, and matrix proteins. The characterization of these interactions is essential for regulating the biological functions of CS-DS proteoglycans. Although amino acid sequences on the bioactive proteins required for these interactions have already been elucidated, the specific saccharide sequences involved in the binding of CS-DS to target proteins have not yet been sufficiently identified. In this review, recent findings are described on the interaction between CS-DS and some proteins which are especially involved in the central nervous system and cancer development/metastasis. Copyright © 2015. Published by Elsevier Ltd.
International Nuclear Information System (INIS)
Conti, C.F.S.; Watson, F.V.
1991-01-01
A computational code to solve a two-energy-group neutron diffusion problem has been developed based on the Response Matrix Method. That method solves the global problem of a PWR core without using the cross-section homogenization process; it is thus equivalent to a pointwise core calculation. The present version of the code calculates the response matrices by the first-order perturbative method and considers expansions in arbitrary-order Fourier series for the boundary and interior fluxes. (author)
Directory of Open Access Journals (Sweden)
Pei Li
2016-11-01
Full Text Available Background/Aims: Matrix homeostasis within the disc nucleus pulposus (NP) tissue is important for disc function. Increasing evidence indicates that sex hormones can influence the severity of disc degeneration. This study aimed to investigate the role of 17β-estradiol (E2) in NP matrix synthesis and its underlying mechanism. Methods: Rat NP cells were cultured with (10⁻⁵, 10⁻⁷ and 10⁻⁹ M) or without (control) E2 for 48 hours. The estrogen receptor (ER)-β antagonist PHTPP and the ERβ agonist ERB 041 were used to investigate the role mediated by ERβ. The p38 MAPK inhibitor SB203580 was used to investigate the role of the p38 MAPK signaling pathway. Gene and protein expression of SOX9, aggrecan and collagen II, glycosaminoglycan (GAG) content, and immunostaining for aggrecan and collagen II were analyzed to evaluate matrix production in rat NP cells. Results: E2 enhanced NP matrix synthesis in a concentration-dependent manner in terms of gene and protein expression of SOX9, aggrecan and collagen II, protein deposition of aggrecan and collagen II, and GAG content. Moreover, activation of the p38 MAPK signaling pathway increased with E2 concentration. Further analysis indicated that ERB 041 and PHTPP could respectively enhance and suppress the effects of E2 on matrix synthesis in NP cells, as well as the activation of the p38 MAPK pathway. Additionally, inhibition of the p38 MAPK signaling pathway significantly abolished the effects of E2 on matrix synthesis. Conclusion: E2 can enhance matrix synthesis of NP cells, and the ERβ/p38 MAPK pathway is involved in this regulatory process.
First and second collision source for mitigating ray effects in discrete ordinate calculations
International Nuclear Information System (INIS)
Gomes, L.T.; Stevens, P.N.
1991-01-01
This work revisits the problem of ray effects in discrete ordinates calculations, which frequently occurs in two- and three-dimensional systems containing isolated sources within a highly absorbing medium. The effectiveness of using a first collision source or a second collision source is analyzed as a possible remedy to mitigate this problem. The first and second collision sources are generated by three-dimensional Monte Carlo calculations, which enables application to a variety of source configurations, and the results can be coupled to a two- or three-dimensional discrete ordinates transport code. (author)
Reactor calculation in coarse mesh by finite element method applied to matrix response method
International Nuclear Information System (INIS)
Nakata, H.
1982-01-01
The finite element method is applied to the solution of the modified formulation of the matrix-response method, aiming at reactor calculations on a coarse mesh. Good results are obtained with a short running time. The method is applicable to problems where heterogeneity is predominant and to evolution problems on coarse meshes where the burnup varies within one coarse mesh, making the cross sections vary spatially with the evolution. (E.G.) [pt
Gómez-Zavaglia, Andrea; Kaczor, Agnieszka; Coelho, Daniela; Cristiano, M. Lurdes S.; Fausto, Rui
2009-02-01
2-Allyl-1,2-benzisothiazol-3(2H)-one 1,1-dioxide (ABIOD) has been studied by matrix-isolation infrared spectroscopy and quantum chemical calculations. A conformational search on the B3LYP/6-311++G(3df,3pd) potential energy surface of the molecule demonstrated the existence of three conformers, Sk, Sk' and C, with similar energies, differing in the orientation of the allyl group. The calculations predicted the Sk form as the most stable in the gaseous phase, whereas the Sk' and C conformers have calculated relative energies of ca. 0.6 and 0.8-3.0 kJ mol⁻¹, respectively (depending on the level of theory). In agreement with the relatively large (>6 kJ mol⁻¹) calculated barriers for conformational interconversion, the three conformers could be efficiently trapped in an argon matrix at 10 K, the experimental infrared spectrum of the as-deposited matrix fitting well the simulated spectrum built from the calculated spectra of the individual conformers scaled by their predicted populations at the temperature of the vapour of the compound prior to matrix deposition. Upon annealing the matrix at 24 K, however, both the Sk and Sk' conformers were found to convert to the more polar C conformer, indicating that this latter form becomes the most stable ABIOD conformer in the argon matrix.
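The conformer populations used to scale the calculated spectra follow from a simple Boltzmann weighting of the relative energies. The temperature and the mid-range energies below are illustrative choices for the sketch, not the values used by the authors:

```python
import math

R = 8.314462618e-3    # gas constant, kJ/(mol*K)
T = 298.15            # vapour temperature assumed near room temperature

# Relative energies in kJ/mol (0.6 and 1.9 are mid-range illustrative picks
# from the ca. 0.6 and 0.8-3.0 kJ/mol ranges quoted above):
energies = {"Sk": 0.0, "Sk'": 0.6, "C": 1.9}

weights = {c: math.exp(-e / (R * T)) for c, e in energies.items()}
Z = sum(weights.values())
populations = {c: w / Z for c, w in weights.items()}   # Boltzmann fractions
```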
Habershon, Scott
2013-09-14
We introduce a new approach for calculating quantum time-correlation functions and time-dependent expectation values in many-body thermal systems; both electronically adiabatic and non-adiabatic cases can be treated. Our approach uses a path integral simulation to sample an initial thermal density matrix; subsequent evolution of this density matrix is equivalent to solution of the time-dependent Schrödinger equation, which we perform using a linear expansion of Gaussian wavepacket basis functions which evolve according to simple classical-like trajectories. Overall, this methodology represents a formally exact approach for calculating time-dependent quantum properties; by introducing approximations into both the imaginary-time and real-time propagations, this approach can be adapted for complex many-particle systems interacting through arbitrary potentials. We demonstrate this method for the spin Boson model, where we find good agreement with numerically exact calculations. We also discuss future directions of improvement for our approach with a view to improving accuracy and efficiency.
QmeQ 1.0: An open-source Python package for calculations of transport through quantum dot devices
Kiršanskas, Gediminas; Pedersen, Jonas Nyvold; Karlström, Olov; Leijnse, Martin; Wacker, Andreas
2017-12-01
QmeQ is an open-source Python package for numerical modeling of transport through quantum dot devices with strong electron-electron interactions using various approximate master equation approaches. The package provides a framework for calculating stationary particle or energy currents driven by differences in chemical potentials or temperatures between the leads which are tunnel coupled to the quantum dots. The electronic structures of the quantum dots are described by their single-particle states and the Coulomb matrix elements between the states. When transport is treated perturbatively to lowest order in the tunneling couplings, the possible approaches are Pauli (classical), first-order Redfield, and first-order von Neumann master equations, and a particular form of the Lindblad equation. When all processes involving two-particle excitations in the leads are of interest, the second-order von Neumann approach can be applied. All these approaches are implemented in QmeQ. We here give an overview of the basic structure of the package, give examples of transport calculations, and outline the range of applicability of the different approximate approaches.
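The lowest-order Pauli (classical) master equation mentioned above reduces, for a single spinless level tunnel-coupled to two leads, to textbook rate equations. The sketch below illustrates that limiting case with generic code; it does not use QmeQ's actual API, and all names are illustrative:

```python
import math

def fermi(e, mu, T):
    """Fermi function (kB = 1)."""
    return 1.0 / (1.0 + math.exp((e - mu) / T))

def stationary_current(eps, gamma_L, gamma_R, mu_L, mu_R, T):
    """Stationary particle current through one spinless level (hbar = kB = 1),
    from the two-state Pauli master equation: empty <-> occupied."""
    fL, fR = fermi(eps, mu_L, T), fermi(eps, mu_R, T)
    rate_in = gamma_L * fL + gamma_R * fR            # tunneling-in rate
    rate_out = gamma_L * (1.0 - fL) + gamma_R * (1.0 - fR)
    p_occ = rate_in / (rate_in + rate_out)           # stationary occupation
    # Net current through the left barrier:
    return gamma_L * (fL * (1.0 - p_occ) - (1.0 - fL) * p_occ)

I = stationary_current(eps=0.0, gamma_L=0.1, gamma_R=0.1,
                       mu_L=0.5, mu_R=-0.5, T=0.1)
```

For this single-level case the result agrees with the Landauer-type expression gamma_L*gamma_R/(gamma_L+gamma_R)*(fL - fR); QmeQ's value lies in handling many interacting levels and the higher-order approaches listed in the abstract.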
Miranda, R. M.; Andrade, M. D. F.; Marien, Y., Sr.
2017-12-01
Atmospheric aerosol sources have been identified in Sao Paulo since the 1980s with the use of receptor models. The Metropolitan Area of Sao Paulo (MASP) is a megacity with a population of 21 million, corresponding to more than 11% of the total population of Brazil. The first results on the identification of particle sources were obtained by applying Absolute Principal Component Analysis, Factor Analysis and Chemical Mass Balance. More recently, Positive Matrix Factorization (PMF) has been used in combination with the other receptor models. With the improvement of the analytical determination of aerosol composition (more elements and better resolution), source identification has become more accurate. In spite of that, the main sources of fine particles remain the same: vehicular emission, secondary formation and biomass burning. The large amount of biofuels used in the MASP makes this region an important example of the atmospheric chemistry of fossil fuel and biofuel emissions. The 7 million vehicles can run on gasohol, ethanol (95% ethanol + 5% gasoline) and biodiesel (mostly trucks and buses). We have considered black carbon as the tracer for diesel engines and biomass burning, this last source being associated not only with the burning of sugar cane plantations and forest fires, but also with wood and charcoal used in restaurant and domestic cooking and with residue burning. The contribution of vehicular emission to fine particles has remained at approximately 50% of the mass. Soil resuspension was associated with 8% of the fine particle mass. We present data obtained from experiments performed from 1983 to 2014, not continuously and mainly in the winter; it is a long period of data that is considered here. The previous results obtained with the application of PCA were compared to those obtained with PMF applied to the historical data collected at MASP, showing the evolution of the
Monte Carlo calculation of ''skyshine'' neutron dose from ALS [Advanced Light Source]
International Nuclear Information System (INIS)
Moin-Vasiri, M.
1990-06-01
This report discusses the following topics on ''skyshine'' neutron dose from ALS: Sources of radiation; ALS modeling for skyshine calculations; MORSE Monte-Carlo; Implementation of MORSE; Results of skyshine calculations from storage ring; and Comparison of MORSE shielding calculations
Teif, Vladimir B
2007-01-01
The transfer matrix methodology is proposed as a systematic tool for the statistical-mechanical description of DNA-protein-drug binding involved in gene regulation. We show that a genetic system of several cis-regulatory modules is calculable using this method, considering explicitly the site-overlapping, competitive, cooperative binding of regulatory proteins, their multilayer assembly and DNA looping. In the methodological section, the matrix models are solved for the basic types of short- and long-range interactions between DNA-bound proteins, drugs and nucleosomes. We apply the matrix method to gene regulation at the O(R) operator of phage lambda. The transfer matrix formalism allowed the description of the lambda-switch at a single-nucleotide resolution, taking into account the effects of a range of inter-protein distances. Our calculations confirm previously established roles of the contact CI-Cro-RNAP interactions. Concerning long-range interactions, we show that while the DNA loop between the O(R) and O(L) operators is important at the lysogenic CI concentrations, the interference between the adjacent promoters P(R) and P(RM) becomes more important at small CI concentrations. A large change in the expression pattern may arise in this regime due to anticooperative interactions between DNA-bound RNA polymerases. The applicability of the matrix method to more complex systems is discussed.
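The transfer-matrix bookkeeping for the simplest case, one protein species binding single sites with nearest-neighbour cooperativity, can be sketched as follows. This is a standard 1D lattice-gas illustration of the methodology, not the paper's multi-protein, multi-layer formalism; all parameter values are illustrative:

```python
import numpy as np

def partition(N, s, w):
    """Transfer-matrix partition function for an N-site lattice.
    s = K*c is the statistical weight of a bound protein, w the
    nearest-neighbour cooperativity factor."""
    # T[prev, cur] carries the site weight of `cur` plus the bond weight:
    T = np.array([[1.0, s],
                  [1.0, s * w]])
    v0 = np.array([1.0, s])               # weight of the first site
    return v0 @ np.linalg.matrix_power(T, N - 1) @ np.ones(2)

def occupancy(N, K, c, w, ds=1e-6):
    """Mean fractional occupancy via a numerical log-derivative of Z."""
    s = K * c
    dlnZ = (np.log(partition(N, s + ds, w))
            - np.log(partition(N, s - ds, w))) / (2.0 * ds)
    return s * dlnZ / N

theta_noncoop = occupancy(50, K=1.0, c=1.0, w=1.0)   # reduces to s/(1+s) = 0.5
theta_coop = occupancy(50, K=1.0, c=1.0, w=4.0)      # cooperativity raises binding
```

Setting w = 1 recovers independent binding, which gives a quick correctness check; site-overlapping ligands and multiple species enlarge the state space per site but follow the same matrix-product pattern.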
Rivard, Mark J; Davis, Stephen D; DeWerd, Larry A; Rusch, Thomas W; Axelrod, Steve
2006-11-01
A new x-ray source, the model S700 Axxent X-Ray Source (Source), has been developed by Xoft Inc. for electronic brachytherapy. Unlike brachytherapy sources containing radionuclides, this Source may be turned on and off at will and may be operated at variable currents and voltages to change the dose rate and penetration properties. The in-water dosimetry parameters for this electronic brachytherapy source have been determined from measurements and calculations at the 40, 45, and 50 kV settings. Monte Carlo simulations of radiation transport utilized the MCNP5 code and the EPDL97-based mcplib04 cross-section library. Inter-tube consistency was assessed for 20 different Sources, measured with a PTW 34013 ionization chamber. As the Source is intended to be used for a maximum of ten treatment fractions, tube stability was also assessed. Photon spectra were measured using a high-purity germanium (HPGe) detector and calculated using MCNP. Parameters used in the two-dimensional (2D) brachytherapy dosimetry formalism were determined. While the Source was characterized as a point due to the small anode size, the S700 Source exhibited depth-dose behavior similar to that of the low-energy photon-emitting low-dose-rate sources 125I and 103Pd, yet with the capability for variable and much higher dose rates and consequently adjustable penetration. This paper presents the calculated and measured in-water brachytherapy dosimetry parameters for the model S700 Source at the aforementioned three operating voltages.
The impact of source initialization on performance of the FMBMC-ICEU algorithm
International Nuclear Information System (INIS)
Wenner, Michael T.; Haghighat, Alireza
2011-01-01
Recent work on the complete fission matrix based Monte Carlo (FMBMC) eigenvalue methodology showed that the fission matrix coefficients are independent of the source eigenvector in the limit of small mesh sizes. As a result, fission matrix element autocorrelation should be insignificant. We have developed a modified fission matrix based Monte Carlo methodology for achieving unbiased solutions even for high Dominance Ratio (DR) problems. This methodology utilizes an initial source from a deterministic calculation using the PENTRAN 3-D Parallel SN code, autocorrelation and normality tests, and a Monte Carlo Iterated Confidence Interval (ICI) formulation for estimation of uncertainties in the fundamental eigenvalue and eigenfunction. This methodology is referred to as Fission Matrix Based Monte Carlo Initial-source Controlled Elements with Uncertainties (FMBMC-ICEU). In this paper, we investigate the impact of different starting sources (PENTRAN initialized with a flat source and a boundary source) on the final results of a test problem with high source correlation. It is shown that although the fission matrix element correlation is significantly reduced, a good initial guess is still important within the framework of the FMBMC-ICEU methodology, since it still utilizes a standard source iteration scheme. (author)
Near-Field Source Localization Using a Special Cumulant Matrix
Cui, Han; Wei, Gang
A new near-field source localization algorithm based on a uniform linear array was proposed. The proposed algorithm estimates each parameter separately but does not need pairing parameters. It can be divided into two important steps. The first step is bearing-related electric angle estimation based on the ESPRIT algorithm by constructing a special cumulant matrix. The second step is the other electric angle estimation based on the 1-D MUSIC spectrum. It offers much lower computational complexity than the traditional near-field 2-D MUSIC algorithm and has better performance than the high-order ESPRIT algorithm. Simulation results demonstrate that the performance of the proposed algorithm is close to the Cramer-Rao Bound (CRB).
CALCULATING ENERGY STORAGE DUE TO TOPOLOGICAL CHANGES IN EMERGING ACTIVE REGION NOAA AR 11112
International Nuclear Information System (INIS)
Tarr, Lucas; Longcope, Dana
2012-01-01
The minimum current corona model provides a way to estimate stored coronal energy using the number of field lines connecting regions of positive and negative photospheric flux. This information is quantified by the net flux connecting pairs of opposing regions in a connectivity matrix. Changes in the coronal magnetic field, due to processes such as magnetic reconnection, manifest themselves as changes in the connectivity matrix. However, the connectivity matrix will also change when flux sources emerge or submerge through the photosphere, as often happens in active regions. We have developed an algorithm to estimate the changes in flux due to emergence and submergence of magnetic flux sources. These estimated changes must be accounted for in order to quantify storage and release of magnetic energy in the corona. To perform this calculation over extended periods of time, we must additionally have a consistently labeled connectivity matrix over the entire observational time span. We have therefore developed an automated tracking algorithm to generate a consistent connectivity matrix as the photospheric source regions evolve over time. We have applied this method to NOAA Active Region 11112, which underwent a GOES M2.9 class flare around 19:00 on 2010 October 16, and calculated a lower bound on the free magnetic energy buildup of ∼8.25 × 10³⁰ erg over 3 days.
International Nuclear Information System (INIS)
Thieke, Christian; Nill, Simeon; Oelfke, Uwe; Bortfeld, Thomas
2002-01-01
In inverse planning for intensity-modulated radiotherapy, the dose calculation is a crucial element limiting both the maximum achievable plan quality and the speed of the optimization process. One way to integrate accurate dose calculation algorithms into inverse planning is to precalculate the dose contribution of each beam element to each voxel for unit fluence. These precalculated values are stored in a big dose calculation matrix. The dose calculation during the iterative optimization process then consists merely of matrix look-up and multiplication with the actual fluence values. However, because the dose calculation matrix can become very large, this ansatz requires a lot of computer memory and is still very time consuming, making it impractical for clinical routine without further modifications. In this work we present a new method to significantly reduce the number of entries in the dose calculation matrix. The method utilizes the fact that a photon pencil beam has a rapid radial dose falloff, and has very small dose values for the most part. In this low-dose part of the pencil beam, the dose contribution to a voxel is only integrated into the dose calculation matrix with a certain probability. Normalization with the reciprocal of this probability preserves the total energy, even though many matrix elements are omitted. Three probability distributions were tested to find the most accurate one for a given memory size. The sampling method is compared with the use of a fully filled matrix and with the well-known method of just cutting off the pencil beam at a certain lateral distance. A clinical example of a head and neck case is presented. It turns out that a sampled dose calculation matrix with only 1/3 of the entries of the fully filled matrix does not sacrifice the quality of the resulting plans, whereas the cutoff method results in a suboptimal treatment plan.
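The sampling idea can be illustrated in a few lines: entries below a dose threshold are kept with probability p and survivors are re-weighted by 1/p, which preserves the expected dose. The threshold, the value of p, and the synthetic kernel values below are illustrative, not the paper's distributions:

```python
import numpy as np

def sample_matrix(D, threshold, p, seed=0):
    """Thin a dose calculation matrix: keep low-dose entries with probability p,
    re-weighting survivors by 1/p so the expected dose is unchanged."""
    rng = np.random.default_rng(seed)
    D = D.copy()
    low = D < threshold
    keep = rng.random(D.shape) < p
    D[low & ~keep] = 0.0
    D[low & keep] /= p
    return D

rng = np.random.default_rng(42)
D = rng.random((2000, 50)) * 0.01      # synthetic pencil-beam dose kernels
Ds = sample_matrix(D, threshold=0.005, p=1.0 / 3.0)

fluence = np.ones(50)                  # unit fluence on every beam element
dose_full = D @ fluence
dose_sampled = Ds @ fluence            # agrees with dose_full in expectation
```

Stored sparsely, the thinned matrix needs roughly the sampled fraction of the low-dose entries' memory, while the unbiased re-weighting is what distinguishes this from a plain lateral cutoff.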
Wave resistance calculation method combining Green functions based on Rankine and Kelvin source
Directory of Open Access Journals (Sweden)
LI Jingyu
2017-12-01
Full Text Available [Objectives] At present, the Boundary Element Method (BEM) for wave-making resistance mostly uses a model in which the velocity distribution near the hull is solved first, and the pressure integral is then calculated using the Bernoulli equation. However, this wave-making resistance model is complex and has low accuracy. [Methods] To address this problem, the present paper derives a compound method for the quick calculation of ship wave resistance, using the Rankine source Green function to solve the hull surface's source density and combining it with the Lagally theorem for source point force calculation based on the Kelvin source Green function so as to obtain the wave resistance. A case for the Wigley model is given. [Results] The results show that, in contrast to the thin-ship method of linear wave resistance theory, this method has higher precision, and in contrast to the method which exclusively uses the Kelvin source Green function, it has better computational efficiency. [Conclusions] In general, the algorithm in this paper provides a compromise between precision and efficiency in wave-making resistance calculation.
Pavlov, V. M.
2013-01-01
A new algorithm is proposed for calculating the complete synthetic seismograms from a point source in the form of the sum of a single force and a dipole with an arbitrary seismic moment tensor in a plane layered medium composed of homogenous elastic isotropic layers. Following the idea of (Alekseev and Mikhailenko, 1978), an artificial cylindrical boundary is introduced, on which the boundary conditions are specified. For this modified problem, the exact solution (in terms of the displacements and stresses on the horizontal plane areal element) in the frequency domain is derived and substantiated. The unknown depth-dependent coefficients form the motion-stress vector, whose components satisfy the known system of ordinary differential equations. This system is solved by the method that involves the matrix impedance and propagator for the vector of motion, as previously suggested by the author in (Pavlov, 2009). In relation to the initial problem, the reflections from the artificial boundary are noise, which, to a certain degree, can be suppressed by selecting a long enough distance to this boundary and owing to the presence of a purely imaginary addition to the frequency. The algorithm is not constrained by the thickness of the layers, is applicable for any frequency range, and is suitable for computing the static offset.
International Nuclear Information System (INIS)
Kitsos, S.; Diop, C.M.; Assad, A.; Nimal, J.C.; Ridoux, P.
1996-01-01
Improvements of gamma-ray transport calculations in Sn codes aim at taking into account the bound-electron effect in Compton (incoherent) scattering, coherent (Rayleigh) scattering, and the secondary sources of bremsstrahlung and fluorescence. A computation scheme was developed to take these phenomena into account by modifying the angular and energy transfer matrices, with no modification made to the transport code itself. The incoherent and coherent scatterings as well as the fluorescence sources can be treated rigorously by the transfer matrix change. For bremsstrahlung sources, this is possible if one can neglect the paths of the charged particles (electrons and positrons) as they pass through matter, which is applicable for the energy range of interest here (below 10 MeV). These improvements have been carried over to the kernel attenuation codes through the calculation of new buildup factors. The gamma-ray buildup factors have been calculated for 25 natural elements up to 30 mean free paths in the energy range between 15 keV and 10 MeV
Jaiprakash; Singhai, Amrita; Habib, Gazala; Raman, Ramya Sunder; Gupta, Tarun
2017-01-01
Fine aerosol (particulate matter with aerodynamic diameter ≤ 1.0 μm, PM1.0) over the Indian Institute of Technology Delhi campus was monitored day and night (10 h each) at 30 m height from November 2009 to March 2010. The samples were analyzed for 5 ions (NH4+, NO3−, SO42−, F−, and Cl−) and 12 trace elements (Na, K, Mg, Ca, Pb, Zn, Fe, Mn, Cu, Cd, Cr, and Ni). Importantly, secondary aerosol (sulfate and nitrate) formation was observed during dense foggy events, supporting the fog-smog-fog cycle. A total of 76 samples were used for source apportionment of PM1.0 mass. Six factors were resolved by PMF analyses and were identified as secondary aerosol, secondary chloride, biomass burning, soil dust, an iron-rich source, and vehicular emission. The geographical location of the sources and/or preferred transport pathways was identified by conditional probability function (for local sources) and potential source contribution function (for regional sources) analyses. Medium- and small-scale metal processing (e.g. steel sheet rolling) industries in Haryana and the National Capital Region (NCR) of Delhi, coke and petroleum refining in Punjab, and thermal power plants in Pakistan, Punjab, and NCR Delhi were likely contributors to secondary sulfate, nitrate, and secondary chloride at the receptor site. Agricultural residue burning after the harvesting seasons (Sept-Dec and Feb-Apr) in Punjab and Haryana contributed to potassium at the receptor site during November-December and March 2010. Soil dust from North and East Pakistan, Rajasthan, North-East Punjab, and Haryana, along with local dust, contributed to the soil dust factor at the receptor site during February and March 2010. A combination of temporal behavior and air parcel trajectory ensemble analyses indicated that the iron-rich source was most likely a local source attributed to emissions from metal processing facilities. Further, as expected, the vehicular emissions source did not show any seasonality and
O'Reilly, Kirk T; Pietari, Jaana; Boehm, Paul D
2014-04-01
A realistic understanding of contaminant sources is required to set appropriate control policy. Forensic chemical methods can be powerful tools in source characterization and identification, but they require a multiple-lines-of-evidence approach. Atmospheric receptor models, such as the US Environmental Protection Agency (USEPA)'s chemical mass balance (CMB), are increasingly being used to evaluate sources of pyrogenic polycyclic aromatic hydrocarbons (PAHs) in sediments. This paper describes the assumptions underlying receptor models and discusses challenges in complying with these assumptions in practice. Given the variability within, and the similarity among, pyrogenic PAH source types, model outputs are sensitive to specific inputs, and parsing among some source types may not be possible. Although still useful for identifying potential sources, the technical specialist applying these methods must describe both the results and their inherent uncertainties in a way that is understandable to nontechnical policy makers. The authors present an example case study concerning an investigation of a class of parking-lot sealers as a significant source of PAHs in urban sediment. Principal component analysis is used to evaluate published CMB model inputs and outputs. Targeted analyses of 2 areas where bans have been implemented are included. The results do not support the claim that parking-lot sealers are a significant source of PAHs in urban sediments. © 2013 SETAC.
Leang, Sarom S; Rendell, Alistair P; Gordon, Mark S
2014-03-11
Increasingly, modern computer systems comprise a multicore general-purpose processor augmented with a number of special-purpose devices or accelerators connected via an external interface such as a PCI bus. The NVIDIA Kepler Graphical Processing Unit (GPU) and the Intel Phi are two examples of such accelerators. Accelerators offer peak performances that can be well above those of the host processor. How to exploit this heterogeneous environment for legacy application codes is not, however, straightforward. This paper considers how matrix operations in typical quantum chemical calculations can be migrated to GPU and Phi systems. Double-precision general matrix multiply operations are endemic in electronic structure calculations, especially in methods that include electron correlation, such as density functional theory, second-order perturbation theory, and coupled cluster theory. Approaches that automatically determine whether to use the host or an accelerator, based on problem size, are explored, with computations occurring on the accelerator and/or the host. For data transfers over PCI-e, the GPU provides the best overall performance for data sizes up to 4096 MB, with consistent upload and download rates of 5-5.6 GB/s and 5.4-6.3 GB/s, respectively. The GPU outperforms the Phi for both square and nonsquare matrix multiplications.
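The size-based host/accelerator decision described above can be sketched with a toy cost model (my own illustration, not the paper's code; the throughput numbers are assumptions, loosely anchored to the PCI-e transfer rates quoted in the abstract):

```python
# Sketch: size-based dispatch of a double-precision GEMM between host BLAS
# and an accelerator. Offload pays off only when the FLOP count amortizes
# the PCI-e transfer cost. All rates below are illustrative assumptions.

def choose_device(m, n, k,
                  flops_per_s_host=5e10,    # assumed host DGEMM rate
                  flops_per_s_accel=5e11,   # assumed accelerator DGEMM rate
                  pcie_bytes_per_s=5.5e9):  # ~5.5 GB/s, as in the abstract
    """Return 'host' or 'accelerator' for an (m x k)(k x n) DGEMM."""
    flops = 2.0 * m * n * k
    # upload A and B, read and download C (8 bytes per double)
    bytes_moved = 8.0 * (m * k + k * n + 2 * m * n)
    t_host = flops / flops_per_s_host
    t_accel = flops / flops_per_s_accel + bytes_moved / pcie_bytes_per_s
    return 'accelerator' if t_accel < t_host else 'host'

# Tiny products stay on the host; large ones amortize the transfer.
print(choose_device(64, 64, 64))        # -> host
print(choose_device(4096, 4096, 4096))  # -> accelerator
```

The crossover point depends entirely on the assumed rates; a production dispatcher would calibrate them by benchmarking, as the paper's automatic scheme implies.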
Rojas, Armando; Añazco, Carolina; González, Ileana; Araya, Paulina
2018-04-05
A growing body of epidemiologic evidence suggests that people with diabetes are at a significantly higher risk of many forms of cancer. However, the molecular mechanisms underlying this association are not fully understood. Cancer cells are surrounded by a complex milieu, also known as the tumor microenvironment, which contributes to the development and metastasis of tumors. Of note, one of the major components of this niche is the extracellular matrix (ECM), which becomes highly disorganized during neoplastic progression, thereby stimulating cancer cell transformation, growth and spread. One of the consequences of chronic hyperglycemia, the most frequently observed sign of diabetes and the etiological source of diabetes complications, is the irreversible glycation and oxidation of proteins and lipids, leading to the formation of advanced glycation end-products (AGEs). These compounds may covalently crosslink and biochemically modify the structure and functions of many proteins, and AGE accumulation is particularly high in long-lived proteins with low biological turnover, features that are shared by most, if not all, ECM proteins. AGE-modified proteins are recognized by AGE-binding proteins, and thus glycated ECM components have the potential to trigger mechanisms dependent on the receptor for advanced glycation end-products (RAGE). The biological consequences of RAGE activation seem to be connected, in different ways, to driving some hallmarks of cancer onset and tumor growth. The present review highlights the potential impact of ECM glycation on tumor progression through the triggering of RAGE-mediated mechanisms.
FreeSASA: An open source C library for solvent accessible surface area calculations.
Mitternacht, Simon
2016-01-01
Calculating solvent accessible surface areas (SASA) is a run-of-the-mill calculation in structural biology. Although there are many programs available for this calculation, there are no free-standing, open-source tools designed for easy tool-chain integration. FreeSASA is an open source C library for SASA calculations that provides both command-line and Python interfaces in addition to its C API. The library implements both Lee and Richards' and Shrake and Rupley's approximations, and is highly configurable to allow the user to control molecular parameters, accuracy and output granularity. It only depends on standard C libraries and should therefore be easy to compile and install on any platform. The library is well-documented, stable and efficient. The command-line interface can easily replace closed source legacy programs, with comparable or better accuracy and speed, and with some added functionality.
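As a rough illustration of the kind of calculation FreeSASA performs, here is a minimal pure-Python sketch of the Shrake and Rupley test-point approximation (an independent toy implementation, not FreeSASA's API or code; atoms are hypothetical (x, y, z, radius) tuples in Å):

```python
import math

def shrake_rupley_sasa(atoms, probe=1.4, n_points=200):
    """Approximate SASA (Å^2) of spheres via Shrake-Rupley test points.
    atoms: list of (x, y, z, radius) tuples; probe: solvent radius in Å."""
    # Golden-spiral points, roughly uniform on the unit sphere
    pts = []
    golden = math.pi * (3.0 - math.sqrt(5.0))
    for i in range(n_points):
        z = 1.0 - 2.0 * (i + 0.5) / n_points
        r = math.sqrt(max(0.0, 1.0 - z * z))
        th = golden * i
        pts.append((r * math.cos(th), r * math.sin(th), z))

    total = 0.0
    for i, (xi, yi, zi, ri) in enumerate(atoms):
        Ri = ri + probe                      # solvent-expanded radius
        accessible = 0
        for (px, py, pz) in pts:
            tx, ty, tz = xi + Ri * px, yi + Ri * py, zi + Ri * pz
            buried = False
            for j, (xj, yj, zj, rj) in enumerate(atoms):
                if j == i:
                    continue
                Rj = rj + probe
                if (tx - xj) ** 2 + (ty - yj) ** 2 + (tz - zj) ** 2 < Rj * Rj:
                    buried = True
                    break
            if not buried:
                accessible += 1
        total += 4.0 * math.pi * Ri * Ri * accessible / n_points
    return total

# A lone atom's SASA is the full expanded sphere, 4*pi*(r + probe)^2
lone = shrake_rupley_sasa([(0.0, 0.0, 0.0, 1.7)])
print(round(lone, 1))  # -> 120.8 for r = 1.7, probe = 1.4
```

FreeSASA's production code adds spatial hashing, per-atom parameter tables, and the Lee and Richards slicing alternative; this sketch only shows the geometric core.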
Vysotskaya, Anna; Kolvakh, Oleg; Stoner, Greg
2016-01-01
The aim of this paper is to describe the innovative teaching approach used in the Southern Federal University, Russia, to teach accounting via a form of matrix mathematics. It thereby contributes to disseminating the technique of teaching to solve accounting cases using mutual calculations to a worldwide audience. The approach taken in this course…
Diffusion theory model for optimization calculations of cold neutron sources
International Nuclear Information System (INIS)
Azmy, Y.Y.
1987-01-01
Cold neutron sources are becoming increasingly important and common experimental facilities at many research reactors around the world, owing to the high utility of cold neutrons in scattering experiments. The authors describe a simple two-group diffusion model of an infinite-slab LD2 (liquid deuterium) cold source. The simplicity of the model makes it possible to obtain an analytical solution, from which one can deduce the reason for the optimum thickness based solely on diffusion-type phenomena. A second, more sophisticated model is also described and its results compared to a deterministic transport calculation. The good (particularly qualitative) agreement between the results suggests that diffusion theory methods can be used in parametric and optimization studies to avoid the generally more expensive transport calculations
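The flavor of such an analytical diffusion solution can be shown with a one-group toy problem (my own sketch, not the two-group model of the abstract): a uniform source in a slab with zero-flux boundaries at x = ±a has the closed-form flux φ(x) = (S0/Σa)(1 − cosh(x/L)/cosh(a/L)), with diffusion length L = sqrt(D/Σa).

```python
import math

def slab_flux(x, a, D, sig_a, s0):
    """Flux phi(x) in a slab -a <= x <= a with uniform source s0,
    diffusion coefficient D, absorption sig_a, zero-flux boundaries.
    Satisfies -D*phi'' + sig_a*phi = s0 with phi(+-a) = 0."""
    L = math.sqrt(D / sig_a)  # diffusion length
    return (s0 / sig_a) * (1.0 - math.cosh(x / L) / math.cosh(a / L))

# Flux peaks at the slab centre and vanishes at the boundaries
for x in (0.0, 5.0, 10.0):
    print(x, round(slab_flux(x, a=10.0, D=1.0, sig_a=0.01, s0=1.0), 3))
```

Differentiating the expression twice confirms that −Dφ'' + Σaφ = S0 term by term, which is the kind of check an analytical optimization study relies on.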
International Nuclear Information System (INIS)
Rivard, Mark J.; Davis, Stephen D.; DeWerd, Larry A.; Rusch, Thomas W.; Axelrod, Steve
2006-01-01
A new x-ray source, the model S700 Axxent™ X-Ray Source (Source), has been developed by Xoft Inc. for electronic brachytherapy. Unlike brachytherapy sources containing radionuclides, this Source may be turned on and off at will and may be operated at variable currents and voltages to change the dose rate and penetration properties. The in-water dosimetry parameters for this electronic brachytherapy source have been determined from measurements and calculations at 40, 45, and 50 kV settings. Monte Carlo simulations of radiation transport utilized the MCNP5 code and the EPDL97-based mcplib04 cross-section library. Inter-tube consistency was assessed for 20 different Sources, measured with a PTW 34013 ionization chamber. As the Source is intended to be used for a maximum of ten treatment fractions, tube stability was also assessed. Photon spectra were measured using a high-purity germanium (HPGe) detector, and calculated using MCNP. Parameters used in the two-dimensional (2D) brachytherapy dosimetry formalism were determined. While the Source was characterized as a point due to the small anode size, P(5) values were 0.20, 0.24, and 0.29 for the 40, 45, and 50 kV voltage settings, respectively. The dosimetric behavior resembles that of 125I and 103Pd, yet with capability for variable and much higher dose rates and subsequently adjustable penetration capabilities. This paper presents the calculated and measured in-water brachytherapy dosimetry parameters for the model S700 Source at the aforementioned three operating voltages
Wang, Yufang; Wu, Yanzhao; Feng, Min; Wang, Hui; Jin, Qinghua; Ding, Datong; Cao, Xuewei
2008-12-01
Using a simple method, the reduced-matrix method, we simplified the calculation of the phonon vibrational frequencies according to the SWNT structure and phonon symmetry properties, and obtained the dispersion properties at the Γ point of the Brillouin zone for all SWNTs with diameters between 0.6 and 2.5 nm. The computation time is reduced by about two to four orders of magnitude. A series of relationships between the diameters of SWNTs and the frequencies of the Raman- and IR-active modes are given. Several fine structures, including "glazed tile" structures in the ω-d (frequency versus diameter) plots, are found, which might predict a certain macroscopic quantum phenomenon of the phonons in SWNTs.
Extracellular matrix component signaling in cancer
DEFF Research Database (Denmark)
Multhaupt, Hinke A. B.; Leitinger, Birgit; Gullberg, Donald
2016-01-01
Cell responses to the extracellular matrix depend on specific signaling events. These are important from early development, through differentiation and tissue homeostasis, immune surveillance, and disease pathogenesis. Signaling not only regulates cell adhesion, cytoskeletal organization and motility … as well as matrix constitution and protein crosslinking. Here we summarize the roles of the three major matrix receptor types, with emphasis on how they function in tumor progression.
Bhanuprasad, S. G.; Venkataraman, Chandra; Bhushan, Mani
The sources of aerosols on a regional scale over India have only recently received attention in studies using back trajectory analysis and chemical transport modelling. Receptor modelling approaches such as positive matrix factorization (PMF) and the potential source contribution function (PSCF) are effective tools in source identification of urban and regional-scale pollution. In this work, PMF and PSCF analysis is applied to identify categories and locations of sources that influenced surface concentrations of aerosols in the Indian Ocean Experiment (INDOEX) domain measured on board the research vessel Ron Brown [Quinn, P.K., Coffman, D.J., Bates, T.S., Miller, T.L., Johnson, J.E., Welton, E.J., et al., 2002. Aerosol optical properties during INDOEX 1999: means, variability, and controlling factors. Journal of Geophysical Research 107, 8020, doi:10.1029/2000JD000037]. Emissions inventory information is used to identify sources co-located with probable source regions from PSCF. PMF analysis identified six factors influencing PM concentrations during the INDOEX cruise of the Ron Brown: a biomass combustion factor (35-40%), three industrial emissions factors (35-40%) comprising primarily secondary sulphate-nitrate, balance trace elements, and Zn, and two dust factors (20-30%) of Si- and Ca-dust. The identified factors effectively predict the measured submicron PM concentrations (slope of regression line = 0.90 ± 0.20; R² = 0.76). Probable source regions shifted based on changes in surface and elevated flows during different times in the ship cruise: they were in India in the early part of the cruise, but in west Asia, south-east Asia and Africa during later parts. Co-located sources include coal-fired electric utilities, cement, metals and petroleum production in India and west Asia, biofuel combustion for energy and crop residue burning in India, woodland/forest burning in north sub-Saharan Africa, and forest burning in south-east Asia. Significant findings
Energy Technology Data Exchange (ETDEWEB)
Strindehag, O; Tollander, B
1968-08-15
Calculated values of the absolute total detection efficiencies of cylindrical scintillation crystals viewing spherical sources of various sizes are presented. The calculation is carried out for 2 x 2 inch and 3 x 3 inch NaI(Tl) crystals and for sources whose radii are 1/4, 1/2, 3/4 and 1 times the crystal radius. Source-detector distances of 5-20 cm and gamma energies in the range 0.1-5 MeV are considered. The correction factor for absorption in the sample container wall and in the detector housing is derived and calculated for a practical case.
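A simplified version of such an efficiency calculation, reduced to pure geometry (solid-angle interception only, ignoring crystal absorption and housing attenuation, so this is not the authors' method), can be sketched by Monte Carlo:

```python
import math
import random

def geometric_efficiency(src_radius, det_radius, distance, n=100_000, seed=1):
    """MC estimate of the fraction of isotropic emissions from a spherical
    source that intersect the front face of a coaxial cylindrical detector.
    distance: source centre to detector face, assumed > src_radius."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        # uniform emission point in the source sphere (rejection sampling)
        while True:
            x, y, z = (rng.uniform(-1.0, 1.0) for _ in range(3))
            if x * x + y * y + z * z <= 1.0:
                break
        x, y, z = src_radius * x, src_radius * y, src_radius * z
        # isotropic emission direction
        cos_t = rng.uniform(-1.0, 1.0)
        sin_t = math.sqrt(1.0 - cos_t * cos_t)
        phi = rng.uniform(0.0, 2.0 * math.pi)
        dx, dy, dz = sin_t * math.cos(phi), sin_t * math.sin(phi), cos_t
        if dz <= 0.0:
            continue  # heading away from the detector plane
        # propagate to the plane of the detector face
        t = (distance - z) / dz
        px, py = x + t * dx, y + t * dy
        if px * px + py * py <= det_radius * det_radius:
            hits += 1
    return hits / n

# Example: small spherical source 10 cm from a 2-inch-radius face
print(round(geometric_efficiency(1.27, 2.54, 10.0), 3))
```

A full calculation like the one in the abstract would additionally weight each intercepted ray by the interaction probability along its chord through the crystal and by attenuation in the container wall.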
Flux and brightness calculations for various synchrotron radiation sources
International Nuclear Information System (INIS)
Weber, J.M.; Hulbert, S.L.
1991-11-01
Synchrotron radiation (SR) storage rings are powerful scientific and technological tools. The first generation of storage rings in the U.S., e.g., SURF (Washington, D.C.), Tantalus (Wisconsin), SSRL (Stanford), and CHESS (Cornell), revolutionized VUV, soft X-ray, and hard X-ray science. The second (present) generation of storage rings, e.g. the NSLS VUV and XRAY rings and Aladdin (Wisconsin), have sustained the revolution by providing higher stored currents and up to a factor of ten smaller electron beam sizes than the first-generation sources. This has made possible a large number of experiments that could not be performed using first-generation sources. In addition, the NSLS XRAY ring design optimizes the performance of wigglers (high-field periodic magnetic insertion devices). The third-generation storage rings, e.g. ALS (Berkeley) and APS (Argonne), are being designed to optimize the performance of undulators (low-field periodic magnetic insertion devices). These extremely high brightness sources will further revolutionize x-ray science by providing diffraction-limited x-ray beams. The output of undulators and wigglers is distinct from that of bending magnets in magnitude, spectral shape, and spatial and angular size. Using published equations, we have developed computer programs to calculate the flux, central intensity, and brightness output of bending magnets and selected wigglers and undulators of the NSLS VUV and XRAY rings, the Advanced Light Source (ALS), and the Advanced Photon Source (APS). Following is a summary of the equations used, the graphs and data produced, and the computer codes written. These codes, written in the C programming language, can be used to calculate the flux, central intensity, and brightness curves for bending magnets and insertion devices on any storage ring.
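As a minimal example of the closed-form SR expressions such codes evaluate, the critical photon energy of bending-magnet radiation is ε_c[keV] ≈ 0.665 E²[GeV] B[T] (a standard formula; the machine parameters below are illustrative, not actual NSLS values):

```python
def critical_energy_keV(E_GeV, B_T):
    """Critical photon energy of bending-magnet synchrotron radiation:
    eps_c[keV] ~= 0.665 * E[GeV]^2 * B[T]. Half the radiated power lies
    above eps_c, half below."""
    return 0.665 * E_GeV ** 2 * B_T

# Illustrative ring: 2.5 GeV electrons in a 1.2 T bending field
print(round(critical_energy_keV(2.5, 1.2), 2))  # -> 4.99 keV
```

The flux and brightness curves themselves involve the modified Bessel functions K_{5/3} and K_{2/3} of the ratio ε/ε_c, which is why such calculations are packaged as computer codes rather than quoted in closed form.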
Calculation of neutron flux in the presence of a source
International Nuclear Information System (INIS)
Planchard, J.
1993-09-01
Neutron sources are introduced into reactors to initiate the chain reaction. For safety reasons, the distribution and evolution of the flux throughout the startup phase must be known. The flux is calculated iteratively, but convergence of the process can become arbitrarily slow as criticality is approached. A calculation method is presented whose convergence speed does not depend on the negative reactivity when it is small. (author). 7 refs
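The slowdown near criticality can be illustrated with a zero-dimensional toy iteration (my own sketch, not the author's method): iterating φ ← kφ + s converges to s/(1 − k), but the number of iterations grows like 1/(1 − k) as the multiplication factor k approaches 1.

```python
def source_iteration(k, s=1.0, tol=1e-8, max_iter=10_000_000):
    """Iterate phi <- k*phi + s until the increment falls below tol.
    Converges to s/(1-k) for k < 1; the closer k is to criticality
    (k -> 1), the slower the convergence. Returns (flux, iterations)."""
    phi, n = 0.0, 0
    while True:
        new = k * phi + s
        n += 1
        if abs(new - phi) < tol or n >= max_iter:
            return new, n
        phi = new

phi_a, n_a = source_iteration(0.5)   # far from criticality
phi_b, n_b = source_iteration(0.99)  # near criticality
print(n_a, n_b)  # iteration count blows up as k -> 1
```

The method of the abstract is designed precisely so that its convergence rate does not degrade in this near-critical regime.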
Source term calculations - Ringhals 2 PWR. Interim report
Energy Technology Data Exchange (ETDEWEB)
Johansson, Lise-Lotte
1998-03-01
This project was performed within the fifth and final phase of sub-project RAK-2.1 of the Nordic Co-operative Reactor Safety Program, NKS. RAK-2.1 has also included studies of reflooding of a degraded core, recriticality, and late-phase melt progression. Earlier source term calculations for Swedish nuclear power plants are based on the integral code MAAP. A need was recognised to compare these calculations with calculations performed with mechanistic codes. In the present work SCDAP/RELAP5 and CONTAIN were used. Only limited results could be obtained within the frame of RAK-2.1, since many problems were encountered using the SCDAP/RELAP5 code. The main obstacle was the extremely long execution times of the MOD3.1 version, but also some dubious fission product calculations. However, some interesting results were obtained for the studied sequence, a total loss of AC power. The report describes the modelling approach for SCDAP/RELAP5 and CONTAIN, and discusses results for the transient, including the event of a surge-line creep rupture. The study will probably be completed later, provided that an improved SCDAP/RELAP5 code version becomes available. 8 refs, 16 figs, 5 tabs
A T-matrix calculation for in-medium heavy-quark gluon scattering
International Nuclear Information System (INIS)
Huggins, K.; Rapp, R.
2012-01-01
The interactions of charm and bottom quarks in a quark-gluon plasma (QGP) are evaluated using a thermodynamic two-body T-matrix. We specifically focus on heavy-quark (HQ) interactions with thermal gluons, with an input potential motivated by lattice-QCD computations of the HQ free energy. The latter is implemented into a field-theoretic ansatz for color-Coulomb and (remnants of) confining interactions. This, in particular, makes it possible to discuss corrections to the potential approach, specifically hard-thermal-loop corrections to the vertices, relativistic corrections deduced from pertinent Feynman diagrams, and a suitable projection onto transverse thermal gluons. The resulting potentials are applied to compute scattering amplitudes in different color channels and utilized to calculate the corresponding HQ drag coefficient in the QGP. A factor of ~2-3 enhancement over perturbative results is obtained, mainly driven by the resummation in the attractive color channels.
van Borm, Werner August
Electron probe X-ray microanalysis (EPXMA) in combination with an automation system and an energy-dispersive X-ray detection system was used to analyse thousands of microscopic particles originating from the ambient atmosphere. The huge amount of data was processed by a newly developed X-ray correction method and a number of data reduction procedures. A standardless ZAF procedure for EPXMA was developed for quick semi-quantitative analysis of particles, starting from simple corrections valid for bulk samples and modified to take into account the finite particle diameter, assuming a spherical shape. Tested on a limited database of bulk and particulate samples, the compromise between calculation speed and accuracy yielded, for elements with Z > 14, concentration accuracies better than 10%, while absolute deviations remained below 4 weight%, thus being important only for low concentrations. Next, the possibilities of supervised and unsupervised multivariate particle classification were investigated for source apportionment of individual particles. In a detailed study of the unsupervised cluster analysis technique, several aspects were considered that have a severe influence on the final cluster analysis results, i.e. data acquisition, X-ray peak identification, data normalization, scaling, variable selection, similarity measure, cluster strategy, cluster significance and error propagation. A supervised approach was developed using an expert system-like approach in which identification rules are built to describe the particle classes in a unique manner. Applications are presented for particles sampled (1) near a zinc smelter (Vieille-Montagne, Balen, Belgium), analyzed for heavy metals, (2) in an urban aerosol (Antwerp, Belgium), analyzed for over 20 elements, and (3) in a rural aerosol originating from a Swiss mountain area (Bern). Thus it was possible to pinpoint a number of known and unknown sources and characterize their emissions in terms of particles
DEFF Research Database (Denmark)
Sugiyama, Nami; Varjosalo, Markku; Meller, Pipsa
2010-01-01
Aberrant expression and polymorphism of fibroblast growth factor receptor 4 (FGFR4) has been linked to tumor progression and anticancer drug resistance. We describe here a novel mechanism of tumor progression by matrix degradation involving epithelial-to-mesenchymal transition in response to membrane-type 1 matrix metalloproteinase (MT1-MMP, MMP-14) induction at the edge of tumors expressing the FGFR4-R388 risk variant. Both FGFR4 and MT1-MMP were upregulated in tissue biopsies from several human cancer types including breast adenocarcinomas, where they were partially coexpressed at the tumor/stroma border and tumor invasion front. The strongest overall coexpression was found in prostate carcinoma. Studies with cultured prostate carcinoma cell lines showed that the FGFR4-R388 variant, which has previously been associated with poor cancer prognosis, increased MT1-MMP-dependent collagen invasion.
Directory of Open Access Journals (Sweden)
Christine F Skibola
Full Text Available BACKGROUND: Non-Hodgkin lymphoma (NHL) is the fifth most common cancer in the U.S., and few causes have been identified. Genetic association studies may help identify environmental risk factors and enhance our understanding of disease mechanisms. METHODOLOGY/PRINCIPAL FINDINGS: 768 coding and haplotype-tagging SNPs in 146 genes were examined using Illumina GoldenGate technology in a large population-based case-control study of NHL in the San Francisco Bay Area (1,292 cases and 1,375 controls are included here). Statistical analyses were restricted to HIV-negative participants of white non-Hispanic origin. Genes involved in steroidogenesis, immune function, cell signaling, sunlight exposure, xenobiotic metabolism/oxidative stress, energy balance, and uptake and metabolism of cholesterol, folate and vitamin C were investigated. Sixteen SNPs in eight pathways and nine haplotypes were associated with NHL after correction for multiple testing at the adjusted q < 0.10 level. Eight SNPs were tested in an independent case-control study of lymphoma in Germany (494 NHL cases and 494 matched controls). Novel associations with common variants in estrogen receptor 1 (ESR1) and in the vitamin C receptor and matrix metalloproteinase gene families were observed. Four ESR1 SNPs were associated with follicular lymphoma (FL) in the U.S. study, with rs3020314 remaining associated with reduced risk of FL after multiple testing adjustments [odds ratio (OR) = 0.42, 95% confidence interval (CI) = 0.23-0.77] and replication in the German study (OR = 0.24, 95% CI = 0.06-0.94). Several SNPs and haplotypes in the matrix metalloproteinase-3 (MMP3) and MMP9 genes and in the vitamin C receptor genes, solute carrier family 23 member 1 (SLC23A1) and SLC23A2, showed associations with NHL risk. CONCLUSIONS/SIGNIFICANCE: Our findings suggest a role for estrogen, vitamin C and matrix metalloproteinases in the pathogenesis of NHL that will require further validation.
International Nuclear Information System (INIS)
Perry, R.T.; Wilson, W.B.; Charlton, W.S.
1998-04-01
In many systems, it is imperative to have accurate knowledge of all significant sources of neutrons due to the decay of radionuclides. These sources can include neutrons resulting from the spontaneous fission of actinides, the interaction of actinide decay α-particles in (α,n) reactions with low- or medium-Z nuclides, and/or delayed neutrons from the fission products of actinides. Numerous systems exist in which these neutron sources could be important. These include, but are not limited to, clean and spent nuclear fuel (UO2, ThO2, MOX, etc.), enrichment plant operations (UF6, PuF4, etc.), waste tank studies, waste products in borosilicate glass or glass-ceramic mixtures, and weapons-grade plutonium in storage containers. SOURCES-3A is a computer code that determines neutron production rates and spectra from (α,n) reactions, spontaneous fission, and delayed neutron emission due to the decay of radionuclides in homogeneous media (i.e., a mixture of α-emitting source material and low-Z target material) and in interface problems (i.e., a slab of α-emitting source material in contact with a slab of low-Z target material). The code is also capable of calculating the neutron production rates due to (α,n) reactions induced by a monoenergetic beam of α-particles incident on a slab of target material. Spontaneous fission spectra are calculated with evaluated half-life, spontaneous fission branching, and Watt spectrum parameters for 43 actinides. The (α,n) spectra are calculated using an assumed isotropic angular distribution in the center-of-mass system with a library of 89 nuclide decay α-particle spectra, 24 sets of measured and/or evaluated (α,n) cross sections and product nuclide level branching fractions, and functional α-particle stopping cross sections for Z < 106. The delayed neutron spectra are taken from an evaluated library of 105 precursors. The code outputs the magnitude and spectra of the resultant neutron source. It also provides an
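For illustration, the spontaneous-fission part of such a source uses Watt spectra of the form χ(E) ∝ exp(−E/a) sinh(√(bE)). The following sketch is not SOURCES-3A code; the a and b values are commonly quoted U-235 thermal-fission parameters, used here only as an example:

```python
import math

def watt(E, a=0.988, b=2.249):
    """Unnormalized Watt fission spectrum chi(E) ~ exp(-E/a)*sinh(sqrt(b*E)).
    a and b (MeV units) are illustrative U-235 thermal-fission parameters."""
    return math.exp(-E / a) * math.sinh(math.sqrt(b * E))

# Mean fission-neutron energy via simple numerical quadrature on (0, 20] MeV
dE = 1e-3
Es = [i * dE for i in range(1, 20000)]
norm = sum(watt(E) for E in Es) * dE
mean = sum(E * watt(E) for E in Es) * dE / norm
print(round(mean, 2))  # close to 2 MeV, as expected for fission neutrons
```

The analytic mean of a Watt spectrum is 3a/2 + a²b/4, so the quadrature above can be checked against the closed form; SOURCES-3A evaluates such spectra with nuclide-specific a, b taken from its evaluated parameter library.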
Wei, Jianing; Bouman, Charles A; Allebach, Jan P
2014-05-01
Many imaging applications require the implementation of space-varying convolution for accurate restoration and reconstruction of images. Here, we use the term space-varying convolution to refer to linear operators whose impulse response has slow spatial variation. In addition, these space-varying convolution operators are often dense, so direct implementation of the convolution operator is typically computationally impractical. One such example is the problem of stray light reduction in digital cameras, which requires the implementation of a dense space-varying deconvolution operator. However, other inverse problems, such as iterative tomographic reconstruction, can also depend on the implementation of dense space-varying convolution. While space-invariant convolution can be efficiently implemented with the fast Fourier transform, this approach does not work for space-varying operators. So direct convolution is often the only option for implementing space-varying convolution. In this paper, we develop a general approach to the efficient implementation of space-varying convolution, and demonstrate its use in the application of stray light reduction. Our approach, which we call matrix source coding, is based on lossy source coding of the dense space-varying convolution matrix. Importantly, by coding the transformation matrix, we not only reduce the memory required to store it; we also dramatically reduce the computation required to implement matrix-vector products. Our algorithm is able to reduce computation by approximately factoring the dense space-varying convolution operator into a product of sparse transforms. Experimental results show that our method can dramatically reduce the computation required for stray light reduction while maintaining high accuracy.
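The core idea, trading a small approximation error for a much sparser transformation matrix, can be sketched with naive thresholding (a deliberately simplified stand-in: the paper's matrix source coding uses lossy transform coding of the matrix, not plain entry thresholding):

```python
def sparsify(A, threshold):
    """Lossy-code a dense matrix as (row, col, value) triples,
    dropping entries with magnitude below the threshold."""
    return [(i, j, v) for i, row in enumerate(A)
                      for j, v in enumerate(row) if abs(v) >= threshold]

def sparse_matvec(triples, x, n_rows):
    """Matrix-vector product using only the retained entries."""
    y = [0.0] * n_rows
    for i, j, v in triples:
        y[i] += v * x[j]
    return y

# A smoothly decaying "space-varying blur" matrix: entries fall off with |i-j|
n = 100
A = [[1.0 / (1.0 + abs(i - j)) ** 2 for j in range(n)] for i in range(n)]
x = [1.0] * n

triples = sparsify(A, 0.01)
y_sparse = sparse_matvec(triples, x, n)
y_dense = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]

kept = len(triples) / (n * n)
err = max(abs(a - b) for a, b in zip(y_sparse, y_dense)) / max(y_dense)
print(f"kept {kept:.0%} of entries, max relative error {err:.3f}")
```

Because the retained entries concentrate near the diagonal, both storage and the matvec cost drop roughly in proportion to `kept`; the paper achieves a far better error-for-sparsity trade by first transforming the matrix so that its energy compacts into few coefficients.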
Aboulbanine, Zakaria; El Khayati, Naïma
2018-04-01
The use of phase space in medical linear accelerator Monte Carlo (MC) simulations significantly improves the execution time and leads to results comparable to those obtained from full calculations. The classical representation of phase space stores the information of millions of particles directly, producing bulky files. This paper presents a virtual source model (VSM) based on a reconstruction algorithm, taking as input a compressed file of roughly 800 kB derived from phase space data freely available in the International Atomic Energy Agency (IAEA) database. This VSM includes two main components, primary and scattered particle sources, with a specific reconstruction method developed for each. Energy spectra and other relevant variables were extracted from the IAEA phase space and stored in the input description data file for both sources. The VSM was validated for three photon beams: Elekta Precise 6 MV/10 MV and a Varian TrueBeam 6 MV. Extensive calculations in water and comparisons between dose distributions of the VSM and the IAEA phase space were performed to estimate the VSM precision. The Geant4 MC toolkit in multi-threaded mode (Geant4-[mt]) was used for fast dose calculations and optimized memory use. Four field configurations, three square fields and one asymmetric rectangular field, were chosen for dose calculation validation to test field size and symmetry effects. Good agreement in terms of the gamma formalism, for 3%/3 mm and 2%/3 mm criteria, was obtained for each evaluated radiation field and photon beam, within a computation time of 60 h on a single workstation for a 3 mm voxel matrix. Analyzing the VSM's precision in high-dose-gradient regions using the distance-to-agreement (DTA) concept also showed satisfactory results: in all investigated cases, the mean DTA was less than 1 mm in build-up and penumbra regions. In regards to calculation efficiency, the event processing speed is six times faster using Geant4-[mt] compared to sequential
Energy Technology Data Exchange (ETDEWEB)
Shapiro, A; Lin, B I [Cincinnati Univ., Ohio (USA). Dept. of Chemical and Nuclear Engineering; Windham, J P; Kereiakes, J G
1976-07-01
Gamma flux density and dose rate distributions have been calculated around implantable californium-252 sources for an infinite tissue medium. Point-source flux densities as a function of energy and position were obtained from a discrete-ordinates calculation, and the flux densities were multiplied by their corresponding kerma factors and summed to obtain point-source dose rates. The point dose rates were integrated over the line source to obtain line-source dose rates. Container attenuation was accounted for by evaluating the point dose rate as a function of platinum thickness. Both primary and secondary flux densities and dose rates are presented. The agreement with an independent Monte Carlo calculation was excellent. The data presented should be useful for the design of new source configurations.
q-Virasoro constraints in matrix models
Energy Technology Data Exchange (ETDEWEB)
Nedelin, Anton [Dipartimento di Fisica, Università di Milano-Bicocca and INFN, sezione di Milano-Bicocca, Piazza della Scienza 3, I-20126 Milano (Italy); Department of Physics and Astronomy, Uppsala University, Box 516, SE-75120 Uppsala (Sweden)]; Zabzine, Maxim [Department of Physics and Astronomy, Uppsala University, Box 516, SE-75120 Uppsala (Sweden)]
2017-03-20
The Virasoro constraints play an important role in the study of matrix models and in understanding the relation between matrix models and CFTs. Recently, localization calculations in supersymmetric gauge theories have produced new families of matrix models about which very little is known. We concentrate on the elliptic generalization of the Hermitian matrix model, which corresponds to the calculation of the partition function on S{sup 3}×S{sup 1} for a vector multiplet. We derive the q-Virasoro constraints for this matrix model and observe some interesting algebraic properties of the q-Virasoro algebra.
International Nuclear Information System (INIS)
Martini, Till; Uwer, Peter
2015-01-01
In this article we illustrate how event weights for jet events can be calculated efficiently at next-to-leading order (NLO) accuracy in QCD. This is a crucial prerequisite for the application of the Matrix Element Method at NLO. We modify the recombination procedure used in jet algorithms to allow a factorisation of the phase space for the real corrections into resolved and unresolved regions. Using an appropriate infrared regulator, the latter can be integrated numerically. As illustration, we reproduce differential distributions at NLO for two sample processes. As a further application and proof of concept, we apply the Matrix Element Method at NLO accuracy to the mass determination of top quarks produced in e+e− annihilation. This analysis is relevant for a future Linear Collider. We observe a significant shift in the extracted mass depending on whether the Matrix Element Method is used at leading or next-to-leading order.
Schleede, Justin; Blair, Seth S
2015-10-01
The developing crossveins of the wing of Drosophila melanogaster are specified by long-range BMP signaling and are especially sensitive to loss of extracellular modulators of BMP signaling such as the Chordin homolog Short gastrulation (Sog). However, the role of the extracellular matrix in BMP signaling and Sog activity in the crossveins has been poorly explored. Using a genetic mosaic screen for mutations that disrupt BMP signaling and posterior crossvein development, we identify Gyc76C, a member of the receptor guanylyl cyclase family that includes mammalian natriuretic peptide receptors. We show that Gyc76C and the soluble cGMP-dependent kinase Foraging, likely linked by cGMP, are necessary for normal refinement and maintenance of long-range BMP signaling in the posterior crossvein. This does not occur through cell-autonomous crosstalk between cGMP and BMP signal transduction, but likely through altered extracellular activity of Sog. We identify a novel pathway leading from Gyc76C to the organization of the wing extracellular matrix by matrix metalloproteinases, and show that both the extracellular matrix and BMP signaling effects are largely mediated by changes in the activity of matrix metalloproteinases. We discuss parallels and differences between this pathway and other examples of cGMP activity in both Drosophila melanogaster and mammalian cells and tissues.
An algorithm for mass matrix calculation of internally constrained molecular geometries
International Nuclear Information System (INIS)
Aryanpour, Masoud; Dhanda, Abhishek; Pitsch, Heinz
2008-01-01
Dynamic models for molecular systems require the determination of the corresponding mass matrix. For constrained geometries, these computations are often not trivial and need special consideration. Here, assembling the mass matrix of internally constrained molecular structures is formulated as an optimization problem. Analytical expressions are derived for the solution of the different possible cases depending on the rank of the constraint matrix. Geometrical interpretations are further used to enhance the solution concept. As an application, we evaluate the mass matrix for a constrained molecule undergoing an electron-transfer reaction. The pre-exponential factor for this reaction is computed based on the harmonic model.
An algorithm for mass matrix calculation of internally constrained molecular geometries.
Aryanpour, Masoud; Dhanda, Abhishek; Pitsch, Heinz
2008-01-28
Dynamic models for molecular systems require the determination of the corresponding mass matrix. For constrained geometries, these computations are often not trivial and need special consideration. Here, assembling the mass matrix of internally constrained molecular structures is formulated as an optimization problem. Analytical expressions are derived for the solution of the different possible cases depending on the rank of the constraint matrix. Geometrical interpretations are further used to enhance the solution concept. As an application, we evaluate the mass matrix for a constrained molecule undergoing an electron-transfer reaction. The pre-exponential factor for this reaction is computed based on the harmonic model.
Combined analysis of magnetic and gravity anomalies using normalized source strength (NSS)
Li, L.; Wu, Y.
2017-12-01
The gravity and magnetic fields are both potential fields, which leads to inherent non-uniqueness in their interpretation. Combined analysis of magnetic and gravity anomalies based on Poisson's relation is used to identify homologous gravity and magnetic anomalies and reduce this ambiguity. The traditional combined analysis uses a linear regression of the reduction-to-pole (RTP) magnetic anomaly against the first-order vertical derivative of the gravity anomaly, and provides a quantitative or semi-quantitative interpretation by calculating the correlation coefficient, slope, and intercept. Because of remanent magnetization, however, the RTP anomaly still contains the effect of oblique magnetization, and in this case even homologous gravity and magnetic anomalies appear uncorrelated in the linear regression. The normalized source strength (NSS) can be computed from the magnetic tensor matrix and is insensitive to remanence. Here we present a new combined analysis using the NSS. Based on Poisson's relation, the gravity tensor matrix can be transformed into a pseudomagnetic tensor matrix for magnetization along the geomagnetic field direction under the homologous condition. The NSS of the pseudomagnetic tensor matrix and of the original magnetic tensor matrix are calculated, and a linear regression analysis is carried out. The resulting correlation coefficient, slope, and intercept indicate the homology level, the Poisson's ratio, and the distribution of remanence, respectively. We test the approach on a synthetic model under complex magnetization; the results show that it can still identify a common source under strong remanence and recover the Poisson's ratio. Finally, the approach is applied to field data from China, and the results demonstrate that it is feasible.
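The NSS step can be illustrated for a point dipole. One form used in the literature takes the eigenvalues λ1 ≥ λ2 ≥ λ3 of the magnetic gradient tensor and sets NSS = sqrt(−λ2² − λ1·λ3); the sketch below assumes that form (with a unit constant) and checks the claimed insensitivity to magnetization direction:

```python
import numpy as np

def dipole_gradient_tensor(m, r_vec, C=1.0):
    """Magnetic gradient tensor of a point dipole with moment m,
    observed at r_vec. C is a units constant, set to 1 here."""
    r = np.linalg.norm(r_vec)
    rh = r_vec / r
    mdotr = m @ rh
    return (3.0 * C / r ** 4) * (np.outer(m, rh) + np.outer(rh, m)
                                 + mdotr * np.eye(3)
                                 - 5.0 * mdotr * np.outer(rh, rh))

def nss(T):
    """Normalized source strength from the tensor eigenvalues
    lam1 >= lam2 >= lam3: sqrt(-lam2^2 - lam1*lam3)."""
    lam = np.sort(np.linalg.eigvalsh(T))[::-1]
    return float(np.sqrt(max(0.0, -lam[1] ** 2 - lam[0] * lam[2])))

r_vec = np.array([1.0, 2.0, 3.0])
# "induced" (vertical) vs "remanent" (horizontal) magnetization directions
nss_induced = nss(dipole_gradient_tensor(np.array([0.0, 0.0, 1.0]), r_vec))
nss_remanent = nss(dipole_gradient_tensor(np.array([1.0, 0.0, 0.0]), r_vec))
```

For a point dipole the two values coincide, which is the property the combined analysis exploits.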
Xu, Jiao; Shi, Guo-Liang; Guo, Chang-Sheng; Wang, Hai-Ting; Tian, Ying-Ze; Huangfu, Yan-Qi; Zhang, Yuan; Feng, Yin-Chang; Xu, Jian
2018-01-01
A hybrid model based on the positive matrix factorization (PMF) model and the health risk assessment model for assessing risks associated with sources of perfluoroalkyl substances (PFASs) in water was established and applied at Dianchi Lake to test its applicability. The new method consists of 2 stages: 1) the sources of PFASs were apportioned by the PMF model, and 2) the contribution of health risks from each source was calculated by the new hybrid model. Two factors were extracted by PMF, with factor 1 identified as an aqueous fire-fighting foam source and factor 2 as a fluoropolymer manufacturing and processing and perfluorooctanoic acid production source. The health risk of PFASs in the water assessed by the health risk assessment model was 9.54 × 10⁻⁷ a⁻¹ on average, indicating no obvious adverse effects on human health. The risks of the 2 sources estimated by the new hybrid model ranged from 2.95 × 10⁻¹⁰ to 6.60 × 10⁻⁶ a⁻¹ and from 1.64 × 10⁻⁷ to 1.62 × 10⁻⁶ a⁻¹, respectively. The new hybrid model can provide useful information on the health risks of PFAS sources, which is helpful for pollution control and environmental management. Environ Toxicol Chem 2018;37:107-115. © 2017 SETAC.
International Nuclear Information System (INIS)
Etemad, M.A.
1981-04-01
The one-dimensional discrete ordinates code ANISN-F was used to calculate the thermal neutron flux distribution in water from a Ra-Be neutron source. The calculations were performed in order to investigate the different capabilities of the code and to verify the results by comparison with corresponding experimental data. Two different group cross-section libraries were used in the calculations, and conclusions were drawn on the adequacy of these libraries for a fixed-source calculation. Furthermore, criticality calculations were performed for an infinite homogeneous slab of multiplying material using different angular and spatial approximations. The results of these calculations were then compared to the corresponding results previously obtained at this department by a different method and a different code. (author)
Source-receptor metrology and modeling of trace amounts of atmospheric pollutants
International Nuclear Information System (INIS)
Coddeville, P.
2005-12-01
This work deals with acid pollution and its long-distance transport, using measurements of trace amounts of pollutants in rural environments and the identification of the emission sources at the origin of acid atmospheric fallout. Several French and foreign precipitation collectors were evaluated and field-tested. The measurement efficiency and limitations of four sampling systems for gaseous and particulate sulfur, ammonia and nitrogen compounds were evaluated, and the limits of the methods and the measurement uncertainties were determined. A second aspect concerns the development of receptor-oriented statistical models aimed at improving the identification of emission sources within smaller areas defined by the cells of a geographical grid. These models combine the pollution data of the sites with information about the trajectories of air masses. Results are given as probability or concentration fields revealing the areas potentially at the origin of pollutant emissions. Areas with strong pollutant emissions were detected at the Polish, Czech and German borders and identified as responsible for pollution events observed in the Morvan region. Quantitative source-receptor relations were also established. The different atmospheric transport profiles, with their associated frequencies and concentrations, were evaluated using a dynamical-clouds classification of air-mass back-trajectories. Finally, the first medium-term (14-year) analysis of precipitation data from the measurement stations makes it possible to identify the different meteorological regimes of the French territory by relating them to the chemical composition of rainfall. A west-east increase of rainfall acidity is observed over the French territory. Since the pluviometry of the north-east area is among the highest in France, it generates larger deposits of acidifying compounds. The analysis
Energy Technology Data Exchange (ETDEWEB)
Kays, W M; Hossaini-Hashemi, F [Stanford Univ., Palo Alto, CA (USA). Dept. of Mechanical Engineering; Busch, J S [Kaiser Engineers, Oakland, CA (USA)
1982-02-01
A linearized transient thermal conduction model was developed to economically determine media temperatures in geologic repositories for nuclear wastes. Individual canisters containing either high-level waste or spent fuel assemblies are represented as finite-length line sources in a continuous medium. The combined effect of multiple canisters in a representative storage pattern can be established at selected points of interest in the medium by superposition of the temperature rises calculated for each canister. A mathematical solution of the calculation for each separate source is given in this article, permitting a (slow) hand calculation. The full report, ONWI-94, contains the details of the computer code FLLSSM and its use, yielding the total solution in one computer output.
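The superposition described above can be sketched with the classical continuous point-source conduction solution, ΔT = q/(4πkr) · erfc(r/(2·sqrt(αt))), approximating each finite-length canister by a row of point sources. All property values and the layout are placeholders, not the repository parameters of ONWI-94:

```python
import math

def point_source_dT(q, r, t, k=2.5, alpha=1.0e-6):
    """Temperature rise (K) at radius r (m), time t (s), from a
    continuous point source of strength q (W) in an infinite medium.
    k (W/m/K) and alpha (m^2/s) are placeholder rock properties."""
    return q / (4.0 * math.pi * k * r) * math.erfc(r / (2.0 * math.sqrt(alpha * t)))

def canister_dT(q_total, x, y, z, t, length=3.0, n=50):
    """Approximate a finite-length vertical line source (one canister)
    by n equal point sources along its axis."""
    q = q_total / n
    zs = [length * (i + 0.5) / n - length / 2.0 for i in range(n)]
    return sum(point_source_dT(q, math.sqrt(x * x + y * y + (z - zc) ** 2), t)
               for zc in zs)

# superpose three canisters spaced 10 m apart, observed 5 m from the middle one
positions = [(-10.0, 0.0), (0.0, 0.0), (10.0, 0.0)]
t = 30.0 * 365.25 * 86400.0                 # 30 years
dT = sum(canister_dT(500.0, 5.0 - px, -py, 0.0, t) for px, py in positions)
```

Because the underlying model is linear, the multi-canister field is just the sum of per-canister temperature rises, which is what makes a hand calculation feasible.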
Receptor model-based source apportionment of particulate pollution in Hyderabad, India.
Guttikunda, Sarath K; Kopakka, Ramani V; Dasari, Prasad; Gertler, Alan W
2013-07-01
Air quality in Hyderabad, India, often exceeds the national ambient air quality standards, especially for particulate matter (PM): in 2010, PM10 averaged 82.2 ± 24.6, 96.2 ± 12.1, and 64.3 ± 21.2 μg/m3 at commercial, industrial, and residential monitoring stations, respectively, exceeding the national ambient standard of 60 μg/m3. In 2005, following an ordinance passed by the Supreme Court of India, a source apportionment study was conducted to quantify source contributions to PM pollution in Hyderabad, using the chemical mass balance (version 8.2) receptor model for 180 ambient samples collected at three stations for the PM10 and PM2.5 size fractions over three seasons. The receptor modeling results indicated that PM10 pollution is dominated by direct vehicular exhaust and road dust (more than 60%). PM2.5, which penetrates deeper into the human respiratory tract, has mixed sources of vehicle exhaust, industrial coal combustion, garbage burning, and secondary PM. These findings demonstrate the need to control emissions from all known sources in order to improve the air quality in the city, and particularly to focus on low-hanging fruit like road dust and waste burning, while technological and institutional advancements in the transport and industrial sectors are bound to enhance efficiencies. The Andhra Pradesh Pollution Control Board used these results to prepare an air pollution control action plan for the city.
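The chemical mass balance model solves, for each ambient sample, c ≈ F·s for nonnegative source contributions s, given a matrix F of source profiles. A minimal sketch with made-up profiles and plain projected-gradient nonnegative least squares (CMB 8.2 additionally weights residuals by measurement uncertainties):

```python
import numpy as np

# hypothetical source profiles (rows: species, columns: sources);
# the numbers are illustrative, not from the Hyderabad study
F = np.array([[0.30, 0.02, 0.05],   # elemental carbon
              [0.25, 0.01, 0.02],   # organic carbon
              [0.02, 0.45, 0.10],   # Si (crustal marker)
              [0.01, 0.30, 0.05],   # Ca
              [0.05, 0.02, 0.40]])  # sulfate (coal marker)
# columns: vehicle exhaust, road dust, coal combustion

def cmb_nnls(F, c, iters=5000, lr=1.0):
    """Solve c ~= F @ s with s >= 0 by projected gradient descent,
    a simple stand-in for the effective-variance solution of CMB 8.2."""
    s = np.ones(F.shape[1])
    for _ in range(iters):
        s = np.maximum(s - lr * (F.T @ (F @ s - c)), 0.0)
    return s

s_true = np.array([40.0, 25.0, 10.0])   # source contributions, ug/m3
c = F @ s_true                          # synthetic ambient sample
s_est = cmb_nnls(F, c)
```

With noise-free synthetic data the recovered contributions match the ones used to build the sample.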
Automatic fission source convergence criteria for Monte Carlo criticality calculations
International Nuclear Information System (INIS)
Shim, Hyung Jin; Kim, Chang Hyo
2005-01-01
The Monte Carlo criticality calculations for the multiplication factor and the power distribution in a nuclear system require knowledge of the stationary or fundamental-mode fission source distribution (FSD) in the system. Because it is a priori unknown, so-called inactive-cycle Monte Carlo (MC) runs are performed to determine it. The inactive-cycle MC runs should be continued until the FSD converges to the stationary FSD. Obviously, if one stops them prematurely, the MC calculation results may be biased because the follow-up active cycles may be run with a non-stationary FSD. Conversely, if one performs more inactive-cycle MC runs than necessary, one wastes computing time, because the inactive cycles serve only to elicit the fundamental-mode FSD. In the absence of suitable criteria for terminating the inactive-cycle MC runs, one cannot but rely on empiricism in deciding how many inactive cycles to conduct for a given problem. Depending on the problem, this may introduce biases into Monte Carlo estimates of the parameters one tries to calculate. The purpose of this paper is to present new fission source convergence criteria designed for the automatic termination of inactive-cycle MC runs.
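A common diagnostic for FSD convergence (not necessarily the criterion proposed in this paper) is the Shannon entropy of the binned fission source, which levels off once the fundamental mode is reached. The "cycles" below are a synthetic stand-in for real MC source sites:

```python
import math

def shannon_entropy(counts):
    """Shannon entropy (bits) of a binned fission-source distribution."""
    n = sum(counts)
    return -sum(c / n * math.log2(c / n) for c in counts if c > 0)

def mock_cycle_counts(bins, cycle, drift=0.05):
    """Toy stand-in for one MC cycle: the source starts piled up in
    bin 0 and relaxes toward a uniform fundamental mode."""
    excess = max(0.0, 1.0 - drift * cycle)
    weights = [1.0 + 9.0 * excess * (i == 0) for i in range(bins)]
    total = sum(weights)
    return [round(10000 * w / total) for w in weights]

# entropy rises cycle by cycle and saturates at log2(16) = 4 bits
entropies = [shannon_entropy(mock_cycle_counts(16, c)) for c in range(40)]
```

In practice one monitors this trace and starts the active cycles only after it has plateaued.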
International Nuclear Information System (INIS)
Mowlavi, A. A.; Binesh, A.; Moslehitabar, H.
2006-01-01
Palladium-103 (103Pd) is a brachytherapy source for cancer treatment, and Monte Carlo codes are usually applied to calculate its dose distribution and the effect of shielding. A Monte Carlo calculation of the dose distribution in a water phantom due to a MED3633 103Pd source is presented in this work. Materials and Methods: The dose distribution around the 103Pd Model MED3633 source, located in the center of a 30×30×30 cm3 water phantom cube, was calculated with the MCNP code by the Monte Carlo method. The percentage depth dose variation along the different axes parallel and perpendicular to the source was also calculated. Then, the isodose curves for 100%, 75%, 50% and 25% percentage depth dose and the dosimetry parameters of the TG-43 protocol were determined. Results: The results show that the Monte Carlo method can accurately calculate the dose deposition in the high-gradient region near the source. The isodose curves and dosimetric characteristics obtained for the MED3633 103Pd source are in good agreement with published results. Conclusion: The isodose curves of the MED3633 103Pd source have been derived from dose calculations with the MCNP code. The calculated dosimetry parameters for the source agree well with published Monte Carlo and experimental values.
Source apportionment and location by selective wind sampling and Positive Matrix Factorization.
Venturini, Elisa; Vassura, Ivano; Raffo, Simona; Ferroni, Laura; Bernardi, Elena; Passarini, Fabrizio
2014-10-01
In order to determine the pollution sources in a suburban area and identify the main direction of their origin, PM2.5 was collected with samplers coupled with a wind-select sensor and then subjected to Positive Matrix Factorization (PMF) analysis. In each sample, soluble ions, organic carbon, elemental carbon, levoglucosan, metals, and Polycyclic Aromatic Hydrocarbons (PAHs) were determined. PMF identified six main sources affecting the area: natural gas home appliances, motor vehicles, regional transport, biomass combustion, manufacturing activities, and secondary aerosol. The connection of the factor temporal trends with other parameters (i.e., temperature, PM2.5 concentration, and photochemical processes) supports the factor attributions. The PMF analysis indicated that the main source of PM2.5 in the area is secondary aerosol, mainly attributable to regional contributions, owing both to the secondary nature of the source itself and to the higher concentrations registered in inland air masses. The motor vehicle emission source contribution is also important, and this source likely has a prevalently local origin. The most toxic determined components, i.e., PAHs, Cd, Pb, and Ni, are mainly due to vehicular traffic; even though this is not the main source in the study area, it is the one of greatest concern. The application of PMF analysis to PM2.5 collected with this new sampling technique made it possible to obtain more detailed results on the sources affecting the area than a classical PMF analysis.
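PMF fits the data matrix X ≈ G·F under nonnegativity constraints, weighting each residual by its measurement uncertainty, Q = Σ((X − GF)/σ)². The sketch below runs uncertainty-weighted multiplicative updates on synthetic data; it is a minimal stand-in, with made-up data and uncertainties, not the EPA PMF/ME-2 software used in such studies:

```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic data: 200 samples, 8 species, 2 true sources
G_true = rng.random((200, 2))
F_true = rng.random((2, 8))
X = G_true @ F_true
sigma = 0.05 * X + 0.01                     # per-point uncertainties
X_noisy = np.clip(X + sigma * rng.standard_normal(X.shape), 1e-9, None)
W = 1.0 / sigma ** 2                        # PMF weights = 1/sigma^2

def pmf(X, W, k=2, iters=500, eps=1e-12):
    """Weighted nonnegative factorization via multiplicative updates,
    minimizing the PMF objective Q = sum(W * (X - G F)^2)."""
    G = rng.random((X.shape[0], k)) + 0.1
    F = rng.random((k, X.shape[1])) + 0.1
    for _ in range(iters):
        G *= ((W * X) @ F.T) / (((W * (G @ F)) @ F.T) + eps)
        F *= (G.T @ (W * X)) / ((G.T @ (W * (G @ F))) + eps)
    return G, F

G, F = pmf(X_noisy, W)
Q = float(np.sum(W * (X_noisy - G @ F) ** 2))
```

When the model is well specified, Q settles near the number of data points, which is one of the diagnostics used to choose the number of factors.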
International Nuclear Information System (INIS)
Dimitrova, S.S.; Gaidarov, M.K.; Antonov, A.N.; Stoitsov, M.V.; Hodgson, P.E.; Lukyanov, V.K.; Zemlyanaya, E.V.; Krumova, G.Z.
1997-01-01
Overlap functions and spectroscopic factors extracted from a model one-body density matrix (OBDM) accounting for short-range nucleon-nucleon correlations are used to calculate differential cross sections of (p,d) reactions and the momentum distributions of transitions to single-particle states in 16O and 40Ca. A comparison between the experimental (p,d) and (e,e'p) data, their DWBA and CDWIA analyses, and the OBDM calculations is made. Our theoretical predictions for the spectroscopic factors are compared with the empirically extracted ones. It is shown that the overlap functions obtained within the Jastrow correlation method are applicable to the description of the quantities considered. (author)
Energy Technology Data Exchange (ETDEWEB)
Martini, Till; Uwer, Peter [Humboldt-Universität zu Berlin, Institut für Physik, Newtonstraße 15, 12489 Berlin (Germany)]
2015-09-14
In this article we illustrate how event weights for jet events can be calculated efficiently at next-to-leading order (NLO) accuracy in QCD. This is a crucial prerequisite for the application of the Matrix Element Method at NLO. We modify the recombination procedure used in jet algorithms to allow a factorisation of the phase space for the real corrections into resolved and unresolved regions. Using an appropriate infrared regulator, the latter can be integrated numerically. As illustration, we reproduce differential distributions at NLO for two sample processes. As a further application and proof of concept, we apply the Matrix Element Method at NLO accuracy to the mass determination of top quarks produced in e{sup +}e{sup −} annihilation. This analysis is relevant for a future Linear Collider. We observe a significant shift in the extracted mass depending on whether the Matrix Element Method is used at leading or next-to-leading order.
Measurement and calculation of radiation sources in the primary cooling system of JOYO
International Nuclear Information System (INIS)
Suzuki, S.; Iizawa, K.; Ohtani, N.; Kobayashi, T.; Horie, J.; Handa, H.
1987-01-01
The production and transfer of radiation sources in the primary cooling system are important considerations for an LMFBR plant from the viewpoint of radiation protection and shielding design. These items were evaluated with calculations and/or measurements in the Japanese experimental fast reactor JOYO. In this study, calculations were made with the DOT3.5 two-dimensional discrete ordinates transport code to determine the neutron flux and the production rate distributions of radiation sources in the reactor vessel. Using the DOT results, the behavior in the primary coolant sodium of the radioactive corrosion products (CP) released from the reactor structural material was also analyzed with the PSYCHE code developed by PNC. These analytical results were compared with the measured results to verify the analysis methods and to estimate the accuracy of the calculations.
Analytic matrix elements with shifted correlated Gaussians
DEFF Research Database (Denmark)
Fedorov, D. V.
2017-01-01
Matrix elements between shifted correlated Gaussians of various potentials with several form-factors are calculated analytically. Analytic matrix elements are of importance for the correlated Gaussian method in quantum few-body physics.
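For the simplest one-dimensional, potential-free case, such an analytic matrix element reduces to the Gaussian overlap: ∫ exp(−a(x−s1)²) exp(−b(x−s2)²) dx = sqrt(π/(a+b)) · exp(−ab(s1−s2)²/(a+b)). A sketch checking it against quadrature (parameter values arbitrary):

```python
import math

def overlap_analytic(a, s1, b, s2):
    """Analytic overlap of exp(-a (x-s1)^2) and exp(-b (x-s2)^2):
    sqrt(pi/(a+b)) * exp(-a*b*(s1-s2)^2 / (a+b))."""
    return math.sqrt(math.pi / (a + b)) * math.exp(-a * b * (s1 - s2) ** 2 / (a + b))

def overlap_numeric(a, s1, b, s2, lo=-20.0, hi=20.0, n=40001):
    """Trapezoidal quadrature check of the same integral."""
    h = (hi - lo) / (n - 1)
    total = 0.0
    for i in range(n):
        x = lo + i * h
        w = 0.5 if i in (0, n - 1) else 1.0
        total += w * math.exp(-a * (x - s1) ** 2 - b * (x - s2) ** 2)
    return total * h

exact = overlap_analytic(1.3, 0.4, 0.7, -1.1)
approx = overlap_numeric(1.3, 0.4, 0.7, -1.1)
```

The closed form follows from completing the square in the combined exponent; in the correlated Gaussian method the same algebra is carried out with matrices of nonlinear parameters and shift vectors.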
International Nuclear Information System (INIS)
Yamamoto, Toshihiro; Miyoshi, Yoshinori
2004-01-01
A new algorithm for Monte Carlo criticality calculations implementing Wielandt's method, one of the acceleration techniques for deterministic source iteration methods, is developed, and the algorithm has been successfully implemented in the MCNP code. In this algorithm, some of the fission neutrons emitted during the random-walk process are tracked within the current cycle, so the fission source distribution used in the next cycle spreads more widely. Applying this method intensifies the neutron interaction effect even in a loosely coupled array, where conventional Monte Carlo criticality methods have difficulties, and a converged fission source distribution can be obtained in fewer cycles. The computing time spent per cycle, however, increases because fission neutrons are tracked within the current cycle, which eventually increases the total computing time up to convergence. In addition, the statistical fluctuations of the fission source distribution within a cycle are worsened by applying Wielandt's method to Monte Carlo criticality calculations. However, since fission source convergence is attained with fewer source iterations, a reliable determination of convergence can easily be made even in a system with slow convergence. This acceleration method is expected to contribute to the prevention of incorrect Monte Carlo criticality calculations. (author)
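The deterministic idea behind Wielandt's method can be sketched on a small matrix: iterating with the shifted inverse B = (k_w·I − A)⁻¹, with k_w slightly above the fundamental eigenvalue, replaces the dominance ratio λ2/λ1 by (k_w − λ1)/(k_w − λ2), so fewer iterations are needed. The 3×3 "fission matrix" and the shift below are made up for illustration:

```python
import numpy as np

# a small symmetric "fission matrix" with a slow dominance ratio
A = np.array([[0.90, 0.50, 0.10],
              [0.50, 0.80, 0.50],
              [0.10, 0.50, 0.90]])

def iterations_to_converge(B, tol=1e-10, max_it=10000):
    """Power iteration on B; count iterations until the normalized
    iterate stops changing."""
    x = np.ones(B.shape[0])
    x /= np.linalg.norm(x)
    for it in range(1, max_it + 1):
        y = B @ x
        y /= np.linalg.norm(y)
        if np.linalg.norm(y - x) < tol:
            return it
        x = y
    return max_it

lam = np.linalg.eigvalsh(A)
k_w = lam[-1] + 0.05                          # shift slightly above k_eff
B_wielandt = np.linalg.inv(k_w * np.eye(3) - A)

plain = iterations_to_converge(A)
shifted = iterations_to_converge(B_wielandt)
```

Both iterations converge to the same fundamental mode; the shifted one simply gets there in far fewer cycles, mirroring the acceleration of the MC source iteration.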
International Nuclear Information System (INIS)
Christoforou, S.; Hoogenboom, J. E.
2009-01-01
We have used the Boltzmann entropy to test whether a zero-variance-based scheme can speed up fission source convergence in a Monte Carlo calculation. It is shown that the choice of the initial source distribution significantly influences the evolution of the source, even leading to cases where the source does not converge at all throughout the calculation. The results from a loosely coupled system based on the NEA/OECD source convergence benchmarks indicate that, when using a biasing scheme such as the one we have developed, convergence can improve significantly, up to 3 times faster, which, coupled with a figure-of-merit improvement of 1.5, leads to more efficient calculations. (authors)
International Nuclear Information System (INIS)
Resler, D.A.
1987-03-01
The specific purpose of this work is to provide a better understanding of the 14C level structure; the general purpose is to provide the details for using shell model calculations in R-matrix analyses. Using the TOF facilities of the Ohio University Accelerator Laboratory, the elastic and first 3 inelastic differential scattering cross sections for 13C + n were measured at 69 energies for 4.5 ≤ En ≤ 11 MeV. A multiple-scattering code was developed which provided a simulation of the experimental scattering process, allowing accurate corrections to the small inelastic data. The integrated 13C(n,α)10Be cross section is estimated. The sequential 2n-decay of 14C states populated by 13C + n was observed. A shell model code was developed. Normal- and nonnormal-parity calculations were made for the lithium isotopes using a new two-body interaction. The results for 5Li predict the 2s1/2 and 1d5/2 single-particle states to be located below the 3/2+ state. Similar calculations were made for 13C, 13N, and 14C. Results are given for 13C and 13N; for 7Li and 14C, 2ħω calculations were done. The shell model calculations generated the R-matrix parameters for the elastic and first 3 inelastic channels of 13C + n. After adjusting some energies, the predicted structure generally agrees with experiment. The 13C + n data were refit to replace R0 background terms by more realistic broad states and to get better agreement with the model calculations. R-matrix fitting of the full data set produced new 14C level information. For En > 4 MeV (Ex > 12 MeV), 5 states are given definite Jπ assignments and 3 tentative assignments. 122 refs., 91 figs., 30 tabs
Bodewig, E
1959-01-01
Matrix Calculus, Second Revised and Enlarged Edition focuses on systematic calculation with the building blocks of a matrix and rows and columns, shunning the use of individual elements. The publication first offers information on vectors, matrices, further applications, measures of the magnitude of a matrix, and forms. The text then examines eigenvalues and exact solutions, including the characteristic equation, eigenrows, extremum properties of the eigenvalues, bounds for the eigenvalues, elementary divisors, and bounds for the determinant. The text ponders on approximate solutions, as well
Xue, Jian-long; Zhi, Yu-you; Yang, Li-ping; Shi, Jia-chun; Zeng, Ling-zao; Wu, Lao-sheng
2014-06-01
Chemical compositions of soil samples are multivariate in nature and provide datasets suitable for the application of multivariate factor analytical techniques. One such technique, positive matrix factorization (PMF), fits the data matrix by weighted least squares to determine the weights of the sources, based on the error estimates of each data point. In this research, PMF was employed to apportion the sources of heavy metals in 104 soil samples taken within a 1-km radius of a site contaminated by a lead battery plant in Changxing County, Zhejiang Province, China. The site is heavily contaminated with high concentrations of lead (Pb) and cadmium (Cd). PMF, combined with a geostatistical approach, successfully partitioned the variances into sources related to soil background, agronomic practices, and the lead battery plants. It was estimated that the lead battery plants and the agronomic practices contributed 55.37% and 29.28%, respectively, of the total soil Pb. Soil Cd mainly came from the lead battery plants (65.92%), followed by the agronomic practices (21.65%) and soil parent materials (12.43%). This research indicates that PMF combined with geostatistics is a useful tool for source identification and apportionment.
Wang, Hua; Huang, Heng; Ding, Chris; Nie, Feiping
2013-04-01
Protein interactions are central to all biological processes and structural scaffolds in living organisms, because they orchestrate a number of cellular processes such as metabolic pathways and immunological recognition. Several high-throughput methods, for example the yeast two-hybrid system and mass spectrometry, can help determine protein interactions, but they suffer from high false-positive rates. Moreover, many protein interactions predicted by one method are not supported by another. Therefore, computational methods are necessary and crucial to complete the interactome expeditiously. In this work, we formulate the problem of predicting protein interactions from a new mathematical perspective--sparse matrix completion--and propose a novel nonnegative matrix factorization (NMF)-based matrix completion approach to predict new protein interactions from existing protein interaction networks. Through manifold regularization, we further develop our method to integrate different biological data sources, such as protein sequences, gene expressions, and protein structure information. Extensive experimental results on four species, Saccharomyces cerevisiae, Drosophila melanogaster, Homo sapiens, and Caenorhabditis elegans, show that our new methods outperform related state-of-the-art protein interaction prediction methods.
Directory of Open Access Journals (Sweden)
Justin Schleede
2015-10-01
Full Text Available The developing crossveins of the wing of Drosophila melanogaster are specified by long-range BMP signaling and are especially sensitive to loss of extracellular modulators of BMP signaling such as the Chordin homolog Short gastrulation (Sog. However, the role of the extracellular matrix in BMP signaling and Sog activity in the crossveins has been poorly explored. Using a genetic mosaic screen for mutations that disrupt BMP signaling and posterior crossvein development, we identify Gyc76C, a member of the receptor guanylyl cyclase family that includes mammalian natriuretic peptide receptors. We show that Gyc76C and the soluble cGMP-dependent kinase Foraging, likely linked by cGMP, are necessary for normal refinement and maintenance of long-range BMP signaling in the posterior crossvein. This does not occur through cell-autonomous crosstalk between cGMP and BMP signal transduction, but likely through altered extracellular activity of Sog. We identify a novel pathway leading from Gyc76C to the organization of the wing extracellular matrix by matrix metalloproteinases, and show that both the extracellular matrix and BMP signaling effects are largely mediated by changes in the activity of matrix metalloproteinases. We discuss parallels and differences between this pathway and other examples of cGMP activity in both Drosophila melanogaster and mammalian cells and tissues.
Singh, Nandita; Murari, Vishnu; Kumar, Manish; Barman, S C; Banerjee, Tirthankar
2017-04-01
Fine particulates (PM2.5) constitute the dominant proportion of airborne particulates and have often been associated with human health disorders, changes in regional climate, the hydrological cycle, and more recently food security. The intrinsic properties of particulates are a direct function of their sources, which motivates a comprehensive review of PM2.5 sources over South Asia that may in turn be valuable for developing emission control strategies. Particulate source apportionment (SA) through receptor models is an existing tool to quantify the contributions of particulate sources. A review of 51 SA studies was performed, of which 48 (94%) appeared within 2007-2016. More than half of the SA studies (55%) were concentrated on a few typical urban stations (Delhi, Dhaka, Mumbai, Agra and Lahore). Due to the lack of local particulate source profiles and emission inventories, positive matrix factorization and principal component analysis (62% of studies) were the primary choices, followed by chemical mass balance (CMB, 18%). Metallic species were most regularly used as source tracers, while the use of organic molecular markers and gas-to-particle conversion was minimal. Across all the SA sites, vehicular emissions (mean ± sd: 37 ± 20%) emerged as the most dominant PM2.5 source, followed by industrial emissions (23 ± 16%), secondary aerosols (22 ± 12%) and natural sources (20 ± 15%). Vehicular emissions (39 ± 24%) were also identified as the dominant source for highly polluted sites (PM2.5 > 100 μg/m3, n = 15), while site-specific influences of industrial sources, secondary aerosols and natural sources, alone or in combination, were recognized. Source-specific trends varied considerably with region and season. Both natural and industrial sources were most influential over Pakistan and Afghanistan, while over the Indo-Gangetic plain vehicular, natural and industrial emissions appeared dominant. Influence of vehicular emission was
Bates, Kevin R.; Daniels, Andrew D.; Scuseria, Gustavo E.
1998-01-01
We report a comparison of two linear-scaling methods which avoid the diagonalization bottleneck of traditional electronic structure algorithms. The Chebyshev expansion method (CEM) is implemented for carbon tight-binding calculations of large systems, and its memory and timing requirements are compared to those of our previously implemented conjugate gradient density matrix search (CG-DMS). Benchmark calculations are carried out on icosahedral fullerenes from C60 to C8640, and the linear-scaling memory and CPU requirements of the CEM are demonstrated. We show that the CPU requirements of the CEM and CG-DMS are similar for calculations of comparable accuracy.
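A rough illustration of the Chebyshev-expansion idea (not the paper's tight-binding implementation): approximate the finite-temperature density matrix of a small model Hamiltonian by a Chebyshev series evaluated with matrix products only, then check against diagonalization. The Hamiltonian, spectral bounds and temperature are arbitrary choices for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 30
A = rng.standard_normal((N, N))
H = (A + A.T) / 2                          # toy symmetric Hamiltonian
mu, beta = 0.0, 2.0                        # chemical potential, inverse temperature
emin, emax = -10.0, 10.0                   # assumed safe spectral bounds

Hs = (2 * H - (emax + emin) * np.eye(N)) / (emax - emin)   # spectrum -> [-1, 1]

def fermi_scaled(x):
    e = 0.5 * (emax - emin) * x + 0.5 * (emax + emin)      # undo the scaling
    return 1.0 / (1.0 + np.exp(beta * (e - mu)))

# Chebyshev coefficients of the Fermi function via Gauss-Chebyshev sampling.
M = 200
k = np.arange(M)
xk = np.cos(np.pi * (k + 0.5) / M)
c = (2.0 / M) * np.cos(np.pi * np.outer(np.arange(M), k + 0.5) / M) @ fermi_scaled(xk)
c[0] *= 0.5

# Accumulate D = sum_n c_n T_n(Hs) with the three-term recurrence
# T_{n+1} = 2 Hs T_n - T_{n-1}; only matrix-matrix products are needed.
T_prev, T_cur = np.eye(N), Hs
D = c[0] * T_prev + c[1] * T_cur
for n in range(2, M):
    T_prev, T_cur = T_cur, 2 * Hs @ T_cur - T_prev
    D += c[n] * T_cur

w, V = np.linalg.eigh(H)                   # reference answer by diagonalization
D_exact = V @ np.diag(fermi_scaled((2 * w - (emax + emin)) / (emax - emin))) @ V.T
err = np.abs(D - D_exact).max()
```

Linear scaling in the paper's setting additionally relies on the sparsity of the tight-binding Hamiltonian, which dense NumPy products do not exploit.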
Jayalakshmi, V; Krishna, N Rama
2002-03-01
Two recent applications of intermolecular NOE (INOE) experiments to biomolecular systems are (i) the saturation transfer difference NMR (STD-NMR) method and (ii) the intermolecular cross-saturation NMR (ICS-NMR) experiment. STD-NMR is a promising tool for rapid screening of a large library of compounds to identify bioactive ligands binding to a target protein. Additionally, it is useful in mapping the binding epitopes presented by a bioactive ligand to its target protein. In this latter application, the STD-NMR technique is essentially similar to the ICS-NMR experiment, which is used to map protein-protein or protein-nucleic acid contact surfaces in complexes. In this work, we present a complete relaxation and conformational exchange matrix (CORCEMA) theory (H. N. B. Moseley et al., J. Magn. Reson. B 108, 243-261 (1995)) applicable to these two closely related experiments. As in our previous work, we show that when exchange is fast on the relaxation rate scale, a simplified CORCEMA theory can be formulated using a generalized average relaxation rate matrix. Its range of validity is established by comparing its predictions with those of the exact CORCEMA theory, which is valid for all exchange rates. Using some ideal model systems, we have analyzed the factors that influence the ligand proton intensity changes when the resonances from some protons on the receptor protein are saturated. The results show that the intensity changes in the ligand signals in an intermolecular NOE experiment are very much dependent upon: (1) the saturation time, (2) the location of the saturated receptor protons with respect to the ligand protons, (3) the conformation of the ligand-receptor interface, (4) the rotational correlation times for the molecular species, (5) the kinetics of the reversibly forming complex, and (6) the ligand/receptor ratio. As an example of a typical application of the STD-NMR experiment we have also simulated the STD effects for a
International Nuclear Information System (INIS)
Dolgolenko, A.P.; Fishchuk, I.I.
1981-01-01
Pulled n-Si samples with rho approximately 40 Ωcm are investigated after irradiation with different doses of fast-pile neutrons. It is known that simple defects are created not only in the conductive matrix but also in the space-charge region of defect clusters. The charge state of, for example, A-centres in the space-charge region is then defined by both the temperature and the value of the electrostatic potential. If this circumstance is not taken into account, the calculation of the conductive volume is not precise enough. In the present paper the temperature dependence of the volume fraction occupied by the space charge of defect clusters is calculated, taking into account the recharging of A-centres in the space-charge region. Using the expression obtained, the build-up kinetics of A-centres in the conductive matrix of pulled n-type silicon is calculated. (author)
Neutronic calculations for a subcritical system with external source
International Nuclear Information System (INIS)
Cintas, A; Lopasso, E.M; Marquez Damian, J. I
2006-01-01
We present a neutronic study of an ADS (accelerator-driven system), a system capable of transmuting minor actinides and fission products in order to reduce their radiotoxicity and mean life. We compare neutronic parameters obtained with Scale/Tort and MCNP, modelling a subcritical system with source taken from an NEA benchmark. Due to the lack of nuclear data at the temperature of the system, we perform calculations at the available library temperature (300 K); to compensate for the reactivity insertion due to the temperature change, we reduce the size of the fuel zone so as to obtain a subcritical system that allows us to evaluate the neutronic parameters of the system with source. We have found that the numerical results (neutron spectrum, neutron flux distributions and other neutronic parameters) obtained with MCNP agree with those of the benchmark participants, even though the geometric models used are not exactly the same. We conclude that, with cross sections at the real temperature, the calculation scheme developed (Scale/Tort and MCNP) will give reliable results in ADS evaluations
International Nuclear Information System (INIS)
Zhang, L.
1981-08-01
A method based on the tight-binding approximation is developed to calculate the electron-phonon matrix element for disordered transition metals. With this method as a basis, the experimental T_c data of amorphous transition-metal superconductors are re-analysed. Some comments on the superconductivity of disordered materials are given
Verification test calculations for the Source Term Code Package
International Nuclear Information System (INIS)
Denning, R.S.; Wooton, R.O.; Alexander, C.A.; Curtis, L.A.; Cybulskis, P.; Gieseke, J.A.; Jordan, H.; Lee, K.W.; Nicolosi, S.L.
1986-07-01
The purpose of this report is to demonstrate the reasonableness of the Source Term Code Package (STCP) results. Hand calculations have been performed spanning a wide variety of phenomena within the context of a single accident sequence, a loss of all ac power with late containment failure in the Peach Bottom (BWR) plant, and compared with STCP results. The report identifies some of the limitations of the hand calculation effort. The processes involved in a core meltdown accident are complex and coupled, and hand calculations by their nature must deal with gross simplifications of these processes. Their greatest strength is as an indicator that a computer code contains an error, for example that it does not satisfy basic conservation laws, rather than in showing that the analysis accurately represents reality. Hand calculations are an important element of verification, but they do not satisfy the need for code validation. The code validation program for the STCP is a separate effort. In general the hand calculation results show that the models used in the STCP codes (e.g., MARCH, TRAP-MELT, VANESA) obey basic conservation laws and produce reasonable results. The degree of agreement and the significance of the comparisons differ among the models evaluated. 20 figs., 26 tabs
Developing a source-receptor methodology for the characterization of VOC sources in ambient air
International Nuclear Information System (INIS)
Borbon, A.; Badol, C.; Locoge, N.
2005-01-01
Since 2001, in France, continuous monitoring of about thirty ozone-precursor non-methane hydrocarbons (NMHCs) has been conducted in several urban areas. The automated system for NMHC monitoring consists of sub-ambient preconcentration on a cooled multi-sorbent trap followed by thermal desorption and two-dimensional gas chromatography/flame ionisation detection analysis. The large number of data collected, and their exploitation, should provide a qualitative and quantitative assessment of hydrocarbon sources. This should help in the definition of relevant strategies of emission regulation as stated by the European Directive relating to ozone in ambient air (2002/3/EC). The purpose of this work is to present the bases and the contributions of an original methodology, known as source-receptor, for the characterization of NMHC sources. It is a statistical and diagnostic approach, adaptable and transposable to all urban sites, which integrates the spatial and temporal dynamics of the emissions. The methods for source identification combine descriptive and more complex complementary approaches: 1) a univariate approach through the analysis of NMHC time series and concentration roses, 2) a bivariate approach through Graphical Ratio Analysis and a characterization of scatterplot distributions of hydrocarbon pairs, 3) a multivariate approach with Principal Component Analyses on various time bases. A linear regression model is finally developed to estimate the spatial and temporal source contributions. Apart from vehicle exhaust emissions, sources of interest are: combustion and fossil fuel-related activities, petrol and/or solvent evaporation, the double anthropogenic and biogenic origin of isoprene, and other industrial activities depending on local parameters. (author)
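The multivariate step of such a methodology (PCA on hydrocarbon time series) can be sketched on toy data: two hidden "sources" (say, traffic and solvent evaporation) drive five invented species, and PCA on the standardized data recovers two dominant components. All numbers are illustrative, not from the study.

```python
import numpy as np

rng = np.random.default_rng(1)
hours = 24 * 7
traffic = 1 + np.sin(np.linspace(0, 14 * np.pi, hours)) ** 2   # cyclic source
solvent = rng.gamma(2.0, 1.0, hours)                            # irregular source

profiles = np.array([[1.0, 0.8, 0.6, 0.1, 0.0],    # traffic-dominated species
                     [0.0, 0.1, 0.2, 0.9, 1.0]])    # solvent-dominated species
X = np.outer(traffic, profiles[0]) + np.outer(solvent, profiles[1])
X += 0.05 * rng.standard_normal(X.shape)            # measurement noise

Z = (X - X.mean(0)) / X.std(0)                      # standardize each species
U, s, Vt = np.linalg.svd(Z, full_matrices=False)    # PCA via SVD
explained = s**2 / (s**2).sum()                     # variance share per component
```

The rows of Vt for the leading components play the role of source-related factor loadings that an analyst would then interpret chemically (e.g. via tracer species).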
International Nuclear Information System (INIS)
Lopes, Antonio Augusto; Miranda, Rogerio dos Anjos; Goncalves, Rilvani Cavalcante; Thomaz, Ana Maria
2009-01-01
In patients with congenital heart disease undergoing cardiac catheterization for hemodynamic purposes, parameter estimation by the indirect Fick method using a single predicted value of oxygen consumption has been a matter of criticism. We developed a computer-based routine for rapid estimation of replicate hemodynamic parameters using multiple predicted values of oxygen consumption. Using Microsoft Excel facilities, we constructed a matrix containing 5 models (equations) for prediction of oxygen consumption, and all additional formulas needed to obtain replicate estimates of hemodynamic parameters. By entering data from 65 patients with ventricular septal defects, aged 1 month to 8 years, it was possible to obtain multiple predictions for oxygen consumption, with clear between-age-group (P < .001) and between-method (P < .001) differences. Using these predictions in the individual patient, it was possible to obtain the upper and lower limits of a likely range for any given parameter, which made estimation more realistic. The organized matrix allows replicate parameter estimates to be obtained rapidly, without the errors associated with exhaustive manual calculation. (author)
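The underlying idea can be sketched with the Fick principle: several predicted oxygen-consumption (VO2) values yield a range of flow estimates rather than a single number. The prediction values and blood-gas contents below are illustrative placeholders, not the paper's actual models or patient data.

```python
# Fick principle: Q = VO2 / (CaO2 - CvO2), with O2 content in mL per dL.
def flow_l_min(vo2_ml_min, cao2_ml_dl, cvo2_ml_dl):
    return vo2_ml_min / ((cao2_ml_dl - cvo2_ml_dl) * 10.0)  # dL -> L

# Five hypothetical VO2 predictions (mL/min) for the same patient, mimicking
# the matrix of prediction equations described in the abstract.
vo2_predictions = [118.0, 125.0, 132.0, 140.0, 151.0]

cao2, cvo2 = 17.0, 13.0                     # arterial / mixed-venous O2 content
flows = [flow_l_min(v, cao2, cvo2) for v in vo2_predictions]
q_low, q_high = min(flows), max(flows)      # likely range for the flow estimate
```

Reporting the [q_low, q_high] interval instead of a single flow value is the "more realistic" estimation the abstract refers to.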
A large-scale R-matrix calculation for electron-impact excitation of the Ne2+ O-like ion
McLaughlin, B. M.; Lee, Teck-Ghee; Ludlow, J. A.; Landi, E.; Loch, S. D.; Pindzola, M. S.; Ballance, C. P.
2011-01-01
Abstract The five Jπ levels within an np2 or np4 ground-state complex provide an excellent testing ground for the comparison of theoretical line ratios with astrophysically observed values, in addition to providing valuable electron temperature and density diagnostics. The low-temperature nature of the line ratios ensures that the theoretically derived values are sensitive to the underlying atomic structure and electron-impact excitation rates. Previous R-matrix calculations for the O-like ...
ARC: An open-source library for calculating properties of alkali Rydberg atoms
Šibalić, N.; Pritchard, J. D.; Adams, C. S.; Weatherill, K. J.
2017-11-01
We present an object-oriented Python library for the computation of properties of highly-excited Rydberg states of alkali atoms. These include single-body effects such as dipole matrix elements, excited-state lifetimes (radiative and black-body limited) and Stark maps of atoms in external electric fields, as well as two-atom interaction potentials accounting for dipole and quadrupole coupling effects valid at both long and short range for arbitrary placement of the atomic dipoles. The package is cross-referenced to precise measurements of atomic energy levels and features extensive documentation to facilitate rapid upgrade or expansion by users. This library has direct application in the field of quantum information and quantum optics which exploit the strong Rydberg dipolar interactions for two-qubit gates, robust atom-light interfaces and simulating quantum many-body physics, as well as the field of metrology using Rydberg atoms as precise microwave electrometers. Program Files doi:http://dx.doi.org/10.17632/hm5n8w628c.1 Licensing provisions: BSD-3-Clause Programming language: Python 2.7 or 3.5, with C extension External Routines: NumPy [1], SciPy [1], Matplotlib [2] Nature of problem: Calculating atomic properties of alkali atoms including lifetimes, energies, Stark shifts and dipole-dipole interaction strengths using matrix elements evaluated from radial wavefunctions. Solution method: Numerical integration of radial Schrödinger equation to obtain atomic wavefunctions, which are then used to evaluate dipole matrix elements. Properties are calculated using second order perturbation theory or exact diagonalisation of the interaction Hamiltonian, yielding results valid even at large external fields or small interatomic separation. Restrictions: External electric field fixed to be parallel to quantisation axis. Supplementary material: Detailed documentation (.html), and Jupyter notebook with examples and benchmarking runs (.html and .ipynb). [1] T.E. Oliphant
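The numerical core the abstract names (integration of the radial Schrödinger equation to obtain wavefunctions) can be sketched generically. For simplicity this solves hydrogen (potential -1/r in atomic units) by Numerov integration plus shooting, rather than an alkali model potential; for real Rydberg work the ARC library itself should be used.

```python
import numpy as np

def numerov_tail(E, l=0, rmax=30.0, n=3000):
    """Numerov integration of u'' = f(r) u outward; returns u(rmax)."""
    r = np.linspace(1e-6, rmax, n)
    h = r[1] - r[0]
    f = l * (l + 1) / r**2 - 2.0 / r - 2.0 * E   # hydrogen, atomic units
    w = 1.0 - h**2 * f / 12.0                    # Numerov weight factors
    u = np.zeros(n)
    u[0], u[1] = 0.0, h**(l + 1)                 # u ~ r^(l+1) near the origin
    for i in range(1, n - 1):
        u[i + 1] = ((12.0 - 10.0 * w[i]) * u[i] - w[i - 1] * u[i - 1]) / w[i + 1]
    return u[-1]

# Shooting method: bisect on E until the wavefunction tail vanishes,
# which picks out a bound state. The 1s energy should come out near -0.5.
lo, hi = -0.6, -0.4
for _ in range(40):
    mid = 0.5 * (lo + hi)
    if numerov_tail(lo) * numerov_tail(mid) < 0:
        hi = mid
    else:
        lo = mid
E_1s = 0.5 * (lo + hi)    # hartree
```

Dipole matrix elements such as those ARC provides are then radial integrals of r between two wavefunctions obtained this way.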
Making LULUCF matrix of Korea by Approach 2&3
Hwang, J.; Jang, R.; Seong, M.; Yim, J.; Jeon, S. W.
2017-12-01
To establish and implement policies in response to climate change, it is very important to identify domestic greenhouse gas emission sources and sinks, and to accurately calculate emissions and removals from each source and sink. The IPCC Guidelines require six sectors to be covered when estimating GHG inventories: energy; industrial processes; solvents and other product use; agriculture; Land Use, Land-Use Change and Forestry (LULUCF); and waste. LULUCF is divided into six categories according to land use, purpose and type; greenhouse gas emissions and removals are then calculated both for anthropogenic activities within each land-use category and for changes between land uses. The IPCC Guidelines provide three approaches to creating a LULUCF land-use matrix. According to the Guidelines, the principle is to distinguish land whose use is maintained from land converted to other uses. However, Korea currently uses Approach 1, which is based on statistical data, making it difficult to detect changed areas. Therefore, in this study we carry out preliminary work for constructing the LULUCF matrix at the Approach 2 & 3 level. NFI, GIS and RS data were used to build the Approach 2 matrix by a sampling method. For Approach 3, we analyzed four thematic maps - the Cadastral Map, Land Cover Map, Forest Type Map and Biotope Map - representing land cover and utilization in legal, property, quantitative and qualitative terms. These maps differ because their purposes, resolutions, timing and spatial ranges are different. Comparing them is important because it can help decide which map is suitable for constructing the LULUCF matrix. Keywords: LULUCF, GIS/RS, IPCC Guideline, Approach 2&3, Thematic Maps
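A wall-to-wall (Approach 3-style) land-use change matrix amounts to cross-tabulating two classified rasters, time 1 against time 2. The class codes and the tiny 4x4 grids below are invented; real inputs would be the thematic maps discussed in the abstract.

```python
import numpy as np

classes = ["forest", "cropland", "grassland", "settlement"]
t1 = np.array([[0, 0, 1, 1],      # land-use class per pixel at time 1
               [0, 0, 1, 2],
               [3, 0, 2, 2],
               [3, 3, 2, 2]])
t2 = np.array([[0, 0, 1, 3],      # land-use class per pixel at time 2
               [0, 1, 1, 3],
               [3, 0, 2, 2],
               [3, 3, 3, 2]])

n = len(classes)
# Rows: class at time 1; columns: class at time 2; entries: pixel counts.
change_matrix = np.zeros((n, n), dtype=int)
np.add.at(change_matrix, (t1.ravel(), t2.ravel()), 1)

unchanged = np.trace(change_matrix)   # land use maintained (matrix diagonal)
changed = t1.size - unchanged         # land converted to other uses
```

Multiplying each transition count by pixel area and an emission factor per transition would give the LULUCF emission/removal estimate for that category.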
Gómez-Zavaglia, Andrea; Fausto, R.
2003-01-01
Sarcosine (N-methylglycine) has been studied by matrix-isolation FT-IR spectroscopy and molecular orbital calculations undertaken at the DFT/B3LYP and MP2 levels of theory with the 6-311++G(d,p) and 6-31++G(d,p) basis sets, respectively. Eleven different conformers were located on the potential energy surface (PES) of sarcosine, with the ASC conformer being the conformational ground state. This form is analogous to the most stable conformer of glycine and is characterized by an NH...O= intramole...
Directory of Open Access Journals (Sweden)
Thorsten Pauly
Full Text Available Functional and structural alterations of clustered postsynaptic ligand gated ion channels in neuronal cells are thought to contribute to synaptic plasticity and memory formation in the human brain. Here, we describe a novel molecular mechanism for structural alterations of NR1 subunits of the NMDA receptor. In cultured rat spinal cord neurons, chronic NMDA receptor stimulation induces disappearance of extracellular epitopes of NMDA receptor NR1 subunits, which was prevented by inhibiting matrix metalloproteinases (MMPs. Immunoblotting revealed the digestion of solubilized NR1 subunits by MMP-3 and identified a fragment of about 60 kDa as MMPs-activity-dependent cleavage product of the NR1 subunit in cultured neurons. The expression of MMP-3 in the spinal cord culture was shown by immunoblotting and immunofluorescence microscopy. Recombinant NR1 glycine binding protein was used to identify MMP-3 cleavage sites within the extracellular S1 and S2-domains. N-terminal sequencing and site-directed mutagenesis revealed S542 and L790 as two putative major MMP-3 cleavage sites of the NR1 subunit. In conclusion, our data indicate that MMPs, and in particular MMP-3, are involved in the activity dependent alteration of NMDA receptor structure at postsynaptic membrane specializations in the CNS.
Pauly, Thorsten; Ratliff, Miriam; Pietrowski, Eweline; Neugebauer, Rainer; Schlicksupp, Andrea; Kirsch, Joachim; Kuhse, Jochen
2008-07-16
Functional and structural alterations of clustered postsynaptic ligand gated ion channels in neuronal cells are thought to contribute to synaptic plasticity and memory formation in the human brain. Here, we describe a novel molecular mechanism for structural alterations of NR1 subunits of the NMDA receptor. In cultured rat spinal cord neurons, chronic NMDA receptor stimulation induces disappearance of extracellular epitopes of NMDA receptor NR1 subunits, which was prevented by inhibiting matrix metalloproteinases (MMPs). Immunoblotting revealed the digestion of solubilized NR1 subunits by MMP-3 and identified a fragment of about 60 kDa as MMPs-activity-dependent cleavage product of the NR1 subunit in cultured neurons. The expression of MMP-3 in the spinal cord culture was shown by immunoblotting and immunofluorescence microscopy. Recombinant NR1 glycine binding protein was used to identify MMP-3 cleavage sites within the extracellular S1 and S2-domains. N-terminal sequencing and site-directed mutagenesis revealed S542 and L790 as two putative major MMP-3 cleavage sites of the NR1 subunit. In conclusion, our data indicate that MMPs, and in particular MMP-3, are involved in the activity dependent alteration of NMDA receptor structure at postsynaptic membrane specializations in the CNS.
Nucleon matrix elements using the variational method in lattice QCD
International Nuclear Information System (INIS)
Dragos, J.; Kamleh, W.; Leinweber, D.B.; Zanotti, J.M.; Rakow, P.E.L.; Young, R.D.; Adelaide Univ., SA
2016-06-01
The extraction of hadron matrix elements in lattice QCD using the standard two- and three-point correlation functions demands careful attention to systematic uncertainties. One of the most commonly studied sources of systematic error is contamination from excited states. We apply the variational method to calculate the axial vector current g_A, the scalar current g_S and the quark momentum fraction ⟨x⟩ of the nucleon, and we compare the results to the more commonly used summation and two-exponential fit methods. The results demonstrate that the variational approach offers a more efficient and robust method for the determination of nucleon matrix elements.
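The core of the variational method, the generalized eigenvalue problem (GEVP) on a correlator matrix, can be illustrated on synthetic noiseless data. The energies and operator-state overlaps below are invented; solving C(t) v = λ C(t0) v separates the states, and each eigenvalue decays with a single energy.

```python
import numpy as np
from scipy.linalg import eigh

E = np.array([0.5, 0.9])                  # ground / excited energies (invented)
Z = np.array([[1.0, 0.7],                 # operator-state overlaps (invented)
              [0.6, -0.8]])

def C(t):
    # C_ij(t) = sum_n Z_in Z_jn exp(-E_n t): a noiseless 2x2 correlator matrix
    return (Z * np.exp(-E * t)) @ Z.T

t0 = 1
lam4 = eigh(C(4), C(t0), eigvals_only=True)   # generalized eigenvalues at t = 4
lam5 = eigh(C(5), C(t0), eigvals_only=True)   # ... and at t = 5
E_eff = np.log(lam4 / lam5)                   # each eigenvalue ~ exp(-E_n (t - t0))
```

With as many operators as contributing states and no noise, the effective energies are exact; on real lattice data one instead looks for plateaus in E_eff(t).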
Processing of the glycosomal matrix-protein import receptor PEX5 of Trypanosoma brucei
International Nuclear Information System (INIS)
Gualdrón-López, Melisa; Michels, Paul A.M.
2013-01-01
Highlights: ► Most eukaryotic cells have a single gene for the peroxin PEX5. ► PEX5 is sensitive to in vitro proteolysis in distantly related organisms. ► TbPEX5 undergoes N-terminal truncation in vitro and possibly in vivo. ► Truncated TbPEX5 is still capable of binding PTS1-containing proteins. ► PEX5 truncation is physiologically relevant or an evolutionarily conserved artifact. -- Abstract: Glycolysis in kinetoplastid protists such as Trypanosoma brucei is compartmentalized in peroxisome-like organelles called glycosomes. Glycosomal matrix-protein import involves a cytosolic receptor, PEX5, which recognizes the peroxisomal-targeting signal type 1 (PTS1) present at the C-terminus of the majority of matrix proteins. PEX5 appears generally susceptible to in vitro proteolytic processing. On western blots of T. brucei, two PEX5 forms are detected, with apparent M_r of 100 kDa and 72 kDa. 5′-RACE-PCR showed that TbPEX5 is encoded by a unique transcript that can be translated into a protein of maximally 72 kDa. However, recombinant PEX5 migrates aberrantly in SDS-PAGE with an apparent M_r of 100 kDa, as observed for the native peroxin. In vitro protease susceptibility analysis of native and 35S-labelled PEX5 showed truncation of the 100 kDa form at the N-terminal side by unknown parasite proteases, giving rise to the 72 kDa form, which remains functional for PTS1 binding. The relevance of these observations is discussed
Processing of the glycosomal matrix-protein import receptor PEX5 of Trypanosoma brucei
Energy Technology Data Exchange (ETDEWEB)
Gualdrón-López, Melisa [Research Unit for Tropical Diseases, de Duve Institute, Université catholique de Louvain, Brussels (Belgium); Michels, Paul A.M., E-mail: paul.michels@uclouvain.be [Research Unit for Tropical Diseases, de Duve Institute, Université catholique de Louvain, Brussels (Belgium)
2013-02-01
Highlights: ► Most eukaryotic cells have a single gene for the peroxin PEX5. ► PEX5 is sensitive to in vitro proteolysis in distantly related organisms. ► TbPEX5 undergoes N-terminal truncation in vitro and possibly in vivo. ► Truncated TbPEX5 is still capable of binding PTS1-containing proteins. ► PEX5 truncation is physiologically relevant or an evolutionarily conserved artifact. -- Abstract: Glycolysis in kinetoplastid protists such as Trypanosoma brucei is compartmentalized in peroxisome-like organelles called glycosomes. Glycosomal matrix-protein import involves a cytosolic receptor, PEX5, which recognizes the peroxisomal-targeting signal type 1 (PTS1) present at the C-terminus of the majority of matrix proteins. PEX5 appears generally susceptible to in vitro proteolytic processing. On western blots of T. brucei, two PEX5 forms are detected, with apparent M_r of 100 kDa and 72 kDa. 5′-RACE-PCR showed that TbPEX5 is encoded by a unique transcript that can be translated into a protein of maximally 72 kDa. However, recombinant PEX5 migrates aberrantly in SDS-PAGE with an apparent M_r of 100 kDa, as observed for the native peroxin. In vitro protease susceptibility analysis of native and 35S-labelled PEX5 showed truncation of the 100 kDa form at the N-terminal side by unknown parasite proteases, giving rise to the 72 kDa form, which remains functional for PTS1 binding. The relevance of these observations is discussed.
Argyropoulos, G; Samara, C; Diapouli, E; Eleftheriadis, K; Papaoikonomou, K; Kungolos, A
2017-12-01
A hybrid source-receptor modeling process was assembled to apportion and infer source locations of PM 10 and PM 2.5 in three heavily-impacted urban areas of Greece during the warm period of 2011 and the cold period of 2012. The assembled process involved application of an advanced computational procedure, the so-called Robotic Chemical Mass Balance (RCMB) model. Source locations were inferred using two well-established probability functions: (a) the Conditional Probability Function (CPF), to correlate the output of RCMB with local wind-direction data, and (b) the Potential Source Contribution Function (PSCF), to correlate the output of RCMB with 72-h air-mass back-trajectories arriving at the receptor sites during sampling. Regarding CPF, a higher-level conditional probability function was also defined, from the common locus of CPF sectors derived for neighboring receptor sites. With respect to PSCF, a non-parametric bootstrapping method was applied to discriminate the statistically significant values. RCMB modeling showed that resuspended dust is actually one of the main barriers to attaining the European Union (EU) limit values in Mediterranean urban agglomerations, where the drier climate favors its build-up. The shift in the energy mix of Greece (caused by the economic recession) was also evidenced, since biomass burning was found to contribute more significantly at the sampling sites belonging to the coldest climatic zone, particularly during the cold period. The CPF analysis showed that short-range transport of anthropogenic emissions from urban traffic to urban background sites was very likely to have occurred within all the examined urban agglomerations. The PSCF analysis confirmed that long-range transport of primary and/or secondary aerosols may indeed be possible, even from distances over 1000 km away from the study areas. Copyright © 2017 Elsevier B.V. All rights reserved.
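The CPF used above has a simple definition that can be sketched directly: CPF(sector) = m/n, where n is the number of samples with wind from that sector and m the subset whose source contribution exceeds a threshold (commonly an upper percentile). The wind and contribution data below are synthetic, with a hidden source planted to the east.

```python
import numpy as np

rng = np.random.default_rng(2)
wd = rng.uniform(0, 360, 500)                        # wind direction per sample
contrib = rng.gamma(2.0, 1.0, 500)                   # source contribution (invented)
contrib[(wd > 80) & (wd < 120)] += 4.0               # hidden source to the east

threshold = np.percentile(contrib, 75)               # "high contribution" cutoff
sectors = np.arange(0, 360, 30)                      # 30-degree wind sectors
cpf = []
for a in sectors:
    in_sector = (wd >= a) & (wd < a + 30)
    n = in_sector.sum()
    m = (in_sector & (contrib > threshold)).sum()
    cpf.append(m / n if n else 0.0)

peak_sector = sectors[int(np.argmax(cpf))]           # points toward the source
```

PSCF follows the same m/n logic, but bins trajectory endpoints on a geographic grid instead of wind sectors.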
Salinas, Armando G.; Davis, Margaret I.; Lovinger, David M.; Mateo, Yolanda
2016-01-01
The striatum is typically classified according to its major output pathways, which consist of dopamine D1 and D2 receptor-expressing neurons. The striatum is also divided into striosome and matrix compartments, based on the differential expression of a number of proteins, including the mu opioid receptor, dopamine transporter (DAT), and Nr4a1 (nuclear receptor subfamily 4, group A, member 1). Numerous functional differences between the striosome and matrix compartments are implicated in dopamine-related neurological disorders including Parkinson’s disease and addiction. Using Nr4a1-eGFP mice, we provide evidence that electrically evoked dopamine release differs between the striosome and matrix compartments in a regionally-distinct manner. We further demonstrate that this difference is not due to differences in inhibition of dopamine release by dopamine autoreceptors or nicotinic acetylcholine receptors. Furthermore, cocaine enhanced extracellular dopamine in striosomes to a greater degree than in the matrix and concomitantly inhibited dopamine uptake in the matrix to a greater degree than in striosomes. Importantly, these compartment differences in cocaine sensitivity were limited to the dorsal striatum. These findings demonstrate a level of exquisite microanatomical regulation of dopamine by the DAT in striosomes relative to the matrix. PMID:27036891
Controlling excited-state contamination in nucleon matrix elements
Energy Technology Data Exchange (ETDEWEB)
Yoon, Boram; Gupta, Rajan; Bhattacharya, Tanmoy; Engelhardt, Michael; Green, Jeremy; Joó, Bálint; Lin, Huey-Wen; Negele, John; Orginos, Kostas; Pochinsky, Andrew; Richards, David; Syritsyn, Sergey; Winter, Frank
2016-06-01
We present a detailed analysis of methods to reduce statistical errors and excited-state contamination in the calculation of matrix elements of quark bilinear operators in nucleon states. All the calculations were done on a 2+1 flavor ensemble with lattices of size $32^3 \\times 64$ generated using the rational hybrid Monte Carlo algorithm at $a=0.081$~fm and with $M_\\pi=312$~MeV. The statistical precision of the data is improved using the all-mode-averaging method. We compare two methods for reducing excited-state contamination: a variational analysis and a two-state fit to data at multiple values of the source-sink separation $t_{\\rm sep}$. We show that both methods can be tuned to significantly reduce excited-state contamination and discuss their relative advantages and cost-effectiveness. A detailed analysis of the size of source smearing used in the calculation of quark propagators and the range of values of $t_{\\rm sep}$ needed to demonstrate convergence of the isovector charges of the nucleon to the $t_{\\rm sep} \\to \\infty $ estimates is presented.
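The two-state fit the abstract compares against can be illustrated on a synthetic noiseless two-point correlator C(t) = A0 e^{-E0 t} + A1 e^{-E1 t}: at short source-sink separations the excited state contaminates the signal, and the two-state fit recovers the ground-state energy. All parameters are invented, not the paper's lattice data.

```python
import numpy as np
from scipy.optimize import curve_fit

# Synthetic correlator with invented ground/excited energies and amplitudes.
E0, E1, A0, A1 = 0.45, 0.95, 1.0, 0.6
t = np.arange(2, 16)
C = A0 * np.exp(-E0 * t) + A1 * np.exp(-E1 * t)

def two_state(t, a0, e0, a1, de):
    # de parametrizes the excited-state gap E1 - E0
    return a0 * np.exp(-e0 * t) + a1 * np.exp(-(e0 + de) * t)

popt, _ = curve_fit(two_state, t, C, p0=[1.0, 0.5, 0.5, 0.4])
E0_fit, gap_fit = popt[1], popt[3]
```

With real, noisy data the fit window and the stability of E0_fit under its variation become the central systematic checks, which is what the abstract's t_sep study addresses.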
Selection of models to calculate the LLW source term
International Nuclear Information System (INIS)
Sullivan, T.M.
1991-10-01
Performance assessment of a LLW disposal facility begins with an estimation of the rate at which radionuclides migrate out of the facility (i.e., the source term). The focus of this work is to develop a methodology for calculating the source term. In general, the source term is influenced by the radionuclide inventory, the wasteforms and containers used to dispose of the inventory, and the physical processes that lead to release from the facility (fluid flow, container degradation, wasteform leaching, and radionuclide transport). In turn, many of these physical processes are influenced by the design of the disposal facility (e.g., infiltration of water). The complexity of the problem and the absence of appropriate data prevent development of an entirely mechanistic representation of radionuclide release from a disposal facility. Typically, a number of assumptions, based on knowledge of the disposal system, are used to simplify the problem. This document provides a brief overview of disposal practices and reviews existing source term models as background for selecting appropriate models for estimating the source term. The selection rationale and the mathematical details of the models are presented. Finally, guidance is presented for combining the inventory data with appropriate mechanisms describing release from the disposal facility. 44 refs., 6 figs., 1 tab
Matrix Metalloproteinases in Myasthenia Gravis
Helgeland, G.; Petzold, A.F.S.; Luckman, S.P.; Gilhus, N.E.; Plant, G.T.; Romi, F.R.
2011-01-01
Introduction: Myasthenia gravis (MG) is an autoimmune disease with weakness in striated musculature due to antibodies against the acetylcholine receptor (AChR) or muscle-specific kinase at the neuromuscular junction. A subgroup of patients has periocular symptoms only: ocular MG (OMG). Matrix
International Nuclear Information System (INIS)
Baron, Jorge H.; Rivera, S.S.
2000-01-01
The so-called vulnerability matrix is used in the evaluation part of the probabilistic safety assessment for a nuclear power plant, during the containment event tree calculations. This matrix is established from what is known as Numerical Categories for Engineering Judgement, and is usually built with numerical values obtained by traditional arithmetic using set theory. Representing this matrix with fuzzy numbers is much more adequate, due to the fact that the Numerical Categories for Engineering Judgement are better represented by linguistic variables, such as 'highly probable', 'probable', 'impossible', etc. In the present paper a methodology to obtain a fuzzy vulnerability matrix is presented, starting from the recommendations on the Numerical Categories for Engineering Judgement. (author)
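A minimal sketch of the idea of replacing crisp judgement categories with fuzzy numbers: each linguistic category becomes a triangular fuzzy number (lower, modal, upper), and matrix arithmetic propagates the uncertainty. The category values below are invented, not the paper's, and the product rule shown is the usual endpoint approximation (the exact product of triangular numbers is not itself triangular).

```python
def tri_add(x, y):
    """Sum of triangular fuzzy numbers (a, b, c) = (lower, modal, upper)."""
    return (x[0] + y[0], x[1] + y[1], x[2] + y[2])

def tri_mul(x, y):
    """Endpoint-wise product; valid for non-negative supports such as [0, 1]."""
    return (x[0] * y[0], x[1] * y[1], x[2] * y[2])

# Linguistic categories mapped to triangular fuzzy probabilities (assumed values).
highly_probable = (0.85, 0.95, 1.00)
probable        = (0.40, 0.60, 0.80)
improbable      = (0.00, 0.05, 0.15)

# Fuzzy probability that two independent "probable" events both occur:
both = tri_mul(probable, probable)     # roughly (0.16, 0.36, 0.64)
```

The wide support of the result makes explicit the uncertainty that a single crisp number in the matrix would hide.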
Parallel scalability of Hartree-Fock calculations
Chow, Edmond; Liu, Xing; Smelyanskiy, Mikhail; Hammond, Jeff R.
2015-03-01
Quantum chemistry is increasingly performed using large cluster computers consisting of multiple interconnected nodes. For a fixed molecular problem, the efficiency of a calculation usually decreases as more nodes are used, due to the cost of communication between the nodes. This paper empirically investigates the parallel scalability of Hartree-Fock calculations. The construction of the Fock matrix and the density matrix calculation are analyzed separately. For the former, we use a parallelization of Fock matrix construction based on a static partitioning of work followed by a work stealing phase. For the latter, we use density matrix purification from the linear scaling methods literature, but without using sparsity. When using large numbers of nodes for moderately sized problems, density matrix computations are network-bandwidth bound, making purification methods potentially faster than eigendecomposition methods.
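The purification approach mentioned above can be sketched with the classic McWeeny iteration D ← 3D² - 2D³, which drives an initial guess toward an idempotent density matrix using only matrix products. The toy "Fock" matrix is random; diagonalization appears here only to pick the chemical potential and to build the reference answer (linear-scaling codes estimate both without diagonalizing).

```python
import numpy as np

rng = np.random.default_rng(3)
N, n_occ = 20, 8
A = rng.standard_normal((N, N))
F = (A + A.T) / 2                            # toy symmetric "Fock" matrix

w, V = np.linalg.eigh(F)                     # for mu and the reference only
mu = 0.5 * (w[n_occ - 1] + w[n_occ])         # between HOMO and LUMO
lam = 0.49 / max(w[-1] - mu, mu - w[0])      # keep the guess's spectrum in (0, 1)
D = 0.5 * np.eye(N) - lam * (F - mu * np.eye(N))

for _ in range(50):                          # McWeeny purification iterations
    D2 = D @ D
    D = 3 * D2 - 2 * D2 @ D

D_exact = V[:, :n_occ] @ V[:, :n_occ].T      # projector onto occupied subspace
err = np.abs(D - D_exact).max()
```

With sparse matrices and thresholding, each iteration costs far less than a diagonalization, which is the network-bandwidth argument made in the abstract.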
Development of a matrix approach to estimate soil clean-up levels for BTEX compounds
International Nuclear Information System (INIS)
Erbas-White, I.; San Juan, C.
1993-01-01
A draft state-of-the-art matrix approach has been developed for the State of Washington to estimate clean-up levels for benzene, toluene, ethylbenzene and xylene (BTEX) in deep soils, based on an endangerment approach to groundwater. Derived soil clean-up levels are estimated using a combination of two computer models, MULTIMED and VLEACH. The matrix uses a simple scoring system that assigns a score at a given site based on parameters such as depth to groundwater, mean annual precipitation, soil type, distance to a potential groundwater receptor and the volume of contaminated soil. The total score is then used to obtain a soil clean-up level from a table. The general approach uses the computer models to back-calculate soil contaminant levels in the vadose zone that would produce a particular contaminant concentration in groundwater at a given receptor. This usually takes a few iterations of trial runs to estimate the clean-up levels, since the models use the soil clean-up levels as ''input'' and the groundwater levels as ''output.'' The selected contaminant levels in groundwater are Model Toxics Control Act (MTCA) values used in the State of Washington
International Nuclear Information System (INIS)
Descouvemont, P; Baye, D
2010-01-01
The different facets of the R-matrix method are presented pedagogically in a general framework. Two variants have been developed over the years: (i) The 'calculable' R-matrix method is a calculational tool to derive scattering properties from the Schrödinger equation in a large variety of physical problems. It was developed rather independently in atomic and nuclear physics with too little mutual influence. (ii) The 'phenomenological' R-matrix method is a technique to parametrize various types of cross sections. It was mainly (or uniquely) used in nuclear physics. Both directions are explained by starting from the simple problem of scattering by a potential. They are illustrated by simple examples in nuclear and atomic physics. In addition to elastic scattering, the R-matrix formalism is applied to inelastic and radiative-capture reactions. We also present more recent and more ambitious applications of the theory in nuclear physics.
Calculating depths to shallow magnetic sources using aeromagnetic data from the Tucson Basin
Casto, Daniel W.
2001-01-01
Using gridded high-resolution aeromagnetic data, the performance of several automated 3-D depth-to-source methods was evaluated over shallow control sources based on how close their depth estimates came to the actual depths to the tops of the sources. For all three control sources, only the simple analytic signal method, the local wavenumber method applied to the vertical integral of the magnetic field, and the horizontal gradient method applied to the pseudo-gravity field provided median depth estimates that were close (-11% to +14% error) to the actual depths. Careful attention to data processing was required in order to calculate a sufficient number of depth estimates and to reduce the occurrence of false depth estimates. For example, to eliminate sampling bias, high-frequency noise and interference from deeper sources, it was necessary to filter the data before calculating derivative grids and subsequent depth estimates. To obtain smooth spatial derivative grids using finite differences, the data had to be gridded at intervals less than one percent of the anomaly wavelength. Before finding peak values in the derived signal grids, it was necessary to remove calculation noise by applying a low-pass filter in the grid-line directions and to re-grid at an interval that enabled the search window to encompass only the peaks of interest. Using the methods that worked best over the control sources, depth estimates over geologic sites of interest suggested the possible occurrence of volcanics nearly 170 meters beneath a city landfill. Also, a throw of around 2 kilometers was determined for a detachment fault that has a displacement of roughly 6 kilometers.
On calculation of detection efficiency of gamma spectrometers with germanium detectors
International Nuclear Information System (INIS)
Sima, O.
2001-01-01
High-resolution gamma spectrometry is a powerful analysis technique used in fields ranging from basic research to the study of environmental radioactivity, and from medical investigations to geological surveys. Direct experimental calibration cannot cover the large range of measurement configurations of interest; in practice it can be applied appropriately in only a limited number of cases, for instance point-like sources or liquid-phase volume sources. To assist the experimental calibration of germanium detectors, a number of calculation methods were developed within the Atomic and Nuclear Physics Chair of the Department of Physics. These methods are generally based on Monte Carlo simulation, but simplified, fast analytical methods were also worked out. Initially these studies were dedicated to applications in environmental radioactivity and radiation protection, but they were later extended to other fields such as neutron activation and radionuclide metrology. First, matrix effects were calculated for the case of volume sources. Applying the matrix corrections allows the source calibration curves to be obtained from experimental calibration data measured with liquid sources in the same geometry. An algorithm based on Monte Carlo calculation and using correlated-sampling techniques was obtained; it can be implemented in gamma analysis programs, giving for the first time the possibility of correctly evaluating matrix effects during the analysis of gamma spectra. We used a set of additive relations applicable to volume sources with negligible self-absorption and obtained a number of linear relations useful for calibrating large-volume sources in the presence of self-absorption, based on small-volume standard sources. We also proposed analytical relations useful for measurements of large-volume samples in Marinelli geometry. To
Klausner, Z; Klement, E; Fattal, E
2018-02-01
Viruses that affect the health of humans and farm animals can spread over long distances via atmospheric mechanisms. The phenomenon of atmospheric long-distance dispersal (LDD) is associated with severe consequences because it may introduce pathogens into new areas. The introduction of new pathogens to Israel has been attributed to LDD events numerous times. This provided the motivation for this study, which aims to identify all the locations in the eastern Mediterranean that may serve as sources for pathogen incursion into Israel via LDD. This aim was achieved by calculating source-receptor relationship probability maps. These maps describe the probability that an infected vector or viral aerosol, once airborne, will have an atmospheric route that can transport it to a distant location. The resultant probability maps demonstrate a seasonal tendency in the probability of specific areas serving as sources for pathogen LDD into Israel. Specifically, Cyprus' season is the summer; southern Turkey and the Greek islands of Crete, Karpathos and Rhodes are associated with spring and summer; lower Egypt and Jordan may serve as sources all year round, except the summer months. The method used in this study can easily be applied to any other geographic region. The importance of this study is its ability to provide a climatologically valid and accurate risk assessment tool to support long-term decisions regarding preparatory actions for future outbreaks, long before a specific outbreak occurs. © 2017 Blackwell Verlag GmbH.
International Nuclear Information System (INIS)
Heggarty, J.W.
1999-06-01
For almost thirty years, sequential R-matrix computation has been used by atomic physics research groups around the world to model collision phenomena involving the scattering of electrons or positrons with atomic or molecular targets. As considerable progress has been made in the understanding of fundamental scattering processes, new data, obtained from more complex calculations, are of current interest to experimentalists. Performing such calculations, however, places considerable demands on the computational resources of the target machine, in terms of both processor speed and memory requirement. Indeed, in some instances the computational requirements are so great that the proposed R-matrix calculations are intractable, even when utilising contemporary classic supercomputers. Historically, increases in the computational requirements of R-matrix computation were accommodated by porting the problem codes to a more powerful classic supercomputer. Although this approach was successful in the past, it is no longer considered a satisfactory solution, due to the limitations of current (and future) von Neumann machines. As a consequence, there has been considerable interest in the high-performance multicomputers that have emerged over the last decade, which appear to offer the computational resources required by contemporary R-matrix research. Unfortunately, developing codes for these machines is not as simple a task as developing codes for successive classic supercomputers. The difficulty arises from the considerable differences in the computing models of the two types of machine, and means that the programming of multicomputers is widely acknowledged to be a difficult, time-consuming and error-prone task. Nevertheless, unless parallel R-matrix computation is realised, important theoretical and experimental atomic physics research will continue to be hindered. This thesis describes work that was undertaken in
Tong, Daniel Q; Muller, Nicholas Z; Kan, Haidong; Mendelsohn, Robert O
2009-11-01
Human exposure to ambient ozone (O3) has been linked to a variety of adverse health effects. The ozone level at a location is contributed by local production, regional transport, and background ozone. This study combines a detailed emission inventory, air quality modeling, and census data to investigate the source-receptor relationships between nitrogen oxides (NOx) emissions and population exposure to ambient O3 in 48 states over the continental United States. By removing NOx emissions from each state one at a time, we calculate the change in O3 exposures by examining the difference between the base and the sensitivity simulations. Based on the 49 simulations, we construct state-level and census region-level source-receptor matrices describing the relationships among these states/regions. We find that, for 43 receptor states, cumulative NOx emissions from upwind states contribute more to O3 exposures than the state's own emissions. In-state emissions are responsible for less than 15% of O3 exposures in 90% of U.S. states. A state's NOx emissions can influence 2 to 40 downwind states by at least a 0.1 ppbv change in population-averaged O3 exposure. The results suggest that the U.S. generally needs a regional strategy to effectively reduce O3 exposures. But the current regional emission control program in the U.S. is a cap-and-trade program that assumes the marginal damage of every ton of NOx is equal. In this study, the average O3 exposure caused by one ton of NOx emissions ranges from -2.0 to 2.3 ppm-people-hours depending on the state. The actual damage caused by one ton of NOx emissions varies considerably over space.
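The zero-out construction of a source-receptor matrix amounts to differencing the base run against each sensitivity run. A minimal sketch with synthetic numbers (not the study's model output) shows the bookkeeping:

```python
import numpy as np

def source_receptor_matrix(exposure_base, exposure_zero):
    """S[s, r] = contribution of source region s to exposure at receptor r,
    estimated as (base run) - (run with region s's NOx emissions removed)."""
    return exposure_base[None, :] - exposure_zero

# Synthetic 3-state example: each row of exposure_zero is one sensitivity run.
base = np.array([50.0, 48.0, 45.0])            # population-averaged O3 exposure
zero = np.array([[44.0, 46.0, 44.5],           # state 0's emissions removed
                 [49.0, 42.0, 43.0],           # state 1 removed
                 [49.5, 47.5, 41.0]])          # state 2 removed
S = source_receptor_matrix(base, zero)
in_state = np.diag(S)                          # each state's own contribution
upwind = S.sum(axis=0) - in_state              # contribution from other states
```

Because ozone chemistry is nonlinear, such zero-out differences are only an approximation to marginal contributions, which is one reason a separate full simulation is run per source state.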
International Nuclear Information System (INIS)
Sanchez de Alsina, O.L.; Scaricabarozzi, R.A.
1982-01-01
A non-iterative matrix method to calculate the periodic distribution in reactors with thermal regeneration is presented. In the case of an exothermic reaction, a source term is included. A computer code was developed to calculate the final temperature distribution in the solids and the outlet temperatures of the gases. Results obtained for the oxidation of ethane in air, using the Dietrich kinetic data, are presented. This method is more advantageous than iterative methods. (E.G.) [pt
Energy Technology Data Exchange (ETDEWEB)
Aktulga, Hasan Metin [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Buluc, Aydin [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Williams, Samuel [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Yang, Chao [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)
2014-08-14
Obtaining highly accurate predictions of the properties of light atomic nuclei using the configuration interaction (CI) approach requires computing a few extremal eigenpairs of the many-body nuclear Hamiltonian matrix. In the Many-body Fermion Dynamics for nuclei (MFDn) code, a block eigensolver is used for this purpose. Due to the large size of the sparse matrices involved, a significant fraction of the time spent on the eigenvalue computations is associated with the multiplication of a sparse matrix (and the transpose of that matrix) with multiple vectors (SpMM and SpMM-T). Existing implementations of SpMM and SpMM-T significantly underperform expectations. Thus, in this paper, we present and analyze optimized implementations of SpMM and SpMM-T. We base our implementation on the compressed sparse blocks (CSB) matrix format and target systems with multi-core architectures. We develop a performance model that allows us to understand and estimate the performance characteristics of our SpMM kernel implementations, and demonstrate the efficiency of our implementation on a series of real-world matrices extracted from MFDn. In particular, we obtain a 3-4x speedup on the requisite operations over good implementations based on the commonly used compressed sparse row (CSR) matrix format. The improvements in the SpMM kernel suggest we may attain roughly a 40% speedup in the overall execution time of the block eigensolver used in MFDn.
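The two kernels can be illustrated on the simpler CSR format (the paper's CSB layout and its blocking are not reproduced here). The point of the sketch is that SpMM-T reuses the same stored arrays as SpMM, with scatters in place of gathers, so the matrix only needs to be stored once:

```python
import numpy as np

def spmm(indptr, indices, data, X):
    """Y = A @ X for an n x m CSR matrix A and a dense block of k vectors X (m x k)."""
    n = len(indptr) - 1
    Y = np.zeros((n, X.shape[1]))
    for i in range(n):
        lo, hi = indptr[i], indptr[i + 1]
        Y[i] = data[lo:hi] @ X[indices[lo:hi]]         # gather rows of X
    return Y

def spmm_t(indptr, indices, data, X, m):
    """Y = A.T @ X using the same CSR arrays -- the matrix is stored once."""
    Y = np.zeros((m, X.shape[1]))
    n = len(indptr) - 1
    for i in range(n):
        lo, hi = indptr[i], indptr[i + 1]
        Y[indices[lo:hi]] += data[lo:hi, None] * X[i]  # scatter into rows of Y
    return Y
```

Multiplying by a block of vectors at once, rather than one vector k times, amortizes the traversal of the matrix structure, which is the same motivation behind the SpMM kernels in the paper.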
The TRUPACT-II Matrix Depletion Program
International Nuclear Information System (INIS)
Connolly, M.J.; Djordjevic, S.M.; Loehr, C.A.; Smith, M.C.; Banjac, V.; Lyon, W.F.
1995-01-01
Contact-handled transuranic (CH-TRU) wastes will be shipped to and disposed of at the Waste Isolation Pilot Plant (WIPP) repository in the Transuranic Package Transporter-II (TRUPACT-II) shipping package. A primary transportation requirement for the TRUPACT-II is that the concentration of potentially flammable gases (i.e., hydrogen and methane) must not exceed 5 percent by volume in the package or the payload during a 60-day shipping period. Decomposition of waste materials by radiation, or radiolysis, is the predominant mechanism of gas generation during transport. The gas generation potential of a target waste material is characterized by a G-value, which is the number of molecules of gas generated per 100 eV of ionizing radiation absorbed by the target material. To demonstrate compliance with the flammable gas concentration requirement, theoretical worst-case calculations were performed to establish allowable wattage (decay heat) limits for waste containers. The calculations were based on the G-value for the waste material with the highest potential for flammable gas generation. The calculations also made no allowance for decreases in the G-value over time due to matrix depletion phenomena that have been observed by many experimenters. Matrix depletion occurs over time when an alpha-generating source particle alters the target material (by evaporation, reaction, or decomposition) into a material of lower gas generating potential. The net effect of these alterations is represented by the ''effective G-value.''
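The relationship between decay heat, G-value, and gas accumulation is simple bookkeeping, sketched below with illustrative numbers. The real TRUPACT-II wattage limits come from the certified safety analysis; this sketch only shows the worst-case logic of inverting the gas-generation rate for an allowable wattage.

```python
# Radiolytic gas bookkeeping sketch -- illustrative only, not the certified
# TRUPACT-II wattage limits.
EV_J = 1.602e-19      # joules per eV
N_A = 6.022e23        # molecules per mole

def gas_moles(g_value, watts, days):
    """Moles of gas generated when `watts` of decay heat is absorbed by a
    target with the given G-value (molecules per 100 eV absorbed)."""
    ev_absorbed = watts * days * 86400.0 / EV_J
    return g_value * (ev_absorbed / 100.0) / N_A

def max_wattage(g_value, days, free_volume_L, frac=0.05, temp_K=298.0):
    """Decay-heat limit keeping generated gas below `frac` of the moles of
    gas filling `free_volume_L` at 1 atm (ideal gas; worst case assumes all
    decay energy is absorbed by the gas-generating matrix)."""
    R = 0.082057                      # L*atm/(mol*K)
    n_allowed = frac * free_volume_L / (R * temp_K)
    return n_allowed * 100.0 * N_A * EV_J / (g_value * days * 86400.0)
```

With a hypothetical G-value of a few molecules per 100 eV, a 60-day period, and on the order of a hundred litres of free volume, the limit comes out well below a watt, which illustrates why worst-case decay-heat limits per container are so restrictive and why credit for matrix depletion matters.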
Energy Technology Data Exchange (ETDEWEB)
Laureau, A., E-mail: laureau.axel@gmail.com; Heuer, D.; Merle-Lucotte, E.; Rubiolo, P.R.; Allibert, M.; Aufiero, M.
2017-05-15
Highlights: • Neutronic ‘Transient Fission Matrix’ approach coupled to the CFD OpenFOAM code. • Fission Matrix interpolation model for fast spectrum homogeneous reactors. • Application for coupled calculations of the Molten Salt Fast Reactor. • Load following, over-cooling and reactivity insertion transient studies. • Validation of the reactor intrinsic stability for normal and accidental transients. - Abstract: In this paper we present transient studies of the Molten Salt Fast Reactor (MSFR). This generation IV reactor is characterized by a liquid fuel circulating in the core cavity, requiring specific simulation tools. An innovative neutronic approach called “Transient Fission Matrix” is used to perform spatial kinetic calculations with a reduced computational cost through a pre-calculation of the Monte Carlo spatial and temporal response of the system. Coupled to this neutronic approach, the Computational Fluid Dynamics code OpenFOAM is used to model the complex flow pattern in the core. An accurate interpolation model developed to take into account the thermal hydraulics feedback on the neutronics including reactivity and neutron flux variation is presented. Finally different transient studies of the reactor in normal and accidental operating conditions are detailed such as reactivity insertion and load following capacities. The results of these studies illustrate the excellent behavior of the MSFR during such transients.
International Nuclear Information System (INIS)
Wang Degao; Tian Fulin; Yang Meng; Liu Chenlin; Li Yifan
2009-01-01
Soil-derived sources of polycyclic aromatic hydrocarbons (PAHs) in the region of Dalian, China were investigated using positive matrix factorization (PMF). Three factors were separated based on PMF for the statistical investigation of the datasets in both summer and winter. These factors were dominated by the pattern of single sources or groups of similar sources, showing seasonal and regional variations. The main sources of PAHs in Dalian soil in summer were emissions from coal combustion (46% on average), diesel engines (30%), and gasoline engines (24%). In winter, the main sources were emissions from coal-fired boilers (72%), average traffic (20%), and gasoline engines (8%). The strong seasonality of these factors indicated that coal combustion in winter and traffic exhaust in summer dominated the sources of PAHs in soil. These results suggest that the PMF model is a proper approach for identifying the sources of PAHs in soil. - The PMF model is a proper approach for identifying potential sources of PAHs in soil, based on the PAH profiles measured in the field and those published in the literature.
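PMF factors a samples-by-species matrix into non-negative contributions and profiles. The sketch below uses plain Lee-Seung multiplicative updates as an unweighted stand-in; real PMF (e.g. EPA PMF) additionally weights each residual by a per-measurement uncertainty, which this sketch omits.

```python
import numpy as np

def nmf(X, k, iters=2000, seed=0):
    """Factor X (samples x species, non-negative) as G @ F with G, F >= 0.
    G: factor contributions per sample; F: factor profiles (source signatures).
    Unweighted multiplicative updates -- an illustrative stand-in for PMF."""
    rng = np.random.default_rng(seed)
    n, m = X.shape
    G = rng.random((n, k)) + 0.1
    F = rng.random((k, m)) + 0.1
    for _ in range(iters):
        F *= (G.T @ X) / (G.T @ G @ F + 1e-12)
        G *= (X @ F.T) / (G @ F @ F.T + 1e-12)
    return G, F
```

Interpreting the rows of F against known source signatures (e.g. coal combustion versus vehicle exhaust PAH profiles) is the step that turns a mathematical factorization into a source apportionment.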
Collagen Type I as a Ligand for Receptor-Mediated Signaling
Directory of Open Access Journals (Sweden)
Iris Boraschi-Diaz
2017-05-01
Full Text Available Collagens form the fibrous component of the extracellular matrix in all multi-cellular animals. Collagen type I is the most abundant collagen present in skin, tendons, vasculature, as well as the organic portion of the calcified tissue of bone and teeth. This review focuses on numerous receptors for which collagen acts as a ligand, including integrins, discoidin domain receptors DDR1 and 2, OSCAR, GPVI, G6b-B, and LAIR-1 of the leukocyte receptor complex (LRC and mannose family receptor uPARAP/Endo180. We explore the process of collagen production and self-assembly, as well as its degradation by collagenases and gelatinases in order to predict potential temporal and spatial sites of action of different collagen receptors. While the interactions of the mature collagen matrix with integrins and DDR are well-appreciated, potential signals from immature matrix as well as collagen degradation products are possible but not yet described. The role of multiple collagen receptors in physiological processes and their contribution to pathophysiology of diseases affecting collagen homeostasis require further studies.
Directory of Open Access Journals (Sweden)
D. A. Thornhill
2010-04-01
Full Text Available The goal of this research is to quantify diesel- and gasoline-powered motor vehicle emissions within the Mexico City Metropolitan Area (MCMA) using on-road measurements captured by a mobile laboratory combined with positive matrix factorization (PMF) receptor modeling. During the MCMA-2006 ground-based component of the MILAGRO field campaign, the Aerodyne Mobile Laboratory (AML) measured many gaseous and particulate pollutants, including carbon dioxide, carbon monoxide (CO), nitrogen oxides (NOx), benzene, toluene, alkylated aromatics, formaldehyde, acetaldehyde, acetone, ammonia, particle number, fine particulate mass (PM2.5), and black carbon (BC). These serve as inputs to the receptor model, which is able to resolve three factors corresponding to gasoline engine exhaust, diesel engine exhaust, and the urban background. Using the source profiles, we calculate fuel-based emission factors for each type of exhaust. The MCMA's gasoline-powered vehicles are considerably dirtier, on average, than those in the US with respect to CO and aldehydes. Its diesel-powered vehicles have similar emission factors of NOx and higher emission factors of aldehydes, particle number, and BC. In the fleet sampled during AML driving, gasoline-powered vehicles are found to be responsible for 97% of total vehicular emissions of CO, 22% of NOx, 95–97% of each aromatic species, 72–85% of each carbonyl species, 74% of ammonia, negligible amounts of particle number, 26% of PM2.5, and 2% of BC; diesel-powered vehicles account for the balance. Because the mobile lab spent 17% of its time waiting at stoplights, the results may overemphasize idling conditions, possibly resulting in an underestimate of NOx and an overestimate of CO emissions. On the other hand, estimates of the inventory that do not correctly account for emissions during idling are likely to produce bias in the opposite direction. The resulting fuel
Wang, R; Li, X A
2001-02-01
The dose parameters for the beta-particle emitting 90Sr/90Y source for intravascular brachytherapy (IVBT) have been calculated by different investigators. At larger distances from the source, noticeable differences are seen in the parameters calculated using different Monte Carlo codes. The purpose of this work is to quantify as well as to understand these differences. We have compared a series of calculations using the EGS4, EGSnrc, and MCNP Monte Carlo codes. Data calculated and compared include the depth dose curve for a broad parallel beam of electrons, and radial dose distributions for point electron sources (monoenergetic or polyenergetic) and for a real 90Sr/90Y source. For the 90Sr/90Y source, the doses at the reference position (2 mm radial distance) calculated by the three codes agree within 2%. However, the differences between the doses calculated by the three codes can be over 20% in the radial distance range of interest in IVBT. The difference increases with radial distance from the source, reaching 30% at the tail of the dose curve. These differences may be partially attributed to the different multiple scattering theories and Monte Carlo models for electron transport adopted in the three codes. Doses calculated by the EGSnrc code are more accurate than those by EGS4. The two calculations agree within 5% for radial distances <6 mm.
Directory of Open Access Journals (Sweden)
M. H. Sowlat
2016-04-01
Full Text Available In this study, the positive matrix factorization (PMF) receptor model (version 5.0) was used to identify and quantify major sources contributing to particulate matter (PM) number concentrations, using PM number size distributions in the range of 13 nm to 10 µm combined with several auxiliary variables, including black carbon (BC), elemental and organic carbon (EC/OC), PM mass concentrations, gaseous pollutants, meteorological data, and traffic counts, collected for about 9 months between August 2014 and 2015 in central Los Angeles, CA. Several parameters, including particle number and volume size distribution profiles, profiles of auxiliary variables, contributions of different factors in different seasons to the total number concentrations, diurnal variations of each of the resolved factors in the cold and warm phases, weekday/weekend analysis for each of the resolved factors, and correlation between auxiliary variables and the relative contribution of each of the resolved factors, were used to identify PM sources. A six-factor solution was identified as the optimum for the aforementioned input data. The resolved factors comprised nucleation, traffic 1, traffic 2 (with a larger mode diameter than the traffic 1 factor), urban background aerosol, secondary aerosol, and soil/road dust. Traffic sources (1 and 2) were the major contributors to PM number concentrations, collectively making up above 60 % (60.8–68.4 %) of the total number concentrations during the study period. Their contribution was also significantly higher in the cold phase compared to the warm phase. Nucleation was another major factor significantly contributing to the total number concentrations (an overall contribution of 17 %, ranging from 11.7 to 24 %), with a larger contribution during the warm phase than in the cold phase. The other identified factors were urban background aerosol, secondary aerosol, and soil/road dust, with relative contributions of approximately 12
Molecular characterization of opioid receptors
Energy Technology Data Exchange (ETDEWEB)
Howard, A.D.
1986-01-01
The aim of this research was to purify and characterize active opioid receptors and elucidate molecular aspects of opioid receptor heterogeneity. Purification to apparent homogeneity of an opioid binding protein from bovine caudate was achieved by solubilization in the non-ionic detergent digitonin, followed by sequential chromatography on the opiate affinity matrix, β-naltrexylethylenediamine-CH-Sepharose 4B, and on the lectin affinity matrix, wheat germ agglutinin-agarose. Polyacrylamide gel electrophoresis in the presence of sodium dodecyl sulfate (SDS-PAGE) followed by autoradiography revealed that radioiodinated purified receptor gave a single band. Purified receptor preparations showed a specific activity of 12,000-15,000 fmol of opiate bound per mg of protein. Radioiodinated human beta-endorphin (125I-β-endH) was used as a probe to investigate the ligand binding subunits of mu and delta opioid receptors. 125I-β-endH was shown to bind to a variety of opioid receptor-containing tissues with high affinity and specificity, with preference for mu and delta sites and little, if any, binding to kappa sites. Affinity crosslinking techniques were employed to covalently link 125I-β-endH to opioid receptors, utilizing derivatives of bis-succinimidyl esters, which are bifunctional crosslinkers with specificities for amino and sulfhydryl groups. This, and competition experiments with highly type-selective ligands, permitted the assignment of two labeled peptides to their receptor types, namely a peptide of M_r = 65,000 for mu receptors and one of M_r = 53,000 for delta receptors.
International Nuclear Information System (INIS)
Nangia, Shivangi; Garrison, Barbara J.
2011-01-01
There is synergy between matrix assisted laser desorption ionization (MALDI) experiments and molecular dynamics (MD) simulations. To understand analyte ejection from the matrix, MD simulations have been employed. Prior calculations show that the ejected analyte molecules remain solvated by the matrix molecules in the ablated plume. In contrast, the experimental data show free analyte ions. The main idea of this work is that analyte molecule ejection may depend on the microscopic details of analyte interaction with the matrix. Intermolecular matrix-analyte interactions have been studied by focusing on 2,5-dihydroxybenzoic acid (DHB; matrix) and amino acids (AA; analyte) using Chemistry at HARvard Molecular Mechanics (CHARMM) force field. A series of AA molecules have been studied to analyze the DHB-AA interaction. A relative scale of AA molecule affinity towards DHB has been developed.
Callén, M S; Iturmendi, A; López, J M; Mastral, A M
2014-02-01
In order to assess the carcinogenic potential of polycyclic aromatic hydrocarbons (PAH), the benzo(a)pyrene equivalent (BaP-eq) concentration was calculated and modelled by a receptor model based on positive matrix factorization (PMF). Nineteen PAH associated with airborne PM10 in Zaragoza, Spain, were quantified during the sampling period 2001-2009 and used as potential variables by the PMF model. Multiple linear regression analysis was then used to quantify the potential sources of BaP-eq. A five-source solution was obtained as the optimum, with vehicular emissions identified as the main carcinogenic source (35 %), followed by heavy-duty vehicles (28 %), light-oil combustion (18 %), natural gas (10 %) and coal combustion (9 %). The two most prevailing directions contributing to this carcinogenic character were the NE and N, associated with a highway, industrial parks and a paper factory. The lifetime lung cancer risk exceeded the unit risk of 8.7 × 10⁻⁵ per ng/m³ BaP in both winter and autumn, and the most contributing source was the vehicular emission factor, making it an important target for control strategies.
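A BaP-equivalent concentration is a toxic-equivalency-factor (TEF) weighted sum over the measured PAH. The sketch below uses a small subset of commonly cited order-of-magnitude TEF values; both the factors and the species list are illustrative, not the paper's full 19-PAH set.

```python
# Toxic equivalency factors relative to benzo(a)pyrene.
# Illustrative subset of commonly cited order-of-magnitude values.
TEF = {
    "BaP": 1.0,     # benzo(a)pyrene (reference compound)
    "BaA": 0.1,     # benz(a)anthracene
    "BbF": 0.1,     # benzo(b)fluoranthene
    "BkF": 0.1,     # benzo(k)fluoranthene
    "IcdP": 0.1,    # indeno(1,2,3-cd)pyrene
    "Chr": 0.01,    # chrysene
    "Phe": 0.001,   # phenanthrene
}

def bap_eq(conc_ng_m3):
    """BaP-equivalent concentration (ng/m3) of a PAH mixture:
    sum of each species' concentration weighted by its TEF."""
    return sum(c * TEF[pah] for pah, c in conc_ng_m3.items())
```

Apportioning BaP-eq (rather than raw PAH mass) to PMF factors, as the study does, ranks sources by their carcinogenic contribution instead of their emitted mass.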
P-matrix approach and three-nucleon problem
International Nuclear Information System (INIS)
Babenko, V.A.; Petrov, N.M.; Teneva, G.N.
1993-01-01
The paper deals with the application of the P-matrix approach to the description of systems of three strongly interacting particles. On the basis of a separable expansion of the off-energy-shell scattering amplitude obtained in the P-matrix approach, low-energy three-particle quantities were calculated for the case of a square-well potential. The results of the calculations show good convergence of the calculated three-particle quantities. (author). 12 refs., 1 tab
Directory of Open Access Journals (Sweden)
Avramović Ivana
2007-01-01
Full Text Available The H5B is a concept of an accelerator-driven sub-critical research facility (ADSRF) being developed over the last couple of years at the Vinča Institute of Nuclear Sciences, Belgrade, Serbia. Using the well-known computer codes MCNPX and MCNP, this paper presents the results of a target study and neutron flux calculations in the sub-critical core. The neutron source is generated by the interaction of a proton or deuteron beam with the target placed inside the sub-critical core. Results for the total neutron flux density escaping the target and calculations of neutron yields for different target materials are also given here. Neutrons escaping the target volume, with their group spectra (first step), are used to specify a neutron source for further numerical simulations of the neutron flux density in the sub-critical core (second step). The results of the calculations of the neutron effective multiplication factor keff and the neutron generation time L for the ADSRF model have also been presented. Neutron spectra calculations for an ADSRF with a uranium target (which gives the highest neutron yield for both beams) for the selected sub-critical core cells have also been presented in this paper.
Receptor modeling studies for the characterization of PM10 pollution sources in Belgrade
Directory of Open Access Journals (Sweden)
Mijić Zoran
2012-01-01
Full Text Available The objective of this study is to determine the major sources and potential source regions of PM10 over Belgrade, Serbia. The PM10 samples were collected from July 2003 to December 2006 in a very urban area of Belgrade, and concentrations of Al, V, Cr, Mn, Fe, Ni, Cu, Zn, Cd and Pb were analyzed by atomic absorption spectrometry. The analysis of seasonal variations of PM10 mass and some element concentrations showed relatively higher concentrations in winter, which underlined the importance of local emission sources. The Unmix model was used for source apportionment, and four main source profiles were identified: fossil fuel combustion; traffic exhaust/regional transport from industrial centers; traffic-related particles/site-specific sources; and mineral/crustal matter. Among the resolved factors, fossil fuel combustion was the highest contributor (34%), followed by traffic/regional industry (26%). Conditional probability function (CPF) results identified possible directions of local sources. The potential source contribution function (PSCF) and concentration weighted trajectory (CWT) receptor models were used to identify the spatial source distribution and the contribution of regionally transported aerosols. [Project of the Ministry of Science of the Republic of Serbia, No. III43007 and No. III41011]
Fast spectral source integration in black hole perturbation calculations
Hopper, Seth; Forseth, Erik; Osburn, Thomas; Evans, Charles R.
2015-08-01
This paper presents a new technique for achieving spectral accuracy and fast computational performance in a class of black hole perturbation and gravitational self-force calculations involving extreme mass ratios and generic orbits. Called spectral source integration (SSI), this method should see widespread future use in problems that entail (i) a point-particle description of the small compact object, (ii) frequency domain decomposition, and (iii) the use of the background eccentric geodesic motion. Frequency domain approaches are widely used in both perturbation theory flux-balance calculations and in local gravitational self-force calculations. Recent self-force calculations in Lorenz gauge, using the frequency domain and the method of extended homogeneous solutions, have been able to accurately reach eccentricities as high as e ≃ 0.7. We show here SSI successfully applied to Lorenz gauge. In a double precision Lorenz gauge code, SSI enhances the accuracy of results and makes a factor of 3 improvement in the overall speed. The primary initial application of SSI—for us its raison d'être—is in an arbitrary precision Mathematica code that computes perturbations of eccentric orbits in the Regge-Wheeler gauge to extraordinarily high accuracy (e.g., 200 decimal places). These high-accuracy eccentric orbit calculations would not be possible without the exponential convergence of SSI. We believe the method will extend to work for inspirals on Kerr and will be the subject of a later publication. SSI borrows concepts from discrete-time signal processing and is used to calculate the mode normalization coefficients in perturbation theory via sums over modest numbers of points around an orbit. A variant of the idea is used to obtain spectral accuracy in a solution of the geodesic orbital motion.
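The exponential convergence underlying SSI is a classical property of equally spaced (trapezoidal) sums applied to smooth periodic integrands, here the periodic orbital motion. A minimal illustration of that property (the integrand is illustrative, not the authors' source terms):

```python
import math

def periodic_sum(f, N, period=2 * math.pi):
    """Equally spaced (trapezoidal) quadrature of f over one period."""
    h = period / N
    return h * sum(f(k * h) for k in range(N))

# Smooth periodic test integrand: the exact integral of exp(cos t) over
# [0, 2*pi] is 2*pi*I0(1), with I0 the modified Bessel function.
f = lambda t: math.exp(math.cos(t))
exact = 2 * math.pi * 1.2660658777520084      # 2*pi*I0(1)

for N in (4, 8, 16):
    print(N, abs(periodic_sum(f, N) - exact))  # error drops exponentially in N
```

Doubling N roughly squares the (already small) error rather than merely quartering it, which is why a spectral method reaches extreme accuracy from sums over modest numbers of points around the orbit.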
International Nuclear Information System (INIS)
Bruno, J.; Casas, I.; Cera, E.; Ewing, R.C.; Finch, R.J.
1995-01-01
The long term behavior of spent nuclear fuel is discussed in the light of recent thermodynamic and kinetic data on mineralogical analogues related to the key phases in the oxidative alteration of uraninite. The implications of the established oxidative alteration sequence of the spent fuel matrix for the safety assessment of a repository are illustrated with Pagoda calculations. The application of the kinetic and thermodynamic data to source term calculations indicates that the appearance and duration of the U(VI) oxyhydroxide transient is critical for the stability of the fuel matrix.
Receptor Model Source Apportionment of Nonmethane Hydrocarbons in Mexico City
Directory of Open Access Journals (Sweden)
V. Mugica
2002-01-01
Full Text Available With the purpose of estimating the source contributions of nonmethane hydrocarbons (NMHC) to the atmosphere at three different sites in the Mexico City Metropolitan Area, 92 ambient air samples were measured from February 23 to March 22 of 1997. Light- and heavy-duty vehicular profiles were determined to differentiate the NMHC contributions of diesel and gasoline to the atmosphere. Food cooking source profiles were also determined for chemical mass balance receptor model application. Initial source contribution estimates were carried out to determine an adequate combination of source profiles and fitting species. Ambient samples of NMHC were apportioned to motor vehicle exhaust, gasoline vapor, handling and distribution of liquefied petroleum gas (LP gas), asphalt operations, painting operations, landfills, and food cooking. Both gasoline and diesel motor vehicle exhaust were the major NMHC contributors for all sites and times, with a percentage of up to 75%. The average motor vehicle exhaust contribution increased during the day. In contrast, the LP gas contribution was higher during the morning than in the afternoon. Apportionment of the most abundant individual NMHC showed that the vehicular source is the major contributor to acetylene, ethylene, pentanes, n-hexane, toluene, and xylenes, while handling and distribution of LP gas was the major source contributor to propane and butanes. Comparison between CMB estimates of NMHC and the emission inventory showed good agreement for vehicles, handling and distribution of LP gas, and painting operations; nevertheless, emissions from diesel exhaust and asphalt operations showed differences, and the results suggest that these emissions could be underestimated.
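At its core, the chemical mass balance (CMB) receptor model solves an overdetermined linear system: ambient species concentrations ≈ profile matrix × source contributions. A least-squares sketch with hypothetical profile numbers (the study's actual profiles and fitting species are not reproduced here):

```python
import numpy as np

# Hypothetical species-by-source profile matrix F (mass fraction per unit
# contribution); columns: motor vehicle exhaust, LP gas handling.
F = np.array([
    [0.08, 0.00],   # acetylene
    [0.12, 0.00],   # ethylene
    [0.02, 0.60],   # propane
    [0.03, 0.30],   # n-butane
    [0.10, 0.01],   # toluene
])

# Synthetic ambient NMHC vector built from known contributions (50, 30):
c = F @ np.array([50.0, 30.0])

# CMB estimate: solve c ≈ F s for the source contributions s.
s, *_ = np.linalg.lstsq(F, c, rcond=None)
print(s)   # recovers [50., 30.]
```

An effective-variance CMB additionally weights each species by its measurement uncertainty; plain least squares is the simplest instance of the same balance.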
Pathak, Amit
2018-04-12
Motile cells sense the stiffness of their extracellular matrix (ECM) through adhesions and respond by modulating the forces they generate, which in turn leads to varying mechanosensitive migration phenotypes. Through modeling and experiments, cell migration speed is known to vary with matrix stiffness in a biphasic manner, with optimal motility at an intermediate stiffness. Here, we present a two-dimensional cell model defined by nodes and elements, integrated with subcellular modeling components corresponding to mechanotransductive adhesion formation, force generation, protrusions and node displacement. On 2D matrices, our calculations reproduce the classic biphasic dependence of migration speed on matrix stiffness and predict that cell types with higher force-generating ability do not slow down on very stiff matrices, thus disabling the biphasic response. We also predict that cell types defined by a lower number of total receptors require stiffer matrices for optimal motility, which also limits the biphasic response. For a cell type with robust biphasic migration on a 2D surface, simulations in channel-like confined environments of varying width and height predict faster migration in more confined matrices. Simulations performed in shallower channels predict that the biphasic mechanosensitive cell migration response is more robust on 2D micro-patterns than in channel-like 3D confinement. Thus, variations in the dimensionality of matrix confinement alter the way migratory cells sense and respond to matrix stiffness. Our calculations reveal new phenotypes of stiffness- and topography-sensitive cell migration that critically depend on both cell-intrinsic and matrix properties. These predictions may inform our understanding of various mechanosensitive modes of cell motility that could enable tumor invasion through topographically heterogeneous microenvironments. © 2018 IOP Publishing Ltd.
Directory of Open Access Journals (Sweden)
Jong-Shiaw Jin
2006-01-01
Full Text Available Aim: Extracellular matrix metalloprotease inducer (EMMPRIN expression was demonstrated in several cancers, but its expression profile in colorectal cancers remains unclear. Epidermal growth factor receptor (EGFR was reported to regulate EMMPRIN expression in human epithelial cancers. Our purpose was to determine EMMPRIN expression and its relationship with EGFR in colorectal cancers.
International Nuclear Information System (INIS)
Sheffield, A.E.
1988-01-01
The radiocarbon tracer technique was used to demonstrate that polycyclic aromatic hydrocarbons (PAHs) can be used for quantitative receptor modeling of air pollution. Fine-particle samples were collected during December 1985 in Albuquerque, NM. Motor vehicles (fossil) and residential wood combustion (RWC, modern) were the major PAH sources. For each sample, the PAH fraction was solvent-extracted, isolated by liquid chromatography, and analyzed by GC-FID and GC-MS. The PAH fractions from sixteen samples were analyzed for ¹⁴C by Accelerator Mass Spectrometry. Radiocarbon data were used to calculate the relative RWC contribution (f_RWC) for samples analyzed for ¹⁴C. Normalized concentrations of a prospective motor vehicle tracer, benzo(ghi)perylene (BGP), had a strong, negative correlation with f_RWC. Normalized BGP concentrations were used to apportion sources for samples not analyzed for ¹⁴C. Multiple Linear Regression (MLR) vs. ADCS and BGP was used to estimate source profiles for use in Target Factor Analysis (TFA). Profiles predicted by TFA were used in Chemical Mass Balances (CMBs). For non-volatile, stable PAHs, agreement between observed and predicted concentrations was excellent. The worst fits were observed for the most volatile PAHs and for coronene. The total RWC contributions predicted by CMBs correlated well with the radiocarbon data.
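The ¹⁴C apportionment is two-source isotope mixing: fossil carbon contains no ¹⁴C, while wood-smoke carbon carries the contemporary level. A sketch with an assumed fraction-modern for wood (the value 1.2 is illustrative, not taken from the study):

```python
def rwc_fraction(fm_sample, fm_wood=1.2):
    """Fraction of carbon from residential wood combustion, given the
    measured fraction modern of a sample. Fossil carbon contributes
    fm = 0; fm_wood > 1 reflects bomb-era 14C enrichment (assumed value)."""
    return fm_sample / fm_wood

# A PAH fraction measuring fm = 0.9 would be ~75% wood-combustion carbon:
print(rwc_fraction(0.9))   # 0.75
```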
Nonlinear response matrix methods for radiative transfer
International Nuclear Information System (INIS)
Miller, W.F. Jr.; Lewis, E.E.
1987-01-01
A nonlinear response matrix formalism is presented for the solution of time-dependent radiative transfer problems. The essential feature of the method is that within each computational cell the temperature is calculated in response to the incoming photons from all frequency groups. Thus the updating of the temperature distribution is placed within the iterative solution of the space-angle transport problem, instead of outside it. The method is formulated for both grey and multifrequency problems and applied in slab geometry. The method is compared to the more conventional source iteration technique. 7 refs., 1 fig., 4 tabs
Reweighting QCD matrix-element and parton-shower calculations
Energy Technology Data Exchange (ETDEWEB)
Bothmann, Enrico; Schumann, Steffen [Universitaet Goettingen, II. Physikalisches Institut, Goettingen (Germany); Schoenherr, Marek [Universitaet Zuerich, Physik-Institut, Zuerich (Switzerland)
2016-11-15
We present the implementation and validation of the techniques used to efficiently evaluate parametric and perturbative theoretical uncertainties in matrix-element plus parton-shower simulations within the Sherpa event-generator framework. By tracing the full α_s and PDF dependences, including the parton-shower component, as well as the fixed-order scale uncertainties, we compute variational event weights on-the-fly, thereby greatly reducing the computational costs to obtain theoretical-uncertainty estimates. (orig.)
Brachytherapy dosimetry parameters calculated for a 131Cs source
International Nuclear Information System (INIS)
Rivard, Mark J.
2007-01-01
A comprehensive analysis of the IsoRay Medical model CS-1 Rev2 ¹³¹Cs brachytherapy source was performed. Dose distributions were simulated using Monte Carlo methods (MCNP5) in liquid water, Solid Water™, and Virtual Water™ spherical phantoms. From these results, the in-water brachytherapy dosimetry parameters have been determined, and were compared with those of Murphy et al. [Med. Phys. 31, 1529-1538 (2004)] using measurements and simulations. Our results suggest that calculations obtained using erroneous cross-section libraries should be discarded, as recommended by the 2004 AAPM TG-43U1 report. Our MC Λ value of 1.046 ± 0.019 cGy·h⁻¹·U⁻¹ is within 1.3% of that measured by Chen et al. [Med. Phys. 32, 3279-3285 (2005)] using TLDs and the calculated results of Wittman and Fisher [Med. Phys. 34, 49-54 (2007)] using MCNP5. Using the discretized energy approach of Rivard [Appl. Radiat. Isot. 55, 775-782 (2001)] to ascertain the impact of individual ¹³¹Cs photons on the radial dose function and anisotropy functions, there was virtual equivalence of results for 29.461 ≤ E_γ ≤ 34.419 keV and for a mono-energetic 30.384 keV photon source. Comparisons of radial dose function and 2D anisotropy function data are also included, and an analysis of material composition and cross-section libraries was performed.
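The parameters reported here feed the AAPM TG-43 formalism; in its 1D point-source approximation the dose rate is D(r) = S_K · Λ · (r₀/r)² · g(r) · φ_an(r). A sketch using the paper's Λ together with an assumed, illustrative value of the radial dose function (not taken from the reference):

```python
def tg43_dose_rate(sk, lam, r, g_r, phi_an=1.0, r0=1.0):
    """1D TG-43 point-source dose rate in cGy/h: sk is the air-kerma
    strength (U), lam the dose-rate constant (cGy/(h*U)), g_r the radial
    dose function and phi_an the 1D anisotropy function at radius r (cm)."""
    return sk * lam * (r0 / r) ** 2 * g_r * phi_an

# Paper's Lambda = 1.046 cGy/(h*U); g(2 cm) = 0.55 is an assumed value.
print(tg43_dose_rate(sk=1.0, lam=1.046, r=2.0, g_r=0.55))
```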
Calculation of Rydberg interaction potentials
International Nuclear Information System (INIS)
Weber, Sebastian; Büchler, Hans Peter; Tresp, Christoph; Urvoy, Alban; Hofferberth, Sebastian; Menke, Henri; Firstenberg, Ofer
2017-01-01
The strong interaction between individual Rydberg atoms provides a powerful tool exploited in an ever-growing range of applications in quantum information science, quantum simulation and ultracold chemistry. One hallmark of the Rydberg interaction is that both its strength and angular dependence can be fine-tuned with great flexibility by choosing appropriate Rydberg states and applying external electric and magnetic fields. More and more experiments are probing this interaction at short atomic distances or with such high precision that perturbative calculations as well as restrictions to the leading dipole–dipole interaction term are no longer sufficient. In this tutorial, we review all relevant aspects of the full calculation of Rydberg interaction potentials. We discuss the derivation of the interaction Hamiltonian from the electrostatic multipole expansion, numerical and analytical methods for calculating the required electric multipole moments and the inclusion of electromagnetic fields with arbitrary direction. We focus specifically on symmetry arguments and selection rules, which greatly reduce the size of the Hamiltonian matrix, enabling the direct diagonalization of the Hamiltonian up to higher multipole orders on a desktop computer. Finally, we present example calculations showing the relevance of the full interaction calculation to current experiments. Our software for calculating Rydberg potentials including all features discussed in this tutorial is available as open source. (tutorial)
International Nuclear Information System (INIS)
Li Chunjuan; Liu Yi'na; Zhang Weihua; Wang Zhiqiang
2014-01-01
The manganese bath method for measuring the neutron emission rate of radionuclide sources requires corrections to be made for emitted neutrons which are not captured by manganese nuclei. The Monte Carlo particle transport code MCNP was used to simulate the manganese bath system of the standards for the measurement of neutron source intensity. The correction factors were calculated and the reliability of the model was demonstrated through the key comparison for the radionuclide neutron source emission rate measurements organized by BIPM. The uncertainties in the calculated values were evaluated by considering the sensitivities to the solution density, the density of the radioactive material, the positioning of the source, the radius of the bath, and the interaction cross-sections. A new method for the evaluation of the uncertainties in Monte Carlo calculation was given. (authors)
Involvement of extracellular matrix constituents in breast cancer
Energy Technology Data Exchange (ETDEWEB)
Lochter, Andre; Bissell, Mina J
1995-06-01
It has recently been established that the extracellular matrix is required for normal functional differentiation of mammary epithelia not only in culture, but also in vivo. The mechanisms by which extracellular matrix affects differentiation, as well as the nature of extracellular matrix constituents which have major impacts on mammary gland function, have only now begun to be dissected. The intricate variety of extracellular matrix-mediated events and the remarkable degree of plasticity of extracellular matrix structure and composition at virtually all times during ontogeny, make such studies difficult. Similarly, during carcinogenesis, the extracellular matrix undergoes gross alterations, the consequences of which are not yet precisely understood. Nevertheless, an increasing amount of data suggests that the extracellular matrix and extracellular matrix-receptors might participate in the control of most, if not all, of the successive stages of breast tumors, from appearance to progression and metastasis.
International Nuclear Information System (INIS)
Petrov, Eh.E.; Fadeev, I.A.
1979-01-01
A possibility to use displaced sampling from a bulk gamma source in calculating secondary gamma fields by the Monte Carlo method is discussed. The algorithm proposed is based on the concept of conjugate functions together with the dispersion minimization technique. For the sake of simplicity a plane source is considered. The algorithm has been put into practice on the M-220 computer. The differential gamma current and flux spectra in 21 cm-thick lead have been calculated. The source of secondary gamma quanta was assumed to be a distributed, constant and isotropic one emitting 4 MeV gamma quanta at a rate of 10⁹ quanta/(cm³·s). The calculations have demonstrated that the last 7 cm of lead are responsible for the whole gamma spectral pattern. The spectra practically coincide with the ones calculated by the ROZ computer code. Thus the algorithm proposed can be effectively used in calculations of secondary gamma radiation transport, and reduces the computation time by a factor of 2-4.
Energy Technology Data Exchange (ETDEWEB)
Kang, M. Y.; Kim, J. H.; Choi, H. D.; Sun, G. M. [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)
2014-10-15
To calculate the full energy (FE) absorption peak efficiency for arbitrary volume samples, we developed and verified the Effective Solid Angle (ESA) code. The procedure for semi-empirical determination of the FE efficiency for arbitrary volume sources, as well as the calculation principles and processes of the ESA code, are described in previous studies, in which the code was validated with an HPGe detector (relative efficiency 32%, n-type). In this study, we use HPGe detectors of different types and efficiencies in order to verify the performance of the ESA code for various detectors. We calculated the efficiency curves of voluminous sources and compared them with experimental data. We will carry out additional validation by measuring CRM volume sources of various media, volumes and shapes with detectors of different efficiencies and types. We will also account for the effect of the dead layer of p-type HPGe detectors and a coincidence summing correction technique in the near future.
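The starting point of an effective-solid-angle calculation is the geometric solid angle subtended by the detector face; for an on-axis point source and a circular face this has a closed form. A toy sketch (the ESA code additionally handles voluminous sources, attenuation and detector response, which are omitted here):

```python
import math

def disk_solid_angle(d, r):
    """Solid angle (sr) subtended by a circular detector face of radius r
    at an on-axis point source a distance d from the face."""
    return 2 * math.pi * (1.0 - d / math.hypot(d, r))

# Geometric efficiency: fraction of isotropically emitted photons that
# intercept the face (no attenuation, no intrinsic efficiency).
omega = disk_solid_angle(d=10.0, r=3.0)
print(omega / (4 * math.pi))
```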
Global unitary fixing and matrix-valued correlations in matrix models
International Nuclear Information System (INIS)
Adler, Stephen L.; Horwitz, Lawrence P.
2003-01-01
We consider the partition function for a matrix model with a global unitary invariant energy function. We show that the averages over the partition function of global unitary invariant trace polynomials of the matrix variables are the same when calculated with any choice of a global unitary fixing, while averages of such polynomials without a trace define matrix-valued correlation functions, that depend on the choice of unitary fixing. The unitary fixing is formulated within the standard Faddeev-Popov framework, in which the squared Vandermonde determinant emerges as a factor of the complete Faddeev-Popov determinant. We give the ghost representation for the FP determinant, and the corresponding BRST invariance of the unitary-fixed partition function. The formalism is relevant for deriving Ward identities obeyed by matrix-valued correlation functions
Linear models in matrix form a hands-on approach for the behavioral sciences
Brown, Jonathon D
2014-01-01
This textbook is an approachable introduction to statistical analysis using matrix algebra. Prior knowledge of matrix algebra is not necessary. Advanced topics are easy to follow through analyses that were performed on an open-source spreadsheet using a few built-in functions. These topics include ordinary linear regression, as well as maximum likelihood estimation, matrix decompositions, nonparametric smoothers and penalized cubic splines. Each data set (1) contains a limited number of observations to encourage readers to do the calculations themselves, and (2) tells a coherent story based on statistical significance and confidence intervals. In this way, students will learn how the numbers were generated and how they can be used to make cogent arguments about everyday matters. This textbook is designed for use in upper level undergraduate courses or first year graduate courses. The first chapter introduces students to linear equations, then covers matrix algebra, focusing on three essential operations: sum ...
Source apportionment of airborne particulates through receptor modeling: Indian scenario
Banerjee, Tirthankar; Murari, Vishnu; Kumar, Manish; Raju, M. P.
2015-10-01
Airborne particulate chemistry is mostly governed by the associated sources, and apportionment of specific sources is extremely essential to delineate explicit control strategies. The present submission initially deals with publications (1980s-2010s) of Indian origin which report regional heterogeneities of particulate concentrations with reference to associated species. Such meta-analyses clearly indicate the presence of a reservoir of both primary and secondary aerosols in different geographical regions. Further, the identification of specific signatory molecules for each individual source category was also evaluated in terms of scientific merit and repeatability. Source signatures mostly resemble international profiles, while in selected cases they lack appropriateness. In India, source apportionment (SA) of airborne particulates was initiated as far back as 1985 through factor analysis; however, principal component analysis (PCA) shares the major proportion of applications (34%), followed by enrichment factor (EF, 27%), chemical mass balance (CMB, 15%) and positive matrix factorization (PMF, 9%). Mainstream SA analyses identify earth crust and road dust resuspension (traced by Al, Ca, Fe, Na and Mg) as a principal source (6-73%), followed by vehicular emissions (traced by Fe, Cu, Pb, Cr, Ni, Mn, Ba and Zn; 5-65%), industrial emissions (traced by Co, Cr, Zn, V, Ni, Mn, Cd; 0-60%), fuel combustion (traced by K, NH4+, SO4-, As, Te, S, Mn; 4-42%), marine aerosols (traced by Na, Mg, K; 0-15%) and biomass/refuse burning (traced by Cd, V, K, Cr, As, TC, Na, K, NH4+, NO3-, OC; 1-42%). In most cases, temporal variations of individual source contributions for a specific geographic region exhibit radical heterogeneity, possibly due to the unscientific orientation of individual tracers for a specific source, further exacerbated by methodological weaknesses, inappropriate sample sizes, the implications of secondary aerosols and inadequate emission inventories. Conclusively, a number of challenging
International Nuclear Information System (INIS)
Tanaka, Yuho; Uruma, Kazunori; Furukawa, Toshihiro; Nakao, Tomoki; Izumi, Kenya; Utsumi, Hiroaki
2017-01-01
This paper deals with an analysis problem for diffusion-ordered NMR spectroscopy (DOSY). DOSY is formulated as a matrix factorization problem for a given observed matrix. A well-known approach to this problem is the direct exponential curve resolution algorithm (DECRA). DECRA is based on singular value decomposition; the advantage of this algorithm is that no initial value is required. However, DECRA requires a long calculation time, depending on the size of the given observed matrix, due to the singular value decomposition, and this is a serious problem in practical use. Thus, this paper proposes a new analysis algorithm for DOSY that achieves a short calculation time. In order to solve the matrix factorization for DOSY without using singular value decomposition, this paper focuses on the size of the given observed matrix. The observed matrix in DOSY is a rectangular matrix with more columns than rows, due to the limited measuring time; thus, the proposed algorithm transforms the given observed matrix into a small observed matrix. The proposed algorithm applies eigenvalue decomposition and a difference approximation to the small observed matrix, and the matrix factorization problem for DOSY is solved. A simulation and a data analysis show that the proposed algorithm achieves a shorter calculation time than DECRA as well as analysis results similar to those of DECRA. (author)
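The shift-and-eigendecompose idea behind DECRA-style analysis can be sketched in a few lines: for noiseless multi-exponential data, the per-step decay factors appear among the eigenvalues of pinv(A)·B built from row-shifted submatrices. Synthetic data, not the paper's algorithm in full:

```python
import numpy as np

# Synthetic DOSY-like data: rows are spectra attenuated by two exponential
# decays (two diffusing components); noiseless for clarity.
t = np.arange(8)[:, None]                       # gradient-step index
decays = np.array([0.8, 0.5])                   # per-step attenuation factors
spectra = np.array([[1.0, 0.2, 0.0],            # hypothetical component spectra
                    [0.0, 0.3, 1.0]])
D = (decays[None, :] ** t) @ spectra            # observed matrix (8 x 3)

# Shift trick: A = D[:-1] and B = D[1:] share the same factors up to a
# diagonal of decay factors, so the decays are eigenvalues of pinv(A) @ B.
A, B = D[:-1], D[1:]
vals = np.linalg.eigvals(np.linalg.pinv(A) @ B)
top2 = sorted(vals.real, reverse=True)[:2]
print(top2)   # ~ [0.8, 0.5]
```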
Guo, Xinyue; Li, Weihong; Ma, Minghui; Lu, Xin; Zhang, Haiyan
2017-11-01
The extracellular matrix (ECM) microenvironment is involved in the regulation of hepatocyte phenotype and function. Recently, the cell-derived extracellular matrix has been proposed to represent the bioactive and biocompatible materials of the native ECM. Here, we show that the endothelial cell-derived matrix (EC matrix) promotes the metabolic maturation of human adipose stem cell-derived hepatocyte-like cells (hASC-HLCs) through the activation of the transcription factor forkhead box protein A2 (FOXA2) and the nuclear receptors hepatocyte nuclear factor 4 alpha (HNF4α) and pregnane X receptor (PXR). Reducing the fibronectin content in the EC matrix or silencing the expression of α5 integrin in the hASC-HLCs inhibited the effect of the EC matrix on Src phosphorylation and hepatocyte maturation. The inhibition of Src phosphorylation using the inhibitor PP2 or silencing the expression of Src in hASC-HLCs also attenuated the up-regulation of the metabolic function of hASC-HLCs in a nuclear receptor-dependent manner. These data elucidate integrin-Src signalling linking the extrinsic EC matrix signals and metabolic functional maturation of hepatocyte. This study provides a model for studying the interaction between hepatocytes and non-parenchymal cell-derived matrix. © 2017 The Authors. Journal of Cellular and Molecular Medicine published by John Wiley & Sons Ltd and Foundation for Cellular and Molecular Medicine.
Directory of Open Access Journals (Sweden)
Seyed Sina Sebtahmadi
2016-11-01
Full Text Available A rotational d-q current control scheme based on a Particle Swarm Optimization-Proportional-Integral (PSO-PI) controller is used to drive an induction motor (IM) through an Ultra Sparse Z-source Matrix Converter (USZSMC). To minimize the overall size of the system, the lowest feasible values of the Z-source elements are calculated by considering both the timing and circuit aspects. A meta-heuristic method is integrated into the control system in order to find optimal coefficient values in a single multimodal problem. Hence, the effects of all coefficients in minimizing the total harmonic distortion (THD) and balancing the stator current are considered simultaneously. By changing the reference point of magnitude or frequency, the modulation index can be automatically adjusted and respond to changes without heavy computational cost. The focus of this research is on a reliable and lightweight system with low computational resources. The proposed scheme is validated through both simulation and experimental results.
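A PSO-tuned PI loop can be sketched on a toy first-order plant (the actual plant here is an induction motor drive through the USZSMC; the objective function, gain bounds and PSO constants below are all illustrative):

```python
import random

def step_ise(kp, ki, dt=0.01, steps=500):
    """Integral squared error of the unit-step response of a toy
    first-order plant dy/dt = -y + u under PI control (Euler stepping)."""
    y = integ = ise = 0.0
    for _ in range(steps):
        e = 1.0 - y
        integ += e * dt
        u = kp * e + ki * integ
        y += (-y + u) * dt
        ise += e * e * dt
    return ise

def pso_tune(obj, n=20, iters=40, lo=0.0, hi=20.0, seed=1):
    """Minimal particle swarm search over (Kp, Ki); constants illustrative."""
    rnd = random.Random(seed)
    pos = [[rnd.uniform(lo, hi), rnd.uniform(lo, hi)] for _ in range(n)]
    vel = [[0.0, 0.0] for _ in range(n)]
    pbest, pcost = [p[:] for p in pos], [obj(*p) for p in pos]
    gcost = min(pcost)
    gbest = pbest[pcost.index(gcost)][:]
    for _ in range(iters):
        for i in range(n):
            for d in range(2):
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * rnd.random() * (pbest[i][d] - pos[i][d])
                             + 1.5 * rnd.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            c = obj(*pos[i])
            if c < pcost[i]:
                pcost[i], pbest[i] = c, pos[i][:]
                if c < gcost:
                    gcost, gbest = c, pos[i][:]
    return gbest, gcost

(kp, ki), cost = pso_tune(step_ise)
print(kp, ki, cost)
```

The swarm treats both gains as one multimodal search problem, mirroring the paper's idea of tuning all coefficients simultaneously rather than one at a time.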
Directory of Open Access Journals (Sweden)
S. Reimann
2008-05-01
Full Text Available Hourly measurements of 13 volatile hydrocarbons (C2–C7) were performed at an urban background site in Zurich (Switzerland) in the years 1993–1994 and again in 2005–2006. For the separation of the volatile organic compounds by gas chromatography (GC), an identical chromatographic column was used in both campaigns. Changes in hydrocarbon profiles and source strengths were recovered by positive matrix factorization (PMF). Eight and six factors could be related to hydrocarbon sources in 1993–1994 and in 2005–2006, respectively. The modeled source profiles were verified against hydrocarbon profiles reported in the literature. The source strengths were validated by independent measurements, such as inorganic trace gases (NOx, CO, SO2), methane (CH4), oxidized hydrocarbons (OVOCs) and meteorological data (temperature, wind speed, etc.). Our analysis suggests that the contribution of most hydrocarbon sources (i.e. road traffic, solvent use and wood burning) decreased by a factor of about two to three between the early 1990s and 2005–2006. On the other hand, hydrocarbon losses from natural gas leakage remained at relatively constant levels (−20%). The estimated emission trends are in line with the results from different receptor-based approaches reported for other European cities. Their differences from national emission inventories are discussed.
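PMF factors a nonnegative data matrix into nonnegative source contributions and profiles. The sketch below uses plain multiplicative-update NMF on synthetic data; real PMF additionally weights each matrix entry by its measurement uncertainty:

```python
import numpy as np

# Synthetic data: X (samples x species) generated from two hidden "sources"
# with hypothetical profiles; all quantities nonnegative by construction.
rng = np.random.default_rng(0)
G_true = rng.uniform(0, 5, (60, 2))              # source contributions
F_true = np.array([[0.7, 0.2, 0.1, 0.0],         # hypothetical profiles
                   [0.0, 0.1, 0.4, 0.5]])
X = G_true @ F_true

# Multiplicative-update NMF: X ~ G @ F with G, F >= 0 elementwise.
G = rng.uniform(0.1, 1, (60, 2))
F = rng.uniform(0.1, 1, (2, 4))
for _ in range(500):
    F *= (G.T @ X) / (G.T @ G @ F + 1e-12)
    G *= (X @ F.T) / (G @ F @ F.T + 1e-12)

print(np.linalg.norm(X - G @ F) / np.linalg.norm(X))  # small residual
```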
Monte Carlo Simulation of stepping source in afterloading intracavitary brachytherapy for GZP6 unit
International Nuclear Information System (INIS)
Toossi, M.T.B.; Abdollahi, M.; Ghorbani, M.
2010-01-01
Full text: A stepping source in brachytherapy systems is used to treat a target lesion longer than the effective treatment length of the source. Dose calculation accuracy plays a vital role in the outcome of brachytherapy treatment. In this study, the stepping source (channel 6) of the GZP6 brachytherapy unit was simulated by Monte Carlo simulation and a matrix shift method. The stepping source of GZP6 was simulated with the Monte Carlo MCNPX code. A mesh tally (type 1) was employed for absorbed dose calculation in a cylindrical water phantom. 5×10⁸ photon histories were scored and a 0.2% statistical uncertainty was obtained in the Monte Carlo calculations. Dose distributions were obtained by our matrix shift method for esophageal cancer tumor lengths of 8 and 10 cm. Isodose curves produced by simulation and by the TPS were superimposed to estimate the differences. Results: Comparison of Monte Carlo and TPS dose distributions shows that in the longitudinal direction (the source movement direction) the Monte Carlo and TPS dose distributions are comparable. In the transverse direction, dose differences of 7% and 5% were observed for esophageal tumor lengths of 8 and 10 cm, respectively. Conclusions: Although the results show that the maximum difference between Monte Carlo and TPS calculations is about 7%, considering that the certified activity is given with ±10% uncertainty, an error of the order of 20% for the Monte Carlo calculation would be reasonable. It can be suggested that the accuracy of the dose distribution produced by the TPS is acceptable for clinical applications. (author)
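The matrix shift method can be illustrated directly: the dose of a stepping source is the superposition of the single-dwell dose matrix shifted along the source travel direction (toy numbers, equal dwell weights assumed):

```python
import numpy as np

def stepping_dose(single, steps, shift):
    """Matrix shift method: sum the single-dwell dose matrix shifted by
    `shift` rows per dwell position along the source travel axis."""
    nz, nr = single.shape
    total = np.zeros((nz + (steps - 1) * shift, nr))
    for k in range(steps):
        total[k * shift : k * shift + nz] += single
    return total

# Toy single-dwell matrix (rows: longitudinal, columns: radial); the
# values are illustrative, not GZP6 data.
single = np.array([[0.2, 0.1],
                   [1.0, 0.4],
                   [0.2, 0.1]])
print(stepping_dose(single, steps=3, shift=1))
```

The center row of the result accumulates contributions from all three dwell positions, reproducing the elongated dose plateau a stepping source is used for.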
Comparison of different source calculations in two-nucleon channel at large quark mass
Yamazaki, Takeshi; Ishikawa, Ken-ichi; Kuramashi, Yoshinobu
2018-03-01
We investigate a systematic error coming from higher excited state contributions in the energy shift of a light nucleus in the two-nucleon channel by comparing two different source calculations with the exponential and wall sources. Since it is hard to obtain a clear signal of the wall source correlation function in a plateau region, we employ a large quark mass, such that the pion mass is 0.8 GeV, in quenched QCD. We discuss the systematic error in the spin-triplet channel of the two-nucleon system, and the volume dependence of the energy shift.
Directory of Open Access Journals (Sweden)
Qiang Guo
2018-01-01
Full Text Available In modern electronic warfare, multiple input multiple output (MIMO) radar has become an important tool for electronic reconnaissance and intelligence transmission because of its anti-stealth, high resolution, low intercept and anti-destruction characteristics. As a common MIMO radar signal, the discrete frequency coding waveform (DFCW) exhibits serious overlap in both time and frequency, so it cannot be directly used in current radar signal separation problems. Existing fuzzy clustering algorithms have problems with initial value selection, low convergence rates and local extrema, which lead to low accuracy of the mixing matrix estimation. Consequently, a novel mixing matrix estimation algorithm based on the data field and improved fuzzy C-means (FCM) clustering is proposed. First of all, the sparsity and linear clustering characteristics of the time–frequency domain MIMO radar signals are enhanced by using the single-source principal value of complex angular detection. Secondly, the data field uses the potential energy information to analyze the particle distribution, from which a new scheme for selecting the number of clusters is designed. Then the particle swarm optimization algorithm is introduced to improve the iterative clustering process of FCM, and finally the estimate of the mixing matrix is obtained. The simulation results show that the proposed algorithm improves both the estimation accuracy and the robustness of the mixing matrix estimation.
International Nuclear Information System (INIS)
Stoker, C.C.; Ball, G.
2000-01-01
The ever-increasing expansion of the irradiation product portfolio of the SAFARI-1 reactor leads to the need to routinely and accurately calculate the radio-isotope concentrations and source terms for the materials irradiated in the reactor. In addition, the shielding required for the transportation and processing of these irradiation products needs to be determined. In this paper the calculational methodology applied is described, with special attention given to the spectrum dependence of the one-group cross sections of selected SAFARI-1 irradiation materials and the consequent effect on the determination of the isotope concentrations and source terms. Comparisons of the calculated isotopic concentrations and dose rates with experimental analysis and measurements provide confidence in the calculational methodologies and data used. (author)
Benchmarking of Touschek Beam Lifetime Calculations for the Advanced Photon Source
Energy Technology Data Exchange (ETDEWEB)
Xiao, A.; Yang, B.
2017-06-25
Particle loss from Touschek scattering is one of the most significant issues faced by present and future synchrotron light source storage rings. For example, the predicted Touschek-dominated beam lifetime for the Advanced Photon Source (APS) Upgrade lattice in 48-bunch, 200-mA timing mode is only ~2 h. In order to understand the reliability of the predicted lifetime, a series of measurements with various beam parameters was performed on the present APS storage ring. This paper first describes the entire beam lifetime measurement process, then compares the measured lifetime with the one calculated from the measured beam parameters. The results show very good agreement.
CDFMC: a program that calculates the fixed neutron source distribution for a BWR using Monte Carlo
International Nuclear Information System (INIS)
Gomez T, A.M.; Xolocostli M, J.V.; Palacios H, J.C.
2006-01-01
The three-dimensional neutron flux calculation using the synthesis method requires determining the neutron flux in two two-dimensional configurations as well as in a one-dimensional one. Most standard guides for calculating the neutron flux or fluence in the vessel of a nuclear reactor place special emphasis on the appropriate calculation of the fixed neutron source that must be provided to the transport code used, so that sufficiently accurate flux values can be found. The reactor core assembly configuration is based on X-Y geometry, but the problem considered is solved in R-θ geometry, so an appropriate mapping is needed to find the source term associated with the R-θ intervals starting from a source distribution in rectangular coordinates. To develop the CDFMC computer program (Source Distribution Calculation using Monte Carlo), it was necessary to develop a mapping theory independent of those found in the literature. The mesh-overlapping method used here is based on a technique of random point generation, commonly known as the Monte Carlo technique. Although the randomness of this technique implies errors in the calculations, it is well known that as the number of randomly generated points used to measure an area or some other quantity of interest increases, the precision of the method increases. In the particular case of the CDFMC computer program, the technique developed behaves well in general when a considerably high number of points is used (greater than or equal to one hundred thousand), which keeps the calculation errors on the order of 1%. (Author)
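The mesh-overlap idea can be illustrated with a minimal sketch: random points are thrown uniformly into each rectangular source cell and binned into the R-θ mesh, so that each X-Y cell's strength is split among R-θ cells in proportion to the hit fractions. This is a reconstruction under stated assumptions (uniform density per cell, mesh centred on the R-θ origin); function and variable names are hypothetical, not taken from CDFMC:

```python
import numpy as np

def map_xy_to_rtheta(density, cell, r_edges, th_edges, n_pts=100_000, seed=0):
    """Monte Carlo remapping of an X-Y source distribution onto an R-theta mesh.
    density: (nx, ny) source strength per rectangular cell of side `cell` (origin
    at the mesh centre). Returns source strength per (r, theta) cell."""
    rng = np.random.default_rng(seed)
    nx, ny = density.shape
    out = np.zeros((len(r_edges) - 1, len(th_edges) - 1))
    for i in range(nx):
        for j in range(ny):
            if density[i, j] == 0.0:
                continue
            # uniform random points inside cell (i, j)
            x = (i - nx / 2 + rng.random(n_pts)) * cell
            y = (j - ny / 2 + rng.random(n_pts)) * cell
            r = np.hypot(x, y)
            th = np.mod(np.arctan2(y, x), 2.0 * np.pi)
            h, _, _ = np.histogram2d(r, th, bins=[r_edges, th_edges])
            out += density[i, j] * h / n_pts   # strength split by hit fraction
    return out
```

If the R-θ mesh covers the whole rectangular region, the total source strength is conserved exactly, which is a useful sanity check on the mapping.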
Energy Technology Data Exchange (ETDEWEB)
Davidson, Scott E., E-mail: sedavids@utmb.edu [Radiation Oncology, The University of Texas Medical Branch, Galveston, Texas 77555 (United States); Cui, Jing [Radiation Oncology, University of Southern California, Los Angeles, California 90033 (United States); Kry, Stephen; Ibbott, Geoffrey S.; Followill, David S. [Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, Texas 77030 (United States); Deasy, Joseph O. [Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, New York 10065 (United States); Vicic, Milos [Department of Applied Physics, University of Belgrade, Belgrade 11000 (Serbia); White, R. Allen [Bioinformatics and Computational Biology, The University of Texas MD Anderson Cancer Center, Houston, Texas 77030 (United States)
2016-08-15
Purpose: A dose calculation tool, which combines the accuracy of the dose planning method (DPM) Monte Carlo code with the versatility of a practical analytical multisource model and which was previously reported, has been improved and validated for the Varian 6 and 10 MV linear accelerators (linacs). The calculation tool can be used to calculate doses in advanced clinical application studies. One shortcoming of current clinical trials that report dose from patient plans is the lack of a standardized dose calculation methodology. Because commercial treatment planning systems (TPSs) have their own dose calculation algorithms, and the clinical trial participant who uses these systems is responsible for commissioning the beam model, variation exists in the reported calculated dose distributions. Today's modern linac is manufactured to tight specifications, so that variability within a linac model is quite low. The expectation is that a single dose calculation tool for a specific linac model can be used to accurately recalculate dose from patient plans that have been submitted to the clinical trial community from any institution. The calculation tool would provide for a more meaningful outcome analysis. Methods: The analytical source model was described by a primary point source, a secondary extra-focal source, and a contaminant electron source. Off-axis energy softening and fluence effects were also included. Hyperbolic functions have been incorporated into the model to correct for the changes in output and in electron contamination with field size. A multileaf collimator (MLC) model is included to facilitate phantom and patient dose calculations. An offset to the MLC leaf positions was used to correct for the rudimentary assumed primary point source. Results: Dose calculations of the depth dose and profiles for field sizes 4 × 4 to 40 × 40 cm agree with measurement within 2% of the maximum dose or 2 mm distance to agreement (DTA) for 95% of the data
A 222 energy bin response matrix for a ⁶LiI scintillator BSS system
International Nuclear Information System (INIS)
Lacerda, M. A. S.; Vega C, H. R.; Mendez V, R.; Lorente F, A.; Ibanez F, S.; Gallego D, E.
2016-10-01
A new response matrix was calculated for a Bonner sphere spectrometer (BSS) with a ⁶LiI(Eu) scintillator. We used the Monte Carlo N-Particle radiation transport code MCNPX, version 2.7.0, with the ENDF/B-VII.0 nuclear data library to calculate the responses for 6 spheres and the bare detector, for energies varying from 9.441E-10 MeV to 105.9 MeV, with 20 equal-log(E)-width bins per energy decade, totaling 222 energy groups. A BSS like the one modeled in this work was used to measure the neutron spectrum generated by the ²⁴¹AmBe source of the Universidad Politecnica de Madrid. From the count rates obtained with this BSS system we unfolded the neutron spectrum using the BUNKIUT code for 31 energy bins (UTA-4 response matrix) and the MAXED code with the newly calculated response functions. We compared the spectra obtained with these BSS system / unfolding code combinations with the spectrum obtained from measurements performed with a BSS system consisting of 12 spheres with a spherical ³He SP-9 counter (Centronic Ltd., UK) and the MAXED code with the system-specific response functions (BSS-CIEMAT). Relatively good agreement was observed between our response matrix and those calculated by other authors. In general, the agreement improves as the energy increases. However, higher discrepancies were observed for energies close to 1E-8 MeV and, mainly, for energies above 20 MeV. These discrepancies were attributed mainly to differences in the cross-section libraries employed. The ambient dose equivalent H*(10) calculated with the ⁶LiI-MAXED combination showed good agreement with values measured with the Berthold LB 6411 neutron area monitor and was within 12% of the value obtained with the other BSS system (BSS-CIEMAT). The response matrix calculated in this work can be used together with the MAXED code to generate neutron spectra with good energy resolution up to 20 MeV. Some additional tests are being done to validate this response matrix and improve the results for energies
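The energy grid described above can be reproduced as a geometric progression: 222 energies spaced at 20 equal-log(E)-width bins per decade indeed run from 9.441E-10 MeV to 105.9 MeV. This is a reconstruction from the figures quoted in the abstract, not code from the paper:

```python
import numpy as np

# 222 energy points, 20 per decade, starting at 9.441e-10 MeV;
# consecutive energies differ by a factor 10**(1/20) ~ 1.122
E = 9.441e-10 * 10.0 ** (np.arange(222) / 20.0)
```

The last point, 9.441e-10 × 10^(221/20), comes out at about 105.9 MeV, matching the upper limit quoted for the response matrix.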
Optimising parallel R correlation matrix calculations on gene expression data using MapReduce.
Wang, Shicai; Pandis, Ioannis; Johnson, David; Emam, Ibrahim; Guitton, Florian; Oehmichen, Axel; Guo, Yike
2014-11-05
High-throughput molecular profiling data have been used to improve clinical decision making by stratifying subjects based on their molecular profiles. Unsupervised clustering algorithms can be used for stratification purposes. However, the current speed of these clustering algorithms cannot meet the requirements of large-scale molecular data, owing to the poor performance of the correlation matrix calculation. With high-throughput sequencing technologies promising to produce even larger datasets per subject, we expect the performance of state-of-the-art statistical algorithms to be further impacted unless optimisation efforts are carried out. MapReduce is a widely used high-performance parallel framework that can solve this problem. In this paper, we evaluate the current parallel modes for correlation calculation methods and introduce an efficient data distribution and parallel calculation algorithm based on MapReduce to optimise the correlation calculation. We studied the performance of our algorithm using two gene expression benchmarks. In the micro-benchmark, our implementation using MapReduce, based on the R package RHIPE, demonstrates a 3.26-5.83 fold speed-up compared to the default Snowfall and a 1.56-1.64 fold speed-up compared to the basic RHIPE in the Euclidean, Pearson and Spearman correlations. Though vanilla R and the optimised Snowfall outperform our optimised RHIPE in the micro-benchmark, they do not scale well with the macro-benchmark. In the macro-benchmark the optimised RHIPE performs 2.03-16.56 times faster than vanilla R. Benefiting from 3.30-5.13 times faster data preparation, the optimised RHIPE performs 1.22-1.71 times faster than the optimised Snowfall. Both the optimised RHIPE and the optimised Snowfall successfully perform the Kendall correlation with the TCGA dataset within 7 hours, more than 30 times faster than the estimated vanilla R time. The performance evaluation found that the new MapReduce algorithm and its
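The block-parallel pattern behind the speed-up can be sketched without Hadoop: split the correlation matrix into row blocks and compute each block against the full standardized matrix in a separate worker. The toy below uses threads (NumPy releases the GIL inside the matrix product) rather than MapReduce workers, so it illustrates only the data-distribution idea, not RHIPE itself; all names are illustrative:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def parallel_corr(X, n_workers=4):
    """Pearson correlation of the rows of X (genes x samples),
    computed block-wise in parallel threads."""
    # standardize each row to zero mean, unit (population) variance
    Z = (X - X.mean(axis=1, keepdims=True)) / X.std(axis=1, keepdims=True)
    n = X.shape[0]
    bounds = np.linspace(0, n, n_workers + 1).astype(int)
    out = np.empty((n, n))

    def block(lo, hi):                     # one "map" task: rows lo..hi vs all
        out[lo:hi] = Z[lo:hi] @ Z.T / X.shape[1]

    with ThreadPoolExecutor(n_workers) as ex:
        list(ex.map(lambda b: block(*b), zip(bounds, bounds[1:])))
    return out
```

Each block writes a disjoint slice of the result, so no reduction step is needed; in the MapReduce setting, the blocks would instead be emitted by mappers and assembled by a reducer.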
Tsuji, Motonori; Shudo, Koichi; Kagechika, Hiroyuki
2017-03-01
Understanding and identifying the receptor subtype selectivity of a ligand is an important issue in the field of drug discovery. Using a combination of classical molecular mechanics and quantum mechanical calculations, this report assesses the receptor subtype selectivity for the human retinoid X receptor (hRXR) and retinoic acid receptor (hRAR) ligand-binding domains (LBDs) complexed with retinoid ligands. The calculated energies show good correlation with the experimentally reported binding affinities. The technique proposed here is a promising method as it reveals the origin of the receptor subtype selectivity of selective ligands.
International Nuclear Information System (INIS)
Sadeghi, Mahdi; Hosseini, Hamed; Raisali, Gholamreza
2008-01-01
Full text: The use of ¹⁰³Pd seed sources for permanent prostate implantation has become a popular brachytherapy application. As recommended by the AAPM, the dosimetric characteristics of a new source must be determined by experiment and Monte Carlo simulation before its use in clinical applications; the goal of this report is therefore the experimental and theoretical determination of the dosimetric characteristics of this source following the recommendations of the AAPM TG-43U1 protocol. Figure 1 shows the geometry of the IRA-¹⁰³Pd source. The source consists of a cylindrical silver core, 0.3 cm long x 0.05 cm in diameter, onto which a 0.5 nm layer of ¹⁰³Pd has been uniformly adsorbed. The effective active length of the source is 0.3 cm, and the silver core is encapsulated inside a hollow titanium tube 0.45 cm long, with 0.07 cm inner and 0.08 cm outer diameters, closed by two caps. The Monte Carlo N-Particle (MCNP) code, version 4C, was used to determine the relevant dosimetric parameters of the source. The geometry of the Monte Carlo simulation performed in this study consisted of a sphere 30 cm in diameter. Dose distributions around this source were measured in two Perspex phantoms using TLD chips. For these measurements, slabs of Perspex material were machined to accommodate the source and TLD chips. A value of 0.67 cGy·h⁻¹·U⁻¹ (±1%) for the dose rate constant, Λ, was calculated as the ratio of Ḋ(r₀,θ₀) and S_K; this may be compared with Λ values obtained for other ¹⁰³Pd sources. Results of the calculated and measured dosimetric parameters of the source, including the radial dose function, g(r), and the anisotropy function, F(r,θ), are shown in separate figures. The radial dose function, g(r), for the IRA-¹⁰³Pd source and other ¹⁰³Pd sources is included in Fig. 2. Comparison between the measured and Monte Carlo simulated radial dose function, g(r), and anisotropy function, F(r,θ), of this source demonstrated that they are in good agreement with each other, and the value of Λ is
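The role of Λ in the TG-43U1 formalism can be illustrated with a small sketch. Here g(r) and F(r,θ) are assumed to be user-supplied interpolations of the published tables, the line-source geometry function uses the 0.3 cm active length quoted above, and the function names are illustrative rather than taken from the paper:

```python
import numpy as np

def G_L(r, theta, L=0.3):
    """TG-43 line-source geometry function (cm^-2); L = active length in cm."""
    y = r * np.sin(theta)          # distance from the source axis
    z = r * np.cos(theta)          # position along the source axis
    if np.isclose(y, 0.0):         # on-axis limit
        return 1.0 / abs(r * r - L * L / 4.0)
    # beta = angle subtended by the active length at the point (r, theta)
    beta = np.arctan((L / 2 - z) / y) + np.arctan((L / 2 + z) / y)
    return beta / (L * y)

def dose_rate(r, theta, S_K, Lam, g, F, r0=1.0, th0=np.pi / 2, L=0.3):
    """TG-43U1 2D formalism:
    D(r,theta) = S_K * Lambda * [G_L(r,theta)/G_L(r0,th0)] * g(r) * F(r,theta)."""
    return S_K * Lam * G_L(r, theta, L) / G_L(r0, th0, L) * g(r) * F(r, theta)
```

At the reference point (r₀ = 1 cm, θ₀ = π/2) with g = F = 1 this reduces to S_K·Λ, consistent with Λ being defined as Ḋ(r₀,θ₀)/S_K.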
Optimizing the calculation of point source count-centroid in pixel size measurement
International Nuclear Information System (INIS)
Zhou Luyi; Kuang Anren; Su Xianyu
2004-01-01
Purpose: Pixel size is an important parameter of gamma cameras and SPECT, and a number of methods are used for its accurate measurement. In the original count-centroid method, the image of a point source (PS) is acquired and its count-centroid calculated to represent the PS position in the image; background counts are inevitable. Thus the measured count-centroid (Xm) is an approximation of the true count-centroid (Xp) of the PS, i.e. Xm = Xp + (Xb − Xp)/(1 + Rp/Rb), where Rp is the net counting rate of the PS, Xb the background count-centroid and Rb the background counting rate. For an accurate measurement, Rp must be very large, which is impractical, resulting in variation of the measured pixel size. An Rp-independent calculation of the PS count-centroid is desired. Methods: The proposed method attempts to eliminate the effect of the term (Xb − Xp)/(1 + Rp/Rb) by bringing Xb closer to Xp and by reducing Rb. In the acquired PS image, a circular ROI was generated to enclose the PS, with the pixel having the maximum count as the center of the ROI. To choose the diameter (D) of the ROI, a Gaussian count distribution was assumed for the PS; accordingly, a fraction K = 1 − (0.5)^(D/R) of the total PS counts lies in the ROI, R being the full width at half maximum of the PS count distribution. D was set to 6R to enclose most (K = 98.4%) of the PS counts. The count-centroid of the ROI was calculated to represent Xp. The proposed method was tested by measuring the pixel size of a well-tuned SPECT, whose pixel size was estimated to be 3.02 mm according to its mechanical and electronic settings (128 x 128 matrix, 387 mm UFOV, ZOOM = 1). For comparison, the original method, which was used in former versions of some commercial SPECT software, was also tested. 12 PSs were prepared and their images acquired and stored. The net counting rate of the PSs increased from 10 cps to 1183 cps. Results: Using the proposed method, the measured pixel size (in mm) varied only between 3.00 and 3.01 (mean = 3.01 ± 0.00) as Rp increased
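The ROI-centroid computation described in Methods can be sketched as follows. This is a hypothetical NumPy reimplementation with the ROI diameter fixed at 6 x FWHM as above; the function name and test values are illustrative:

```python
import numpy as np

def roi_centroid(img, fwhm):
    """Count-centroid of a point source inside a circular ROI of diameter
    6*FWHM, centred on the hottest pixel. Returns (row, col) centroid."""
    cy, cx = np.unravel_index(np.argmax(img), img.shape)
    yy, xx = np.indices(img.shape)
    # circular ROI mask of radius 3*FWHM around the maximum-count pixel
    mask = (yy - cy) ** 2 + (xx - cx) ** 2 <= (3.0 * fwhm) ** 2
    w = img * mask
    return (yy * w).sum() / w.sum(), (xx * w).sum() / w.sum()
```

Because the ROI is centred near the true PS position, the residual background inside it pulls the centroid only toward a point already close to Xp, which is what makes the estimate nearly independent of Rp.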
Linden, P; Dahl, B; Pázsit, I; Por, G
1999-01-01
We have performed laboratory measurements of the neutron flux and its gradient in a static model experiment, similar to a model problem proposed in Pazsit (Ann. Nucl. Energy 24 (1997) 1257). The experimental system consists of a radioactive neutron source located in a water tank. The measurements are performed using a recently developed very small optical fibre detector. The measured values of the flux and its gradient are then used to test the possibility of localising the source. The results show that it is possible to measure the flux on the circumference of a circle and from this calculate the flux gradient vector. Then, by comparison of the measured quantities with corresponding MCNP calculations, both the direction and the distance to the source are found and thus the position of the source can be determined.
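The circle-sampling step can be sketched with a first-order fit: flux measured at points on a circle of radius a is modelled as φ₀ + a(gx cos α + gy sin α), and the fitted gradient vector points toward the source. The 1/r² test field and all names below are illustrative, not the optical-fibre measurements or MCNP comparison used by the authors:

```python
import numpy as np

def flux_gradient(angles, phi, a):
    """Least-squares fit phi(alpha) ~ phi0 + a*(gx*cos(alpha) + gy*sin(alpha)).
    Returns the flux at the circle centre and the gradient vector (gx, gy)."""
    A = np.column_stack([np.ones_like(angles),
                         a * np.cos(angles), a * np.sin(angles)])
    phi0, gx, gy = np.linalg.lstsq(A, phi, rcond=None)[0]
    return phi0, np.array([gx, gy])

# synthetic check with a 1/r^2 point-source field: source at (5, 0),
# flux sampled at 12 points on a circle of radius 0.3 around the origin
angles = np.linspace(0.0, 2.0 * np.pi, 12, endpoint=False)
pts = 0.3 * np.column_stack([np.cos(angles), np.sin(angles)])
phi = 1.0 / np.sum((pts - np.array([5.0, 0.0])) ** 2, axis=1)

phi0, g = flux_gradient(angles, phi, 0.3)
direction = g / np.linalg.norm(g)        # unit vector toward the source
dist = 2.0 * phi0 / np.linalg.norm(g)    # for a 1/r^2 field, |grad(phi)|/phi = 2/r
```

For an unscattered point-source field the ratio of the gradient magnitude to the flux also yields the distance, which mirrors the direction-plus-distance localisation reported above; in water, scattering makes the actual relation the one obtained from the MCNP comparison rather than this idealised 1/r² form.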
How does the extracellular matrix direct gene expression
Energy Technology Data Exchange (ETDEWEB)
Bissell, M J; Hall, H G; Parry, G
1982-01-01
Based on the existing literature, a model is presented that postulates a "dynamic reciprocity" between the extracellular matrix (ECM) on the one hand and the cytoskeleton and the nuclear matrix on the other. The ECM is postulated to exert physical and chemical influences on the geometry and biochemistry of the cell via transmembrane receptors, so as to alter the pattern of gene expression by changing the association of the cytoskeleton with mRNA and the interaction of the chromatin with the nuclear matrix. This, in turn, would affect the ECM, which would again affect the cell.
Chern-Simons matrix models and unoriented strings
International Nuclear Information System (INIS)
Halmagyi, Nick; Yasnov, Vadim
2004-01-01
For matrix models with measure on the Lie algebra of SO/Sp, the sub-leading free energy is given by F₁(S) = ±(1/4) δF₀(S)/δS. Motivated by the fact that this relationship does not hold for Chern-Simons theory on S³, we calculate the sub-leading free energy in the matrix model for this theory, which is a Gaussian matrix model with Haar measure on the group SO/Sp. We derive a quantum loop equation for this matrix model and then find that F₁ is an integral of the leading-order resolvent over the spectral curve. We explicitly calculate this integral for the quadratic potential and find agreement with previous studies of SO/Sp Chern-Simons theory. (author)
International Nuclear Information System (INIS)
Park, Hong Sik; Kim, Min; Park, Seong Chan; Seo, Jong Tae; Kim, Eun Kee
2005-01-01
The SHIELD code has been used to calculate the source terms of the NSSS Auxiliary System (comprising the CVCS, SIS, and SCS) components of the OPR1000. Because the code was developed based on the SYSTEM80 design, and the APR1400 NSSS Auxiliary System design differs considerably from that of SYSTEM80 or OPR1000, the SHIELD code cannot be used directly for APR1400 radiation design. Thus hand calculation is needed for the changed portions of the design, using the results of the SHIELD code calculation. In this study, the SHIELD code is modified to incorporate the APR1400 design changes, and the source term calculation is performed for the APR1400 NSSS Auxiliary System components
Calculating massive 3-loop graphs for operator matrix elements by the method of hyperlogarithms
International Nuclear Information System (INIS)
Ablinger, Jakob; Schneider, Carsten; Bluemlein, Johannes; Raab, Clemens; Wissbrock, Fabian
2014-02-01
We calculate convergent 3-loop Feynman diagrams containing a single massive loop equipped with twist τ=2 local operator insertions corresponding to spin N. They contribute to the massive operator matrix elements in QCD describing the massive Wilson coefficients for deep-inelastic scattering at large virtualities. Diagrams of this kind can be computed using an extended version of the method of hyperlogarithms, originally designed for massless Feynman diagrams without operators. The method is applied to Benz- and V-type graphs, belonging to the genuine 3-loop topologies. In case of the V-type graphs with five massive propagators, new types of nested sums and iterated integrals emerge. The sums are given in terms of finite binomially and inverse binomially weighted generalized cyclotomic sums, while the 1-dimensionally iterated integrals are based on a set of ∼30 square-root valued letters. We also derive the asymptotic representations of the nested sums and present the solution for N∈C. Integrals with a power-like divergence in N-space, ∝a^N, a∈R, a>1, for large values of N emerge. They still possess a representation in x-space, which is given in terms of root-valued iterated integrals in the present case. The method of hyperlogarithms is also used to calculate higher moments for crossed box graphs with different operator insertions.
Calculating massive 3-loop graphs for operator matrix elements by the method of hyperlogarithms
Energy Technology Data Exchange (ETDEWEB)
Ablinger, Jakob; Schneider, Carsten [Johannes Kepler Univ., Linz (Austria). Reserach Inst. for Symbolic Computation (RISC); Bluemlein, Johannes; Raab, Clemens [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany); Wissbrock, Fabian [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany); Johannes Kepler Univ., Linz (Austria). Reserach Inst. for Symbolic Computation (RISC)
2014-02-15
We calculate convergent 3-loop Feynman diagrams containing a single massive loop equipped with twist τ=2 local operator insertions corresponding to spin N. They contribute to the massive operator matrix elements in QCD describing the massive Wilson coefficients for deep-inelastic scattering at large virtualities. Diagrams of this kind can be computed using an extended version of the method of hyperlogarithms, originally designed for massless Feynman diagrams without operators. The method is applied to Benz- and V-type graphs, belonging to the genuine 3-loop topologies. In case of the V-type graphs with five massive propagators, new types of nested sums and iterated integrals emerge. The sums are given in terms of finite binomially and inverse binomially weighted generalized cyclotomic sums, while the 1-dimensionally iterated integrals are based on a set of ∼30 square-root valued letters. We also derive the asymptotic representations of the nested sums and present the solution for N∈C. Integrals with a power-like divergence in N-space, ∝a^N, a∈R, a>1, for large values of N emerge. They still possess a representation in x-space, which is given in terms of root-valued iterated integrals in the present case. The method of hyperlogarithms is also used to calculate higher moments for crossed box graphs with different operator insertions.
Calculating massive 3-loop graphs for operator matrix elements by the method of hyperlogarithms
Energy Technology Data Exchange (ETDEWEB)
Ablinger, Jakob [Research Institute for Symbolic Computation (RISC), Johannes Kepler University, Altenbergerstraße 69, A-4040 Linz (Austria); Blümlein, Johannes; Raab, Clemens [Deutsches Elektronen-Synchrotron, DESY, Platanenallee 6, D-15738 Zeuthen (Germany); Schneider, Carsten [Research Institute for Symbolic Computation (RISC), Johannes Kepler University, Altenbergerstraße 69, A-4040 Linz (Austria); Wißbrock, Fabian [Research Institute for Symbolic Computation (RISC), Johannes Kepler University, Altenbergerstraße 69, A-4040 Linz (Austria); Deutsches Elektronen-Synchrotron, DESY, Platanenallee 6, D-15738 Zeuthen (Germany)
2014-08-15
We calculate convergent 3-loop Feynman diagrams containing a single massive loop equipped with twist τ=2 local operator insertions corresponding to spin N. They contribute to the massive operator matrix elements in QCD describing the massive Wilson coefficients for deep-inelastic scattering at large virtualities. Diagrams of this kind can be computed using an extended version of the method of hyperlogarithms, originally being designed for massless Feynman diagrams without operators. The method is applied to Benz- and V-type graphs, belonging to the genuine 3-loop topologies. In case of the V-type graphs with five massive propagators, new types of nested sums and iterated integrals emerge. The sums are given in terms of finite binomially and inverse binomially weighted generalized cyclotomic sums, while the 1-dimensionally iterated integrals are based on a set of ∼30 square-root valued letters. We also derive the asymptotic representations of the nested sums and present the solution for N∈C. Integrals with a power-like divergence in N-space ∝a^N, a∈R, a>1, for large values of N emerge. They still possess a representation in x-space, which is given in terms of root-valued iterated integrals in the present case. The method of hyperlogarithms is also used to calculate higher moments for crossed box graphs with different operator insertions.
Calculating massive 3-loop graphs for operator matrix elements by the method of hyperlogarithms
International Nuclear Information System (INIS)
Ablinger, Jakob; Blümlein, Johannes; Raab, Clemens; Schneider, Carsten; Wißbrock, Fabian
2014-01-01
We calculate convergent 3-loop Feynman diagrams containing a single massive loop equipped with twist τ=2 local operator insertions corresponding to spin N. They contribute to the massive operator matrix elements in QCD describing the massive Wilson coefficients for deep-inelastic scattering at large virtualities. Diagrams of this kind can be computed using an extended version of the method of hyperlogarithms, originally being designed for massless Feynman diagrams without operators. The method is applied to Benz- and V-type graphs, belonging to the genuine 3-loop topologies. In case of the V-type graphs with five massive propagators, new types of nested sums and iterated integrals emerge. The sums are given in terms of finite binomially and inverse binomially weighted generalized cyclotomic sums, while the 1-dimensionally iterated integrals are based on a set of ∼30 square-root valued letters. We also derive the asymptotic representations of the nested sums and present the solution for N∈C. Integrals with a power-like divergence in N-space ∝a^N, a∈R, a>1, for large values of N emerge. They still possess a representation in x-space, which is given in terms of root-valued iterated integrals in the present case. The method of hyperlogarithms is also used to calculate higher moments for crossed box graphs with different operator insertions.
International Nuclear Information System (INIS)
Jaekwan Kim; Jhinwung Kim; Koh, H.M.; Kwon, K.
1993-01-01
The variation of the seismic wave field in a multi-layered attenuating elastic half-space is studied by the propagator matrix method with a point-source model using a fault slip function of the Haskell type. Accelerations, displacements and their frequency contents due to a vertical dip-slip point source buried in the underlying half-space are presented. Also included are responses of the same layered half-space model to a plane wave obliquely incident from the half-space, for comparison with those due to a dip-slip point source. (author)
Phenomenology of the CKM matrix
International Nuclear Information System (INIS)
Nir, Y.
1989-01-01
The way in which an exact determination of the CKM matrix elements tests the Standard Model is demonstrated by a two-generation example. The determination of matrix elements from meson semileptonic decays is explained, with emphasis on the respective reliability of quark-level and meson-level calculations. The assumptions involved in the use of loop processes are described. Finally, the state of the art of our knowledge of the CKM matrix is presented. 19 refs., 2 figs
Directory of Open Access Journals (Sweden)
I. Cheng
2012-02-01
Full Text Available Source-receptor relationships for speciated atmospheric mercury measured at the Experimental Lakes Area (ELA), northwestern Ontario, Canada, were investigated using various receptor-based approaches. The data used in this study include gaseous elemental mercury (GEM), mercury bound to fine airborne particles <2.5 μm (PHg), reactive gaseous mercury (RGM), major inorganic ions, sulphur dioxide, nitric acid gas, ozone, and meteorological variables, all of which were measured between May 2005 and December 2006. The source origins identified were related to transport of industrial and combustion emissions (associated with elevated GEM), photochemical production of RGM (associated with elevated RGM), road-salt particles with absorption of gaseous Hg (associated with elevated PHg and RGM), crustal/soil emissions, and background pollution. Back trajectory modelling illustrated that a remote site like ELA is affected by distant Hg point sources in Canada and the United States. The sources identified from correlation analysis, principal components analysis and K-means cluster analysis were generally consistent. The discrepancies between the K-means and hierarchical cluster analyses were in the clusters related to transport of industrial/combustion emissions, photochemical production of RGM, and crustal/soil emissions. Although it was possible to assign the clusters to these source origins, the trajectory plots for the hierarchical clusters were similar to some of the trajectories belonging to several K-means clusters. This likely occurred because the variables indicative of transport of industrial/combustion emissions were elevated in at least two or more of the clusters, which means this Hg source was well represented in the data.
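The PCA-plus-K-means step used above can be sketched in a few lines: standardize the measured species, project the sampling periods onto the leading principal components, and cluster in PC space. This is a NumPy-only toy with synthetic data and hypothetical names, not the authors' receptor-model code:

```python
import numpy as np

def pca_kmeans(X, n_pc=2, k=3, n_iter=50, seed=0):
    """Standardize species (columns), project onto leading principal
    components, then K-means cluster the sampling periods (rows)."""
    Z = (X - X.mean(axis=0)) / X.std(axis=0)
    _, _, Vt = np.linalg.svd(Z, full_matrices=False)   # PCA via SVD
    scores = Z @ Vt[:n_pc].T
    rng = np.random.default_rng(seed)
    centers = scores[rng.choice(len(scores), size=k, replace=False)]
    for _ in range(n_iter):
        # assign each period to its nearest center, then update the centers
        d2 = ((scores[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
        lab = d2.argmin(axis=1)
        centers = np.array([scores[lab == i].mean(axis=0) if np.any(lab == i)
                            else centers[i] for i in range(k)])
    return lab, centers
```

In the study, each cluster would then be interpreted by inspecting which species (GEM, PHg, RGM, ions, SO₂, etc.) are elevated in it, and by overlaying back trajectories for the periods it contains.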
Nodewise analytical calculation of the transfer function
International Nuclear Information System (INIS)
Makai, Mihaly
1994-01-01
The space dependence of neutron noise has so far been investigated mostly in homogeneous core models. Application of core diagnostic methods to locate a malfunction requires, however, that the transfer function be calculated for real, inhomogeneous cores. A code suitable for this purpose must be able to handle complex arithmetic and delta-function sources. Further requirements are analytical dependence in one spatial variable and fast execution. The present work describes the TIDE program, written to fulfil the above requirements. The core is subdivided into homogeneous, square assemblies. An analytical solution is given, which is a generalisation of the inhomogeneous response matrix method. (author)
Application of FIRE for the calculation of photon matrix elements
Indian Academy of Sciences (India)
to evaluate the two-loop Feynman diagrams for the photon matrix element of the … sum of scalar Feynman integrals to a linear combination of a few master integrals. … Then, FIRE is used to express these scalar integrals as a linear combi-
Smyth, R. T.; Ballance, C. P.; Ramsbottom, C. A.; Johnson, C. A.; Ennis, D. A.; Loch, S. D.
2018-05-01
Neutral tungsten is the primary candidate as a wall material in the divertor region of the International Thermonuclear Experimental Reactor (ITER). The efficient operation of ITER depends heavily on precise atomic physics calculations for the determination of reliable erosion diagnostics, helping to characterize the influx of tungsten impurities into the core plasma. The following paper presents detailed calculations of the atomic structure of neutral tungsten using the multiconfigurational Dirac-Fock method, drawing comparisons with experimental measurements where available, and includes a critical assessment of existing atomic structure data. We investigate the electron-impact excitation of neutral tungsten using the Dirac R-matrix method, and by employing collisional-radiative models, we benchmark our results with recent Compact Toroidal Hybrid measurements. The resulting comparisons highlight alternative diagnostic lines to the widely used 400.88-nm line.
Pavlov, V. M.
2017-07-01
The problem of calculating complete synthetic seismograms from a point dipole with an arbitrary seismic moment tensor in a plane-parallel medium composed of homogeneous elastic isotropic layers is considered. It is established that the solutions of the system of ordinary differential equations for the motion-stress vector have a reciprocity property, which allows obtaining a compact formula for the derivative of the motion vector with respect to the source depth. The reciprocity theorem for Green's functions with respect to the interchange of the source and receiver is obtained for a medium with a cylindrical boundary. Differentiation of Green's functions with respect to the coordinates of the source leads to the same calculation formulas as the algorithm developed in the previous work (Pavlov, 2013). A new algorithm appears when the derivatives with respect to the horizontal coordinates of the source are replaced by the derivatives with respect to the horizontal coordinates of the receiver (with a minus sign). This algorithm is more transparent, compact, and economical than the previous one. It requires calculating the wavenumbers associated with the roots of the Bessel functions of order 0 and order 1, whereas the previous algorithm additionally requires the second-order roots.
International Nuclear Information System (INIS)
Sadeghi, Mahdi; Raisali, Gholamreza; Hosseini, S. Hamed; Shavar, Arzhang
2008-01-01
This article presents a brachytherapy source having 103Pd adsorbed onto a cylindrical silver rod, developed by the Agricultural, Medical, and Industrial Research School for permanent implant applications. Dosimetric characteristics (radial dose function, anisotropy function, and anisotropy factor) of this source were experimentally and theoretically determined in terms of the updated AAPM Task Group 43 (TG-43U1) recommendations. Monte Carlo simulations were used to calculate the dose rate constant. Measurements were performed with TLD-GR200A circular chip dosimeters using standard thermoluminescent-dosimetry methods in a Perspex phantom. Precision-machined bores in the phantom located the dosimeters and the source in a reproducible fixed geometry, providing transverse-axis and angular dose profiles over a range of distances from 0.5 to 5 cm. The Monte Carlo N-Particle (MCNP) code, version 4C, was used to evaluate the dose-rate distributions around this model 103Pd source in water and Perspex phantoms. The Monte Carlo calculated dose rate constant of the IRA-103Pd source in water was found to be 0.678 cGy h-1 U-1 with an approximate uncertainty of ±0.1%. The anisotropy function, F(r,θ), and the radial dose function, g(r), of the IRA-103Pd source were also measured in a Perspex phantom and calculated in both Perspex and liquid water phantoms.
Perlee, Caroline J.; Casasent, David P.
1990-09-01
Error sources in an optical matrix-vector processor are analyzed in terms of their effect on the performance of the algorithms used to solve a set of nonlinear and linear algebraic equations. A direct and an iterative algorithm are used to solve a nonlinear time-dependent case study from computational fluid dynamics. A simulator which emulates the data flow and number representation of the optical linear algebra processor (OLAP) is used to study these error effects. The ability of each algorithm to tolerate or correct the error sources is quantified. These results are extended to the general case of solving nonlinear and linear algebraic equations on the optical system.
A 222-energy-bin response matrix for a {sup 6}LiI scintillator BSS system
Energy Technology Data Exchange (ETDEWEB)
Lacerda, M. A. S. [Centro de Desenvolvimento da Tecnologia Nuclear, Laboratorio de Calibracao de Dosimetros, Av. Pte. Antonio Carlos 6627, 31270-901 Pampulha, Belo Horizonte, Minas Gerais (Brazil); Vega C, H. R. [Universidad Autonoma de Zacatecas, Unidad Academica de Estudios Nucleares, Cipres No. 10, Fracc. La Penuela, 98068 Zacatecas, Zac. (Mexico); Mendez V, R. [Centro de Investigaciones Energeticas, Medioambientales y Tecnologicas, Laboratorio de Patrones Neutronicos, Av. Complutense 22, 28040 Madrid (Spain); Lorente F, A.; Ibanez F, S.; Gallego D, E., E-mail: masl@cdtn.br [Universidad Politecnica de Madrid, Departamento de Ingenieria Nuclear, 28006 Madrid (Spain)
2016-10-15
A new response matrix was calculated for a Bonner sphere spectrometer (BSS) with a {sup 6}LiI(Eu) scintillator. We used the Monte Carlo N-Particle radiation transport code MCNPX, version 2.7.0, with the ENDF/B-VII.0 nuclear data library to calculate the responses for 6 spheres and the bare detector, for energies varying from 9.441E-10 MeV to 105.9 MeV, with 20 equal-log(E)-width bins per energy decade, totaling 222 energy groups. A BSS like the one modeled in this work was used to measure the neutron spectrum generated by the {sup 241}AmBe source of the Universidad Politecnica de Madrid. From the count rates obtained with this BSS system we unfolded the neutron spectrum using the BUNKIUT code for 31 energy bins (UTA-4 response matrix) and the MAXED code with the newly calculated response functions. We compared the spectra obtained with these BSS system / unfolding code combinations with that obtained from measurements performed with a BSS system consisting of 12 spheres with a spherical {sup 3}He SP-9 counter (Centronic Ltd., UK) and the MAXED code with the system-specific response functions (BSS-CIEMAT). A relatively good agreement was observed between our response matrix and those calculated by other authors. In general, we observed an improvement in the agreement as the energy increases. However, higher discrepancies were observed for energies close to 1E-8 MeV and, mainly, for energies above 20 MeV. These discrepancies were mainly attributed to differences in the cross-section libraries employed. The ambient dose equivalent H*(10) calculated with the {sup 6}LiI-MAXED combination showed good agreement with values measured with the Berthold LB 6411 neutron area monitor and was within 12% of the value obtained with the other BSS system (BSS-CIEMAT). The response matrix calculated in this work can be used together with the MAXED code to generate neutron spectra with a good energy resolution up to 20 MeV. Some additional tests are being done to validate this response matrix and improve the
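The log-equidistant energy grid described above can be reproduced directly; whether the 222 values are taken as bin boundaries or midpoints is an assumption of this sketch:

```python
import numpy as np

# Reconstruct the 222-group grid: 20 equal-log(E)-width bins per energy
# decade, spanning 9.441E-10 MeV to 105.9 MeV (values from the abstract).
e_min, e_max, n_groups = 9.441e-10, 105.9, 222   # MeV
energies = np.logspace(np.log10(e_min), np.log10(e_max), n_groups)

# Consecutive energies differ by a constant factor close to 10**(1/20)
ratio = energies[1] / energies[0]
```

The span is about 11.05 decades, so 221 equal log-steps of ~0.05 decades each reproduce the stated binning.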
International Nuclear Information System (INIS)
Ondra, Frantisek; Daniska, Vladimir; Rehak, Ivan; Necas, Vladimir
2009-01-01
The aim of this article is the development of an analytical methodology for evaluating the impact of input data inaccuracies on the calculation of cost and other output decommissioning parameters. The methodology is based on analytical model calculations using the OMEGA code and also takes into account the probability of occurrence of input data inaccuracies. To achieve the above-mentioned aim, the article identifies possible sources of input data inaccuracies and analyzes their level of impact on output parameters. A methodology for calculating the impact of input parameter inaccuracies is then developed, based on analytical model calculation. The model calculation treats the impact on cost and other decommissioning output parameters in an analytical way. The methodology used in the model calculations is original; moreover, it implements the international standardized structure (IAEA, OECD/NEA, EC) [6] of decommissioning costs for the first time. The probabilistic occurrence of input data inaccuracies is taken into consideration and implemented in the developed methodology. A correction-factor matrix for evaluating the impact of input data inaccuracies on decommissioning output parameters is set up. The matrix contains parameters based on model calculations using the proposed methodology. Finally, the methodology for applying the correction-factor matrix is proposed and tested; it is used for the calculation of contingency in the standardized structure, reflecting the level of input data inaccuracies. The costs of individual decommissioning projects for common nuclear power plants are in the range of 300-500 million EUR. Contingencies are from 10% to 30%, depending on the level of detail during preparation of the decommissioning projects. Implementation of the above-mentioned methodology in the OMEGA code improves the accuracy of the contingency. Consequently, it makes the calculated contingency more trustworthy and brings the calculated decommissioning cost closer to reality
International Nuclear Information System (INIS)
Emery, L.
1999-01-01
Magnet errors and off-center orbits through sextupoles perturb the dispersion and beta functions in a storage ring (SR), which affects machine performance. In a large ring such as the Advanced Photon Source (APS), the magnet errors are difficult to determine with beam-based methods. Also, non-zero orbits through sextupoles result from user requests for steering at light source points. For expediency, a singular value decomposition (SVD) matrix method analogous to orbit correction was adopted to make global corrections to these functions using the strengths of several quadrupoles as correcting elements. The direct response matrix is calculated from the model of the perfect lattice. The inverse is calculated by SVD with a selected number of singular vectors. The resulting improvement in the lattice functions and machine performance will be presented
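The correction scheme described above (truncated-SVD pseudo-inverse of a model response matrix) can be sketched as follows; the response matrix here is random stand-in data with invented dimensions, not the APS lattice model:

```python
import numpy as np

rng = np.random.default_rng(0)
n_obs, n_quad = 40, 8   # hypothetical: observation points x corrector quadrupoles

# Model response matrix: change in lattice-function error per unit quad strength
R = rng.normal(size=(n_obs, n_quad))

# A measured beta/dispersion error produced by some unknown quad perturbations
beat = R @ rng.normal(size=n_quad)

# Pseudo-inverse via SVD, keeping only the first n_sv singular vectors
U, s, Vt = np.linalg.svd(R, full_matrices=False)
n_sv = 6
s_inv = np.zeros_like(s)
s_inv[:n_sv] = 1.0 / s[:n_sv]
dk = -Vt.T @ (s_inv * (U.T @ beat))   # quadrupole strength corrections

residual = np.linalg.norm(beat + R @ dk)
```

Truncating the number of retained singular vectors trades residual error against the size of the quadrupole strength changes, exactly as in SVD-based orbit correction.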
Source attribution of light-absorbing impurities in seasonal snow across northern China
Zhang, R.; Hegg, D. A.; Huang, J.; Fu, Q.
2013-01-01
Seasonal snow samples obtained at 46 sites in 6 provinces of China in January and February 2010 were analyzed for a suite of chemical species, and these data were combined with previously determined concentrations of light-absorbing impurities (LAI), comprising all particles that absorb light in the 650-700 nm wavelength interval. The LAI, together with 14 other analytes, are used as input to a positive matrix factorization (PMF) receptor model to explore the sources of the LAI in the snow. The PMF analysis of the LAI sources is augmented with backward-trajectory cluster analysis to identify the geographic locations of the major source areas for the three source types. The two analyses are consistent and indicate that three factors/sources were responsible for the measured snow light absorption: a soil dust source, an industrial pollution source, and a biomass/biofuel burning source. Soil dust was the main source of the LAI, accounting for ~53% of the LAI on average.
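As a rough illustration of the factorization step, the sketch below uses plain Lee-Seung non-negative matrix factorization on synthetic data; real PMF additionally down-weights every matrix element by its measurement uncertainty, which this stand-in omits:

```python
import numpy as np

def nmf(X, k, n_iter=2000, seed=0):
    """Lee-Seung multiplicative-update non-negative factorization X ~ G @ F.
    A simplified stand-in for PMF: G holds per-sample source contributions,
    F holds source profiles, and non-negativity is preserved throughout."""
    rng = np.random.default_rng(seed)
    G = rng.random((X.shape[0], k)) + 1e-3   # samples x factors
    F = rng.random((k, X.shape[1])) + 1e-3   # factors x species
    for _ in range(n_iter):
        F *= (G.T @ X) / (G.T @ G @ F + 1e-12)
        G *= (X @ F.T) / (G @ F @ F.T + 1e-12)
    return G, F

# Synthetic data: 46 "snow samples" mixed from 3 hypothetical source profiles
rng = np.random.default_rng(1)
X = rng.random((46, 3)) @ rng.random((3, 15))
G, F = nmf(X, 3)
rel_err = np.linalg.norm(X - G @ F) / np.linalg.norm(X)
```

On exactly rank-3 non-negative data the reconstruction error should become small; on real data the number of factors is chosen by diagnostics such as Q-values.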
Passive Detection of Narrowband Sources Using a Sensor Array
Energy Technology Data Exchange (ETDEWEB)
Chambers, D H; Candy, J V; Guidry, B L
2007-10-24
In this report we derive a model for a highly scattering medium, implemented as a set of MATLAB functions. This model is used to analyze an approach for using time-reversal to enhance the detection of a single frequency source in a highly scattering medium. The basic approach is to apply the singular value decomposition to the multistatic response matrix for a time-reversal array system. We then use the array in a purely passive mode, measuring the response to the presence of a source. The measured response is projected onto the singular vectors, creating a time-reversal pseudo-spectrum. We can then apply standard detection techniques to the pseudo-spectrum to determine the presence of a source. If the source is close to a particular scatterer in the medium, then we would expect an enhancement of the inner product between the array response to the source with the singular vector associated with that scatterer. In this note we begin by deriving the Foldy-Lax model of a highly scattering medium, calculate both the field emitted by the source and the multistatic response matrix of a time-reversal array system in the medium, then describe the initial analysis approach.
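The passive time-reversal detection idea can be sketched as follows, using a far-field scalar Green's function and an invented two-scatterer geometry rather than the Foldy-Lax model of the report:

```python
import numpy as np

k = 2 * np.pi                            # wavenumber for unit wavelength (assumed)
array_x = np.linspace(-5.0, 5.0, 16)     # transducer positions along y = 0

def g(px, py):
    """Far-field form of the free-space Green's function from every array
    element to the point (px, py) -- a deliberately simplified medium."""
    r = np.hypot(array_x - px, py)
    return np.exp(1j * k * r) / np.sqrt(r)

# Multistatic response matrix of two point scatterers (Born approximation)
g1, g2 = g(3.0, 20.0), g(-4.0, 50.0)
K = 1.0 * np.outer(g1, g1) + 0.7 * np.outer(g2, g2)
U, s, Vt = np.linalg.svd(K)

# Passive mode: measured array response to a source next to scatterer 1,
# projected onto the singular vectors -> time-reversal pseudo-spectrum
d = g(3.0, 20.0)
d /= np.linalg.norm(d)
pseudo = np.abs(U.conj().T @ d)
```

Because the source sits at the first scatterer, its normalized response aligns with the leading singular vector, producing the enhanced inner product the report describes.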
Source localization analysis using seismic noise data acquired in exploration geophysics
Roux, P.; Corciulo, M.; Campillo, M.; Dubuq, D.
2011-12-01
Passive monitoring using seismic noise data is attracting growing interest at the exploration scale. Recent studies demonstrated source localization capability using seismic noise cross-correlation at observation scales ranging from hundreds of kilometers to meters. In the context of exploration geophysics, classical localization methods using travel-time picking fail when no evident first arrivals can be detected. Likewise, methods based on the intensity decrease as a function of distance to the source also fail when the noise intensity decay becomes more complicated than the power law expected from geometrical spreading. We propose here an automatic procedure developed in ocean acoustics that iteratively locates the dominant and secondary noise sources. The matched-field processing (MFP) technique is based on the spatial coherence of raw noise signals acquired on a dense array of receivers in order to produce high-resolution source localizations. Standard MFP algorithms locate the dominant noise source by matching the seismic noise cross-spectral density matrix (CSDM) with the equivalent CSDM calculated from a model and a surrogate source position that scans each position of a 3D grid below the array of seismic sensors. However, at the exploration scale, the background noise is mostly dominated by surface noise sources related to human activities (roads, industrial platforms, ...), whose localization is of no interest for the monitoring of the hydrocarbon reservoir. In other words, the dominant noise sources mask lower-amplitude noise sources associated with the extraction process (in the volume), and their location is therefore difficult through the standard MFP technique. Multi-rate adaptive beamforming (MRABF) is a further improvement of the MFP technique that locates low-amplitude secondary noise sources using a projector matrix calculated from the eigenvalue decomposition of the CSDM. The MRABF approach aims at cancelling the contributions of
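A minimal Bartlett matched-field scan illustrating the CSDM-matching step might look like this; the medium model (free-space phase plus geometrical spreading) and all geometry are invented for illustration:

```python
import numpy as np

k = 2 * np.pi                       # wavenumber for unit wavelength (assumed)
rx = np.linspace(0.0, 15.0, 24)     # surface receiver line

def steer(px, pz):
    """Replica (steering) vector for a point source at (px, pz): free-space
    phase plus geometrical spreading -- a deliberately simplified model."""
    r = np.hypot(rx - px, pz)
    w = np.exp(1j * k * r) / r
    return w / np.linalg.norm(w)

# CSDM from one dominant noise source at (6, 4) plus weak incoherent noise
d = steer(6.0, 4.0)
R = np.outer(d, d.conj()) + 1e-3 * np.eye(rx.size)

# Bartlett matched-field scan over an offset/depth grid
xs, zs = np.arange(0.0, 15.1, 0.5), np.arange(1.0, 8.1, 0.5)
P = np.array([[np.real(steer(x, z).conj() @ R @ steer(x, z)) for x in xs]
              for z in zs])
iz, ix = np.unravel_index(np.argmax(P), P.shape)
best = (xs[ix], zs[iz])
```

Since the replica model matches the data model exactly here, the Bartlett surface peaks at the true source cell; with real noise data the CSDM is averaged over many time windows.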
Rational calculation accuracy in acousto-optical matrix-vector processor
Oparin, V. V.; Tigin, Dmitry V.
1994-01-01
The high speed of parallel computations for a comparatively small-size processor and acceptable power consumption makes the usage of acousto-optic matrix-vector multiplier (AOMVM) attractive for processing of large amounts of information in real time. The limited accuracy of computations is an essential disadvantage of such a processor. The reduced accuracy requirements allow for considerable simplification of the AOMVM architecture and the reduction of the demands on its components.
Dielectric matrix, dynamical matrix and phonon dispersion in hcp transition metal scandium
International Nuclear Information System (INIS)
Singh, Joginder; Singh, Natthi; Prakash, S.
1976-01-01
The complete dielectric matrix is evaluated for the hcp transition metal scandium using a non-interacting s- and d-band model. The local field corrections, which are a consequence of the non-diagonal part of the dielectric matrix, are calculated explicitly. The free-electron approximation is used for the s-electrons and the simple tight-binding approximation for the d-electrons. The theory developed by Singh and others is used to invert the dielectric matrix, and explicit expressions for the dynamical matrix are obtained. The phonon dispersion relations are investigated using the renormalized Animalu transition metal model potential (TMMP) for the bare-ion potential. The contribution due to non-central forces, which arise from local fields, is found to be 20%. The results are in reasonably good agreement with the experimental values. (author)
Directory of Open Access Journals (Sweden)
Simon Mitternacht
2016-02-01
Full Text Available Calculating solvent accessible surface areas (SASA is a run-of-the-mill calculation in structural biology. Although there are many programs available for this calculation, there are no free-standing, open-source tools designed for easy tool-chain integration. FreeSASA is an open source C library for SASA calculations that provides both command-line and Python interfaces in addition to its C API. The library implements both Lee and Richards’ and Shrake and Rupley’s approximations, and is highly configurable to allow the user to control molecular parameters, accuracy and output granularity. It only depends on standard C libraries and should therefore be easy to compile and install on any platform. The library is well-documented, stable and efficient. The command-line interface can easily replace closed source legacy programs, with comparable or better accuracy and speed, and with some added functionality.
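The Shrake and Rupley approximation that the library implements can be illustrated with a deliberately crude sketch (random surface points instead of FreeSASA's deterministic point meshes; the function below is our illustration, not the library's API):

```python
import numpy as np

def shrake_rupley_sasa(centers, radii, probe=1.4, n_points=960, seed=0):
    """Minimal Shrake-Rupley sketch: sample points on each probe-expanded
    sphere and count the fraction not buried inside any neighbour's
    expanded sphere. Units: Angstrom for lengths, Angstrom^2 for areas."""
    rng = np.random.default_rng(seed)
    v = rng.normal(size=(n_points, 3))          # quasi-uniform unit sphere
    v /= np.linalg.norm(v, axis=1, keepdims=True)
    areas = []
    for i, (c, r) in enumerate(zip(centers, radii)):
        R = r + probe
        pts = c + R * v
        exposed = np.ones(n_points, dtype=bool)
        for j, (c2, r2) in enumerate(zip(centers, radii)):
            if j != i:
                exposed &= np.linalg.norm(pts - c2, axis=1) > r2 + probe
        areas.append(4 * np.pi * R ** 2 * exposed.mean())
    return np.array(areas)

# A lone atom exposes its entire expanded-sphere surface
a = shrake_rupley_sasa(np.array([[0.0, 0.0, 0.0]]), np.array([1.7]))
```

Two overlapping atoms each bury part of their surface, so their summed SASA falls below twice the isolated value.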
Energy Technology Data Exchange (ETDEWEB)
CARINI,G.A.; CHEN, W.; LI, Z.; REHAK, P.; SIDDONS, D.P.
2007-10-29
An X-ray Active Matrix Pixel Sensor (XAMPS) is being developed for recording data for the X-ray Pump Probe experiment at the Linac Coherent Light Source (LCLS). Special attention has to be paid to some technological challenges that this design presents. New processes were developed and refined to address problems encountered during previous productions of XAMPS. The development of these critical steps and corresponding tests results are reported here.
Sources of trace elements observed in the Arctic aerosol
International Nuclear Information System (INIS)
Hopke, P.K.; Cheng, M.D.; Landsberger, S.; Barrie, L.A.
1991-01-01
There have been many efforts to identify the sources of the airborne particles seen in the Arctic. In this study, the potential source contribution function (PSCF), a probability function based on air parcel trajectory data coupled with the contaminant concentrations measured in that air parcel, has been calculated for a series of week-long airborne particle samples collected at Alert, N.W.T. between 1983 and 1987. These samples were analyzed by instrumental neutron activation. Using calculated 3-level back trajectories and extended total potential source contribution methodology, the patterns of total potential source contribution probabilities can be examined for each individual species, or, based on the results of a principal components analysis of the elemental data, for covarying species. Regions with high PSCF values have a higher probability of contributing pollutants to the measured concentrations at the receptor site. The implications of these results for more specific identification of the source regions of the various species are discussed
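The PSCF statistic itself is easy to sketch: for each grid cell, divide the number of trajectory endpoints belonging to high-concentration samples by the total number of endpoints in that cell. The grid, extent, and synthetic endpoints below are invented for illustration:

```python
import numpy as np

def pscf(lats, lons, high, grid=(10, 10), extent=(40, 90, -120, 40)):
    """PSCF(cell) = m/n: n counts all trajectory endpoints in the cell,
    m counts endpoints from high-concentration samples; cells with no
    endpoints are set to 0."""
    lat0, lat1, lon0, lon1 = extent
    lat_edges = np.linspace(lat0, lat1, grid[0] + 1)
    lon_edges = np.linspace(lon0, lon1, grid[1] + 1)
    n, _, _ = np.histogram2d(lats, lons, bins=[lat_edges, lon_edges])
    m, _, _ = np.histogram2d(lats[high], lons[high], bins=[lat_edges, lon_edges])
    with np.errstate(invalid="ignore", divide="ignore"):
        return np.where(n > 0, m / n, 0.0)

# Synthetic endpoints: "polluted" trajectories cluster near one source region
rng = np.random.default_rng(0)
clean = rng.uniform([45, -100], [85, 30], size=(400, 2))
dirty = rng.normal([60, -40], [3, 5], size=(200, 2))
lats = np.concatenate([clean[:, 0], dirty[:, 0]])
lons = np.concatenate([clean[:, 1], dirty[:, 1]])
high = np.concatenate([np.zeros(400, bool), np.ones(200, bool)])
P = pscf(lats, lons, high)
```

The cell containing the synthetic source region (around 60N, 40W) should stand out against the background probability.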
International Nuclear Information System (INIS)
Wang, Y.L.; Xia, Z.H.; Liu, D.; Qiu, W.X.; Duan, X.L.; Wang, R.; Liu, W.J.; Zhang, Y.H.; Wang, D.; Tao, S.; Liu, W.X.
2013-01-01
A steady state Level III fate model was established and applied to quantify source–receptor relationship in a coking industry city in Northern China. The local emission inventory of PAHs, as the model input, was acquired based on energy consumption and emission factors. The model estimations were validated by measured data and indicated remarkable variations in the paired isomeric ratios. When a rectification factor, based on the receptor-to-source ratio, was calculated by the fate model, the quantitatively verified molecular diagnostic ratios provided reasonable results of local PAH emission sources. Due to the local ban and measures on small scale coking activities implemented from the beginning of 2004, the model calculations indicated that the local emission amount of PAHs in 2009 decreased considerably compared to that in 2003. -- Highlights: •A steady-state fate model could well elucidate the multimedia fate of PAHs. •A rectification factor for correcting the paired isomeric ratio was calculated. •The corrected isomeric ratios were successfully applied to source apportionment. -- Based on multimedia model correction, the specific isomeric ratios could provide reasonable apportionments for the local PAHs emission sources
Allen, Terrence K; Feng, Liping; Grotegut, Chad A; Murtha, Amy P
2014-02-01
Progesterone (P4) and the progestin, 17α-hydroxyprogesterone caproate, are clinically used to prevent preterm births (PTBs); however, their mechanism of action remains unclear. Cytokine-induced matrix metalloproteinase 9 (MMP-9) activity plays a key role in preterm premature rupture of the membranes and PTB. We demonstrated that the primary chorion cells and the HTR8/SVneo cells (cytotrophoblast cell line) do not express the classical progesterone receptor (PGR) but instead a novel progesterone receptor, progesterone receptor membrane component 1 (PGRMC1), whose role remains unclear. Using HTR8/SVneo cells in culture, we further demonstrated that 6 hours pretreatment with medroxyprogesterone acetate (MPA) and dexamethasone (Dex) but not P4 or 17α-hydroxyprogesterone hexanoate significantly attenuated tumor necrosis factor α-induced MMP-9 activity after a 24-hour incubation period. The inhibitory effect of MPA, but not Dex, was attenuated when PGRMC1 expression was successfully reduced by PGRMC1 small interfering RNA. Our findings highlight a possible novel role of PGRMC1 in mediating the effects of MPA and in modulating cytokine-induced MMP-9 activity in cytotrophoblast cells in vitro.
New developments in multireference and complete configuration interaction calculations
International Nuclear Information System (INIS)
Knowles, P.J.; Werner, H.J.
1987-01-01
Some recently developed techniques for the calculation of Hamiltonian matrix elements in molecular electronic structure calculations are described. These techniques allow the very rapid calculation, in any desired order, of one particle coupling coefficients between spin symmetry adapted basis functions of arbitrary structure. The matrix elements that are required, for either internally contracted multireference CI calculations, or full CI calculations, are then obtainable from suitable summations over resolutions of the identity, which has been shown previously to be rather efficient; this is especially true on vector computers, since all arithmetic can be formulated as matrix multiplications. These ideas have culminated in the preparation of a new multireference CI program which is capable of handling very large numbers of reference configurations. Application of the new techniques to full CI calculations are also presented
Evaluation of Li3N accumulation in a fused LiCl/Li salt matrix
International Nuclear Information System (INIS)
Eberle, C. S.
1998-01-01
Pyrochemical conditioning of spent nuclear fuel for the purpose of final disposal is currently being demonstrated at Argonne National Laboratory (ANL), and ongoing research in this area includes the demonstration of this process on spent oxide fuel. In conjunction with this research, a pilot-scale version of the preprocessing stage is being designed by ANL-W to demonstrate the in situ hot cell capability of the chemical reduction stage. An impurity evaluation was completed for a Li/LiCl salt matrix in the presence of spent LWR uranium oxide fuel. A simple analysis was performed in which the only sources of impurities in the salt matrix were from the cell atmosphere, and only reactions with the lithium were considered. The levels of impurities were shown to be highly sensitive to system conditions. A predominance diagram for the Li-O-N system was constructed for the device, and the general oxidation, nitridation, and combined reactions were calculated as functions of oxygen and nitrogen partial pressure. These calculations and hot-cell atmosphere data were used to determine the total number and type of impurities expected in the salt matrix, and the mass rate for the device was determined
Source apportionment of toxic chemical pollutants at Trombay region
International Nuclear Information System (INIS)
Sahu, S.K.; Pandit, G.G.; Puranik, V.D.
2007-05-01
Through anthropogenic activities like industrial production and transportation, a wide range of chemical pollutants such as trace and toxic metals, pesticides, and polycyclic aromatic hydrocarbons eventually find their way into various environmental compartments. One of the main issues of environmental pollution is the chemical composition of aerosols and their sources; in spite of all efforts, a considerable part of the atmospheric aerosol mass is still not accounted for. This report describes some of the activities of the Environmental Assessment Division that have direct relevance to public health and regulatory bodies. Extensive studies were carried out in our laboratories for the Trombay site, over the years, on organic as well as inorganic pollution in the environment to understand the inter-compartmental behaviour of these chemical pollutants. In this report an attempt has been made to collect different size-fractionated ambient aerosols and to quantify the percentage contribution of each size fraction to the total aerosol mass. Subsequently, an effort has been made at chemical characterization (inorganic, organic and carbon content) of this particulate matter using different analytical techniques. The comprehensive data set on chemical characterization of particulate matter thus generated is being used with receptor modeling techniques to identify the possible sources contributing to the observed concentrations of the measured pollutants. The use of this comprehensive data set in receptor modeling has been helpful in distinguishing the source types in a better way. Receptor modeling techniques are powerful tools that can be used to locate sources of pollutants to the atmosphere. The major advantage of receptor models is that actual ambient data are used to apportion source contributions, negating the need for dispersion calculations. Pollution sources affecting the sampling site were statistically identified using varimax-rotated factor analysis of
Moskovets, Eugene
2015-08-30
Understanding the mechanisms of matrix-assisted laser desorption/ionization (MALDI) promises improvements in the sensitivity and specificity of many established applications in the field of mass spectrometry. This paper reports a serendipitous observation of a significant ion yield in a post-ionization experiment conducted after the sample had been removed from a standard atmospheric pressure (AP)-MALDI source. This post-ionization is interpreted in terms of collisions of microparticles moving with a hypersonic velocity into a solid surface. Calculations show that the thermal energy released during such collisions is close to that absorbed by the top matrix layer in traditional MALDI. The microparticles, containing both the matrix and analytes, could be detached from a film produced inside the inlet capillary during the sample ablation and accelerated by the flow rushing through the capillary. These observations contribute some new perspective to ion formation in both laser and laser-less matrix-assisted ionization. An AP-MALDI ion source hyphenated with a three-stage high-pressure ion funnel system was utilized for peptide mass analysis. After the laser had been turned off and the MALDI sample removed, ions were detected during a gradual reduction of the background pressure in the first funnel. The constant-rate pressure reduction led to the reproducible appearance of different singly and doubly charged peptide peaks in mass spectra taken a few seconds after the end of the MALDI analysis of a dried-droplet spot. The ion yield as well as the mass range of ions observed with a significant delay after a completion of the primary MALDI analysis depended primarily on the background pressure inside the first funnel. The production of ions in this post-ionization step was exclusively observed during the pressure drop. A lower matrix background and significant increase in relative yield of double-protonated ions are reported. The observations were partially consistent
Radiation doses from neutron and photon radiation sources by different computer calculations
International Nuclear Information System (INIS)
Siciliano, F.; Lippolis, G.; Bruno, S.G.
1995-12-01
In the present paper, the computational aspects of calculating dose rates from neutron and photon radiation sources are covered, with reference both to the basic theoretical modeling of the MERCURE-4, XSDRNPM-S and MCNP-3A codes and, from a practical point of view, to safety analyses of the irradiation risk of two transportation casks. The input data set of these calculations - regarding the CEN 10/200 HLW container and a dry PWR spent fuel assembly shipping cask - is commented on in detail as far as the connection between the input data and the underlying theory is concerned
Directory of Open Access Journals (Sweden)
Sashaina E Fanibunda
Full Text Available BACKGROUND: During sexual transmission of HIV in women, the virus breaches the multi-layered CD4 negative stratified squamous epithelial barrier of the vagina, to infect the sub-epithelial CD4 positive immune cells. However the mechanisms by which HIV gains entry into the sub-epithelial zone is hitherto unknown. We have previously reported human mannose receptor (hMR as a CD4 independent receptor playing a role in HIV transmission on human spermatozoa. The current study was undertaken to investigate the expression of hMR in vaginal epithelial cells, its HIV gp120 binding potential, affinity constants and the induction of matrix metalloproteinases (MMPs downstream of HIV gp120 binding to hMR. PRINCIPAL FINDINGS: Human vaginal epithelial cells and the immortalized vaginal epithelial cell line Vk2/E6E7 were used in this study. hMR mRNA and protein were expressed in vaginal epithelial cells and cell line, with a molecular weight of 155 kDa. HIV gp120 bound to vaginal proteins with high affinity, (Kd = 1.2±0.2 nM for vaginal cells, 1.4±0.2 nM for cell line and the hMR antagonist mannan dose dependently inhibited this binding. Both HIV gp120 binding and hMR exhibited identical patterns of localization in the epithelial cells by immunofluorescence. HIV gp120 bound to immunopurified hMR and affinity constants were 2.9±0.4 nM and 3.2±0.6 nM for vaginal cells and Vk2/E6E7 cell line respectively. HIV gp120 induced an increase in MMP-9 mRNA expression and activity by zymography, which could be inhibited by an anti-hMR antibody. CONCLUSION: hMR expressed by vaginal epithelial cells has high affinity for HIV gp120 and this binding induces production of MMPs. We propose that the induction of MMPs in response to HIV gp120 may lead to degradation of tight junction proteins and the extracellular matrix proteins in the vaginal epithelium and basement membrane, leading to weakening of the epithelial barrier; thereby facilitating transport of HIV across the
Thorne, Lawrence R.
2011-01-01
I propose a novel approach to balancing equations that is applicable to all chemical-reaction equations; it is readily accessible to students via scientific calculators and basic computer spreadsheets that have a matrix-inversion application. The new approach utilizes the familiar matrix-inversion operation in an unfamiliar and innovative way; its purpose is not to identify undetermined coefficients as usual, but, instead, to compute a matrix null space (or matrix kernel). The null space then...
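The null-space idea can be sketched as follows; this version extracts the null space from an SVD rather than the calculator-friendly matrix-inversion route the article describes, but the balanced coefficients are the same:

```python
import numpy as np

# Composition matrix for the reaction a CH4 + b O2 -> c CO2 + d H2O.
# Rows are elements (C, H, O); product columns carry a minus sign, so
# balancing the equation means solving A @ [a, b, c, d] = 0.
A = np.array([
    [1, 0, -1,  0],   # carbon
    [4, 0,  0, -2],   # hydrogen
    [0, 2, -2, -1],   # oxygen
], dtype=float)

# The one-dimensional null space is the last right singular vector
null = np.linalg.svd(A)[2][-1]
coeffs = np.round(null / null[0]).astype(int)   # normalize so that a = 1
```

Here the null space is one-dimensional, so normalizing its single basis vector to the smallest coefficient yields the familiar 1 CH4 + 2 O2 -> 1 CO2 + 2 H2O.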
A calculation of dose distribution around 32P spherical sources and its clinical application
International Nuclear Information System (INIS)
Ohara, Ken; Tanaka, Yoshiaki; Nishizawa, Kunihide; Maekoshi, Hisashi
1977-01-01
In order to avoid radiation hazards in the radiation therapy of craniopharyngioma using 32P, it is helpful to prepare a detailed dose distribution in the vicinity of the source in tissue. Valley's method is used for the calculations. A problem with the method is pointed out and the method itself is refined numerically: the region of xi where an approximate polynomial is available is extended, and the optimum degree of the polynomial is determined to be 9. The usefulness of the polynomial is examined by comparison with Berger's scaled absorbed dose distribution F(xi) and Valley's result. The dose and dose rate distributions around uniformly distributed spherical sources are computed from termwise integration of the polynomial of degree 9 over the range of xi from 0 to 1.7. The dose distributions from the spherical surface to a point 0.5 cm outside the source are given for source radii of 0.5, 0.6, 0.7, 1.0, and 1.5 cm. The therapeutic dose for a craniopharyngioma with a spherically shaped cyst, and the absorbed dose to normal tissue (oculomotor nerve), are obtained from these dose rate distributions. (auth.)
Syrio. A program for the calculation of the inverse of a matrix
International Nuclear Information System (INIS)
Garcia de Viedma Alonso, L.
1963-01-01
SYRIO is a code for the inversion of a non-singular square matrix whose order is not higher than 40, for the UNIVAC-UCT (SS-90). The treatment starts from the Sherman-Morrison inversion formula and, following Herbert S. Wilf's method for special matrices, generalizes the procedure to any kind of non-singular square matrix. The limitation on the matrix order is not inherent in the program itself but is imposed by the storage capacity of the computer for which it was coded. (Author)
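A sketch of the underlying idea, assuming (as the abstract suggests) that the inverse is built up by successive Sherman-Morrison rank-one updates; the column-by-column construction below is our illustration, not SYRIO's actual algorithm:

```python
import numpy as np

def invert_by_updates(A):
    """Invert a non-singular square matrix by repeated Sherman-Morrison
    rank-one updates: start from the identity and patch in (A - I) one
    column at a time. This sketch omits the reordering a robust code
    would need if an intermediate matrix happened to be singular."""
    n = A.shape[0]
    inv = np.eye(n)                    # inverse of the running matrix B = I
    for j in range(n):
        u = A[:, j].copy()
        u[j] -= 1.0                    # update B -> B + u e_j^T
        col = inv @ u                  # B^-1 u
        row = inv[j, :]                # e_j^T B^-1
        inv = inv - np.outer(col, row) / (1.0 + row @ u)
    return inv
```

Each update costs O(n^2), so the full inversion is O(n^3), matching the usual elimination cost but requiring only a running copy of the inverse.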
An algorithm for calculation of the Jordan canonical form of a matrix
Sridhar, B.; Jordan, D.
1973-01-01
Jordan canonical forms are used extensively in the literature on control systems. However, very few methods are available to compute them numerically. Most numerical methods compute a set of basis vectors in terms of which the given matrix is diagonalized when such a change of basis is possible. Here, a simple and efficient method is suggested for computing the Jordan canonical form and the corresponding transformation matrix. The method is based on the definition of a generalized eigenvector and a natural extension of Gaussian elimination techniques.
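For comparison, a computer-algebra system can produce the Jordan form symbolically. The sketch below uses SymPy's `jordan_form` on a small defective matrix; it is a usage illustration, not the authors' algorithm:

```python
from sympy import Matrix

# A defective matrix: eigenvalue 2 with algebraic multiplicity 3 but only
# two independent eigenvectors, so the Jordan form contains a 2x2 block.
A = Matrix([[2, 1, 0],
            [0, 2, 0],
            [0, 0, 2]])

# jordan_form returns (P, J) with A == P * J * P**-1.
P, J = A.jordan_form()
```

The transformation matrix `P` holds the (generalized) eigenvectors, the objects the abstract's Gauss-elimination-based construction computes.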
Theoretical evaluation of matrix effects on trapped atomic levels
Energy Technology Data Exchange (ETDEWEB)
Das, G.P.; Gruen, D.M.
1986-06-01
We suggest a theoretical model for calculating the matrix perturbation on the spectra of atoms trapped in rare-gas systems. The model requires the "potential curves" of the diatomic system consisting of the trapped atom interacting with one atom of the matrix, and relies on the approximation that the total matrix perturbation is a scalar sum of the pairwise interactions with each of the lattice sites. Calculations are presented for the prototype system Na in Ar. Attempts are made to obtain ab initio estimates of the Jahn-Teller effects for excited states. Comparison is made with our recent Matrix-Isolation Spectroscopic (MIS) data. 10 refs., 3 tabs.
Theoretical evaluation of matrix effects on trapped atomic levels
International Nuclear Information System (INIS)
Das, G.P.; Gruen, D.M.
1986-06-01
We suggest a theoretical model for calculating the matrix perturbation on the spectra of atoms trapped in rare-gas systems. The model requires the "potential curves" of the diatomic system consisting of the trapped atom interacting with one atom of the matrix, and relies on the approximation that the total matrix perturbation is a scalar sum of the pairwise interactions with each of the lattice sites. Calculations are presented for the prototype system Na in Ar. Attempts are made to obtain ab initio estimates of the Jahn-Teller effects for excited states. Comparison is made with our recent Matrix-Isolation Spectroscopic (MIS) data. 10 refs., 3 tabs.
Estimating Depolarization with the Jones Matrix Quality Factor
Hilfiker, James N.; Hale, Jeffrey S.; Herzinger, Craig M.; Tiwald, Tom; Hong, Nina; Schöche, Stefan; Arwin, Hans
2017-11-01
Mueller matrix (MM) measurements offer the ability to quantify the depolarization capability of a sample. Depolarization can be estimated using terms such as the depolarization index or the average degree of polarization. However, these calculations require measurement of the complete MM. We propose an alternate depolarization metric, termed the Jones matrix quality factor, Q_JM, which does not require the complete MM. This metric provides a measure of how close, in a least-squares sense, a Jones matrix can be found to the measured Mueller matrix. We demonstrate and compare the use of Q_JM to other traditional calculations of depolarization for both isotropic and anisotropic depolarizing samples, including non-uniform coatings, anisotropic crystal substrates, and beetle cuticles that exhibit both depolarization and circular diattenuation.
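The depolarization index mentioned above has a closed form in the Mueller-matrix elements (the Gil-Bernabeu form). A minimal sketch with illustrative sample matrices, not data from the paper:

```python
import numpy as np

def depolarization_index(M):
    """Gil-Bernabeu depolarization index: 1 for a non-depolarizing Mueller
    matrix, 0 for an ideal depolarizer."""
    M = np.asarray(M, dtype=float)
    return np.sqrt((np.sum(M**2) - M[0, 0]**2) / (3.0 * M[0, 0]**2))

# Illustrative samples: a mirror-like non-depolarizing matrix and a
# diagonal partial depolarizer.
M_pure = np.diag([1.0, 1.0, -1.0, -1.0])
M_depol = np.diag([1.0, 0.8, 0.8, 0.6])
```

`M_pure` gives an index of exactly 1, while `M_depol` falls below 1; the paper's point is that Q_JM estimates the same depolarizing behaviour without requiring all sixteen MM elements.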
Source attribution of insoluble light-absorbing particles in seasonal snow across northern China
Zhang, R.; Hegg, D. A.; Huang, J.; Fu, Q.
2013-06-01
Seasonal snow samples obtained at 46 sites in 6 provinces of China in January and February 2010 were analyzed for a suite of chemical species, and these data are combined with previously determined concentrations of insoluble light-absorbing particles (ILAP), including all particles that absorb light in the 650-700 nm wavelength interval. The ILAP, together with 14 other analytes, are used as input to a positive matrix factorization (PMF) receptor model to explore the sources of ILAP in the snow. The PMF analysis for ILAP sources is augmented with backward trajectory cluster analysis and the geographic locations of major source areas for the three source types. The two analyses are consistent and indicate that three factors/sources were responsible for the measured light absorption of the snow: a soil dust source, an industrial pollution source, and a biomass and/or biofuel burning source. Soil dust was the main source of the ILAP, accounting for ~53% of ILAP on average.
Source attribution of insoluble light-absorbing particles in seasonal snow across northern China
Directory of Open Access Journals (Sweden)
R. Zhang
2013-06-01
Full Text Available Seasonal snow samples obtained at 46 sites in 6 provinces of China in January and February 2010 were analyzed for a suite of chemical species, and these data are combined with previously determined concentrations of insoluble light-absorbing particles (ILAP), including all particles that absorb light in the 650-700 nm wavelength interval. The ILAP, together with 14 other analytes, are used as input to a positive matrix factorization (PMF) receptor model to explore the sources of ILAP in the snow. The PMF analysis for ILAP sources is augmented with backward trajectory cluster analysis and the geographic locations of major source areas for the three source types. The two analyses are consistent and indicate that three factors/sources were responsible for the measured light absorption of the snow: a soil dust source, an industrial pollution source, and a biomass and/or biofuel burning source. Soil dust was the main source of the ILAP, accounting for ~53% of ILAP on average.
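A stripped-down stand-in for the PMF step can be sketched with plain non-negative matrix factorization. Note the simplification: PMF proper down-weights every data entry by its analytical uncertainty, which the unweighted multiplicative updates below do not, and the data here are synthetic, not the snow chemistry:

```python
import numpy as np

rng = np.random.default_rng(42)
# Synthetic stand-in for the data matrix: 46 samples x 15 species, mixed
# from 3 hypothetical source profiles (dust / industry / biomass burning).
true_G = rng.random((46, 3))             # source contributions per sample
true_F = rng.random((3, 15))             # source chemical profiles
X = true_G @ true_F + 0.01 * rng.random((46, 15))

def nmf(X, k, iters=500, eps=1e-9):
    """Lee-Seung multiplicative-update NMF: X ~ G @ F with G, F >= 0."""
    rng = np.random.default_rng(0)
    G = rng.random((X.shape[0], k))
    F = rng.random((k, X.shape[1]))
    for _ in range(iters):
        F *= (G.T @ X) / (G.T @ G @ F + eps)
        G *= (X @ F.T) / (G @ F @ F.T + eps)
    return G, F

G, F = nmf(X, 3)
rel_err = np.linalg.norm(X - G @ F) / np.linalg.norm(X)
```

As in the receptor-model setting, the rows of `F` play the role of factor (source) profiles and the columns of `G` the per-sample factor contributions.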
Nonstatic, self-consistent πN t matrix in nuclear matter
International Nuclear Information System (INIS)
Van Orden, J.W.
1984-01-01
In a recent paper, a calculation of the self-consistent πN t matrix in nuclear matter was presented. In this calculation the driving term of the self-consistent equation was chosen to be a static approximation to the free πN t matrix. In the present work, the earlier calculation is extended by using a nonstatic, fully off-shell free πN t matrix as a starting point. Right-hand pole and cut contributions to the P-wave πN amplitudes are derived using a Low expansion and include effects due to recoil of the interacting πN system as well as the transformation from the πN c.m. frame to the nuclear rest frame. The self-consistent t-matrix equation is rewritten as two integral equations which modify the pole and cut contributions to the t matrix separately. The self-consistent πN t matrix is calculated in nuclear matter and a nonlocal optical potential is constructed from it. The resonant contribution to the optical potential is found to be broadened by 20% to 50%, depending on pion momentum, and is shifted upward in energy by approximately 10 MeV in comparison to the first-order optical potential. Modifications to the nucleon pole contribution are found to be negligible.
Matrix product state calculations for one-dimensional quantum chains and quantum impurity models
Energy Technology Data Exchange (ETDEWEB)
Muender, Wolfgang
2011-09-28
This thesis contributes to the field of strongly correlated electron systems with studies in two distinct fields thereof: the specific nature of correlations between electrons in one dimension and quantum quenches in quantum impurity problems. In general, strongly correlated systems are characterized in that their physical behaviour needs to be described in terms of a many-body description, i.e. interactions correlate all particles in a complex way. The challenge is that the Hilbert space in a many-body theory is exponentially large in the number of particles. Thus, when no analytic solution is available - which is typically the case - it is necessary to find a way to somehow circumvent the problem of such huge Hilbert spaces. Therefore, the connection between the two studies comes from our numerical treatment: they are tackled by the density matrix renormalization group (DMRG) and the numerical renormalization group (NRG), respectively, both based on matrix product states. The first project presented in this thesis addresses the problem of numerically finding the dominant correlations in quantum lattice models in an unbiased way, i.e. without using prior knowledge of the model at hand. A useful concept for this task is the correlation density matrix (CDM) which contains all correlations between two clusters of lattice sites. We show how to extract from the CDM, a survey of the relative strengths of the system's correlations in different symmetry sectors as well as detailed information on the operators carrying long-range correlations and the spatial dependence of their correlation functions. We demonstrate this by a DMRG study of a one-dimensional spinless extended Hubbard model, while emphasizing that the proposed analysis of the CDM is not restricted to one dimension. The second project presented in this thesis is motivated by two phenomena under ongoing experimental and theoretical investigation in the context of quantum impurity models: optical absorption
Matrix product state calculations for one-dimensional quantum chains and quantum impurity models
International Nuclear Information System (INIS)
Muender, Wolfgang
2011-01-01
This thesis contributes to the field of strongly correlated electron systems with studies in two distinct fields thereof: the specific nature of correlations between electrons in one dimension and quantum quenches in quantum impurity problems. In general, strongly correlated systems are characterized in that their physical behaviour needs to be described in terms of a many-body description, i.e. interactions correlate all particles in a complex way. The challenge is that the Hilbert space in a many-body theory is exponentially large in the number of particles. Thus, when no analytic solution is available - which is typically the case - it is necessary to find a way to somehow circumvent the problem of such huge Hilbert spaces. Therefore, the connection between the two studies comes from our numerical treatment: they are tackled by the density matrix renormalization group (DMRG) and the numerical renormalization group (NRG), respectively, both based on matrix product states. The first project presented in this thesis addresses the problem of numerically finding the dominant correlations in quantum lattice models in an unbiased way, i.e. without using prior knowledge of the model at hand. A useful concept for this task is the correlation density matrix (CDM) which contains all correlations between two clusters of lattice sites. We show how to extract from the CDM, a survey of the relative strengths of the system's correlations in different symmetry sectors as well as detailed information on the operators carrying long-range correlations and the spatial dependence of their correlation functions. We demonstrate this by a DMRG study of a one-dimensional spinless extended Hubbard model, while emphasizing that the proposed analysis of the CDM is not restricted to one dimension. The second project presented in this thesis is motivated by two phenomena under ongoing experimental and theoretical investigation in the context of quantum impurity models: optical absorption
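The matrix-product-state construction underlying DMRG and NRG can be illustrated on a toy state: successive SVDs split a 4-site spin-1/2 wavefunction into one tensor per site, losslessly when no truncation is applied. A minimal sketch, not the thesis code:

```python
import numpy as np

rng = np.random.default_rng(1)
L, d = 4, 2                              # 4 sites, local dimension 2
psi = rng.standard_normal(d**L)
psi /= np.linalg.norm(psi)

# Sweep left to right: at each site, SVD off one physical index.
tensors, rest = [], psi.reshape(1, -1)
for site in range(L):
    chi = rest.shape[0]                  # current left bond dimension
    U, S, Vh = np.linalg.svd(rest.reshape(chi * d, -1), full_matrices=False)
    tensors.append(U.reshape(chi, d, -1))   # A[site]: (chi_left, d, chi_right)
    rest = np.diag(S) @ Vh               # carry the remainder to the right

# Contract the chain back together; rest is now a 1x1 overall factor.
recon = np.ones((1, 1))
for A in tensors:
    recon = np.tensordot(recon, A, axes=(recon.ndim - 1, 0))
psi_recon = (recon * rest[0, 0]).reshape(-1)
```

Without truncation the bond dimensions grow as 2, 4, 2, 1 and the state is reproduced exactly; DMRG's compression consists of discarding small singular values in `S` at each step.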
Optimizing the calculation of point source count-centroid in pixel size measurement
International Nuclear Information System (INIS)
Zhou Luyi; Kuang Anren; Su Xianyu
2004-01-01
Pixel size is an important parameter of gamma cameras and SPECT. A number of methods are used for its accurate measurement. In the original count-centroid method, the image of a point source (PS) is acquired and its count-centroid calculated to represent the PS position in the image, but background counts are inevitable. Thus the measured count-centroid X_m is an approximation of the true count-centroid X_p of the PS, i.e. X_m = X_p + (X_b - X_p)/(1 + R_p/R_b), where R_p is the net counting rate of the PS, X_b the background count-centroid and R_b the background counting rate. To obtain an accurate measurement, R_p must be very large, which is impractical and results in variation of the measured pixel size. An R_p-independent calculation of the PS count-centroid is desired. Methods: The proposed method attempted to eliminate the effect of the term (X_b - X_p)/(1 + R_p/R_b) by bringing X_b closer to X_p and by reducing R_b. In the acquired PS image, a circular ROI was generated to enclose the PS, with the pixel having the maximum count as the center of the ROI. To choose the diameter D of the ROI, a Gaussian count distribution was assumed for the PS; accordingly, a fraction K = 1 - 0.5^(D/R) of the total PS counts lies in the ROI, R being the full width at half maximum of the PS count distribution. D was set to 6R to enclose most (K = 98.4%) of the PS counts. The count-centroid of the ROI was calculated to represent X_p. The proposed method was tested in measuring the pixel size of a well-tuned SPECT, whose pixel size was estimated to be 3.02 mm according to its mechanical and electronic settings (128 x 128 matrix, 387 mm UFOV, ZOOM = 1). For comparison, the original method, which was used in former versions of some commercial SPECT software, was also tested. 12 PSs were prepared and their images acquired and stored. The net counting rate of the PSs increased from 10 cps to 1183 cps. Results: Using the proposed method, the measured pixel size (in mm) varied only between 3.00 and 3.01 (mean
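The ROI-centroid idea can be sketched directly: on a synthetic point-source image over a flat background, the centroid restricted to a circular ROI of diameter 6R recovers the source position, while the whole-image centroid is pulled toward the background centroid. All numbers below are illustrative, not the paper's acquisitions:

```python
import numpy as np

# Synthetic 128x128 point-source image: Gaussian PS plus flat background.
size, fwhm = 128, 4.0
sigma = fwhm / 2.355
y, x = np.mgrid[0:size, 0:size]
true_x = 70.3                                      # sub-pixel source position
img = 5000.0 * np.exp(-((x - true_x)**2 + (y - 64.0)**2) / (2 * sigma**2))
img += 2.0                                         # background counts per pixel

# Whole-image centroid: biased toward the background centroid (~63.5).
x_naive = (img * x).sum() / img.sum()

# ROI centroid: circle of diameter 6*FWHM centred on the hottest pixel.
cy, cx = np.unravel_index(np.argmax(img), img.shape)
roi = (x - cx)**2 + (y - cy)**2 <= (3 * fwhm)**2
counts = np.where(roi, img, 0.0)
x_centroid = (counts * x).sum() / counts.sum()
```

Restricting to the ROI both reduces R_b and centres X_b on X_p, which is exactly how the proposed method suppresses the bias term in the formula above.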
Atmospheric Aerosol Source-Receptor Relationships: The Role of Coal-Fired Power Plants
Energy Technology Data Exchange (ETDEWEB)
Allen L. Robinson; Spyros N. Pandis; Cliff I. Davidson
2005-12-01
This report describes the technical progress made on the Pittsburgh Air Quality Study (PAQS) during the period of March 2005 through August 2005. Significant progress was made this project period on the source characterization, source apportionment, and deterministic modeling activities. The report highlights new data on road dust, vegetative detritus and motor vehicle emissions. For example, the results show significant differences in the composition of urban and rural road dust. A comparison of the organic fraction of the fine particulate matter in the tunnel with the ambient aerosol provides clear evidence of the significant contribution of vehicle emissions to ambient PM. The source profiles developed from this work are being used by the source-receptor modeling activities. The report presents results on the spatial distribution of PMF factors. The results can be grouped into three categories: regional sources, local sources, or potentially both regional and local sources. Examples of the regional sources are the sulfate and selenium PMF factors, which most likely represent coal-fired power plants. Examples of local sources are the specialty steel and lead factors. There is reasonable correspondence between these apportionments and data from the EPA TRI and AIRS emission inventories. Detailed comparisons between PMCAMx predictions and STN and IMPROVE measurements in the Eastern US are presented. Comparisons were made for the major aerosol components and PM2.5 mass in July 2001, October 2001, January 2002, and April 2002. The results are encouraging, with average fractional biases for most species less than 0.25. The improvement of the model performance during the last two years was mainly due to the comparison of the model predictions with the continuous measurements at the Pittsburgh Supersite. Major improvements have included the descriptions of ammonia emissions (CMU inventory), night-time nitrate chemistry, EC emissions and their diurnal
Radiation transport calculations for the ANS [Advanced Neutron Source] beam tubes
International Nuclear Information System (INIS)
Engle, W.W. Jr.; Lillie, R.A.; Slater, C.O.
1988-01-01
The Advanced Neutron Source facility (ANS) will incorporate a large number of both radial and no-line-of-sight (NLS) beam tubes to provide very large thermal neutron fluxes to experimental facilities. The purpose of this work was to obtain, for the ANS single- and split-core designs, comparisons of the thermal and damage neutron and gamma-ray scalar fluxes in these beam tubes. For experimental locations far from the reactor cores, angular flux data are required; however, for close-in experimental locations, the scalar fluxes within each beam tube provide a credible estimate of the various signal-to-noise ratios. In this paper, the coupled two- and three-dimensional radiation transport calculations employed to estimate the scalar neutron and gamma-ray fluxes are described and the results from these calculations are discussed. 6 refs., 2 figs.
Energy Technology Data Exchange (ETDEWEB)
Selim, Y S; Abbas, M I; Fawzy, M A [Physics Department, Faculty of Science, Alexandria University, Alexandria (Egypt)]
1997-12-31
The total efficiency of a clad right circular cylindrical NaI(Tl) scintillation detector for a coaxial isotropically radiating circular disk source has been calculated by the use of rigid mathematical expressions. Results are tabulated for various gamma energies. 2 figs., 5 tabs.
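The geometry behind such an efficiency calculation is easy to sketch by Monte Carlo in place of closed-form expressions. The sketch below uses illustrative dimensions and a single assumed attenuation coefficient, and ignores the cladding and rays entering through the side wall; it estimates the total efficiency for a coaxial disk source:

```python
import numpy as np

rng = np.random.default_rng(7)
R, T, H = 2.54, 5.08, 5.0     # detector radius, thickness, source height (cm)
R_src, mu = 1.0, 0.2          # disk-source radius (cm), attenuation (1/cm)
N = 200_000

# Uniform source positions over the disk (azimuth dropped by symmetry)
# and isotropic emission directions over 4*pi.
r = R_src * np.sqrt(rng.random(N))
w = 1.0 - 2.0 * rng.random(N)                 # cos(polar angle)
phi = 2.0 * np.pi * rng.random(N)
s = np.sqrt(1.0 - w**2)
u, v = s * np.cos(phi), s * np.sin(phi)

down = w < 0
w_safe = np.where(down, w, -1.0)              # dummy value for upward rays
t_top = -H / w_safe                           # path length to the top face
x0, y0 = r + u * t_top, v * t_top
hits = down & (x0**2 + y0**2 <= R**2)         # top-face entry only (sketch)

# Chord inside the detector: exit through the bottom face or the side wall.
t_bot = -(H + T) / w_safe
a = np.maximum(u**2 + v**2, 1e-30)
b, c = 2.0 * r * u, r**2 - R**2               # c < 0: source radially inside
t_side = (-b + np.sqrt(b**2 - 4.0 * a * c)) / (2.0 * a)
path = np.where(hits, np.minimum(t_bot, t_side) - t_top, 0.0)

# Total efficiency: geometric acceptance times interaction probability.
eff = np.mean(np.where(hits, 1.0 - np.exp(-mu * np.maximum(path, 0.0)), 0.0))
```

The analytic treatment in the paper evaluates the same solid-angle and chord-length integrals exactly; the Monte Carlo version trades that rigor for a few lines of code.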
Microscopic calculation of the 4He system
International Nuclear Information System (INIS)
Hofmann, H.M.
1996-01-01
We report on a consistent, microscopic calculation of the bound and scattering states in the 4 He system employing a realistic nucleon-nucleon potential in the framework of the resonating group model (RGM). We present for comparison with these microscopic RGM calculations the results from a charge-independent, Coulomb-corrected R-matrix analysis of all types of data for reactions in the A=4 system. Comparisons are made between the phase shifts, and with a selection of measurements from each reaction, as well as between the resonance spectra obtained from both calculations. In general, the comparisons are favorable, but distinct differences are observed between the RGM calculations and some of the polarisation data. The partial-wave decomposition of the experimental data produced by the R-matrix analysis shows that these differences can be attributed to just a few S-matrix elements, for which inadequate tensor-force strength in the N-N interaction used appears to be responsible. (orig.)
Magnox fuel inventories. Experiment and calculation using a point source model
International Nuclear Information System (INIS)
Nair, S.
1978-08-01
The results of calculations of Magnox fuel inventories using the point-source code RICE and the associated Magnox reactor data set have been compared with experimental measurements for the actinide isotopes 234,235,236,238U, 238,239,240,241,242Pu, 241,243Am and 242,244Cm and the fission-product isotopes 142,143,144,145,146,150Nd, 95Zr, 134,137Cs, 144Ce and its daughter 144Pr, produced in four samples of spent Magnox fuel spanning the burnup range 3000 to 9000 MWd/Te. The neutron emissions from a further two samples were also measured and compared with RICE predictions. The results of the comparison were such as to justify the use of the code RICE for providing source terms for environmental impact studies, for the isotopes considered in the present work. (author)
International Nuclear Information System (INIS)
Puig, J.R.; Sandier, J.
1962-01-01
Krypton-85, a β-emitter with a long half-life and low biological hazard, has considerable industrial potential. It is difficult, however, to manufacture sources, since the element occurs in gaseous form and cannot be chemically fixed. The authors describe a method of krypton fixation in a macromolecular matrix formed by mass polymerization of a liquid monomer containing krypton; they also give an account of the preparation of two types of source produced in this way, one enclosed in polystyrene, the other in polyvinyl acetate. Such sources lose krypton; the activity of the first decreases by 8% daily, that of the second by 3% daily. These apparent decays enable the diffusion coefficients of krypton in these polymers to be calculated. Diffusion appears to be hindered by the cross-linkages which exist in the polymers. (author) [fr
Rock-matrix diffusion in transport of salinity. Implementation in CONNECTFLOW
International Nuclear Information System (INIS)
Hoch, A.R.; Jackson, C.P.
2004-07-01
One of the programs used for modelling groundwater flow in Swedish rocks for SKB is CONNECTFLOW, which combines the facilities of the programs NAMMU for modelling continuum porous-medium models and NAPSAC for modelling discrete fracture-networks. The version of CONNECTFLOW current at the start of the work described here did not have a capability to model rock-matrix diffusion for saline flows in continuum porous-medium models of fractured rocks. Possible approaches for implementing such an option were evaluated and then the approach that was considered to be the most suitable was implemented and tested and then used in calculations for a realistic example. Three main approaches for representing diffusion in the rock matrix were considered: the use of a numerical finite-difference scheme, an approach based on the use of Laplace transforms, and a so-called 'hybrid' approach which combines a series solution that gives a good representation at long times with an inverse square-root form that gives a good representation at small times. The finite-difference approach is the most flexible of the approaches considered. In particular, it is the only approach that can deal with the case in which, at each location in the continuum representing the fractures, the groundwater density varies within the rock matrix. The Laplace transform approach and the hybrid approach treat the diffusion in the rock matrix exactly, whereas the finite-difference approach involves discretisation errors. In particular, the unit response function is significantly in error for the first few time steps. The method that was considered to be the most appropriate to implement initially in CONNECTFLOW for SKB was the hybrid method. The algorithm was used to carry out calculations for a large site-scale model, based on the version 1.1 model of the Forsmark site in Sweden. Calculations of the evolution of the groundwater flow system from conditions 10,000 years ago to the present were carried out. The
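The short-time/long-time split behind the hybrid approach can be seen on a textbook case: fractional diffusive uptake of a plane matrix sheet (Crank's solutions), where a sqrt(t) law serves at small times and an exponential series at large times. This is an illustration of the idea, not the CONNECTFLOW implementation:

```python
import numpy as np

# Fractional uptake M(t)/M_inf of a plane sheet of half-width a with fixed
# concentration at the interface, in the dimensionless time tau = D*t/a^2.
def uptake_short(tau):
    """Inverse-square-root-era form, accurate at small tau."""
    return 2.0 * np.sqrt(tau / np.pi)

def uptake_series(tau, nterms=50):
    """Exponential series, rapidly convergent at large tau."""
    n = np.arange(nterms)
    coef = 8.0 / ((2 * n + 1)**2 * np.pi**2)
    return 1.0 - np.sum(coef * np.exp(-(2 * n + 1)**2 * np.pi**2 * tau / 4.0))

def uptake_hybrid(tau, tau_switch=0.05):
    """Hybrid: sqrt form at small tau, series at large tau."""
    return uptake_short(tau) if tau < tau_switch else uptake_series(tau)
```

Near the switch point the two forms agree to a few parts in ten thousand, which is why the hybrid treats the matrix-diffusion response "exactly" in the sense used above, unlike a finite-difference scheme with its early-time discretisation error.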
t matrix of metallic wire structures
International Nuclear Information System (INIS)
Zhan, T. R.; Chui, S. T.
2014-01-01
To study the electromagnetic resonance and scattering properties of complex structures of which metallic wire structures are constituents within multiple scattering theory, the t matrix of the individual structures is needed. We have recently developed a rigorous and numerically efficient equivalent-circuit theory for metallic wire structures in which retardation effects are taken into account. Here, we show how the t matrix can be calculated analytically within this theory. We illustrate our method with the example of split-ring resonators. The density of states and the cross sections for scattering and absorption are calculated and are shown to be remarkably enhanced at the resonant frequencies. The t matrix serves as the basic building block for evaluating the interaction of wire structures within the framework of multiple scattering theory. This will open the door to efficient design and optimization of assemblies of wire structures.
Zhang, Yunlin; Liu, Xiaohan; Osburn, Christopher L.; Wang, Mingzhu; Qin, Boqiang; Zhou, Yongqiang
2013-01-01
The CDOM biogeochemical cycle is driven by several physical and biological processes, such as river input, biogeneration and photobleaching, that act as the primary sinks and sources of CDOM. Watershed-derived allochthonous (WDA) and phytoplankton-derived autochthonous (PDA) CDOM were exposed to 9 days of natural solar radiation to assess the photobleaching response of different CDOM sources, using absorption and fluorescence (excitation-emission matrix) spectroscopy. Our results showed a marked decrea...
Qian, Weixian; Zhou, Xiaojun; Lu, Yingcheng; Xu, Jiang
2015-09-15
Both the Jones and Mueller matrix formalisms encounter difficulties when physically modeling mixed materials or rough surfaces due to the complexity of light-matter interactions. To address these issues, we derived a matrix called the paths correlation matrix (PCM), which is a probabilistic mixture of the Jones matrices of every light propagation path. Because the PCM is related to actual light propagation paths, it is well suited for physical modeling. Experiments were performed, and the reflection PCM of a mixture of polypropylene and graphite was measured. The PCM of the mixed sample was accurately decomposed into pure polypropylene's single reflection, pure graphite's single reflection, and depolarization caused by multiple reflections, which is consistent with the theoretical derivation. Reflection parameters of the rough surface can be calculated from the PCM decomposition, and the results fit well with the theoretical calculations provided by the Fresnel equations. These theoretical and experimental analyses verify that the PCM is an efficient way to physically model light-matter interactions.
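The core idea, a probabilistic mixture of per-path Jones matrices producing a depolarizing Mueller matrix, can be sketched with the standard Jones-to-Mueller conversion M = A (J kron J*) A^-1. The two "paths" below are hypothetical, chosen so the mixture visibly depolarizes even though each single path is non-depolarizing:

```python
import numpy as np

# Standard Jones-to-Mueller conversion matrix.
A = np.array([[1, 0, 0, 1],
              [1, 0, 0, -1],
              [0, 1, 1, 0],
              [0, 1j, -1j, 0]], dtype=complex)
A_inv = np.linalg.inv(A)

def jones_to_mueller(J):
    """Mueller matrix of a pure (non-depolarizing) Jones element."""
    return np.real(A @ np.kron(J, J.conj()) @ A_inv)

# Two hypothetical paths: horizontal and vertical linear polarizers,
# each taken with probability 1/2.
J_h = np.array([[1, 0], [0, 0]], dtype=complex)
J_v = np.array([[0, 0], [0, 1]], dtype=complex)
M_mix = 0.5 * (jones_to_mueller(J_h) + jones_to_mueller(J_v))

# Gil-Bernabeu depolarization index: < 1, so the mixture depolarizes.
di_mix = np.sqrt((np.sum(M_mix**2) - M_mix[0, 0]**2) / (3 * M_mix[0, 0]**2))
```

Each path alone maps to a pure Mueller matrix, but their probabilistic average does not, which is the mechanism the PCM decomposition separates into single-reflection and multiple-reflection terms.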
DEFF Research Database (Denmark)
Johnsen, Kristinn; Yngvason, Jakob
1996-01-01
We report on a numerical study of the density matrix functional introduced by Lieb, Solovej, and Yngvason for the investigation of heavy atoms in high magnetic fields. This functional describes exactly the quantum mechanical ground state of atoms and ions in the limit when the nuclear charge Z and the electron number N tend to infinity with N/Z fixed, and the magnetic field B tends to infinity in such a way that B/Z^(4/3)→∞. We have calculated electronic density profiles and ground-state energies for values of the parameters that prevail on neutron star surfaces and compared them with results obtained by other methods. For iron at B=10^12 G the ground-state energy differs by less than 2% from the Hartree-Fock value. We have also studied the maximal negative ionization of heavy atoms in this model at various field strengths. In contrast to Thomas-Fermi type theories atoms can bind excess negative charge...
Improved determination of hadron matrix elements using the variational method
International Nuclear Information System (INIS)
Dragos, J.; Kamleh, W.; Leinweber, D.B.; Zanotti, J.M.; Rakow, P.E.L.; Young, R.D.; Adelaide Univ.
2015-11-01
The extraction of hadron form factors in lattice QCD using the standard two- and three-point correlator functions has its limitations. One of the most commonly studied sources of systematic error is excited-state contamination, which occurs when correlators are contaminated by contributions from higher-energy excitations. We apply the variational method to calculate the axial vector current g_A and compare the results to the more commonly used summation and two-exponential fit methods. The results demonstrate that the variational approach offers a more efficient and robust method for the determination of nucleon matrix elements.
Energy Technology Data Exchange (ETDEWEB)
Nishioka, K.; Nakamura, Y. [Graduate School of Energy Science, Kyoto University, Gokasho, Uji, Kyoto 611-0011 (Japan); Nishimura, S. [National Institute for Fusion Science, 322-6 Oroshi-cho, Toki, Gifu 509-5292 (Japan); Lee, H. Y. [Korea Advanced Institute of Science and Technology, Daejeon 305-701 (Korea, Republic of); Kobayashi, S.; Mizuuchi, T.; Nagasaki, K.; Okada, H.; Minami, T.; Kado, S.; Yamamoto, S.; Ohshima, S.; Konoshima, S.; Sano, F. [Institute of Advanced Energy, Kyoto University, Gokasho, Uji, Kyoto 611-0011 (Japan)
2016-03-15
A moment approach to calculating neoclassical transport in non-axisymmetric torus plasmas composed of multiple ion species is extended to include the external parallel momentum sources due to unbalanced tangential neutral beam injections (NBIs). The momentum sources that are included in the parallel momentum balance are calculated from the collision operators of background particles with fast ions. This method is applied to clarify the physical mechanism of the neoclassical parallel ion flows and the multi-ion-species effect on them in Heliotron J NBI plasmas. It is found that the parallel ion flow can be determined by the balance between the parallel viscosity and the external momentum source in the region where the external source is much larger than the thermodynamic-force-driven source in collisional plasmas. This is because the friction between C6+ and D+ prevents a large difference between the C6+ and D+ flow velocities in such plasmas. The C6+ flow velocities, which are measured by the charge exchange recombination spectroscopy system, are numerically evaluated with this method. It is shown that the experimentally measured C6+ impurity flow velocities do not clearly contradict the neoclassical estimates, and the dependence of the parallel flow velocities on the magnetic field ripples is consistent in both results.
Calculation of the Cholesky factor directly from the stiffness matrix of the structural element
International Nuclear Information System (INIS)
Prates, C.L.M.; Soriano, H.L.
1978-01-01
The analysis of the structures of nuclear power plants requires the evaluation of the internal forces. This is attained by the solution of a system of equations, which takes most of the computing time and memory. One way to achieve this solution is based on the Cholesky factor: the structural coefficient matrix is transformed into an upper triangular matrix by the Cholesky decomposition. The Cholesky factor can be obtained directly from the stiffness matrix of the structural element, so the result is obtained more quickly and precisely. (Author)
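The use of the factor can be sketched on a toy stiffness matrix: a chain of spring elements is assembled, factored as K = L L^T, and the internal forces follow from two triangular solves. Element values are illustrative, not taken from the paper:

```python
import numpy as np

# Assemble the global stiffness matrix of a 1D chain of spring elements;
# each element contributes k * [[1, -1], [-1, 1]] to its two nodes.
k = [100.0, 200.0, 150.0]            # illustrative spring stiffnesses
n = len(k) + 1
K = np.zeros((n, n))
for e, ke in enumerate(k):
    K[e:e+2, e:e+2] += ke * np.array([[1.0, -1.0], [-1.0, 1.0]])

K = K[1:, 1:]                        # fix node 0: reduced matrix is SPD
f = np.array([0.0, 0.0, 10.0])       # load at the free end

# Cholesky factorization K = L L^T, then forward/backward substitution.
L_chol = np.linalg.cholesky(K)
y = np.linalg.solve(L_chol, f)
u = np.linalg.solve(L_chol.T, y)     # nodal displacements
```

For springs in series the end displacement is f*(1/k1 + 1/k2 + 1/k3), which the two triangular solves reproduce; obtaining `L_chol` directly per element, as the paper proposes, skips the separate global decomposition step.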
Hadron matrix elements of quark operators in the relativistic quark model, 2. Model calculation
Energy Technology Data Exchange (ETDEWEB)
Arisue, H; Bando, M; Toya, M [Kyoto Univ. (Japan). Dept. of Physics; Sugimoto, H
1979-11-01
Phenomenological studies of the matrix elements of two- and four-quark operators are made on the basis of a relativistic independent-quark model for three typical cases of the potential: rigid-wall, linearly rising and Coulomb-like potentials. The values of the matrix elements of the two-quark operators are relatively well reproduced in each case, but those of the four-quark operators prove to be too small in the independent-particle treatment. It is suggested that short-range two-quark correlations must be taken into account in order to improve the values of the matrix elements of the four-quark operators.
International Nuclear Information System (INIS)
Ji Gang; Guo Yong; Luo Yisheng; Zhang Wenzhong
2001-01-01
Objective: To provide useful parameters for neutron radiotherapy, the authors present the results of a Monte Carlo simulation study investigating the dosimetric characteristics of linear 252Cf fission neutron sources. Methods: A 252Cf fission source and tissue-equivalent phantoms were modeled. The neutron and gamma doses were calculated using a Monte Carlo code. Results: The neutron and gamma doses at several positions for 252Cf in phantoms made of materials equivalent to water, blood, muscle, skin, bone and lung were calculated. Conclusion: The Monte Carlo results were compared with measured data and reference values. According to the calculations, using a water phantom to simulate local tissues such as muscle, blood and skin is reasonable for the calculation and measurement of dose distributions for 252Cf.
The J-Matrix Method Developments and Applications
Alhaidari, Abdulaziz D; Heller, Eric J; Abdelmonem, Mohamed S
2008-01-01
This volume aims to provide the fundamental knowledge to appreciate the advantages of the J-matrix method and to encourage its use and further development. The J-matrix method is an algebraic method of quantum scattering with substantial success in atomic and nuclear physics. The accuracy and convergence properties of the method compare favourably with other successful scattering calculation methods. Despite its thirty-year-long history, new applications are being found for the J-matrix method. This book gives a brief account of the recent developments and some selected applications of the method in atomic and nuclear physics. New findings are reported in which experimental results are compared to theoretical calculations. Modifications, improvements and extensions of the method are discussed using the language of the J-matrix. The volume starts with a Foreword by the two co-founders of the method, E. J. Heller and H. A. Yamani, and contains contributions from 24 prominent international researchers.
Dose rates from a C-14 source using extrapolation chamber and MC calculations
International Nuclear Information System (INIS)
Borg, J.
1996-05-01
The extrapolation chamber technique and the Monte Carlo (MC) calculation technique based on the EGS4 system have been studied for the determination of dose rates in a low-energy β radiation field, e.g. that from a 14C source. The extrapolation chamber measurement method is the basic method for determining dose rates in β radiation fields. By applying a number of correction factors and the tissue-to-air stopping power ratio, the measured dose rate in an air volume surrounded by tissue-equivalent material is converted into dose to tissue. Various details of the extrapolation chamber measurement method and evaluation procedure have been studied and further developed, and a complete procedure for the experimental determination of dose rates from a 14C source is presented. A number of correction factors and other parameters used in the evaluation procedure for the measured data have been obtained by MC calculations. The whole extrapolation chamber measurement procedure was simulated using the MC method. The measured dose rates showed an increasing deviation from the MC-calculated dose rates as the absorber thickness increased. This indicates that the EGS4 code may have some limitations for the transport of very low-energy electrons, i.e. electrons with estimated energies below 10–20 keV. MC calculations of dose to tissue were performed using two models: a cylindrical tissue phantom and a computer model of the extrapolation chamber. The dose to tissue in the extrapolation chamber model showed an additional buildup dose compared to the dose in the tissue model. (au) 10 tabs., 11 ills., 18 refs.
Takayama, Mitsuo; Osaka, Issey; Sakakura, Motoshi
2012-01-01
The susceptibility of the N–Cα bond of the peptide backbone to specific cleavage by in-source decay (ISD) in matrix-assisted laser desorption/ionization mass spectrometry (MALDI MS) was studied from the standpoint of the secondary structure of three proteins. A naphthalene derivative, 5-amino-1-naphthol (5,1-ANL), was used as the matrix. The resulting c'-ions, which originate from cleavage at N–Cα bonds in flexible secondary structures such as turns and bends, free from the intramolecular hydrogen-bonded α-helix structure, gave relatively intense peaks. Furthermore, ISD spectra of the proteins showed that the N–Cα bonds of specific amino acid residues, namely Gly-Xxx, Xxx-Asp, and Xxx-Asn, were more susceptible to MALDI-ISD than those of other amino acid residues. This is in agreement with the observation that Gly, Asp and Asn residues are usually located in turns rather than in α-helices. The results obtained indicate that protein molecules embedded in the matrix crystal in MALDI experiments maintain their secondary structures as determined by X-ray crystallography, and that MALDI-ISD is capable of providing information concerning the secondary structure of proteins.
Data for absorbed dose calculations for external sources and for emitters within the body
International Nuclear Information System (INIS)
Hep, J.; Valenta, V.
1976-01-01
Tables give data for the calculation of absorbed doses from radioactive sources accumulated in individual body organs. The tables are arranged so that the gamma energy (J) absorbed in 1 kg of target organ (19 organs and the total body) is given for 18 source organs (16 different organs, the total body and the surrounding air) per decay event, for more than 250 radioisotopes evenly distributed in the source organ (1 J/kg = 100 rad). Also given are the energies of alpha and beta radiation per decay. In tables with the surrounding air as the source, it is assumed that the intensity of the external source is 1 decay per 1 m³ of surrounding air, constant over the entire half-space. Tables are provided only for radioisotopes with a half-life of more than 1 min. (B.S.)
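The table lookup this entry describes reduces to a sum of decays multiplied by tabulated absorbed energy per decay. A minimal sketch, with hypothetical placeholder values standing in for the published table entries (the numbers below are not from the report):

```python
# Hypothetical excerpt of such a table: energy absorbed in the target
# organ per decay in the source organ (J/kg per decay).  Values are
# illustrative placeholders only.
S = {
    ("liver", "liver"): 3.2e-15,
    ("liver", "lungs"): 1.1e-16,
    ("total_body", "liver"): 4.5e-16,
}

def absorbed_dose(target, decays_per_source):
    """Dose to `target` in Gy (J/kg), summing the tabulated contribution
    from the given number of decays in each source organ."""
    return sum(S[(target, src)] * n for src, n in decays_per_source.items())

# 1e12 decays in the liver plus 1e12 decays in the lungs
dose = absorbed_dose("liver", {"liver": 1e12, "lungs": 1e12})
```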
The S-matrix of superstring field theory
International Nuclear Information System (INIS)
Konopka, Sebastian
2015-01-01
We show that the classical S-matrix calculated from the recently proposed superstring field theories gives the correct perturbative S-matrix. In the proof we exploit the fact that the vertices are obtained by a field redefinition in the large Hilbert space. The result extends to the NS-NS subsector of type II superstring field theory and to the recently found equations of motion for the Ramond fields. In addition, our proof implies that the S-matrix obtained from Berkovits' WZW-like string field theory agrees with the perturbative S-matrix to all orders.
Li, Na; Jiang, Weiwei; Rao, Kaifeng; Ma, Mei; Wang, Zijian; Kumaran, Satyanarayanan Senthik
2011-01-01
Environmental chemicals in drinking water can impact human health through nuclear receptors, and estrogen-related receptors (ERRs) are vulnerable to endocrine-disrupting effects. To date, however, the ERR-disrupting potency of drinking water has not been reported. We used an ERRγ two-hybrid yeast assay to screen for ERRγ-disrupting activities in a drinking water treatment plant (DWTP) located in north China and in source water from a reservoir, focusing on agonistic, antagonistic, and inverse agonistic activity toward 4-hydroxytamoxifen (4-OHT). Water treatment processes in the DWTP consisted of pre-chlorination, coagulation, coal and sand filtration, activated carbon filtration, and secondary chlorination. Samples were extracted by solid phase extraction. Results showed that ERRγ antagonistic activities were found in all sample extracts, whereas agonistic and inverse agonistic activity toward 4-OHT was not found. When calibrated with the toxic equivalent of 4-OHT, antagonistic effluent effects ranged from 3.4 to 33.1 microg/L. Among the treatment processes, secondary chlorination was effective in removing ERRγ antagonists, but the coagulation process led to significantly increased ERRγ antagonistic activity. The drinking water treatment processes removed 73.5% of ERRγ antagonists. To our knowledge, the occurrence of ERRγ-disrupting activities in source and drinking water in vitro had not been reported previously. It is vital, therefore, to increase our understanding of ERRγ-disrupting activities in drinking water.
Shifted Non-negative Matrix Factorization
DEFF Research Database (Denmark)
Mørup, Morten; Madsen, Kristoffer Hougaard; Hansen, Lars Kai
2007-01-01
Non-negative matrix factorization (NMF) has become a widely used blind source separation technique due to its part based representation and ease of interpretability. We currently extend the NMF model to allow for delays between sources and sensors. This is a natural extension for spectrometry data...
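For reference, the plain (unshifted) NMF model that this work extends can be fit with the classic Lee–Seung multiplicative updates. This is a generic sketch of that baseline only, not the authors' shifted algorithm, which additionally estimates a delay per source-sensor pair:

```python
import numpy as np

def nmf(V, r, n_iter=500, seed=0):
    """Basic NMF via Lee-Seung multiplicative updates: V ~= W @ H with
    W, H >= 0.  The small epsilons guard against division by zero."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, r)) + 1e-3
    H = rng.random((r, n)) + 1e-3
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-12)   # update encodings
        W *= (V @ H.T) / (W @ H @ H.T + 1e-12)   # update basis
    return W, H

# Recover a rank-2 non-negative matrix (exactly factorable by construction)
V = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]) @ \
    np.array([[1.0, 2.0, 0.0], [0.0, 1.0, 3.0]])
W, H = nmf(V, 2)
err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```

Multiplicative updates preserve non-negativity of the factors by construction, which is what gives NMF its part-based interpretability.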
Validating criticality calculations for spent fuel with 252Cf-source-driven noise measurements
International Nuclear Information System (INIS)
Mihalczo, J.T.; Krass, A.W.; Valentine, T.E.
1992-01-01
The 252Cf-source-driven noise analysis method can be used for measuring the subcritical neutron multiplication factor k of arrays of spent light water reactor (LWR) fuel. This type of measurement provides a parameter that is directly related to the criticality state of arrays of LWR fuel. Measurements of this parameter can verify the criticality safety margins of spent LWR fuel configurations and thus could be a means of obtaining the information needed to justify burnup credit for spent LWR transportation/storage casks. The practicality of a measurement depends on the ability to install the hardware required to perform it. Source chambers containing 252Cf at the source intensity required for this application have been constructed, have operated successfully for ∼10 years, and can be fabricated to fit into the control rod guide tubes of PWR fuel elements. Fission counters especially developed for spent-fuel measurements are available that would allow measurements of a special 3 x 3 spent fuel array and of a typical burnup credit rail cask with spent fuel in unborated water. Adding a moderator around these fission counters would allow measurements with the typical burnup credit rail cask and the special 3 x 3 array in borated water. The recent work of Ficaro, who modified the KENO Va code to calculate by the Monte Carlo method the time sequences of pulses at two detectors near a fissile assembly arising from the fission chain multiplication process initiated by a 252Cf source in the assembly, allows direct computer calculation of the noise analysis data from this measurement method.
Parallel computational in nuclear group constant calculation
International Nuclear Information System (INIS)
Su'ud, Zaki; Rustandi, Yaddi K.; Kurniadi, Rizal
2002-01-01
In this paper, a parallel computational method for nuclear group constant calculation using the collision probability method is discussed. The main focus is the calculation of the collision matrix, which requires a large amount of computational time. The geometry treated here is concentric cylinders. The collision probability matrix is calculated semi-analytically using Bickley–Naylor functions. To accelerate the computation, several computers are used in parallel. Under Linux, parallelization is based on the PVM software with C or Fortran; under Windows, socket programming with Delphi or C++ Builder is used. The results show the importance of assigning an optimal weight to each processor when processors of different speeds are used.
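The row-wise decomposition described above is easy to sketch: each row of the collision matrix depends only on its own index, so rows can be farmed out to workers. The kernel below is a cheap symmetric placeholder for the actual Bickley–Naylor integration, and threads stand in for the paper's PVM/socket worker processes:

```python
from concurrent.futures import ThreadPoolExecutor
import math

def collision_row(i, n):
    """Placeholder for one row of a collision-probability matrix.
    A real code would integrate Bickley-Naylor functions over the
    concentric-cylinder geometry; a cheap symmetric kernel stands in
    here so only the parallel structure matters."""
    return [math.exp(-abs(i - j)) / n for j in range(n)]

def build_matrix_parallel(n, workers=4):
    # Rows are independent, so they are distributed across workers --
    # the same decomposition the paper uses across PVM processes.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(lambda i: collision_row(i, n), range(n)))

P = build_matrix_parallel(64)
```

With heterogeneous processors, a real scheduler would weight the number of rows per worker by processor speed, which is exactly the "optimal weight" issue the paper reports.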
DEFF Research Database (Denmark)
Göksu, Ömer; Teodorescu, Remus; Bak-Jensen, Birgitte
2012-01-01
As more renewable energy sources, especially wind turbines, are installed in the power system, analysis of the power system with these renewable energy sources becomes more important. Short-circuit calculation is a well-known fault analysis method which is widely used for early stage analysis...
International Nuclear Information System (INIS)
Gorshtein, A.I.; Matyunin, Yu.I.; Poluehktov, P.P.
2000-01-01
A mathematical model is proposed for the preliminary choice of nuclear-safe matrix compositions for fissile material immobilization. IBM PC software for nuclear-safe matrix composition calculations has been developed. The limiting concentrations of fissile materials in some currently used and prospective nuclear-safe matrix compositions for radioactive waste immobilization are calculated.
Matrix elements of a hyperbolic vector operator under SO(2,1)
International Nuclear Information System (INIS)
Zettili, N.; Boukahil, A.
2003-01-01
We deal here with the use of Wigner–Eckart-type arguments to calculate the matrix elements of a hyperbolic vector operator V by expressing them in terms of reduced matrix elements. In particular, we focus on calculating the matrix elements of this vector operator within the basis of the hyperbolic angular momentum T, whose components T₁, T₂, T₃ satisfy an SO(2,1) Lie algebra. We show that the commutation rules between the components of V and T can be inferred from the algebra of ordinary angular momentum. We then show that, by analogy with the Wigner–Eckart theorem, we can calculate the matrix elements of V within a representation where T² and T₃ are jointly diagonal. (author)
Generalized perturbation theory in DRAGON: application to CANDU cell calculations
International Nuclear Information System (INIS)
Courau, T.; Marleau, G.
2001-01-01
Generalized perturbation theory (GPT) in neutron transport is a means to evaluate eigenvalue and reaction rate variations due to small changes in the reactor properties (macroscopic cross sections). These variations can be decomposed into two terms: a direct term corresponding to the changes in the cross sections themselves, and an indirect term that takes into account the perturbations of the neutron flux. As we will show, taking the indirect term into account using a GPT method is generally straightforward, since this term is the scalar product of the unperturbed generalized adjoint with the product of the variation of the transport operator and the unperturbed flux. In the case where the collision probability (CP) method is used to solve the transport equation, evaluating the perturbed transport operator involves calculating the variations in the CP matrix for each change in the reactor properties. Because most of the computational effort is dedicated to the CP matrix calculation, the gains expected from the GPT method would be annihilated. Here we present a technique to approximate the variations in the CP matrices, thereby replacing the variations in the transport operator with source term variations. We show that this approximation yields errors fully compatible with the standard generalized perturbation theory errors. Results for 2D CANDU cell calculations are presented. (author)
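The core of the perturbation-theory argument, that a first-order eigenvalue change follows from the unperturbed forward and adjoint solutions alone, can be checked numerically on a toy operator. This sketch uses a plain matrix eigenproblem rather than the full transport operator with separate loss and production parts:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
A = rng.random((n, n))            # toy stand-in for the transport operator
dA = 1e-4 * rng.random((n, n))    # small perturbation (e.g. cross-section change)

def dominant_pair(B):
    """Return the dominant (largest real part) eigenvalue and eigenvector."""
    vals, vecs = np.linalg.eig(B)
    i = np.argmax(vals.real)
    return vals[i].real, vecs[:, i].real

k, phi = dominant_pair(A)      # forward solution
_, psi = dominant_pair(A.T)    # adjoint solution (left eigenvector)

# First-order estimate: the indirect (flux-change) effect is folded in
# through the adjoint, so only <psi, dA phi> is needed -- no re-solve.
dk_est = psi @ dA @ phi / (psi @ phi)
dk_ref = dominant_pair(A + dA)[0] - k   # brute-force reference
```

The agreement between `dk_est` and `dk_ref` (up to second order in the perturbation) is what makes the adjoint-based route cheap: one adjoint solve serves every perturbation.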
International Nuclear Information System (INIS)
Wechsler, M.S.; Mansur, L.K.
1996-01-01
Radiation damage in target, container, and window materials for spallation neutron sources is an important factor in the design of target stations for accelerator-driven transmutation technologies. Calculations are described that use the LAHET and SPECTER codes to obtain displacement and helium production rates in tungsten, 316 stainless steel, and Inconel 718, which are major target, container, and window materials, respectively. Results are compared for the three materials, based on neutron spectra for the NSNS and ATW spallation neutron sources, where the neutron fluxes are normalized to give the same flux of neutrons of all energies
Energy Technology Data Exchange (ETDEWEB)
Sundar, Isaac K.; Hwang, Jae-Woong [Department of Environmental Medicine, Lung Biology and Disease Program, University of Rochester Medical Center, Box 850, 601 Elmwood Avenue, Rochester, NY 14642 (United States); Wu, Shaoping [Department of Medicine, Gastroenterology and Hepatology Division, University of Rochester Medical Center, Rochester, NY (United States); Sun, Jun [Department of Medicine, Gastroenterology and Hepatology Division, University of Rochester Medical Center, Rochester, NY (United States); The Department of Microbiology and Immunology, University of Rochester Medical Center, Rochester, NY (United States); The James Wilmot Cancer Center, University of Rochester Medical Center, Rochester, NY (United States); Rahman, Irfan, E-mail: irfan_rahman@urmc.rochester.edu [Department of Environmental Medicine, Lung Biology and Disease Program, University of Rochester Medical Center, Box 850, 601 Elmwood Avenue, Rochester, NY 14642 (United States)
2011-03-04
Research highlights: → Vitamin D deficiency is linked to accelerated decline in lung function. → Levels of vitamin D receptor (VDR) are decreased in lungs of patients with COPD. → VDR knock-out mice showed increased lung inflammation and emphysema. → This was associated with decline in lung function and increased MMPs. → The VDR knock-out mouse model is useful for studying the mechanisms of lung diseases. -- Abstract: Deficiency of vitamin D is associated with accelerated decline in lung function. Vitamin D is a ligand for the nuclear hormone vitamin D receptor (VDR), and upon binding it modulates various cellular functions. The level of VDR is reduced in lungs of patients with chronic obstructive pulmonary disease (COPD), which led us to hypothesize that deficiency of VDR leads to significant alterations in lung phenotype that are characteristic of COPD/emphysema, associated with an increased inflammatory response. We found that VDR knock-out (VDR-/-) mice had increased influx of inflammatory cells and phospho-acetylation of nuclear factor-kappaB (NF-κB), associated with increased proinflammatory mediators and up-regulation of matrix metalloproteinases (MMPs) MMP-2, MMP-9, and MMP-12 in the lung. This was associated with emphysema and decline in lung function together with lymphoid aggregate formation, compared to WT mice. These findings suggest that deficiency of VDR in mouse lung can lead to an early onset of emphysema/COPD because of chronic inflammation, immune dysregulation, and lung destruction.
Interacting hadron resonance gas model in the K -matrix formalism
Dash, Ashutosh; Samanta, Subhasis; Mohanty, Bedangadas
2018-05-01
An extension of the hadron resonance gas (HRG) model is constructed to include interactions using the relativistic virial expansion of the partition function. The noninteracting part of the expansion contains all the stable baryons and mesons, and the interacting part contains all the higher-mass resonances which decay into two stable hadrons. The virial coefficients are related to the phase shifts, which are calculated using the K-matrix formalism in the present work. We have calculated various thermodynamic quantities such as the pressure, energy density, and entropy density of the system. A comparison of thermodynamic quantities with the noninteracting HRG model, calculated using the same number of hadrons, shows that the results of the above formalism are larger. A good agreement between the equation of state calculated in the K-matrix formalism and lattice QCD simulations is observed. Specifically, the interaction measure calculated on the lattice is well described in our formalism. We have also calculated second-order fluctuations and correlations of conserved charges in the K-matrix formalism. We observe a good agreement of second-order fluctuations and the baryon-strangeness correlation with lattice data below the crossover temperature.
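The link between phase shifts and virial coefficients (the Beth–Uhlenbeck form that K-matrix phase shifts feed into) can be illustrated with a toy Breit–Wigner resonance; the resonance parameters and units below are arbitrary illustrations, not hadronic values:

```python
import math

def delta(E, E_R=0.5, Gamma=0.1):
    """Toy Breit-Wigner phase shift (radians): rises by ~pi across the
    resonance at E_R.  Energies in arbitrary units."""
    return math.atan2(Gamma / 2.0, E_R - E)

def virial_correction(T, E_max=20.0, n=20000):
    """Interaction part of the second virial coefficient a la
    Beth-Uhlenbeck (up to species-dependent prefactors):
        db2 ~ (1/pi) * integral of exp(-E/T) * d(delta)/dE over E,
    evaluated with a midpoint rule and central finite differences."""
    h = E_max / n
    total = 0.0
    for i in range(n):
        E = (i + 0.5) * h
        dddE = (delta(E + h / 2) - delta(E - h / 2)) / h
        total += math.exp(-E / T) * dddE * h
    return total / math.pi

b2 = virial_correction(T=0.5)
```

For a narrow resonance the integral is dominated by the region around E_R, so the correction is roughly the Boltzmann factor exp(-E_R/T); the positive sign reflects the attractive (resonant) interaction adding pressure, the qualitative effect the abstract reports.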
International Nuclear Information System (INIS)
Stecher, Luiza Chourkalo
2014-01-01
There has been increasing concern about current environmental issues caused by human activity as the world searches for development. The production of electricity is an extremely relevant factor in this scenario, since it is responsible for a large portion of the emissions that cause the greenhouse effect. Due to this fact, sustainable development with alternative energy sources must be pursued, especially in places that are not supplied by the conventional electricity grid, such as many communities in Northeast Brazil. This work aims to calculate the environmental cost of the alternative sources of energy - solar, wind and biomass - during electricity generation, and to estimate the economic feasibility of those sources in small communities of Northeast Brazil, considering the avoided costs. The externalities must be properly identified and valued so that the costs or benefits can be internalized and accurately reflect the economic feasibility or infeasibility of those sources. For this, the method of avoided costs was adopted for the calculation of externalities. This variable was included in the equation developed for all considered alternative energy sources. The calculations of economic feasibility were performed taking the new configurations into consideration, and the new equation was reprogrammed in the Programa de Calculo de Custos de Energias Alternativas, Solar, Eolica e Biomassa (PEASEB). The results demonstrated that solar photovoltaic energy in isolated systems is the most feasible and broadly applicable source for small communities of Northeast Brazil. (author)
On the atomic shell structure calculation (1)
International Nuclear Information System (INIS)
Choe Sun Chol
1986-01-01
We have considered the problem of atomic shell structure calculation using the operator technique. We introduce reduced matrix elements of annihilation operators according to Eq. (4). The normalized basis function is denoted as ||...>. The reduced matrix elements of the pair annihilation operators are expressed through one-electron matrix elements. Some numerical results are presented and the problem of sign assignment is discussed. (author)
Convergence Improvement of Response Matrix Method with Large Discontinuity Factors
International Nuclear Information System (INIS)
Yamamoto, Akio
2003-01-01
In the response matrix method, a numerical divergence problem has been reported when extremely small or large discontinuity factors are utilized in the calculations. In this paper, an alternative response matrix formulation to solve the divergence problem is discussed, and the properties of the iteration matrices are investigated through eigenvalue analyses. In the conventional response matrix formulation, partial currents between adjacent nodes are assumed to be discontinuous, and outgoing partial currents are converted into incoming partial currents by the discontinuity factor matrix. Namely, the partial currents of the homogeneous system (i.e., homogeneous partial currents) are treated in the conventional response matrix formulation. In this approach, the spectral radius of the iteration matrix for the partial currents may exceed unity when an extremely small or large discontinuity factor is used. Contrary to this, an alternative response matrix formulation using heterogeneous partial currents is discussed in this paper. In the latter approach, partial currents are assumed to be continuous between adjacent nodes, and discontinuity factors are directly considered in the coefficients of the response matrix. From the eigenvalue analysis of the iteration matrix for the one-group, one-dimensional problem, the spectral radius for the heterogeneous partial current formulation does not exceed unity even if an extremely small or large discontinuity factor is used in the calculation; the numerical stability of the alternative formulation is superior to that of the conventional one. The numerical stability of the heterogeneous partial current formulation is also confirmed by a two-dimensional light water reactor core analysis. Since the heterogeneous partial current formulation does not require any approximation, the converged solution exactly reproduces the reference solution when the discontinuity factors are directly derived from the reference calculation.
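The stability criterion at the heart of this analysis, that a fixed-point iteration on partial currents converges iff the spectral radius of the iteration matrix stays below unity, is easy to demonstrate on a toy matrix. The 2x2 matrix below is an illustrative stand-in, not the paper's actual response matrices; only the role of the scaling factor f, mimicking an extreme discontinuity-factor ratio, matters:

```python
import numpy as np

def spectral_radius(T):
    """Largest eigenvalue magnitude: the iteration j_{k+1} = T j_k + s
    converges iff this is strictly below 1."""
    return max(abs(np.linalg.eigvals(T)))

def iteration_matrix(f):
    # Toy two-node coupling whose off-diagonal term is amplified by a
    # discontinuity-factor-like ratio f (hypothetical structure).
    return np.array([[0.0, 0.4 * f],
                     [0.4, 0.0]])

rho_ok = spectral_radius(iteration_matrix(1.0))    # moderate f
rho_bad = spectral_radius(iteration_matrix(50.0))  # extreme f
```

Here rho grows like 0.4*sqrt(f), so a sufficiently extreme factor pushes the spectral radius past unity and the sweep diverges, which is the failure mode the paper attributes to the homogeneous-partial-current formulation.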
Laser Beam and Resonator Calculations on Desktop Computers.
Doumont, Jean-Luc
There is a continuing interest in the design and calculation of laser resonators and optical beam propagation. In particular, interest has recently increased in concepts such as one-sided unstable resonators, supergaussian reflectivity profiles, diode laser modes, beam quality concepts, mode competition, excess noise factors, and nonlinear Kerr lenses. To meet these calculation needs, I developed a general-purpose software package named PARAXIA™, aimed at providing optical scientists and engineers with a set of powerful design and analysis tools that provide rapid and accurate results and are extremely easy to use. PARAXIA can handle separable paraxial optical systems in cartesian or cylindrical coordinates, including complex-valued and misaligned ray matrices, with full diffraction effects between apertures. It includes the following programs. ABCD provides complex-valued ray-matrix and gaussian-mode analyses for arbitrary paraxial resonators and optical systems, including astigmatism and misalignment in each element. This program required that I generalize the theory of gaussian beam propagation to the case of an off-axis gaussian beam propagating through a misaligned, complex-valued ray matrix. FRESNEL uses FFT and FHT methods to propagate an arbitrary wavefront through an arbitrary paraxial optical system using Huygens' integral in rectangular or radial coordinates. The wavefront can be multiplied by an arbitrary mirror profile and/or saturable gain sheet on each successive propagation through the system. I used FRESNEL to design a one-sided negative-branch unstable resonator for a free-electron laser, and to show how a variable internal aperture influences the mode competition and beam quality in a stable cavity. VSOURCE implements the virtual source analysis to calculate eigenvalues and eigenmodes for unstable resonators with both circular and rectangular hard-edged mirrors (including misaligned rectangular systems). I used VSOURCE to
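The ray-matrix propagation at the core of a program like ABCD reduces, for Gaussian modes, to the standard bilinear transformation of the complex beam parameter q. A minimal sketch (the Rayleigh range and element sequence below are arbitrary examples, not from PARAXIA):

```python
def propagate(q, M):
    """Propagate a complex Gaussian-beam q parameter through a 2x2
    ray (ABCD) matrix: q' = (A q + B) / (C q + D)."""
    (A, B), (C, D) = M
    return (A * q + B) / (C * q + D)

# Elementary ray matrices: free space of length d, thin lens of focal length f
free_space = lambda d: ((1.0, d), (0.0, 1.0))
thin_lens = lambda f: ((1.0, 0.0), (-1.0 / f, 1.0))

z_R = 0.5               # Rayleigh range (m) of the input beam (example value)
q = 1j * z_R            # beam waist located at the starting plane
for M in (free_space(1.0), thin_lens(0.25), free_space(0.2)):
    q = propagate(q, M)  # chain elements by composing the bilinear map
```

Since the q parameter encodes both the spot size (via Im q) and wavefront curvature (via Re q), one complex number per transverse axis suffices for separable paraxial systems, which is why the ABCD approach is so fast compared with full diffraction propagation.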
Radiation field calculation in the vicinity of Russian radioisotope generator sources
Energy Technology Data Exchange (ETDEWEB)
Pretzsch, Gunter; Hummelsheim, Klemens; Bogorinski, Peter [Gesellschaft fuer Anlagen- und Reaktorsicherheit (GRS) mbH, Kurfuerstendamm 200, 10719 Berlin (Germany)
2005-07-01
Germany supports the Russian Federation in the framework of the G8 Global Partnership programme to secure nuclear and radioactive materials against misuse and proliferation. In this context, GRS, on behalf of the German Foreign Office, is coordinating activities to remove disused radioisotope thermoelectric generators (RITEG), which serve as power supply for marine lighthouses, from the Baltic Sea and to replace them with alternative energy sources. Furthermore, the planned project includes transportation to an interim storage facility equipped with radiation monitoring and physical protection measures, and later transportation for reprocessing to the Mayak Production Association, where the RITEGs will be dismantled in a hot cell and the encapsulated radioactive sources will be vitrified and stored as radioactive waste. For the whole project, safety analyses are to be performed, e.g. to meet radiation protection requirements. In the present paper, the modelling and calculation of radiation fields in the vicinity of RITEGs as a basis for safety analyses is reported. (authors)
Siwik Deborah A; Kuster Gabriela M; Brahmbhatt Jamin V; Zaidi Zaheer; Malik Julia; Ooi Henry; Ghorayeb Ghassan
2008-01-01
Extracellular matrix metalloproteinase inducer (EMMPRIN) expression is increased in myocardium from patients with dilated cardiomyopathy and in animal models of heart failure. However, little is known about the regulated expression or functional role of EMMPRIN in the myocardium. In rat cardiac cells, EMMPRIN is expressed on myocytes but not on endothelial cells or fibroblasts. We therefore tested the hypothesis that EMMPRIN expression regulates matrix metalloproteinase (MMP) activity in rat ventricu...
Three-body forces for electrons by the S-matrix method
International Nuclear Information System (INIS)
Margaritelli, R.
1989-01-01
An electromagnetic three-body potential between electrons is derived by the S-matrix method. This potential can be compared, up to a point, with other electromagnetic potentials (obtained by other methods) encountered in the literature. However, since the potential derived here is far more complete than the others, direct comparison with the potentials found in the literature is somewhat difficult. These calculations allow a better understanding of the S-matrix method as applied to problems that involve the calculation of three-body nuclear forces (such calculations are performed in order to understand the 3He form factor). Furthermore, these results enable us to decide between two discrepant works that derive the two-pion-exchange three-body potential, both by the S-matrix method. (author)
An Empirical State Error Covariance Matrix Orbit Determination Example
Frisbee, Joseph H., Jr.
2015-01-01
State estimation techniques effectively provide mean state estimates. However, the state error covariance matrices provided as part of these techniques suffer from some lack of confidence in their ability to adequately describe the uncertainty in the estimated states. A specific problem with the traditional form of state error covariance matrices is that they represent only a mapping of the assumed observation error characteristics into the state space. Any errors that arise from other sources (environment modeling, precision, etc.) are not directly represented in a traditional, theoretical state error covariance matrix. First, consider that an actual observation contains only measurement error and that an estimated observation contains all other errors, known and unknown. It then follows that a measurement residual (the difference between expected and observed measurements) contains all errors for that measurement. Therefore, a direct and appropriate inclusion of the actual measurement residuals in the state error covariance matrix of the estimate results in an empirical state error covariance matrix that fully includes all of the errors in the state estimate. The empirical error covariance matrix is determined from a literal reinterpretation of the equations involved in the weighted least squares estimation algorithm. It is a formally correct, empirical state error covariance matrix obtained through use of the average form of the weighted measurement residual variance performance index rather than the usual total weighted residual form. Based on its formulation, this matrix will contain the total uncertainty in the state estimate, regardless of the source of the uncertainty and whether or not the source is anticipated. It is expected that the empirical error covariance matrix will give a better statistical representation of the state error in poorly modeled systems or when sensor performance
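The rescaling described above can be sketched for an ordinary weighted least squares fit: the traditional covariance maps only the assumed weights into state space, while the empirical version rescales it by the average weighted residual variance actually observed. The toy regression below is purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(42)
m, n = 200, 3
H = rng.standard_normal((m, n))          # observation (design) matrix
x_true = np.array([1.0, -2.0, 0.5])
sigma = 0.3                               # true noise level (unknown to the filter)
y = H @ x_true + sigma * rng.standard_normal(m)

W = np.eye(m)                             # assumed unit-variance weights
N = H.T @ W @ H                           # normal matrix
x_hat = np.linalg.solve(N, H.T @ W @ y)   # WLS state estimate

P_theory = np.linalg.inv(N)               # traditional covariance: trusts W

# Empirical covariance: scale by the *average* weighted residual
# variance, so the actual residuals (all error sources) enter the matrix.
r = y - H @ x_hat
s2 = (r @ W @ r) / (m - n)                # average residual variance
P_emp = s2 * np.linalg.inv(N)
```

Because the assumed weights overstate the noise (unit variance versus a true sigma of 0.3), the traditional covariance is pessimistic; the residual-based scale factor pulls the empirical covariance toward the actual error level, the behavior the abstract argues for.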
A GIS-based atmospheric dispersion model for pollutants emitted by complex source areas.
Teggi, Sergio; Costanzini, Sofia; Ghermandi, Grazia; Malagoli, Carlotta; Vinceti, Marco
2018-01-01
Gaussian dispersion models are widely used to simulate the concentrations and deposition fluxes of pollutants emitted by source areas. Very often, the calculation time limits the number of sources and receptors, and the geometry of the sources must be simple and without holes. This paper presents CAREA, a new GIS-based Gaussian model for complex source areas. CAREA is coded in the Python language and is largely based on a simplified formulation of the very popular and well-recognized AERMOD model. The model allows users to define, in a GIS environment, thousands of gridded or scattered receptors and thousands of complex sources with hundreds of vertices and holes. CAREA computes ground-level, or near-ground-level, concentrations and dry deposition fluxes of pollutants. The input/output and the runs of the model can be managed entirely in a GIS environment (e.g. inside a GIS project). The paper presents the CAREA formulation and its application to very complex test cases. The tests show that the processing times are satisfactory and that the definition of sources and receptors and the retrieval of output are quite easy in a GIS environment. CAREA and AERMOD are compared using simple and reproducible test cases. The comparison shows that CAREA satisfactorily reproduces AERMOD simulations and is considerably faster than AERMOD. Copyright © 2017 Elsevier B.V. All rights reserved.
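The Gaussian kernel that such a model integrates over a source area can be sketched directly for a single point source; the linear dispersion coefficients below are a crude stand-in for the stability-dependent curves a model like AERMOD or CAREA actually uses:

```python
import math

def plume_concentration(Q, u, x, y, z, H, a=0.08, b=0.06):
    """Steady-state Gaussian plume concentration from a point source of
    strength Q (g/s) in wind speed u (m/s), at downwind distance x,
    crosswind offset y and height z, for an effective stack height H.
    sigma_y = a*x and sigma_z = b*x are crude linear stand-ins for
    stability-class dispersion curves; the (z + H) image term accounts
    for reflection at the ground.
    """
    sy, sz = a * x, b * x
    lateral = math.exp(-y**2 / (2 * sy**2))
    vertical = (math.exp(-(z - H)**2 / (2 * sz**2))
                + math.exp(-(z + H)**2 / (2 * sz**2)))
    return Q / (2 * math.pi * u * sy * sz) * lateral * vertical

# Ground-level concentration on the plume centerline, 500 m downwind
c = plume_concentration(Q=10.0, u=4.0, x=500.0, y=0.0, z=0.0, H=20.0)
```

An area-source model like CAREA evaluates essentially this kernel for many sub-elements of each polygon and sums the contributions at every receptor, which is why kernel speed dominates the total run time.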
International Nuclear Information System (INIS)
Inomata, Yayoi; Kajino, Mizuo; Sato, Keiichi; Ohara, Toshimasa; Kurokawa, Jun-ichi; Ueda, Hiromasa; Tang, Ning; Hayakawa, Kazuichi; Ohizumi, Tsuyoshi; Akimoto, Hajime
2013-01-01
We analyzed the source–receptor relationships for particulate polycyclic aromatic hydrocarbon (PAH) concentrations in northeastern Asia using an aerosol chemical transport model. The model successfully simulated the observed concentrations. In Beijing (China), benzo[a]pyrene (BaP) concentrations are due to emissions from its own domain. In Noto, Oki and Tsushima (Japan), transboundary transport from northern China (>40°N, 40–60%) and central China (30–40°N, 10–40%) largely influences BaP concentrations from winter to spring, whereas the relative contribution from central China is dominant (90%) in Hedo. In the summer, the contribution from Japanese domestic sources increases (40–80%) at the four sites. Contributions from Japan and Russia are additional sources of BaP over the northwestern Pacific Ocean in summer. The contribution rates from each domain differ among PAH species depending on their particulate-phase oxidation rates. Reaction with O₃ on particulate surfaces may be an important component of the PAH oxidation processes. -- Highlights: •Source–receptor analysis was conducted to investigate PAHs in northeast Asia. •In winter, transboundary transport from China contributes strongly at leeward sites. •The relative contribution from Korea, Japan, and eastern Russia increases in summer. •This seasonal variation is strongly controlled by meteorological conditions. •The transport distance differs among PAH species. -- Transboundary transport of PAHs in northeast Asia was investigated by source–receptor analysis
2011 Radioactive Materials Usage Survey for Unmonitored Point Sources
Energy Technology Data Exchange (ETDEWEB)
Sturgeon, Richard W. [Los Alamos National Laboratory
2012-06-27
organized. The RMUS Interview Form with the attached RMUS Process Form(s) provides the radioactive materials survey data by technical area (TA) and building number. The survey data for each release point includes information such as: exhaust stack identification number, room number, radioactive material source type (i.e., potential source or future potential source of air emissions), radionuclide, usage (in curies) and usage basis, physical state (gas, liquid, particulate, solid, or custom), release fraction (from Appendix D to 40 CFR 61, Subpart H), and process descriptions. In addition, the interview form also calculates emissions (in curies), lists mrem/Ci factors, calculates PEDEs, and states the location of the critical receptor for that release point. [The critical receptor is the maximum exposed off-site member of the public, specific to each individual facility.] Each of these data fields is described in this section. The Tier classification of release points, which was first introduced with the 1999 usage survey, is also described in detail in this section. Section 4 includes a brief discussion of the dose estimate methodology, and includes a discussion of several release points of particular interest in the CY 2011 usage survey report. It also includes a table of the calculated PEDEs for each release point at its critical receptor. Section 5 describes ES's approach to Quality Assurance (QA) for the usage survey. Satisfactory completion of the survey requires that team members responsible for Rad-NESHAP (National Emissions Standard for Hazardous Air Pollutants) compliance accurately collect and process several types of information, including radioactive materials usage data, process information, and supporting information. They must also perform and document the QA reviews outlined in Section 5.2.6 (Process Verification and Peer Review) of ES-RN, 'Quality Assurance Project Plan for the Rad-NESHAP Compliance Project' to verify that all information is
Energy Technology Data Exchange (ETDEWEB)
Konheiser, Joerg [Helmholtz-Zentrum Dresden-Rossendorf e.V., Dresden (Germany). Reactor Safety; Ferrari, A. [Helmholtz-Zentrum Dresden-Rossendorf e.V., Dresden (Germany). Inst. of Radiation Physics; Magin, A. [Karlsruher Institut fuer Technologie (KIT), Karlsruhe (Germany); Naumann, B. [Helmholtz-Zentrum Dresden-Rossendorf e.V., Dresden (Germany). Dept. of Radiation Protection and Safety; Mueller, S.E.
2017-06-01
The neutron source terms for a proton beam hitting an ¹⁸O-enriched water target were calculated with the radiation transport programs MCNP6 and FLUKA and were compared to source terms for exclusive ¹⁸O(p,n)¹⁸F production. To validate the radiation fields obtained in the simulations, an experimental program has been started using activation samples originally used in reactor dosimetry.
A source term and risk calculations using level 2+PSA methodology
International Nuclear Information System (INIS)
Park, S. I.; Jea, M. S.; Jeon, K. D.
2002-01-01
The scope of Level 2+ PSA includes the assessment of the dose risk associated with exposures to the radioactive nuclides escaping from nuclear power plants during severe accidents. The establishment of a database for exposure dose in Korean nuclear power plants may contribute to preparing accident management programs and periodic safety reviews. In this study the ORIGEN, MELCOR and MACCS codes were employed to produce an integrated framework to assess the radiation source term risk. The framework was applied to a reference plant. Using IPE results, the dose rate for the reference plant was calculated quantitatively
Directory of Open Access Journals (Sweden)
Silva A.A.
1999-01-01
Full Text Available Alterations in extracellular matrix (ECM) expression in the central nervous system (CNS), usually associated with inflammatory lesions, have been described in several pathological situations including neuroblastoma and demyelinating diseases. The participation of fibronectin (FN) and its receptor, the VLA-4 molecule, in the migration of inflammatory cells into the CNS has been proposed. In Trypanosoma cruzi infection, encephalitis occurs during the acute phase, whereas in Toxoplasma infection encephalitis is a chronic persisting process. In immunocompromised individuals such as AIDS patients, T. cruzi or T. gondii infection can lead to severe CNS damage. At the moment, there are no data available regarding the molecules involved in the entrance of inflammatory cells into the CNS during parasitic encephalitis. Herein, we characterized the expression of the ECM components FN and laminin (LN) and their receptors in the CNS of T. gondii- and T. cruzi-infected mice. An increased expression of FN and LN was detected in the meninges, leptomeninges, choroid plexus and basal lamina of blood vessels. A fine FN network was observed involving T. gondii-free and T. gondii-containing inflammatory infiltrates. Moreover, perivascular spaces presenting a FN-containing filamentous network filled with α4+ and α5+ cells were observed. Although an increased expression of LN was detected in the basal lamina of blood vessels, the CNS inflammatory cells were α6-negative. Taken together, our results suggest that FN and its receptors VLA-4 and VLA-5 might be involved in the entrance, migration and retention of inflammatory cells into the CNS during parasitic infections.
Calculation of source term in spent PWR fuel assemblies for dry storage and shipping cask design
International Nuclear Information System (INIS)
Fernandez, J. L.; Lopez, J.
1986-01-01
Using the ORIGEN-2 code, the decay heat and the neutron and photon sources for an irradiated PWR fuel element have been calculated. Also, parametric studies on the behaviour of these magnitudes with burn-up, linear heat power, and irradiation and cooling times were performed. Finally, a comparison between our results and other design calculations shows good agreement and confirms the validity of the method used. (Author) 6 refs
Axial-Current Matrix Elements in Light Nuclei from Lattice QCD
Energy Technology Data Exchange (ETDEWEB)
Savage, Martin [Univ. of Washington, Seattle, WA (United States); Shanahan, Phiala E. [Massachusetts Inst. of Technology (MIT), Cambridge, MA (United States); Tiburzi, Brian C. [Univ. of Maryland, College Park, MD (United States); Wagman, Michael L. [Univ. of Washington, Seattle, WA (United States); Winter, Frank T. [Thomas Jefferson National Accelerator Facility (TJNAF), Newport News, VA (United States); Beane, Silas [Univ. of New Hampshire, Durham, NH (United States); Chang, Emmanuel [Univ. of Washington, Seattle, WA (United States); Davoudi, Zohreh; Detmold, William [Massachusetts Inst. of Technology (MIT), Cambridge, MA (United States); Orginos, Konstantinos [Thomas Jefferson National Accelerator Facility (TJNAF), Newport News, VA (United States); College of William and Mary, Williamsburg, VA (United States)
2016-12-01
I present results from the first lattice QCD calculations of axial-current matrix elements in light nuclei, performed by the NPLQCD collaboration. Precision calculations of these matrix elements, and the subsequent extraction of multi-nucleon axial-current operators, are essential in refining theoretical predictions of the proton-proton fusion cross section, neutrino-nucleus cross sections and $\beta\beta$-decay rates of nuclei. In addition, they are expected to shed light on the phenomenological quenching of $g_A$ that is required in nuclear many-body calculations.
Automated calculation of matrix elements and physics motivated observables
Was, Z.
2017-11-01
The central aspect of my personal scientific activity has been calculations useful for the interpretation of high-energy accelerator experimental results, especially in the domain of precision tests of the Standard Model. My activities started in the early 1980s, when computer support for algebraic manipulations was in its infancy. But already then it was important for my work. It brought a multitude of benefits, but at the price of some inconvenience for physics intuition. Calculations became more complex, work had to be distributed over teams of researchers, and due to automatization some aspects of the intermediate results became more difficult to identify. In my talk I will not be exhaustive; I will present examples from my personal research only: (i) calculations of spin effects for the process e⁺e⁻ → τ⁺τ⁻γ at PETRA/PEP energies; calculations (with the help of the Grace system of the Minami-Tateya group) and phenomenology of spin amplitudes for (ii) e⁺e⁻ → 4f and for (iii) e⁺e⁻ → ν_eν̄_eγγ processes; (iv) phenomenology of CP-sensitive observables for Higgs boson parity in H → τ⁺τ⁻, τ± → ν2(3)π cascade decays.
Aggarwal, Kanti M.
2018-03-01
The paper "Electron impact excitation of N-like ions from the ICFT R-matrix calculation" by Wang et al. [1] lacks details of calculations, presents only limited data, and has a few anomalies, as listed below.
Sparse-matrix factorizations for fast symmetric Fourier transforms
International Nuclear Information System (INIS)
Sequel, J.
1987-01-01
This work proposes new fast algorithms for computing the discrete Fourier transform of certain families of symmetric sequences. Sequences commonly found in problems of structure determination by x-ray crystallography and in numerical solutions of boundary-value problems in partial differential equations are dealt with. In the algorithms presented, the redundancies in the input and output data, due to the presence of symmetries in the input data sequence, are eliminated. Using ring-theoretical methods, a matrix representation is obtained for the remaining calculations, which factors as the product of a complex block-diagonal matrix times an integral matrix. A basic two-step algorithm scheme arises from this factorization, with a first step consisting of pre-additions and a second step containing the calculations involved in computing with the blocks in the block-diagonal factor. These blocks are structured as block-Hankel matrices, and two sparse-matrix factoring formulas are developed in order to diminish their arithmetic complexity
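The redundancy-elimination idea can be illustrated on the simplest case: a real even-symmetric sequence has a purely real DFT that can be computed from the nonredundant half alone via cosines. The function below is a hypothetical illustration, not the paper's factorization:

```python
import numpy as np

def dft_even_symmetric(half):
    """DFT of an even-symmetric sequence x[n] = x[(N - n) % N], N even,
    computed from only the nonredundant half x[0..N/2] via cosine sums."""
    half = np.asarray(half, dtype=float)   # x[0], x[1], ..., x[N//2]
    M = len(half) - 1
    N = 2 * M
    k = np.arange(N)
    # endpoints appear once; interior samples appear twice (n and N - n)
    X = half[0] + ((-1.0) ** k) * half[M]
    for n in range(1, M):
        X += 2.0 * half[n] * np.cos(2 * np.pi * k * n / N)
    return X                               # purely real spectrum
```

Half the input is never touched, which is exactly the kind of redundancy the paper's sparse factorizations remove in a more general setting.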
Goel, Peeyush N; Gude, Rajiv P
2014-02-01
Pentoxifylline (PTX) is a methylxanthine derivative that improves blood flow by decreasing its viscosity. Being an inhibitor of platelet aggregation, it can thus reduce the adhesiveness of cancer cells prolonging their circulation time. This delay in forming secondary tumours makes them more prone to immunological surveillance. Recently, we have evaluated its anti-metastatic efficacy against breast cancer, using MDA-MB-231 model system. In view of this, we had ascertained the effect of PTX on adhesion of MDA-MB-231 cells to extracellular matrix components (ECM) and its allied receptors such as the integrins. PTX affected adhesion of breast cancer cells to matrigel, collagen type IV, fibronectin and laminin in a dose dependent manner. Further, PTX showed a differential effect on integrin expression profile. The experimental metastasis model using NOD-SCID mice showed lesser tumour island formation when treated with PTX compared to the control. These findings further substantiate the anti-adhesive potential of PTX in breast cancer and warrant further insights into the functional regulation. Copyright © 2013 Elsevier Masson SAS. All rights reserved.
Directory of Open Access Journals (Sweden)
Samin Poudel
2017-11-01
Full Text Available The refractive index (RI) is an important parameter in describing the radiative impacts of aerosols. It is important to constrain the RI of aerosol components, since there is still significant uncertainty regarding the RI of biomass burning aerosols. Experimentally measured extinction cross-sections, scattering cross-sections, and single scattering albedos for white pine biomass burning (BB) aerosols under two different burning and sampling conditions were modeled using T-matrix theory. The refractive indices were extracted from these calculations. Experimental measurements were conducted using a cavity ring-down spectrometer to measure the extinction, and a nephelometer to measure the scattering, of size-selected aerosols. BB aerosols were obtained by burning white pine using (1) an open fire in a burn drum, where the aerosols were collected in distilled water using an impinger and then re-aerosolized after several days, and (2) a tube furnace to directly introduce the BB aerosols into an indoor smog chamber, where the BB aerosols were then sampled directly. In both cases, filter samples were also collected, and electron microscopy images were used to obtain the morphology and size information used in the T-matrix calculations. The effective radius of the particles collected on filter media from the open fire was approximately 245 nm, whereas it was approximately 76 nm for particles from the tube furnace burns. For samples collected in distilled water, the real part of the RI increased with increasing particle size, and the imaginary part decreased. The imaginary part of the RI was also significantly larger than the reported values for fresh BB aerosol samples. For the particles generated in the tube furnace, the real part of the RI decreased with particle size, and the imaginary part was much smaller and nearly constant. The RI is sensitive to particle size and sampling method, but there was no wavelength dependence over the range considered (500
Matrix analysis of electrical machinery
Hancock, N N
2013-01-01
Matrix Analysis of Electrical Machinery, Second Edition is a 14-chapter edition that covers the systematic analysis of electrical machinery performance. This edition discusses the principles of various mathematical operations and their application to electrical machinery performance calculations. The introductory chapters deal with the matrix representation of algebraic equations and their application to static electrical networks. The following chapters describe the fundamentals of different transformers and rotating machines and present torque analysis in terms of the currents based on the p
A transilient matrix for moist convection
Energy Technology Data Exchange (ETDEWEB)
Romps, D.; Kuang, Z.
2011-08-15
A method is introduced for diagnosing a transilient matrix for moist convection. This transilient matrix quantifies the nonlocal transport of air by convective eddies: for every height z, it gives the distribution of starting heights z{prime} for the eddies that arrive at z. In a cloud-resolving simulation of deep convection, the transilient matrix shows that two-thirds of the subcloud air convecting into the free troposphere originates from within 100 m of the surface. This finding clarifies which initial height to use when calculating convective available potential energy from soundings of the tropical troposphere.
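The structure of such a matrix is easy to show with invented numbers (the values below are illustrative only, loosely echoing the finding that much of the air aloft originates near the surface):

```python
import numpy as np

# Toy transilient matrix on 4 levels: entry T[i, j] is the fraction of
# air arriving at level i that started at level j, so each row sums to 1.
T = np.array([
    [0.70, 0.20, 0.08, 0.02],   # lowest level: mostly local air
    [0.50, 0.30, 0.15, 0.05],   # strong nonlocal input from the surface
    [0.40, 0.20, 0.30, 0.10],
    [0.35, 0.15, 0.20, 0.30],   # top level still draws heavily on level 0
])

def transport(tracer):
    """Mix a tracer profile with the transilient matrix."""
    return T @ np.asarray(tracer, dtype=float)

# A surface-concentrated tracer spreads upward in one convective step:
profile = transport([1.0, 0.0, 0.0, 0.0])
```

Reading down a column of T gives the fate of air starting at one height; reading across a row gives the origin distribution that the abstract describes for arrivals at height z.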
International Nuclear Information System (INIS)
Ablinger, J.; Schneider, C.; Manteuffel, A. von
2015-09-01
Three loop ladder and V-topology diagrams contributing to the massive operator matrix element A_Qg are calculated. The corresponding objects can all be expressed in terms of nested sums and recurrences depending on the Mellin variable N and the dimensional parameter ε. Given these representations, the desired Laurent series expansions in ε can be obtained with the help of our computer algebra toolbox. Here we rely on generalized hypergeometric functions and Mellin-Barnes representations, on difference ring algorithms for symbolic summation, on an optimized version of the multivariate Almkvist-Zeilberger algorithm for symbolic integration, and on new methods to calculate Laurent series solutions of coupled systems of differential equations. The solutions can be computed for general coefficient matrices directly for any basis also performing the expansion in the dimensional parameter in case it is expressible in terms of indefinite nested product-sum expressions. This structural result is based on new results of our difference ring theory. In the cases discussed we deal with iterative sum- and integral-solutions over general alphabets. The final results are expressed in terms of special sums, forming quasi-shuffle algebras, such as nested harmonic sums, generalized harmonic sums, and nested binomially weighted (cyclotomic) sums. Analytic continuations to complex values of N are possible through the recursion relations obeyed by these quantities and their analytic asymptotic expansions. The latter lead to a host of new constants beyond the multiple zeta values, the infinite generalized harmonic and cyclotomic sums in the case of V-topologies.
Ablinger, J.; Behring, A.; Blümlein, J.; De Freitas, A.; von Manteuffel, A.; Schneider, C.
2016-05-01
Three loop ladder and V-topology diagrams contributing to the massive operator matrix element A_Qg are calculated. The corresponding objects can all be expressed in terms of nested sums and recurrences depending on the Mellin variable N and the dimensional parameter ε. Given these representations, the desired Laurent series expansions in ε can be obtained with the help of our computer algebra toolbox. Here we rely on generalized hypergeometric functions and Mellin-Barnes representations, on difference ring algorithms for symbolic summation, on an optimized version of the multivariate Almkvist-Zeilberger algorithm for symbolic integration, and on new methods to calculate Laurent series solutions of coupled systems of differential equations. The solutions can be computed for general coefficient matrices directly for any basis also performing the expansion in the dimensional parameter in case it is expressible in terms of indefinite nested product-sum expressions. This structural result is based on new results of our difference ring theory. In the cases discussed we deal with iterative sum- and integral-solutions over general alphabets. The final results are expressed in terms of special sums, forming quasi-shuffle algebras, such as nested harmonic sums, generalized harmonic sums, and nested binomially weighted (cyclotomic) sums. Analytic continuations to complex values of N are possible through the recursion relations obeyed by these quantities and their analytic asymptotic expansions. The latter lead to a host of new constants beyond the multiple zeta values, the infinite generalized harmonic and cyclotomic sums in the case of V-topologies.
International Nuclear Information System (INIS)
Chen Zhenpeng; Qi Huiquan
1990-01-01
A comprehensive R-matrix analysis code has been developed. It is based on multichannel, multilevel R-matrix theory and runs on a VAX computer with FORTRAN-77. With this code, many kinds of experimental data for one nuclear system can be fitted simultaneously. Comparisons between the RAC code and the EDA code of LANL were made. The data show that both codes produce the same calculation results when one set of R-matrix parameters is used. The differential cross section of ¹⁰B(n,α)⁷Li for E_n = 0.4 MeV and the polarization of ¹⁶O(n,n)¹⁶O for E_n = 2.56 MeV are presented
International Nuclear Information System (INIS)
Heeb, C.M.
1991-03-01
The ORIGEN2 computer code is the primary calculational tool for computing isotopic source terms for the Hanford Environmental Dose Reconstruction (HEDR) Project. The ORIGEN2 code computes the amounts of radionuclides that are created or remain in spent nuclear fuel after neutron irradiation and radioactive decay have occurred as a result of nuclear reactor operation. ORIGEN2 was chosen as the primary code for these calculations because it is widely used and accepted by the nuclear industry, both in the United States and the rest of the world. Its comprehensive library of over 1,600 nuclides includes any possible isotope of interest to the HEDR Project. It is important to evaluate the uncertainties expected from use of ORIGEN2 in the HEDR Project because these uncertainties may have a pivotal impact on the final accuracy and credibility of the results of the project. There are three primary sources of uncertainty in an ORIGEN2 calculation: basic nuclear data uncertainty in neutron cross sections, radioactive decay constants, energy per fission, and fission product yields; calculational uncertainty due to input data; and code uncertainties (i.e., numerical approximations, and neutron spectrum-averaged cross-section values from the code library). 15 refs., 5 figs., 5 tabs
International Nuclear Information System (INIS)
Mihalczo, J.T.; Valentine, T.E.
1995-01-01
The development of MCNP-DSP, which allows direct calculation of the measured time and frequency analysis parameters from subcritical measurements using the ²⁵²Cf-source-driven noise analysis method, permits the validation of calculational methods for criticality safety with in-plant subcritical measurements. In addition, a method of obtaining the bias in the calculations, which is essential to the criticality safety specialist, is illustrated using the results of measurements with 17.771-cm-diam, enriched (93.15%), unreflected, and unmoderated uranium metal cylinders. For these uranium metal cylinders the bias obtained using MCNP-DSP and ENDF/B-V cross-section data increased with subcriticality. For a critical experiment [height (h) = 12.629 cm], it was -0.0061 ± 0.0003. For a 10.16-cm-high cylinder (k ∼ 0.93), it was 0.0060 ± 0.0016, and for a subcritical cylinder (h = 8.13 cm, k ∼ 0.85), the bias was -0.0137 ± 0.0037, more than a factor of 2 larger in magnitude. This method allows the nuclear criticality safety specialist to establish the bias in calculational methods for criticality safety from in-plant subcritical measurements by the ²⁵²Cf-source-driven noise analysis method
Test of Effective Solid Angle code for the efficiency calculation of volume source
Energy Technology Data Exchange (ETDEWEB)
Kang, M. Y.; Kim, J. H.; Choi, H. D. [Seoul National Univ., Seoul (Korea, Republic of); Sun, G. M. [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)
2013-10-15
It is hard to determine a full-energy (FE) absorption peak efficiency curve for an arbitrary volume source by experiment. For this reason, simulation and semi-empirical methods have been preferred so far, and many works have progressed in various ways. Moens et al. introduced the concept of the effective solid angle by considering the attenuation of γ-rays in the source, the media and the detector; this concept is the basis of a semi-empirical method. An Effective Solid Angle code (ESA code) has been developed over several years by the Applied Nuclear Physics Group at Seoul National University. The ESA code converts an experimental FE efficiency curve, determined using a standard point source, into the curve for a volume source. To test the performance of the ESA code, we measured γ-ray point standard sources and voluminous certified reference material (CRM) sources, and compared the results with the efficiency curves obtained in this study. The 200–1500 keV energy region is fitted well. NIST X-ray mass attenuation coefficient data are currently used to check the effect of linear attenuation only. We will use interaction cross-section data obtained from the XCOM code to check each contributing factor, such as the photoelectric effect, incoherent scattering and coherent scattering, in the future. In order to minimize the calculation time and simplify the code, optimization of the algorithm is needed.
R-matrix calculations for electron impact excitation and their application in astrophysical plasmas
International Nuclear Information System (INIS)
Liang, G Y; Badnell, N R; Zhao, G; Del Zanna, G; Mason, H E; Storey, P J
2012-01-01
The large number of high-resolution spectra routinely recorded in the astrophysical and fusion communities leads to the need for an extensive set of accurate baseline atomic data. The advantages of the intermediate-coupling frame transformation (ICFT) R-matrix method make it feasible to provide excitation data along iso-electronic sequences (Z ≤ 36) at the high level of accuracy afforded by the R-matrix method. The resultant data helps to overcome the longstanding shortcomings in X-ray and EUV astronomy. This is one of the key goals of the UK Atomic Processes for Astrophysical Plasmas (APAP) network.
Application of the R-matrix method to photoionization of molecules.
Tashiro, Motomichi
2010-04-07
The R-matrix method has been used for theoretical calculations of electron collisions with atoms and molecules for many years. The method was also formulated to treat the photoionization process; however, its application has been mostly limited to photoionization of atoms. In this work, we implement the R-matrix method to treat the molecular photoionization problem based on the UK R-matrix codes. This method can be used for diatomic as well as polyatomic molecules, with a multiconfigurational description of the electronic states of both the target neutral molecule and the product molecular ion. Test calculations were performed for valence electron photoionization of nitrogen (N₂) as well as nitric oxide (NO) molecules. The calculated photoionization cross sections and asymmetry parameters agree reasonably well with the available experimental results, suggesting the usefulness of the method for molecular photoionization.
Universality of correlation functions in random matrix models of QCD
International Nuclear Information System (INIS)
Jackson, A.D.; Sener, M.K.; Verbaarschot, J.J.M.
1997-01-01
We demonstrate the universality of the spectral correlation functions of a QCD inspired random matrix model that consists of a random part having the chiral structure of the QCD Dirac operator and a deterministic part which describes a schematic temperature dependence. We calculate the correlation functions analytically using the technique of Itzykson-Zuber integrals for arbitrary complex supermatrices. An alternative exact calculation for arbitrary matrix size is given for the special case of zero temperature, and we reproduce the well-known Laguerre kernel. At finite temperature, the microscopic limit of the correlation functions are calculated in the saddle-point approximation. The main result of this paper is that the microscopic universality of correlation functions is maintained even though unitary invariance is broken by the addition of a deterministic matrix to the ensemble. (orig.)
A modified receptor model for source apportionment of heavy metal pollution in soil.
Huang, Ying; Deng, Meihua; Wu, Shaofu; Japenga, Jan; Li, Tingqiang; Yang, Xiaoe; He, Zhenli
2018-07-15
Source apportionment is a crucial step toward the reduction of heavy metal pollution in soil. Existing methods are generally based on receptor models. However, overestimation or underestimation occurs when they are applied to heavy metal source apportionment in soil. Therefore, a modified model (PCA-MLRD) was developed, based on principal component analysis (PCA) and multiple linear regression with distance (MLRD). This model was applied to a case study conducted in a peri-urban area in southeast China where soils were contaminated by arsenic (As), cadmium (Cd), mercury (Hg) and lead (Pb). Compared with existing models, PCA-MLRD is able to identify specific sources and quantify the extent of influence of each emission. The zinc (Zn)-Pb mine was identified as the most important anthropogenic emission; it affected approximately half of the area for Pb and As accumulation and approximately one third for Cd. Overall, the influence extent of the anthropogenic emissions decreased in the order of mine (3 km) > dyeing mill (2 km) ≈ industrial hub (2 km) > fluorescent factory (1.5 km) > road (0.5 km). Although the algorithm still needs to be improved, the PCA-MLRD model has the potential to become a useful tool for heavy metal source apportionment in soil. Copyright © 2018 Elsevier B.V. All rights reserved.
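The abstract does not specify the PCA-MLRD algorithm, but the general idea of combining PCA scores with a regression on source distance can be sketched. In this minimal toy (all names and the inverse-distance predictor are my assumptions, not the published model), a large coefficient on a source's inverse distance flags that source as driving the pollution pattern:

```python
import numpy as np

def pca_mlrd_sketch(C, D):
    """Toy PCA + regression-on-distance source apportionment.

    C : (samples, metals) concentration matrix.
    D : (samples, sources) distance from each sample to each candidate source.
    Returns regression coefficients of the first principal-component score
    on inverse distance to each source.
    """
    Z = (C - C.mean(0)) / C.std(0)          # standardize the metals
    _, _, Vt = np.linalg.svd(Z, full_matrices=False)
    score = Z @ Vt[0]                        # first PC score per sample
    X = np.column_stack([np.ones(len(D)), 1.0 / D])   # intercept + 1/distance
    beta, *_ = np.linalg.lstsq(X, score, rcond=None)
    return beta[1:]                          # one coefficient per source
```

On synthetic data where a single source controls all metals, the coefficient for that source dominates while the others stay near zero.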
Energy Technology Data Exchange (ETDEWEB)
Nogueira, Larissa Goncalves; Dedecca, Joao Gorestein; Jannuzzi, Gilberto de Martinno [Universidade Estadual de Campinas (UNICAMP), SP (Brazil). Fac. de Engenharia Mecanica; Gomes, Rodolfo Dourado [International Energy Initiative (IEI), Campinas, SP (Brazil)
2010-07-01
The Brazilian power matrix is among the cleanest in the world due to the large share of hydroelectric generation. In recent years, several efforts have concentrated on diversifying the matrix through the insertion of other renewable alternative sources. The aim of this study is to analyze the state of generation through biomass, wind and small hydropower sources, covered by specific auctions and the Proinfa program, and of solar energy (photovoltaic and high-temperature thermal) in Brazil, as well as the development trends of these generation sources. (author)
Frequency-domain elastic full waveform inversion using encoded simultaneous sources
Jeong, W.; Son, W.; Pyun, S.; Min, D.
2011-12-01
Currently, numerous studies have endeavored to develop robust full waveform inversion and migration algorithms. These processes require enormous computational costs because of the number of sources in the survey. To avoid this problem, the phase encoding technique for prestack migration was proposed by Romero (2000), and Krebs et al. (2009) proposed the encoded simultaneous-source inversion technique in the time domain. On the other hand, Ben-Hadj-Ali et al. (2011) demonstrated the robustness of frequency-domain full waveform inversion with simultaneous sources for noisy data by changing the source assembling. Although several studies on simultaneous-source inversion tried to estimate P-wave velocity based on the acoustic wave equation, seismic migration and waveform inversion based on the elastic wave equations are required to obtain more reliable subsurface information. In this study, we propose a 2-D frequency-domain elastic full waveform inversion technique using phase encoding methods. In our algorithm, the random phase encoding method is employed to calculate the gradients of the elastic parameters, the source signature estimate, and the diagonal entries of the approximate Hessian matrix. As with the gradients, the crosstalk in the estimated source signature and in the diagonal entries of the approximate Hessian matrix is suppressed as the iterations proceed. Our 2-D frequency-domain elastic waveform inversion algorithm is composed using the back-propagation technique and the conjugate-gradient method. The source signature is estimated using the full Newton method. We compare the simultaneous-source inversion with conventional waveform inversion for synthetic data sets of the Marmousi-2 model. The inverted results obtained with simultaneous sources are comparable to those obtained with individual sources, and the source signature is successfully estimated in the simultaneous-source technique. Comparing the inverted results using the pseudo Hessian matrix with previous inversion results
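The crosstalk-suppression mechanism behind random phase encoding can be shown with a toy model (all arrays are synthetic stand-ins, not wavefields from any actual solver): encoding each source with a random ±1 sign and correlating the summed fields introduces cross terms e_i·e_j whose expectation vanishes for i ≠ j, so re-drawn codes average toward the conventional per-source gradient:

```python
import numpy as np

rng = np.random.default_rng(1)
ns, n = 8, 32                      # number of sources, model size
F = rng.standard_normal((ns, n))   # per-source "forward" fields (toy)
B = rng.standard_normal((ns, n))   # per-source "back-propagated residuals" (toy)

g_true = (F * B).sum(axis=0)       # conventional gradient: one term per source

def encoded_gradient(rng):
    e = rng.choice([-1.0, 1.0], size=ns)   # random-phase (sign) code
    return (e @ F) * (e @ B)               # one simultaneous-source correlation

# Averaging over re-drawn codes suppresses the e_i * e_j crosstalk terms:
g_avg = np.mean([encoded_gradient(rng) for _ in range(20000)], axis=0)
```

A single encoded shot costs one simulation instead of `ns`, which is the source of the computational savings the abstract exploits; the residual crosstalk decays as codes are re-drawn each iteration.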
International Nuclear Information System (INIS)
Johnson, Jeffrey O.; Gallmeier, Franz X.; Popova, Irina
2002-01-01
Determining the bulk shielding requirements for accelerator environments is generally an easy task compared to analyzing the radiation transport through the complex shield configurations and penetrations typically associated with the detailed Title II design efforts of a facility. Shielding calculations for penetrations in the SNS accelerator environment are presented based on hybrid Monte Carlo and discrete ordinates particle transport methods. This methodology relies on coupling tools that map boundary surface leakage information from the Monte Carlo calculations to boundary sources for one-, two-, and three-dimensional discrete ordinates calculations. The paper briefly introduces the tools for coupling MCNPX to the one-, two-, and three-dimensional discrete ordinates codes in the DOORS code suite, and presents typical applications of these tools in the design of complex shield configurations and penetrations in the SNS proton beam transport system
Hessian matrix approach for determining error field sensitivity to coil deviations
Zhu, Caoxiang; Hudson, Stuart R.; Lazerson, Samuel A.; Song, Yuntao; Wan, Yuanxi
2018-05-01
The presence of error fields has been shown to degrade plasma confinement and drive instabilities. Error fields can arise from many sources, but are predominantly attributed to deviations in the coil geometry. In this paper, we introduce a Hessian matrix approach for determining error field sensitivity to coil deviations. A primary cost function used for designing stellarator coils, the surface integral of normalized normal field errors, was adopted to evaluate the deviation of the generated magnetic field from the desired magnetic field. The FOCUS code (Zhu et al 2018 Nucl. Fusion 58 016008) is utilized to provide fast and accurate calculations of the Hessian. The sensitivities of error fields to coil displacements are then determined by the eigenvalues of the Hessian matrix. A proof-of-principle example is given on a CNT-like configuration. We anticipate that this new method could provide information to avoid dominant coil misalignments and simplify coil designs for stellarators.
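The sensitivity analysis described above reduces to an eigen-decomposition of the Hessian of the error-field cost with respect to coil perturbations. A minimal sketch with a random stand-in Hessian (not FOCUS output) shows how the eigenvalues rank perturbation directions:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 12                            # hypothetical number of coil degrees of freedom
A = rng.normal(size=(n, n))
H = A @ A.T                       # stand-in symmetric Hessian (not FOCUS output)

# Around an optimized coil set x*, the gradient vanishes, so the error-field
# cost responds quadratically: f(x* + dx) ~ f(x*) + 0.5 dx^T H dx.
evals, evecs = np.linalg.eigh(H)  # eigenvalues in ascending order

worst = evecs[:, -1]              # most sensitive perturbation direction
best = evecs[:, 0]                # most tolerant direction
eps = 1e-3                        # small coil deviation amplitude
df_worst = 0.5 * eps**2 * evals[-1]
df_best = 0.5 * eps**2 * evals[0]
print(df_worst / max(df_best, 1e-300))   # sensitivity ratio between directions
```

Directions with large eigenvalues are the misalignments to guard against; directions with small eigenvalues indicate tolerances that can be relaxed, simplifying coil fabrication.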
Pae, So Hyun; Dokic, Danijela; Dettman, Robert W
2008-04-01
Formation of the epicardium requires interactions between alpha(4)beta(1) integrin, and the extracellular matrix. We investigated the role of other integrins expressed by epicardial cells. We detected transcripts for alpha(5), alpha(8), alpha(v), beta(1), beta(3), and beta(5) integrins in the chick proepicardial organ (PE). We demonstrate that alpha(5)beta(1), alpha(8)beta(1), and alpha(v)beta(3) integrins are expressed by chick epicardial mesothelial cells (EMCs). Migration of EMCs in vitro was reduced by RGD-containing peptides. Using adenoviruses expressing an antisense to chick alpha(4) (AdGFPalpha4AS), full-length (Adhalpha4V5), and C-terminal deleted alpha(4) (Adhalpha4DeltaCV5), we found that EMCs were less able to adhere to vitronectin and fibronectin(120) indicating that alpha(4)beta(1) plays a role in regulating EMC adhesion to ligands of alpha(5)beta(1), alpha(8)beta(1), and alpha(v)beta(3). In Adhalpha4DeltaCV5-infected EMCs, alpha(5)beta(1) was diminished in fibrillar adhesions and new FN matrix assembly was abnormal. We propose that cooperation between alpha(4)beta(1) and RGD integrins is important for EMC adhesion and subepicardial matrix formation. (c) 2008 Wiley-Liss, Inc.
DEFF Research Database (Denmark)
Gloriam, David Erik Immanuel; Foord, Steven M; Blaney, Frank E
2009-01-01
currently available crystal structures. This was used to characterize pharmacological relationships of Family A/Rhodopsin family GPCRs, minimizing evolutionary influence from parts of the receptor that do not generally affect ligand binding. The resultant dendrogram tended to group receptors according...
Block Tridiagonal Matrices in Electronic Structure Calculations
DEFF Research Database (Denmark)
Petersen, Dan Erik
in the Landauer–Büttiker ballistic transport regime. These calculations concentrate on determining the so-called Green's function matrix, or portions thereof, which is the inverse of a block tridiagonal general complex matrix. To this end, a sequential algorithm based on Gaussian elimination named Sweeps...
The collagen receptor uPARAP/Endo180
DEFF Research Database (Denmark)
Engelholm, Lars H; Ingvarsen, Signe; Jürgensen, Henrik J
2009-01-01
The uPAR-associated protein (uPARAP/Endo180), a type-1 membrane protein belonging to the mannose receptor family, is an endocytic receptor for collagen. Through this endocytic function, the protein takes part in a previously unrecognized mechanism of collagen turnover. uPARAP/Endo180 can bind and internalize both intact and partially degraded collagens. In some turnover pathways, the function of the receptor probably involves an interplay with certain matrix-degrading proteases whereas, in other physiological processes, redundant mechanisms involving both endocytic and pericellular collagenolysis seem... in collagen breakdown seems to be involved in invasive tumor growth
Program package for calculating matrix elements of two-cluster structures in nuclei
International Nuclear Information System (INIS)
Krivec, R.; Mihailovic, M.V.; Kernforschungszentrum Karlsruhe G.m.b.H.
1982-01-01
Matrix elements of operators between Slater determinants of two-cluster structures must be expanded into partial waves for the purpose of angular momentum projection. The expansion coefficients contain integrals over the spherical angles theta and phi. (orig.)
Noniterative MAP reconstruction using sparse matrix representations.
Cao, Guangzhi; Bouman, Charles A; Webb, Kevin J
2009-09-01
We present a method for noniterative maximum a posteriori (MAP) tomographic reconstruction which is based on the use of sparse matrix representations. Our approach is to precompute and store the inverse matrix required for MAP reconstruction. This approach has generally not been used in the past because the inverse matrix is typically large and fully populated (i.e., not sparse). In order to overcome this problem, we introduce two new ideas. The first idea is a novel theory for the lossy source coding of matrix transformations which we refer to as matrix source coding. This theory is based on a distortion metric that reflects the distortions produced in the final matrix-vector product, rather than the distortions in the coded matrix itself. The resulting algorithms are shown to require orthonormal transformations of both the measurement data and the matrix rows and columns before quantization and coding. The second idea is a method for efficiently storing and computing the required orthonormal transformations, which we call a sparse-matrix transform (SMT). The SMT is a generalization of the classical FFT in that it uses butterflies to compute an orthonormal transform; but unlike an FFT, the SMT uses the butterflies in an irregular pattern, and is numerically designed to best approximate the desired transforms. We demonstrate the potential of the noniterative MAP reconstruction with examples from optical tomography. The method requires offline computation to encode the inverse transform. However, once these offline computations are completed, the noniterative MAP algorithm is shown to reduce both storage and computation by well over two orders of magnitude, as compared to linear iterative reconstruction methods.
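The core intuition of matrix source coding can be sketched in a few lines. This is an illustrative stand-in only: an SVD basis plays the role of the paper's sparse-matrix transform, and the matrix is synthetic; the point is that quantizing in an orthonormal basis that compacts the matrix's energy controls distortion in the matrix-vector product rather than in the stored coefficients.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 64
t = np.linspace(0, 1, n)
# A smooth, highly correlated matrix (a synthetic stand-in for a dense
# tomographic inverse; NOT the paper's operator)
H = np.exp(-((t[:, None] - t[None, :]) ** 2) / 0.02)

def quantize(M, step):
    """Uniform scalar quantizer, the lossy-coding step."""
    return step * np.round(M / step)

x = rng.normal(size=n)
step = 0.05

# (a) naive lossy coding: quantize H entrywise
err_naive = np.linalg.norm(H @ x - quantize(H, step) @ x)

# (b) matrix source coding idea: decorrelate rows/columns with orthonormal
# transforms first (an SVD basis stands in for the sparse-matrix transform),
# quantize the compacted coefficients, then transform back
U, s, Vt = np.linalg.svd(H)
H_coded = U @ np.diag(quantize(s, step)) @ Vt
err_coded = np.linalg.norm(H @ x - H_coded @ x)
print(err_naive, err_coded)   # distortion in the matrix-vector product
```

Because the transforms are orthonormal, the product error in case (b) is bounded by half the quantization step times the norm of the input, independent of matrix size; most coefficients quantize to zero, which is what makes the coded representation sparse and cheap to apply.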
Energy Technology Data Exchange (ETDEWEB)
Dong, J; Sarkar, A; Hoffmann, P [Wayne State University, Detroit, MI (United States); Suhail, A; Fridman, R [Wayne State University School of Medicine, Detroit, MI (United States)
2016-06-15
Purpose: Discoidin domain receptors (DDR) have recently been recognized as important players in cancer progression. DDRs are cell receptors that interact with collagen, an extracellular matrix (ECM) protein. However, the detailed mechanism of their interaction is unclear. Here we attempted to examine their interaction in terms of structural (surface topography), mechanical (rupture force), and kinetic (binding probability) information on the single-molecule scale with the use of atomic force microscopy (AFM). Methods: The Quantitative Nano-mechanical property Mapping (QNM) mode of AFM allowed the cells to be assessed in liquid growth media at their optimal physiological conditions while remaining viable. The human benign prostatic hyperplasia (BPH-1) cell line was genetically regulated to suppress DDR expression (DDR- cells) and was compared with naturally DDR-expressing cells (DDR+). Results: Binding force measurements (n = 1000) were obtained before and after the two groups were treated with fibronectin (FN), an integrin-inhibiting antibody, to block the binding of integrin. The quantification indicates that cells containing DDR bind with collagen at a most probable force of 80.3–83.0 ±7.6 pN. The probability of binding is 0.167 when other interactions (mainly due to integrin-collagen binding) are minimized. Conclusion: Further force measurements at different pulling speeds will determine the dissociation rate, binding distance and activation barrier. These parameters in benign cells provide some groundwork for understanding DDR's behavior in various cell microenvironments such as in malignant tumor cells. Funding supported by the Richard Barber Interdisciplinary Research Program of Wayne State University.
Asmussen, Niels; Lin, Zhao; McClure, Michael J; Schwartz, Zvi; Boyan, Barbara D
2017-12-09
Endochondral bone formation is a precise and highly ordered process whose exact regulatory framework is still being elucidated. Multiple regulatory pathways are known to be involved. In some cases, regulation impacts gene expression, resulting in changes in chondrocyte phenotypic expression and extracellular matrix synthesis. Rapid regulatory mechanisms are also involved, resulting in release of enzymes, factors and microRNAs stored in extracellular matrisomes called matrix vesicles. Vitamin D metabolites modulate endochondral development via both genomic and rapid membrane-associated signaling pathways. 1α,25-dihydroxyvitamin D3 [1α,25(OH)2D3] acts through the vitamin D receptor (VDR) and a membrane-associated receptor, protein disulfide isomerase A3 (PDIA3). 24R,25-dihydroxyvitamin D3 [24R,25(OH)2D3] affects primarily chondrocytes in the resting zone (RC) of the growth plate, whereas 1α,25(OH)2D3 affects cells in the prehypertrophic and upper hypertrophic cell zones (GC). This includes genomically directing the cells to produce matrix vesicles with zone-specific characteristics. In addition, vitamin D metabolites produced by the cells interact directly with the matrix vesicle membrane via rapid signal transduction pathways, modulating their activity in the matrix. The matrix vesicle payload is able to rapidly impact the extracellular matrix via matrix processing enzymes as well as providing a feedback mechanism to the cells themselves via the contained microRNAs. Copyright © 2017. Published by Elsevier Inc.
Symplectic matrix, gauge invariance and Dirac brackets for super-QED
Energy Technology Data Exchange (ETDEWEB)
Alves, D.T. [Centro Brasileiro de Pesquisas Fisicas (CBPF), Rio de Janeiro, RJ (Brazil); Cheb-Terrab, E.S. [British Columbia Univ., Vancouver, BC (Canada). Dept. of Mathematics
1999-08-01
The calculation of Dirac brackets (DB) using a symplectic matrix approach but in a Hamiltonian framework is discussed, and the calculation of the DB for the supersymmetric extension of QED (super-QED) is shown. The relation between the zero-mode of the pre-symplectic matrix and the gauge transformations admitted by the model is verified. A general description to construct Lagrangians linear in the velocities is also presented. (author)
Something different - caching applied to calculation of impedance matrix elements
CSIR Research Space (South Africa)
Lysko, AA
2012-09-01
Full Text Available ...of the multipliers, the approximating functions are used to obtain any required parameters, such as input impedance or gain pattern, etc. The method is relatively straightforward but, especially for small to medium matrices, requires spending time on filling... the impedance matrix for the method of moments, or a similar method such as the boundary element method (BEM) [22]. [Figure 1: flowchart of the caching procedure — (a) search the cached data for a match, (b) a match found...]
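The caching idea can be sketched as memoization of the expensive kernel evaluations (a hypothetical 1-D geometry and kernel, not the authors' code): for translationally invariant problems, many impedance entries share the same relative geometry, so a cache keyed on the quantized source-observer distance avoids recomputing them.

```python
import numpy as np
from functools import lru_cache

k = 2 * np.pi            # assumed free-space wavenumber (wavelength = 1)
# Hypothetical uniform 1-D segmentation: 40 segment centres on a 2 m wire
seg = np.linspace(0, 2, 41)[:-1] + 0.025

calls = {"n": 0}

@lru_cache(maxsize=None)
def kernel(dist_key):
    """Expensive Green's-function-like kernel; dist_key is a quantized distance."""
    calls["n"] += 1
    r = dist_key * 1e-6                  # undo the integer quantization
    return np.exp(-1j * k * r) / r if r > 0 else 0j   # self term handled elsewhere

def impedance(i, j):
    d = abs(seg[i] - seg[j])
    return kernel(round(d * 1e6))        # equal spacings share one cache entry

n = len(seg)
Z = np.array([[impedance(i, j) for j in range(n)] for i in range(n)])
print(calls["n"], n * n)   # kernel evaluated once per unique spacing, not n^2 times
```

For this uniform mesh, only 40 unique spacings occur, so the kernel runs 40 times instead of 1600; for non-uniform meshes the hit rate drops, which matches the abstract's observation that the payoff depends on the matrix filling pattern.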
Emergy Algebra: Improving Matrix Methods for Calculating Tranformities
Transformity is one of the core concepts in Energy Systems Theory and it is fundamental to the calculation of emergy. Accurate evaluation of transformities and other emergy per unit values is essential for the broad acceptance, application and further development of emergy method...
International Nuclear Information System (INIS)
Santos, William S.; Carvalho Junior, Alberico B. de; Pereira, Ariana J.S.; Santos, Marcos S.; Maia, Ana F.
2011-01-01
In this paper, conversion coefficients (CCs) from air kerma to equivalent and effective dose, as suggested by ICRP 74, were calculated. These dose coefficients were calculated for a plane, monoenergetic radiation source with energies varying from 10 keV to 2 MeV. The CCs were obtained for four irradiation geometries: anterior-posterior, posterior-anterior, right lateral and left lateral. The radiation transport code Visual Monte Carlo (VMC) and an anthropomorphic voxel simulator of a seated female were used. The differences observed in the CC values for the four irradiation scenarios are a direct result of the disposition of the body organs and of the distance of these organs from the irradiation source. The obtained CCs will allow more precise dose estimates in situations where the exposed individual is seated, as the CCs available in the literature were calculated using simulators that are always lying down or standing
Source apportionment of trace metals in river sediments: A comparison of three methods
International Nuclear Information System (INIS)
Chen, Haiyang; Teng, Yanguo; Li, Jiao; Wu, Jin; Wang, Jinsheng
2016-01-01
Increasing trace metal pollution in river sediment poses a significant threat to watershed ecosystem health. Identifying potential sources of sediment metals and apportioning their contributions are of key importance for proposing prevention and control strategies for river pollution. In this study, three advanced multivariate receptor models, factor analysis with nonnegative constraints (FA-NNC), positive matrix factorization (PMF), and multivariate curve resolution weighted-alternating least-squares (MCR-WALS), were comparatively employed for source apportionment of trace metals in river sediments and applied to the Le'an River, a main tributary of Poyang Lake, the largest freshwater lake in China. The pollution assessment with the contamination factor and geoaccumulation index suggested that the river sediments in the Le'an River were severely contaminated by trace metals due to human activities. With the three apportionment tools, similar source profiles of trace metals in sediments were extracted. In particular, the MCR-WALS and PMF models produced essentially the same results. Comparatively speaking, the weighted schemes might give better solutions than the unweighted FA-NNC because the uncertainty information of the environmental data is considered by PMF and MCR-WALS. Anthropogenic sources were apportioned as the most important pollution sources influencing the sediment metals in the Le'an River, with contributions of about 90%. Among them, copper tailings occupied the largest contribution (38.4–42.2%), followed by mining wastewater (29.0–33.5%) and agricultural activities (18.2–18.7%). To protect the ecosystem of the Le'an River and Poyang Lake, special attention should be paid to the discharges of mining wastewater and the leachates of copper tailing ponds in that region. - Highlights: • Three advanced receptor models were comparatively employed for source apportionment. • The MCR-WALS and PMF models produce essentially the same source profiles. • Copper
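The receptor-model factorizations compared above can be illustrated with a simplified, unweighted stand-in: plain nonnegative matrix factorization on synthetic data (real PMF additionally weights residuals by measurement uncertainties, which is the "weighted scheme" advantage the abstract notes). The data matrix of samples-by-metals is split into nonnegative source contributions and source profiles.

```python
import numpy as np

rng = np.random.default_rng(3)
# Synthetic sediment data: 60 samples x 6 metals from 3 hypothetical sources
F_true = rng.uniform(0.1, 1.0, size=(3, 6))       # source profiles
G_true = rng.uniform(0.0, 2.0, size=(60, 3))      # source contributions
X = G_true @ F_true

# Plain NMF via Lee-Seung multiplicative updates, a simplified, unweighted
# stand-in for PMF (no uncertainty weighting, no rotational controls)
G = rng.uniform(0.1, 1.0, size=(60, 3))
F = rng.uniform(0.1, 1.0, size=(3, 6))
for _ in range(2000):
    G *= (X @ F.T) / (G @ F @ F.T + 1e-12)
    F *= (G.T @ X) / (G.T @ G @ F + 1e-12)

share = G.sum(axis=0) / G.sum()   # overall contribution share per factor
print(np.linalg.norm(X - G @ F) / np.linalg.norm(X), sorted(share))
```

The per-factor shares play the role of the percentage contributions reported in the abstract (e.g. tailings vs. wastewater vs. agriculture), while the rows of F are the extracted source profiles compared across the three models.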
Correlation functions of two-matrix models
International Nuclear Information System (INIS)
Bonora, L.; Xiong, C.S.
1993-11-01
We show how to calculate correlation functions of two-matrix models without any approximation technique (except for the genus expansion). In particular, we do not use any continuum-limit technique. This allows us to find many solutions which are invisible to the latter technique. To reach our goal we make full use of the integrable hierarchies and their reductions, which were shown in previous papers to appear naturally in multi-matrix models. The second ingredient we use, though to a lesser extent, is the W-constraints. In fact, an explicit solution of the relevant hierarchy, satisfying the W-constraints (string equation), underlies the explicit calculation of the correlation functions. The correlation functions we compute lend themselves to a possible interpretation in terms of topological field theories. (orig.)
Calculations of dosimetric parameters and REM meter response for Be(α, n) sources
International Nuclear Information System (INIS)
Chen Changmao
1988-01-01
Based on recent data on neutron spectra, the average energy, effective energy and fluence-to-dose-equivalent conversion coefficient are calculated for some Be(α, n) neutron sources of different types and structures. The responses of the 2202D and 0075 REM meters to these spectral neutrons are also estimated. The results indicate that the relationship between average energy and conversion coefficient, or REM meter response, can be described by simple functions
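The spectrum-averaged quantities referred to above are fluence-weighted means over the source spectrum. A sketch with an illustrative, made-up group structure (the energies, fluence fractions and coefficients below are placeholders, not tabulated data):

```python
import numpy as np

# Hypothetical 5-group spectrum for a Be(α, n)-type source; h values are
# illustrative fluence-to-dose-equivalent coefficients (pSv·cm²), NOT
# tabulated reference data.
E = np.array([0.5, 1.0, 2.0, 4.0, 6.0])         # group energies, MeV
phi = np.array([0.10, 0.20, 0.30, 0.25, 0.15])  # normalized fluence spectrum
h = np.array([285.0, 320.0, 380.0, 400.0, 405.0])

E_avg = (phi * E).sum() / phi.sum()   # spectrum-averaged energy
h_avg = (phi * h).sum() / phi.sum()   # spectrum-averaged conversion coefficient
print(E_avg, h_avg)
```

A REM meter's spectrum response is averaged the same way, which is why simple functions of the average energy can track both quantities across source types.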
International Nuclear Information System (INIS)
Gritzay, O.; Kalchenko, O.; Klimova, N.; Razbudey, V.; Sanzhur, A.
2006-01-01
Calculation results are presented for an epithermal neutron source which can be created at the Kyiv Research Reactor (KRR) by placing specially selected moderators, filters, collimators and shielding into the 10th horizontal experimental tube (the so-called thermal column). The general Monte Carlo radiation transport code MCNP4C [1], the Oak Ridge isotope generation code ORIGEN2 [2] and the NJOY99 [3] nuclear data processing system have been used for these calculations
Barker, S A; Long, A R
1994-01-01
The use of drugs to maintain the health and maximize the output of dairy cattle has made the monitoring of milk for such agents essential. Screening tests based on immunological, microbial inhibition, and bacterial receptor assays have been developed for the detection of violative levels of therapeutic substances. However, such assays are not infallible, and false positive or negative results can occur when contaminants bind receptors or compete for the binding of the target residues. Such effects may arise from dietary sources, diseases, or other variables. Thus, a violation by such a test is not definitive until further confirmation is obtained. Our laboratory has developed extraction procedures for several drugs used in dairy production. Our method uses matrix solid-phase dispersion (MSPD) to isolate drugs away from contaminants and to eliminate many possible interferences. MSPD can also be used to enhance the specificity of such assays by fractionating various classes of drugs that may cross-react. Similarly, such methods may be used for liquid chromatographic screening and confirmation of a suspect sample.
Absorption properties of waste matrix materials
Energy Technology Data Exchange (ETDEWEB)
Briggs, J.B. [Idaho National Engineering Lab., Idaho Falls, ID (United States)
1997-06-01
This paper very briefly discusses the need for studies of the limiting critical concentration of radioactive waste matrix materials. Calculated limiting critical concentration values for some common waste materials are listed. However, for systems containing large quantities of waste materials, differences of up to 10% in calculated k_eff values are obtained by changing cross-section data sets. Therefore, experimental results are needed to compare with calculation results for resolving these differences and establishing realistic biases.
Hadronic matrix elements in the QCD on the lattice
International Nuclear Information System (INIS)
Altmeyer, R.
1995-01-01
The work describes a lattice simulation of full QCD with dynamical Kogut-Susskind fermions. We evaluated different hadronic matrix elements which are related to the static and low-energy behaviour of hadrons. The analysis was performed on a 16^3 x 24 lattice with a coupling constant of β = 5.35 and a quark mass of m = 0.010. The calculations are based on a set of 85 configurations created by using a Hybrid Monte Carlo algorithm. First we evaluated the mass and energy spectrum of the low-lying hadrons using local as well as non-local operators. As the complete spectrum of the different pion and ρ meson lattice representations has been calculated, we were able to check the restoration of continuum flavor symmetry. Moreover, the determination of energies E of hadron states with non-vanishing momentum q made it possible to investigate the lattice dispersion relation E(q). Another part of the presented work is the determination of the mesonic decay constants which parameterise the weak decay of mesons. They are related to hadronic matrix elements of the respective quark currents and, through the calculation of these matrix elements, we were able to determine the decay constants f_π and f_ρ. Before doing so, we calculated non-perturbative renormalization constants for the currents under consideration. The next part is the determination of hadronic coupling constants. These parameterise, in an effective low-energy model, the interactions of different hadrons. They are related to hadronic matrix elements whose lattice calculation can be done by evaluating 3-point correlation functions. Thus we evaluated the hadronic coupling constants g_ρππ and g_NNπ. Finally, an investigation of the pion-nucleon σ-term was done. The σ-term is defined through a hadronic matrix element of a quark-antiquark operator and can thus be evaluated on the lattice via the calculation of a 3-point correlation function. As we determined the connected and the disconnected
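The mass-spectrum extraction mentioned above can be illustrated with an idealized two-point correlator (a toy sketch, ignoring backward propagation around the periodic time extent and excited-state contamination): the correlator decays exponentially with the ground-state mass, which is read off from the effective-mass plateau.

```python
import numpy as np

T, m_true, amp = 24, 0.45, 1.3     # hypothetical time extent, mass, amplitude
t = np.arange(T)
C = amp * np.exp(-m_true * t)      # idealized 2-point correlator C(t) ~ A e^(-mt)

# Effective mass: m_eff(t) = log(C(t)/C(t+1)); in this idealized case it is a
# flat plateau at the true mass. In real data one fits the plateau region.
m_eff = np.log(C[:-1] / C[1:])
print(m_eff[:4])
```

Three-point correlators, used in the abstract for coupling constants and the σ-term, generalize this: ratios to two-point functions isolate the desired matrix element once the exponential time dependence cancels.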
A 2D/1D coupling neutron transport method based on the matrix MOC and NEM methods
International Nuclear Information System (INIS)
Zhang, H.; Zheng, Y.; Wu, H.; Cao, L.
2013-01-01
A new 2D/1D coupling method based on the matrix MOC method (MMOC) and nodal expansion method (NEM) is proposed for solving the three-dimensional heterogeneous neutron transport problem. The MMOC method, used for radial two-dimensional calculation, constructs a response matrix between source and flux with only one sweep and then solves the linear system by using the restarted GMRES algorithm instead of the traditional trajectory sweeping process during within-group iteration for angular flux update. Long characteristics are generated by using the customization of commercial software AutoCAD. A one-dimensional diffusion calculation is carried out in the axial direction by employing the NEM method. The 2D and 1D solutions are coupled through the transverse leakage terms. The 3D CMFD method is used to ensure the global neutron balance and adjust the different convergence properties of the radial and axial solvers. A computational code is developed based on these theories. Two benchmarks are calculated to verify the coupling method and the code. It is observed that the corresponding numerical results agree well with references, which indicates that the new method is capable of solving the 3D heterogeneous neutron transport problem directly. (authors)
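The algorithmic switch described above, solving the within-group equations as one linear system instead of repeated sweeping, can be sketched with a toy fixed-point system (a numpy stand-in: the paper applies restarted GMRES to the MMOC response matrix, while `np.linalg.solve` stands in for the Krylov solver here).

```python
import numpy as np

rng = np.random.default_rng(4)
n = 120
# Stand-in within-group system phi = T phi + q, i.e. (I - T) phi = q, where
# applying T plays the role of one transport sweep over the scattering source.
T = 0.5 * rng.uniform(0, 1, size=(n, n)) / n   # contraction, spectral radius << 1
q = rng.uniform(0, 1, size=n)                  # external/fission source
A = np.eye(n) - T

# (a) traditional within-group iteration: sweep until the source converges
phi_sweep = np.zeros(n)
for _ in range(200):
    phi_sweep = T @ phi_sweep + q

# (b) treat the same equations as one linear system and hand them to a solver,
# as the MMOC does with restarted GMRES (np.linalg.solve stands in here)
phi_solve = np.linalg.solve(A, q)

print(np.max(np.abs(phi_sweep - phi_solve)))   # the two approaches agree
```

The advantage of the Krylov route grows when the fixed-point iteration converges slowly (scattering ratio near one), which is the regime the restarted GMRES formulation targets.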
Neutron spectra produced by moderating an isotopic neutron source
International Nuclear Information System (INIS)
Carrillo Nunnez, Aureliano; Vega Carrillo, Hector Rene
2001-01-01
A Monte Carlo study has been carried out to determine the neutron spectra produced by an isotopic neutron source inserted in moderating media. Most devices used for radiation protection have a response strongly dependent on neutron energy. ISO recommends several neutron sources and monoenergetic neutron radiations, but actual working situations have broad spectral neutron distributions extending from thermal to MeV energies, for instance near nuclear power plants, around medical accelerators, and due to cosmic neutrons. To improve the evaluation of the dosimetric quantities, it is recommended to calibrate radiation protection devices in neutron spectra which are similar to those met in practice. In order to complete the range of neutron calibrating sources, it seems useful to develop several wide spectral distributions representative of typical spectra down to thermal energies. The aim of this investigation was to use an isotopic neutron source in different moderating media to reproduce some of the neutron fields found in practice. The MCNP code has been used for the calculations, in which a 239PuBe neutron source was inserted in H2O, D2O and polyethylene moderators. Moderators were modeled as spheres and cylinders of different sizes. In the case of cylindrical geometry the anisotropy of the resulting neutron spectra was calculated from 0 to 2π. From the neutron spectra, dosimetric features were calculated. The MCNP calculations were validated by measuring the neutron spectrum of a 239PuBe neutron source inserted in a cylindrical H2O moderator. The measurements were carried out with a multisphere neutron spectrometer with a 6LiI(Eu) scintillator. From the measurements the neutron spectrum was unfolded using the BUNKIUT code and the UTA4 response matrix. Some of the moderators with the source produce a neutron spectrum close to spectra found in actual applications, and can thus be used in the calibration of radiation protection devices
Calculation of the source term for a S1B-sequence at a VVER-1000 type reactor. Part 1
International Nuclear Information System (INIS)
Sdouz, G.
1990-10-01
The behaviour of the source term in a VVER-1000 type reactor is calculated using the Source Term Code Package (STCP). The input data are based on the Russian plant Zaporozhye-5. The selected accident sequence is a small-break LOCA in the hot leg followed by loss of offsite and onsite electric power (S1B-sequence). Following the course of the calculation, the results are presented and analyzed for each program. Except for the noble gases, all release fractions are lower than 10^-4. 18 refs., 10 tabs. (Author)
Liu, C; Liu, J; Yao, Y X; Wu, P; Wang, C Z; Ho, K M
2016-10-11
We recently proposed the correlation matrix renormalization (CMR) theory to treat the electronic correlation effects [Phys. Rev. B 2014, 89, 045131 and Sci. Rep. 2015, 5, 13478] in ground state total energy calculations of molecular systems using the Gutzwiller variational wave function (GWF). By adopting a number of approximations, the computational effort of the CMR can be reduced to a level similar to Hartree-Fock calculations. This paper reports our recent progress in minimizing the error originating from some of these approximations. We introduce a novel sum-rule correction to obtain a more accurate description of the intersite electron correlation effects in total energy calculations. Benchmark calculations are performed on a set of molecules to show the reasonable accuracy of the method.
International Nuclear Information System (INIS)
Landau-Levin, Mary; Chao, Clifford K.S.
1996-01-01
membrane receptors were used. Immunoperoxidase staining was performed. The resulting integrin expression was evaluated by two observers in a blinded fashion. The consensus score (CS) was calculated by the following equation: CS = Σ Pi(i + 1), where i = 1, 2, 3 (indicating weak, moderate, or strong staining) and Pi is the percentage of stained epithelial cells for each intensity, varying from 0 to 100%. Multiple logistic regression was performed to evaluate the association between integrin expression and invasive/metastatic propensity in these four groups. Associations were obtained using the Mantel-Haenszel correlation and P values were calculated by Fisher's exact test. Results and Conclusions: We found that expression of the αv domain was significantly associated with the presence of LNM (P<0.01). Expression of the β4 domain decreased in tumors invading the outer third of the cervical stroma (P<0.02). The correlation between integrin expression, adjusting for other known confounding prognosticators, in predicting the aggressiveness of human cervical cancer and the potential clinical application in selecting tumors suitable for radiation therapy will be discussed
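As a worked example of the consensus-score formula, with a hypothetical staining distribution (the percentages below are illustrative, not study data):

```python
# Hypothetical staining distribution: 20% weak, 30% moderate, 50% strong
P = {1: 20, 2: 30, 3: 50}            # i = 1 (weak), 2 (moderate), 3 (strong)
CS = sum(P[i] * (i + 1) for i in P)  # CS = sum_i Pi * (i + 1)
print(CS)  # 20*2 + 30*3 + 50*4 = 330
```

The weights (i + 1) make strong staining count twice as much as weak staining, so the score ranges from 0 (no staining) to 400 (100% strong).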
Murni, Bustamam, A.; Ernastuti, Handhika, T.; Kerami, D.
2017-07-01
Calculation of matrix-vector multiplication in real-world problems often involves large matrices of arbitrary size, so parallelization is needed to speed up a calculation that usually takes a long time. The graph partitioning techniques discussed in previous studies cannot be used to parallelize matrix-vector multiplication with matrices of arbitrary size, because graph partitioning assumes a square, symmetric matrix. Hypergraph partitioning techniques overcome this shortcoming of graph partitioning. This paper addresses the efficient parallelization of matrix-vector multiplication through hypergraph partitioning using CUDA GPU-based parallel computing. CUDA (compute unified device architecture) is a parallel computing platform and programming model created by NVIDIA and implemented on the GPU (graphics processing unit).
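As a minimal sketch of the idea (not of the paper's CUDA implementation): once a partitioner has assigned row blocks to workers, each worker computes its slice of y = A·x independently. The even row split below merely stands in for a real hypergraph partition, which would instead balance nonzeros and minimize communication:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def parallel_spmv(A, x, n_workers=4):
    """Row-block parallel y = A @ x for an arbitrary (non-square) matrix.

    A real hypergraph partitioner would pick the blocks to balance nonzeros
    and minimize communication between workers; here rows are split evenly.
    """
    m = A.shape[0]
    bounds = np.linspace(0, m, n_workers + 1, dtype=int)
    y = np.empty(m)

    def work(k):
        lo, hi = bounds[k], bounds[k + 1]
        y[lo:hi] = A[lo:hi] @ x        # each worker fills a disjoint slice

    with ThreadPoolExecutor(n_workers) as pool:
        list(pool.map(work, range(n_workers)))
    return y

A = np.arange(12, dtype=float).reshape(3, 4)   # 3x4: neither square nor symmetric
x = np.ones(4)
print(parallel_spmv(A, x))   # identical to A @ x
```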
Calculating ε'/ε in the standard model
International Nuclear Information System (INIS)
Sharpe, S.R.
1988-01-01
The ingredients needed in order to calculate ε' and ε are described. Particular emphasis is given to the non-perturbative calculations of matrix elements by lattice methods. The status of the electromagnetic contribution to ε' is reviewed. 15 refs
International Nuclear Information System (INIS)
Timothy D. Scheibe; Eric E. Roden; Scott C. Brooks; John M. Zachara
2004-01-01
The original hypothesis: 'Radionuclides in low-permeability porous matrix regions of fractured saprolite can be effectively isolated and immobilized by stimulating localized in-situ biological activity in highly-permeable fractured and microfractured zones within the saprolite'. The revised hypothesis: 'In heterogeneous porous media, microbial activity can be stimulated at interfaces between zones of high and low groundwater flow rates in such a manner as to create a local, distributed redox barrier. Such a barrier will inhibit the transfer of contaminants from the low-flow zones that serve as long-term contaminant sources into the high-flow zones that transport contaminants to receptors'.
International Nuclear Information System (INIS)
Nikonowicz, E.P.; Meadows, R.P.; Gorenstein, D.G.
1990-01-01
Until very recently, interproton distances from NOESY experiments have been derived solely from the two-spin approximation method. Unfortunately, even at short mixing times, there is a significant error in many of these distances. A complete relaxation matrix approach employing a matrix eigenvalue/eigenvector solution to the Bloch equations avoids the approximation of the two-spin method. The authors calculated the structure of an extrahelical adenosine tridecamer oligodeoxyribonucleotide duplex, d-(CGCAGAATTCGCG)2, by an iterative refinement approach using a hybrid relaxation matrix method combined with restrained molecular dynamics calculations. Distances from the 2D NOESY spectra have been calculated from the relaxation rate matrix, which has been evaluated from a hybrid NOESY volume matrix comprising elements from the experiment and those calculated from an initial structure. The hybrid matrix derived distances have then been used in a restrained molecular dynamics procedure to obtain a new structure that better approximates the NOESY spectra. The resulting partially refined structure is then used to calculate an improved theoretical NOESY volume matrix, which is once again merged with the experimental matrix until refinement is complete. Although the crystal structure of the tridecamer clearly shows the extrahelical adenosine looped out away from the duplex, the NOESY distance restrained hybrid matrix/molecular dynamics structural refinement establishes that the extrahelical adenosine stacks into the duplex
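The gap between the two-spin approximation and the full relaxation-matrix treatment can be sketched numerically. The 3-spin relaxation-rate matrix below is hypothetical and serves only to show how the initial-rate estimate degrades at longer mixing times:

```python
import numpy as np

# Full-relaxation-matrix NOESY volumes, A(tau_m) = exp(-R*tau_m), compared with
# the two-spin (initial-rate) approximation A_ij ~ -R_ij*tau_m. The symmetric
# 3-spin relaxation-rate matrix below is hypothetical, for illustration only.
R = np.array([[ 2.0, -0.5, -0.1],
              [-0.5,  2.5, -0.8],
              [-0.1, -0.8,  3.0]])          # rates in s^-1

def noesy_volumes(R, tau_m):
    w, V = np.linalg.eigh(R)                # R is symmetric: eigendecompose
    return (V * np.exp(-w * tau_m)) @ V.T   # matrix exponential exp(-R*tau_m)

for tau_m in (0.01, 0.1, 0.5):              # mixing times (s)
    exact = noesy_volumes(R, tau_m)[0, 1]
    two_spin = -R[0, 1] * tau_m
    print(f"tau_m={tau_m}: exact={exact:.4f}  two-spin={two_spin:.4f}")
# The two-spin estimate is adequate only at short mixing times; at longer
# tau_m, spin diffusion makes the full matrix treatment necessary.
```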
International Nuclear Information System (INIS)
Brown, T.W.
2010-11-01
The same complex matrix model calculates both tachyon scattering for the c=1 non-critical string at the self-dual radius and certain correlation functions of half-BPS operators in N=4 super- Yang-Mills. It is dual to another complex matrix model where the couplings of the first model are encoded in the Kontsevich-like variables of the second. The duality between the theories is mirrored by the duality of their Feynman diagrams. Analogously to the Hermitian Kontsevich- Penner model, the correlation functions of the second model can be written as sums over discrete points in subspaces of the moduli space of punctured Riemann surfaces. (orig.)
The improved Apriori algorithm based on matrix pruning and weight analysis
Lang, Zhenhong
2018-04-01
This paper draws on matrix compression and weight analysis algorithms to propose an improved Apriori algorithm based on matrix pruning and weight analysis. After the transactional database is scanned only once, the algorithm constructs a boolean transaction matrix. By counting the 1s in the rows and columns of the matrix, the infrequent item sets are pruned and a new candidate item set is formed. Then the item weights, the transaction weights, and the weighted support for items are calculated, yielding the frequent item sets. Experimental results show that the improved Apriori algorithm not only reduces the number of repeated scans of the database, but also improves the efficiency of data correlation mining.
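A minimal sketch of the boolean-matrix pruning step, on toy transactions rather than the paper's data:

```python
import numpy as np

# Sketch of the matrix-pruning step: build the boolean transaction matrix in a
# single database scan, then prune items whose column count of 1s falls below
# the minimum support. Transactions here are toy data, not from the paper.
transactions = [{"a", "b", "c"}, {"a", "c"}, {"b", "d"}, {"a", "c", "d"}]
items = sorted(set().union(*transactions))
M = np.array([[item in t for item in items] for t in transactions], dtype=int)

min_support = 3
support = M.sum(axis=0)            # number of 1s in each item's column
frequent = [it for it, s in zip(items, support) if s >= min_support]
print(frequent)  # ['a', 'c']
```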
Mass balance modelling of contaminants in river basins: a flexible matrix approach.
Warren, Christopher; Mackay, Don; Whelan, Mick; Fox, Kay
2005-12-01
A novel and flexible approach is described for simulating the behaviour of chemicals in river basins. A number (n) of river reaches are defined and their connectivity is described by entries in an n x n matrix. Changes in segmentation can be readily accommodated by altering the matrix entries, without the need for model revision. Two models are described. The simpler QMX-R model only considers advection and an overall loss due to the combined processes of volatilization, net transfer to sediment and degradation. The rate constant for the overall loss is derived from fugacity calculations for a single segment system. The more rigorous QMX-F model performs fugacity calculations for each segment and explicitly includes the processes of advection, evaporation, water-sediment exchange and degradation in both water and sediment. In this way chemical exposure in all compartments (including equilibrium concentrations in biota) can be estimated. Both models are designed to serve as intermediate-complexity exposure assessment tools for river basins with relatively low data requirements. By considering the spatially explicit nature of emission sources and the changes in concentration which occur with transport in the channel system, the approach offers significant advantages over simple one-segment simulations while being more readily applicable than more sophisticated, highly segmented, GIS-based models.
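A minimal steady-state sketch of the matrix approach, assuming a simple chain of reaches and a single lumped loss fraction per reach in the spirit of the simpler QMX-R model (topology and all numbers are hypothetical):

```python
import numpy as np

# Minimal steady-state sketch of the matrix approach: reach j discharges into
# reach i where C[i, j] = 1. A fraction f of the chemical load survives each
# reach (advection plus a lumped first-order loss, as in the QMX-R model).
n = 4
C = np.zeros((n, n))
C[1, 0] = 1   # reach 0 -> reach 1
C[2, 1] = 1   # reach 1 -> reach 2
C[3, 2] = 1   # reach 2 -> reach 3
E = np.array([10.0, 0.0, 5.0, 0.0])   # direct emissions to each reach (kg/d)
f = 0.8                               # surviving fraction per reach

# Load leaving reach i: L_i = f * (E_i + sum_j C[i, j] * L_j); solve linearly.
L = np.linalg.solve(np.eye(n) - f * C, f * E)
print(L)   # loads leaving each reach
```

Changing the segmentation only changes the entries of C, which is the flexibility the abstract emphasizes.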
A new approach for calculation of volume confined by ECR surface and its area in ECR ion source
International Nuclear Information System (INIS)
Filippov, A.V.
2007-01-01
The volume confined by the resonance surface and its area are important parameters of the balance equations model for calculation of ion charge-state distribution (CSD) in the electron-cyclotron resonance (ECR) ion source. A new approach for calculation of these parameters is given. This approach allows one to reduce the number of parameters in the balance equations model
QUEUEING DISCIPLINES BASED ON PRIORITY MATRIX
Directory of Open Access Journals (Sweden)
Taufik I. Aliev
2014-11-01
Full Text Available The paper deals with queueing disciplines for demands of general type in queueing systems with multivendor load. A priority matrix is proposed for the mathematical description of such disciplines; it specifies the priority type (preemptive priority, non-preemptive priority, or no priority) between any two demand classes. Besides offering an intuitive and simple way of priority assignment, this description yields mathematical dependencies of the system's operating characteristics on its parameters. Requirements for priority matrix construction are formulated and the notion of a canonical priority matrix is given. It is shown that not every matrix constructed in accordance with these requirements is correct. The notion of an incorrect priority matrix is illustrated by an example, and it is shown that such matrices do not ensure unambiguous and deterministic design of the algorithm realizing the corresponding queueing discipline. Rules governing the construction of correct matrices are given for canonical priority matrices. Residence time of demands of different classes in the system, i.e. the sum of waiting time and service time, is considered as one of the most important characteristics. Using the extra event method, Laplace transforms of these characteristics are obtained, and from them mathematical expressions are derived for calculating the first two moments of the corresponding queueing characteristics of demands.
Energy Technology Data Exchange (ETDEWEB)
Birkholzer, J.; Karasaki, K. [Lawrence Berkeley National Lab., CA (United States). Earth Sciences Div.
1996-07-01
Fracture network simulators have been used extensively in the past for obtaining a better understanding of flow and transport processes in fractured rock. However, most of these models do not account for fluid or solute exchange between the fractures and the porous matrix, although diffusion into the matrix pores can have a major impact on the spreading of contaminants. In the present paper a new finite element code TRIPOLY is introduced which combines a powerful fracture network simulator with an efficient method to account for the diffusive interaction between the fractures and the adjacent matrix blocks. The fracture network simulator used in TRIPOLY features a mixed Lagrangian-Eulerian solution scheme for the transport in fractures, combined with an adaptive gridding technique to account for sharp concentration fronts. The fracture-matrix interaction is calculated with an efficient method which has been successfully used in the past for dual-porosity models. Discrete fractures and matrix blocks are treated as two different systems, and the interaction is modeled by introducing sink/source terms in both systems. It is assumed that diffusive transport in the matrix can be approximated as a one-dimensional process, perpendicular to the adjacent fracture surfaces. A direct solution scheme is employed to solve the coupled fracture and matrix equations. The newly developed combination of the fracture network simulator and the fracture-matrix interaction module allows for detailed studies of spreading processes in fractured porous rock. The authors present a sample application which demonstrates the code's ability to handle large-scale fracture-matrix systems comprising individual fractures and matrix blocks of arbitrary size and shape.
Mizejewski, G J
2015-01-01
Recent studies have demonstrated that the carboxyterminal third domain of alpha-fetoprotein (AFP-CD) binds with various ligands and receptors. Reports within the last decade have established that AFP-CD contains a large fragment of amino acids that interact with several different receptor types. Using computer software specifically designed to identify protein-to-protein interaction at amino acid sequence docking sites, the computer searches identified several types of scavenger-associated receptors and their amino acid sequence locations on the AFP-CD polypeptide chain. The scavenger receptors (SRs) identified were CD36, CD163, Stabilin, SSC5D, SRB1 and SREC; the SR-associated receptors included the mannose, low-density lipoprotein receptors, the asialoglycoprotein receptor, and the receptor for advanced glycation endproducts (RAGE). Interestingly, some SR interaction sites were localized on the AFP-derived Growth Inhibitory Peptide (GIP) segment at amino acids #480-500. Following the detection studies, a structural subdomain analysis of both the receptor and the AFP-CD revealed the presence of epidermal growth factor (EGF) repeats, extracellular matrix-like protein regions, amino acid-rich motifs and dimerization subdomains. For the first time, it was reported that EGF-like sequence repeats were identified on each of the three domains of AFP. Thereafter, the localization of receptors on specific cell types were reviewed and their functions were discussed.
Lipophorin Receptor: The Insect Lipoprotein Receptor
Indian Academy of Sciences (India)
IAS Admin
Director of ... function of the Lp is to deliver lipids throughout the insect body for metabolism ... Lipid is used as a major energy source for development as well as other metabolic .... LpR4 receptor variant was expressed exclusively in the brain and.
Matrix continued-fraction calculation of localization length in disordered systems
International Nuclear Information System (INIS)
Pastawski, H.M.; Weisz, J.F.
1983-01-01
A matrix continued-fraction method is used to study the localization length of the states at the band center of a two-dimensional crystal with disorder given by the Anderson model. It is found that exponentially localized states, which scale according to the work of MacKinnon and Kramer, become weakly localized as the disorder becomes weaker, and there is some critical disorder for which the localization length does not saturate with the width of the strips; this confirms the results found by Pichard and Sarma. Weakly localized states are also found in one dimension for w/v
Fast GPU-based computation of the sensitivity matrix for a PET list-mode OSEM algorithm
Energy Technology Data Exchange (ETDEWEB)
Nassiri, Moulay Ali; Carrier, Jean-Francois [Montreal Univ., QC (Canada). Dept. de Radio-Oncologie; Hissoiny, Sami [Ecole Polytechnique de Montreal, QC (Canada). Dept. de Genie Informatique et Genie Logiciel; Despres, Philippe [Quebec Univ. (Canada). Dept. de Radio-Oncologie
2011-07-01
One of the obstacles to introducing a list-mode PET reconstruction algorithm for routine clinical use is the long computation time required for the sensitivity matrix calculation. This matrix must be computed for each study because it depends on the object attenuation map. During the last decade, studies have shown that 3D list-mode OSEM reconstruction algorithms can be performed effectively and accelerated considerably by GPU devices. However, most of that preliminary work (1) was done for pre-clinical PET systems, in which the number of LORs is small compared to modern human PET systems, and (2) assumed that the sensitivity matrix is pre-calculated. The time required to compute this matrix can, however, be longer than the reconstruction time itself. The objective of this work is to investigate the performance of sensitivity matrix calculations in terms of computation time with modern GPUs, for clinical fully 3D LM-OSEM on modern PET scanners. For this purpose, sensitivity matrix calculations and full list-mode OSEM reconstruction for human PET systems were implemented on GPUs using the CUDA framework. The system matrices were built on-the-fly using the multi-ray Siddon algorithm. The time to compute the sensitivity matrix for 288 x 288 x 57 arrays using 3 tangential LORs was 29 seconds. The 3D LM-OSEM algorithm, including the sensitivity matrix calculation, was performed for the same LORs in 71 seconds for 62 million events, 6 frames and 1 iteration. This work lets us envision fast reconstructions for advanced PET applications such as dynamic studies and parametric image reconstruction. (orig.)
Statistical Origin of Black Hole Entropy in Matrix Theory
International Nuclear Information System (INIS)
Lowe, D.A.
1998-01-01
The statistical entropy of black holes in matrix theory is considered. Assuming matrix theory is the discretized light-cone quantization of a theory with eleven-dimensional Lorentz invariance, we map the counting problem onto the original Gibbons-Hawking calculations of the thermodynamic entropy. copyright 1998 The American Physical Society
Hybrid transfer-matrix FDTD method for layered periodic structures.
Deinega, Alexei; Belousov, Sergei; Valuev, Ilya
2009-03-15
A hybrid transfer-matrix finite-difference time-domain (FDTD) method is proposed for modeling the optical properties of finite-width planar periodic structures. This method can also be applied for calculation of the photonic bands in infinite photonic crystals. We describe the procedure of evaluating the transfer-matrix elements by a special numerical FDTD simulation. The accuracy of the new method is tested by comparing computed transmission spectra of a 32-layered photonic crystal composed of spherical or ellipsoidal scatterers with the results of direct FDTD and layer-multiple-scattering calculations.
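The transfer-matrix half of such a hybrid scheme can be sketched analytically. In the paper the layer matrices are extracted from FDTD runs; here they are the standard characteristic matrices of homogeneous dielectric layers at normal incidence, which suffices to show how layer matrices multiply into a transmission spectrum:

```python
import numpy as np

# Transfer-matrix transmission of a dielectric multilayer at normal incidence
# (Abeles characteristic matrices). In the hybrid method the layer matrices
# come from FDTD simulations; here they are analytic, for illustration only.
def layer_matrix(n, d, wavelength):
    delta = 2 * np.pi * n * d / wavelength   # phase thickness of the layer
    return np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                     [1j * n * np.sin(delta), np.cos(delta)]])

def transmittance(layers, wavelength, n_in=1.0, n_out=1.0):
    M = np.eye(2, dtype=complex)
    for n, d in layers:                      # multiply matrices layer by layer
        M = M @ layer_matrix(n, d, wavelength)
    denom = n_in * M[0, 0] + n_in * n_out * M[0, 1] + M[1, 0] + n_out * M[1, 1]
    t = 2 * n_in / denom
    r = (n_in * M[0, 0] + n_in * n_out * M[0, 1] - M[1, 0] - n_out * M[1, 1]) / denom
    return (n_out / n_in) * abs(t) ** 2, abs(r) ** 2

# Quarter-wave stack of 8 high/low index pairs, probed at its design wavelength
stack = [(2.3, 500 / (4 * 2.3)), (1.5, 500 / (4 * 1.5))] * 8
T, R = transmittance(stack, 500.0)
print(T, R)   # T is tiny inside the stop band; R + T = 1 (lossless layers)
```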
International Nuclear Information System (INIS)
Owens, D.H.
1972-06-01
The KDF9/EGDON programme FADDEEV has been written to investigate a technique for the calculation of the matrix of frequency responses G(jω) describing the response of the output vector y of the multivariable differential/algebraic system S to the drive of the system input vector u. S: E dx/dt = Ax + Bu, y = Cx; G(jω) = C(jωE - A)^(-1) B. The programme uses an algorithm due to Faddeev and has been written with emphasis upon: (a) simplicity of programme structure and computational technique, which should enable a user to find his way through the programme fairly easily, and hence facilitate its manipulation as a subroutine in a larger code; (b) rapid computational ability, particularly for systems with a fairly large number of inputs and outputs requiring the evaluation of the frequency responses at a large number of frequencies. Transport or time delays must be converted by the user to Pade or Bode approximations prior to input. Conditions under which the algorithm fails to give accurate results are identified, and methods for increasing the accuracy of the calculations are discussed. The conditions for accurate results using FADDEEV indicate that its application is specialized. (author)
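A plain numerical equivalent of the quantity FADDEEV computes, using a direct linear solve per frequency rather than the Leverrier-Faddeev recursion (the 2-state system below is hypothetical):

```python
import numpy as np

# Frequency response G(jw) = C (jwE - A)^(-1) B of the descriptor system
# E dx/dt = Ax + Bu, y = Cx. FADDEEV used the Leverrier-Faddeev algorithm;
# a direct linear solve per frequency is the plain numpy equivalent.
def freq_response(E, A, B, C, omegas):
    return np.array([C @ np.linalg.solve(1j * w * E - A, B) for w in omegas])

# Hypothetical 2-state single-input single-output example
E = np.eye(2)
A = np.array([[0.0, 1.0], [-4.0, -0.4]])   # lightly damped oscillator
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])

G = freq_response(E, A, B, C, omegas=[0.5, 2.0, 8.0])
print(np.abs(G).ravel())   # gain peaks near the resonance at w ~ 2 rad/s
```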
Source Signals Separation and Reconstruction Following Principal Component Analysis
Directory of Open Access Journals (Sweden)
WANG Cheng
2014-02-01
Full Text Available For the problem of separating and reconstructing source signals from observed signals, the physical significance of the blind source separation model and independent component analysis is not very clear, and the solution is not unique. To address these disadvantages, a new linear instantaneous mixing model and a novel method for separating and reconstructing source signals from observed signals based on principal component analysis (PCA) are put forward. The new model assumes the source signals are statistically uncorrelated, rather than independent, which differs from the traditional blind source separation model. A one-to-one relationship between the linear instantaneous mixing matrix of the new model and the linear compound matrix of PCA, and a one-to-one relationship between the uncorrelated source signals and the principal components, are demonstrated using the concepts of the linear separation matrix and uncorrelatedness of source signals. Based on this theoretical link, the source signal separation and reconstruction problem is transformed into PCA of the observed signals. The theoretical derivation and numerical simulation results show that, despite Gaussian measurement noise, both waveform and amplitude information of uncorrelated source signals can be separated and reconstructed by PCA when the linear mixing matrix is column-orthogonal and normalized; only waveform information can be separated and reconstructed when the mixing matrix is column-orthogonal but not normalized; and the source signals cannot be separated and reconstructed by PCA when the mixing matrix is not column-orthogonal.
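A minimal numerical illustration of the claimed result for a column-orthogonal, normalized (i.e. rotation) mixing matrix, with synthetic sources and a little Gaussian measurement noise:

```python
import numpy as np

# PCA separation sketch: uncorrelated sources mixed by a column-orthogonal,
# normalized (rotation) matrix are recovered, up to order and sign, as the
# principal components of the observations. Sources and noise are synthetic.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 2000)
S = np.vstack([np.sin(2 * np.pi * 5 * t),               # low-variance source
               np.sign(np.sin(2 * np.pi * 13 * t))])    # high-variance source
S -= S.mean(axis=1, keepdims=True)

theta = 0.6
M = np.array([[np.cos(theta), -np.sin(theta)],          # rotation = column-
              [np.sin(theta),  np.cos(theta)]])         # orthogonal, normalized
X = M @ S + 0.05 * rng.normal(size=S.shape)             # Gaussian measurement noise

cov = X @ X.T / X.shape[1]
_, vecs = np.linalg.eigh(cov)      # eigenvectors of the observation covariance
S_hat = vecs.T @ X                 # principal components = recovered sources

for k in range(2):                 # eigh sorts ascending, matching S's variances
    c = abs(np.corrcoef(S_hat[k], S[k])[0, 1])
    print(f"component {k}: |corr with source {k}| = {c:.3f}")
```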
Tang, Jun; Chen, Qianwei; Guo, Jing; Yang, Liming; Tao, Yihao; Li, Lin; Miao, Hongping; Feng, Hua; Chen, Zhi; Zhu, Gang
2016-04-01
Germinal matrix hemorrhage (GMH) is the most common neurological disease of premature newborns leading to detrimental neurological sequelae. Minocycline has been reported to play a key role in neurological inflammatory diseases by controlling some mechanisms that involve cannabinoid receptor 2 (CB2R). The current study investigated whether minocycline reduces neuroinflammation and protects the brain from injury in a rat model of collagenase-induced GMH by regulating CB2R activity. To test this hypothesis, the effects of minocycline and a CB2R antagonist (AM630) were evaluated in male rat pups at post-natal day 7 (P7) after GMH. We found that minocycline can lead to increased CB2R mRNA expression and protein expression in microglia. Minocycline significantly reduced GMH-induced brain edema, microglial activation, and lateral ventricular volume. Additionally, minocycline enhanced cortical thickness after injury. All of these neuroprotective effects of minocycline were prevented by AM630. A cannabinoid CB2 agonist (JWH133) was used to strengthen the hypothesis, and showed neuroprotective effects identical to those of minocycline. Our study demonstrates, for the first time, that minocycline attenuates neuroinflammation and brain injury in a rat model of GMH, and that activation of CB2R was partially involved in these processes.
Phenomenological model of nanocluster in polymer matrix
International Nuclear Information System (INIS)
Oksengendler, B.L.; Turaeva, N.N.; Azimov, J.; Rashidova, S.Sh.
2010-01-01
The phenomenological model of matrix nanoclusters is presented, based on the Wood-Saxon potential used in nuclear physics. Within this model the following problems have been considered: calculation of the width of the diffusive layer between nanocluster and matrix, determination of the Tamm surface electronic state taking into account the diffusive layer width, and derivation of an expression for the specific magnetic moment of nanoclusters taking into account the interface width. (authors)
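The Wood-Saxon profile underlying the model can be sketched directly (parameter values below are illustrative, not taken from the paper):

```python
import numpy as np

# Wood-Saxon-type confinement profile for the nanocluster-matrix boundary:
# f(r) = 1 / (1 + exp((r - R) / a)), where R is the cluster radius and a
# sets the width of the diffuse interface layer. Parameters are illustrative.
def wood_saxon(r, R=2.0, a=0.2):
    return 1.0 / (1.0 + np.exp((r - R) / a))

r = np.linspace(0, 4, 9)
print(np.round(wood_saxon(r), 3))
# The profile falls from ~1 inside the cluster to ~0 in the matrix over a
# shell of thickness of order a around r = R (the diffuse layer).
```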
Matrix Elements in Fermion Dynamical Symmetry Model
Institute of Scientific and Technical Information of China (English)
LIU Guang-Zhou; LIU Wei
2002-01-01
In a neutron-proton system, the matrix elements of the generators for SO(8) × SO(8) symmetry are constructed explicitly, and with these matrix elements the low-lying excitation spectra obtained by diagonalization are presented. The excitation spectra for SO(7) nuclei Pd and Ru isotopes and SO(6) γ-soft rotational nuclei Xe, Ba, and Ce isotopes are calculated, and comparison with the experimental results is carried out.
Single-particle density matrix of liquid 4He
International Nuclear Information System (INIS)
Vakarchuk, I.A.
2008-01-01
The single-particle density matrix in the coordinate representation is calculated from the expression for the density matrix of a system of N interacting Bose particles. At low temperatures, this matrix in the first approximation reproduces the results of the Bogoliubov theory; in the classical limit it reproduces the results of the theory of classical fluids in the random-phase approximation. From the single-particle density matrix, the momentum distribution function of the particles and the average kinetic energy of the Bose liquid are obtained, and the Bose-Einstein condensation phenomenon is studied.
Hendriks, C.; Kuenen, J.; Kranenburg, R.; Scholz, Y.; Schaap, M.
2015-01-01
Effective air pollution and short-lived climate forcer mitigation strategies can only be designed when the effect of emission reductions on pollutant concentrations and health and ecosystem impacts are quantified. Within integrated assessment modeling source-receptor relationships (SRRs) based on
Adherence of Staphylococci to plastic, mesothelial cells and mesothelial extracellular matrix
Betjes, M. G.; Tuk, C. W.; Struijk, D. G.; Krediet, R. T.; Arisz, L.; Beelen, R. H.
1992-01-01
In this study we have investigated whether mesothelial cells (MC) and mesothelial extracellular matrix (ECM) are suitable substrates for the adherence of Staphylococci. Mesothelial cells were isolated from the peritoneal dialysis effluent by making use of their lack of Fc-receptors and capacity to
ACORNS, Covariance and Correlation Matrix Diagonalization
International Nuclear Information System (INIS)
Szondi, E.J.
1990-01-01
1 - Description of program or function: The program allows the user to verify the different types of covariance/correlation matrices used in activation neutron spectrometry. 2 - Method of solution: The program performs the diagonalization of the input covariance/relative covariance/correlation matrices. The eigenvalues are then analyzed to determine the rank of the matrices. If the eigenvectors of the pertinent correlation matrix have also been calculated, the program can perform a complete factor analysis (generation of the factor matrix and its rotation in Kaiser's 'varimax' sense to select the origin of the correlations). 3 - Restrictions on the complexity of the problem: Matrix size is limited to 60 on PDP and to 100 on IBM PC/AT
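The rank-determination step can be sketched as follows (the 3x3 example is constructed for illustration, not taken from ACORNS):

```python
import numpy as np

# Rank check of a covariance/correlation matrix by diagonalization, as in
# ACORNS: eigenvalues near zero (or negative) reveal a rank-deficient or
# inconsistent matrix. The 3x3 example is constructed to have rank 2.
A = np.array([[1.0, 0.5], [0.5, 1.0], [1.5, 1.5]])
cov = A @ A.T                      # 3x3 Gram matrix, rank 2 by construction

w, _ = np.linalg.eigh(cov)         # eigenvalues in ascending order
tol = 1e-10 * w.max()              # tolerance for "numerically zero"
rank = int(np.sum(w > tol))
print(rank)  # 2
```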
International Nuclear Information System (INIS)
Chen, Xin
2014-01-01
Understanding the roles of the temporal and spatial structures of quantum functional noise in open multilevel quantum molecular systems attracts considerable theoretical interest. I want to establish a rigorous and general framework for functional quantum noises from the constructive and computational perspectives, i.e., how to generate the random trajectories that reproduce the kernel and path ordering of the influence functional with effective Monte Carlo methods for arbitrary spectral densities. This constructive approach aims to unify the existing stochastic models to rigorously describe the temporal and spatial structure of Gaussian quantum noises. In this paper, I review the Euclidean imaginary-time influence functional and propose the stochastic matrix multiplication scheme to calculate reduced equilibrium density matrices (REDM). In addition, I review and discuss the Feynman-Vernon influence functional according to the Gaussian quadratic integral, particularly its imaginary part, which is critical to the rigorous description of quantum detailed balance. As a result, I establish the conditions under which the influence functional can be interpreted as the average of an exponential functional operator over real-valued Gaussian processes for open multilevel quantum systems. I also show the difference between local and nonlocal phonons within this framework. With the stochastic matrix multiplication scheme, I compare the normalized REDM with the Boltzmann equilibrium distribution for open multilevel quantum systems.
Ishizawa, Yoshiki; Dobashi, Suguru; Kadoya, Noriyuki; Ito, Kengo; Chiba, Takahito; Takayama, Yoshiki; Sato, Kiyokazu; Takeda, Ken
2018-05-17
An accurate source model of a medical linear accelerator is essential for Monte Carlo (MC) dose calculations. This study aims to propose an analytical photon source model based on particle transport in parameterized accelerator structures, focusing on a more realistic determination of linac photon spectra compared to existing approaches. We designed the primary and secondary photon sources based on the photons attenuated and scattered by a parameterized flattening filter. The primary photons were derived by attenuating bremsstrahlung photons based on the path length in the filter. Conversely, the secondary photons were derived from the decrement of the primary photons in the attenuation process. This design allows the two sources to share the free parameters of the filter shape and relates them to each other through the photon interaction in the filter. We introduced two other parameters of the primary photon source to describe the particle fluence in penumbral regions. All the parameters are optimized based on calculated dose curves in water using the pencil-beam-based algorithm. To verify the modeling accuracy, we compared the proposed model with the phase space data (PSD) of the Varian TrueBeam 6 and 15 MV accelerators in terms of the beam characteristics and the dose distributions. The EGS5 Monte Carlo code was used to calculate the dose distributions associated with the optimized model and reference PSD in a homogeneous water phantom and a heterogeneous lung phantom. We calculated the percentage of points passing 1D and 2D gamma analysis with 1%/1 mm criteria for the dose curves and lateral dose distributions, respectively. The optimized model accurately reproduced the spectral curves of the reference PSD both on- and off-axis. The depth dose and lateral dose profiles of the optimized model also showed good agreement with those of the reference PSD. The passing rates of the 1D gamma analysis with 1%/1 mm criteria between the model and PSD were 100% for 4
Protein structure estimation from NMR data by matrix completion.
Li, Zhicheng; Li, Yang; Lei, Qiang; Zhao, Qing
2017-09-01
Knowledge of protein structures is very important to understand their corresponding physical and chemical properties. Nuclear Magnetic Resonance (NMR) spectroscopy is one of the main methods to measure protein structure. In this paper, we propose a two-stage approach to calculate the structure of a protein from a highly incomplete distance matrix, where most data are obtained from NMR. We first randomly "guess" a small part of unobservable distances by utilizing the triangle inequality, which is crucial for the second stage. Then we use matrix completion to calculate the protein structure from the obtained incomplete distance matrix. We apply the accelerated proximal gradient algorithm to solve the corresponding optimization problem. Furthermore, the recovery error of our method is analyzed, and its efficiency is demonstrated by several practical examples.
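As a sketch of the second stage only: below is a plain iterative SVD-truncation completion of a synthetic low-rank matrix, standing in for the paper's accelerated proximal gradient solver and its NMR-derived distance data:

```python
import numpy as np

# Low-rank matrix completion by iterative SVD truncation, a simple stand-in
# for the accelerated proximal gradient solver of the paper. M is a rank-2
# matrix with a mask of observed entries; all data here are synthetic.
rng = np.random.default_rng(1)
M = rng.normal(size=(20, 2)) @ rng.normal(size=(2, 20))   # rank-2 ground truth
mask = rng.random(M.shape) < 0.6                          # ~60% observed

X = np.where(mask, M, 0.0)
for _ in range(300):
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    X = (U[:, :2] * s[:2]) @ Vt[:2]        # project onto rank-2 matrices
    X[mask] = M[mask]                      # re-impose the observed entries

err = np.linalg.norm(X - M) / np.linalg.norm(M)
print(err)   # relative error; small when enough entries are observed
```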
Source Apportionment of PM2.5 in Delhi, India Using PMF Model.
Sharma, S K; Mandal, T K; Jain, Srishti; Saraswati; Sharma, A; Saxena, Mohit
2016-08-01
Chemical characterization of PM2.5 (organic carbon, elemental carbon, water-soluble inorganic ionic components, and major and trace elements) was carried out for a source apportionment study of PM2.5 at an urban site in Delhi, India, from January 2013 to December 2014. The annual average mass concentration of PM2.5 was 122 ± 94.1 µg m(-3). Strong seasonal variation was observed in PM2.5 mass concentration and its chemical composition, with maxima during winter and minima during the monsoon. A receptor model, positive matrix factorization (PMF), was applied for source apportionment of the PM2.5 mass concentration. The PMF model resolved the major sources of PM2.5 as secondary aerosols (21.3 %), followed by soil dust (20.5 %), vehicle emissions (19.7 %), biomass burning (14.3 %), fossil fuel combustion (13.7 %), industrial emissions (6.2 %) and sea salt (4.3 %).
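The core of a PMF receptor model is a nonnegative factorization of the samples-by-species data matrix X into source contributions G and source profiles F. The sketch below uses plain multiplicative-update nonnegative matrix factorization as a simplification: real PMF additionally weights each residual by its measurement uncertainty, which this toy (with made-up data) omits.

```python
import random

def nmf(X, k, iters=500, seed=0):
    """Unweighted nonnegative factorization X ~ G F via multiplicative updates.
    X: samples x species, G: contributions (n x k), F: profiles (k x m)."""
    rnd = random.Random(seed)
    n, m = len(X), len(X[0])
    G = [[rnd.random() for _ in range(k)] for _ in range(n)]
    F = [[rnd.random() for _ in range(m)] for _ in range(k)]
    for _ in range(iters):
        # F update: F *= (G^T X) / (G^T G F), elementwise
        GtX = [[sum(G[i][a] * X[i][j] for i in range(n)) for j in range(m)] for a in range(k)]
        GtG = [[sum(G[i][a] * G[i][b] for i in range(n)) for b in range(k)] for a in range(k)]
        GtGF = [[sum(GtG[a][b] * F[b][j] for b in range(k)) for j in range(m)] for a in range(k)]
        F = [[F[a][j] * GtX[a][j] / (GtGF[a][j] + 1e-12) for j in range(m)] for a in range(k)]
        # G update: G *= (X F^T) / (G F F^T), elementwise
        XFt = [[sum(X[i][j] * F[a][j] for j in range(m)) for a in range(k)] for i in range(n)]
        FFt = [[sum(F[a][j] * F[b][j] for j in range(m)) for b in range(k)] for a in range(k)]
        GFFt = [[sum(G[i][b] * FFt[b][a] for b in range(k)) for a in range(k)] for i in range(n)]
        G = [[G[i][a] * XFt[i][a] / (GFFt[i][a] + 1e-12) for a in range(k)] for i in range(n)]
    return G, F

# Toy data: two "sources" mixed into four samples of three species.
X = [[2.0, 1.0, 0.0], [4.0, 2.0, 0.0], [0.0, 1.0, 2.0], [0.0, 2.0, 4.0]]
G, F = nmf(X, 2)
err = sum((X[i][j] - sum(G[i][a] * F[a][j] for a in range(2))) ** 2
          for i in range(4) for j in range(3))
```

The nonnegativity constraint is what lets the recovered rows of F be read as chemical source profiles (e.g. a soil-dust or biomass-burning signature) and the columns of G as per-sample contributions.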
Hierarchy of Poisson brackets for elements of a scattering matrix
International Nuclear Information System (INIS)
Konopelchenko, B.G.; Dubrovsky, V.G.
1984-01-01
The infinite family of Poisson brackets $[S_{i_1 k_1}(\lambda_1), S_{i_2 k_2}(\lambda_2)]_n$ ($n = 0, 1, 2, \dots$) between the elements of a scattering matrix is calculated for the linear matrix spectral problem. (orig.)
2010-07-01
40 CFR, Protection of Environment (revised as of 2010-07-01), Part 63, Subpart III (Flexible Polyurethane Foam Production), Table 1: HAP ABA Formulation Limitations Matrix for New Sources [see § 63.1297(d)(2)].
Direct determination of scattering time delays using the R-matrix propagation method
International Nuclear Information System (INIS)
Walker, R.B.; Hayes, E.F.
1989-01-01
A direct method for determining time delays for scattering processes is developed using the R-matrix propagation method. The procedure involves the simultaneous generation of the global R matrix and its energy derivative. The necessary expressions to obtain the energy derivative of the S matrix are relatively simple and involve many of the same matrix elements required for the R-matrix propagation method. This method is applied to a simple model for a chemical reaction that displays sharp resonance features. The test results of the direct method are shown to be in excellent agreement with the traditional numerical differentiation method for scattering energies near the resonance energy. However, for sharp resonances the numerical differentiation method requires calculation of the S-matrix elements at many closely spaced energies. Since the direct method presented here involves calculations at only a single energy, one is able to generate accurate energy derivatives and time delays much more efficiently and reliably.
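The quantity being compared above is the Smith time delay, Q = -i ħ S† dS/dE. The sketch below illustrates it on a one-channel Breit-Wigner resonance with made-up parameters: the "direct" route uses the closed-form derivative of the phase shift, while the numerical-differentiation route finite-differences the S matrix, which is exactly the procedure that becomes expensive near a sharp resonance.

```python
import cmath
import math

HBAR = 1.0            # natural units
E_R, GAMMA = 5.0, 0.2  # illustrative resonance position and width

def s_matrix(e):
    """One-channel S matrix S = exp(2 i delta) with a Breit-Wigner phase shift."""
    delta = math.atan2(GAMMA / 2.0, E_R - e)
    return cmath.exp(2j * delta)

def time_delay_analytic(e):
    """Smith delay 2*hbar*d(delta)/dE, in closed form for the Breit-Wigner phase."""
    return HBAR * GAMMA / ((e - E_R) ** 2 + GAMMA ** 2 / 4.0)

def time_delay_numeric(e, h=1e-6):
    """Q = -i*hbar*conj(S)*dS/dE with dS/dE from central differences:
    needs S at neighbouring energies, unlike the direct (single-energy) method."""
    ds = (s_matrix(e + h) - s_matrix(e - h)) / (2.0 * h)
    return (-1j * HBAR * s_matrix(e).conjugate() * ds).real

tau_peak = time_delay_numeric(E_R)  # peaks at 4*hbar/GAMMA on resonance
```

For this smooth one-resonance model a step of 1e-6 suffices; for genuinely sharp resonances the finite-difference step must shrink with the resonance width, which is the practical motivation for propagating dR/dE alongside R.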
Electronic annealing Fermi operator expansion for DFT calculations on metallic systems
Aarons, Jolyon; Skylaris, Chris-Kriton
2018-02-01
Density Functional Theory (DFT) calculations with computational effort which increases linearly with the number of atoms (linear-scaling DFT) have been successfully developed for insulators, taking advantage of the exponential decay of the one-particle density matrix. For metallic systems, the density matrix is also expected to decay exponentially at finite electronic temperature, and linear-scaling DFT methods should be possible by taking advantage of this decay. Here we present a method for DFT calculations at finite electronic temperature for metallic systems which is effectively linear-scaling (O(N)). Our method generates the elements of the one-particle density matrix and also finds the required chemical potential and electronic entropy using polynomial expansions. A fixed expansion length is always employed to generate the density matrix without any loss in accuracy, by applying a high electronic temperature followed by successive steps of temperature reduction until the desired (low) temperature density matrix is obtained. We have implemented this method in the ONETEP linear-scaling (for insulators) DFT code, which employs local orbitals that are optimised in situ. By making use of the sparse matrix machinery of ONETEP, our method exploits the sparsity of the Hamiltonian and density matrices to perform calculations on metallic systems with computational cost that increases asymptotically linearly with the number of atoms. We demonstrate the linear-scaling computational cost of our method with calculation times on palladium nanoparticles with up to ~13,000 atoms.
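The core idea, building the density matrix as a fixed-length polynomial of the Hamiltonian rather than by diagonalization, can be sketched with a Chebyshev expansion of the Fermi-Dirac function applied to a toy 2x2 Hamiltonian whose spectrum already lies in [-1, 1]. This is an illustration of the Fermi-operator-expansion principle only, not the ONETEP implementation (no annealing loop, no sparse algebra, made-up matrix).

```python
import math

def fermi(x, mu=0.0, kt=0.2):
    """Fermi-Dirac occupation at chemical potential mu, temperature kt (assumed units)."""
    return 1.0 / (1.0 + math.exp((x - mu) / kt))

def cheb_coeffs(f, n):
    """Chebyshev coefficients of f on [-1, 1] by Gauss-Chebyshev quadrature."""
    m = 4 * n
    xs = [math.cos(math.pi * (j + 0.5) / m) for j in range(m)]
    return [(2.0 / m) * sum(f(x) * math.cos(k * math.acos(x)) for x in xs)
            for k in range(n)]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def fermi_operator(h, n=30):
    """Density matrix f(H) via the three-term Chebyshev recursion
    T_{k+1} = 2 H T_k - T_{k-1}, with a fixed expansion length n."""
    c = cheb_coeffs(fermi, n)
    t_prev = [[1.0, 0.0], [0.0, 1.0]]   # T_0 = I
    t_cur = [row[:] for row in h]       # T_1 = H
    dm = [[0.5 * c[0] * t_prev[i][j] + c[1] * t_cur[i][j] for j in range(2)] for i in range(2)]
    for k in range(2, n):
        ht = matmul(h, t_cur)
        t_next = [[2.0 * ht[i][j] - t_prev[i][j] for j in range(2)] for i in range(2)]
        dm = [[dm[i][j] + c[k] * t_next[i][j] for j in range(2)] for i in range(2)]
        t_prev, t_cur = t_cur, t_next
    return dm

H = [[0.3, 0.2], [0.2, -0.4]]  # toy Hamiltonian, eigenvalues inside [-1, 1]
D = fermi_operator(H)
n_elec = D[0][0] + D[1][1]     # trace of the density matrix = electron number
```

At high electronic temperature the Fermi function is smooth, so a short fixed-length expansion is already accurate; the annealing idea in the paper then lowers the temperature step by step while reusing the same expansion length.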
QEDMOD: Fortran program for calculating the model Lamb-shift operator
Shabaev, V. M.; Tupitsyn, I. I.; Yerokhin, V. A.
2018-02-01
We present the Fortran package QEDMOD for computing the model QED operator hQED, which can be used to account for the Lamb shift in accurate atomic-structure calculations. The package routines calculate the matrix elements of hQED with user-specified one-electron wave functions. The operator can be used to calculate the Lamb shift in many-electron atomic systems with a typical accuracy of a few percent, either by evaluating the matrix element of hQED with the many-electron wave function or by adding hQED to the Dirac-Coulomb-Breit Hamiltonian.
A new Tone's method in APOLLO3® and its application to fast and thermal reactor calculations
Directory of Open Access Journals (Sweden)
Li Mao
2017-09-01
Full Text Available This paper presents a newly developed resonance self-shielding method based on Tone's method in APOLLO3® for fast and thermal reactor calculations. The new method is based on simplified models, the narrow resonance approximation for the slowing down source and Tone's approximation for group collision probability matrix. It utilizes mathematical probability tables as quadrature formulas in calculating effective cross-sections. Numerical results for the ZPPR drawer calculations in 1,968 groups show that, in the case of the double-column fuel drawer, Tone's method gives equivalent precision to the subgroup method while markedly reducing the total number of collision probability matrix calculations and hence the central processing unit time. In the case of a single-column fuel drawer with the presence of a uranium metal material, Tone's method obtains less precise results than those of the subgroup method due to less precise heterogeneous–homogeneous equivalence. The same options are also applied to PWR UOX, MOX, and Gd cells using the SHEM 361-group library, with the objective of analyzing whether this energy mesh might be suitable for the application of this methodology to thermal systems. The numerical results show that comparable precision is reached with both Tone's and the subgroup methods, with the satisfactory representation of intrapellet spatial effects.
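The use of probability tables as quadrature formulas for effective cross sections can be sketched as follows: under the narrow-resonance approximation with a background cross section sigma_b, the self-shielded effective cross section is a ratio of flux-weighted table sums. The 3-band table below is invented for illustration and is far coarser than the tables used in APOLLO3®.

```python
# Hypothetical 3-band probability table: band weights p_k and band cross sections (barns).
P = [0.70, 0.25, 0.05]
SIGMA = [10.0, 100.0, 1000.0]

def effective_xs(sigma_b):
    """NR-approximation effective cross section for background sigma_b,
    with the probability table acting as the quadrature formula:
    sigma_eff = <sigma/(sigma+sigma_b)> / <1/(sigma+sigma_b)>."""
    num = sum(p * s / (s + sigma_b) for p, s in zip(P, SIGMA))
    den = sum(p / (s + sigma_b) for p, s in zip(P, SIGMA))
    return num / den

dilute = effective_xs(1e9)     # approaches the infinite-dilution average of the table
shielded = effective_xs(10.0)  # strong self-shielding pulls the value down
```

This is the step where Tone's approximation pays off: one homogenized background term per region replaces repeated collision-probability-matrix evaluations across the resonance structure, which is the CPU-time saving reported for the ZPPR drawers.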
Energy Technology Data Exchange (ETDEWEB)
Vent-Schmidt, Thomas
2015-11-30
In this thesis, the matrix-isolation technique was employed in conjunction with quantum-chemical calculations to synthesize and characterize new compounds. The focus of the study was on new species of the actinide and lanthanide series, but the photochemistry of XeO{sub 4} and the polyfluorides was also investigated. Building on experience with laser-ablated uranium and thorium atoms reacting with H{sub 2} and F{sub 2}, the reaction of these actinide atoms with HF was investigated. The main products in these experiments are HThF and HUF, which contain an actinide metal in the rather scarce +II oxidation state. In addition, the deuterated compounds were also prepared, and the isotopic shifts support the assignment. The higher hydride fluorides of thorium, such as HThF{sub 3}, H{sub 2}ThF{sub 2} and H{sub 3}ThF, have also been observed, whereas there is only little evidence for higher uranium hydride fluorides. The different behavior of the two metals under similar reaction conditions was investigated theoretically. Besides the hydride fluorides, the reaction of the actinide atoms with HF also gives rise to low-valent fluorides and hydrides such as AnH and AnF (An = U, Th). These compounds had already been identified in experiments using fluorine or hydrogen as the reagent, but a more reliable assignment can be made in these experiments owing to the lower concentration of H or F. In addition, ThF{sub 2} has been observed in these experiments, and there is evidence for the unknown difluoride of uranium, which will be addressed in a future paper. Experiments with laser-ablated uranium and thorium atoms were extended to the reaction of these metals with H{sub 2}Se. Previous experiments using H{sub 2}O and H{sub 2}S instead of H{sub 2}Se yielded H{sub 2}AnX (An = U, Th; X = O, S) compounds which show evidence for an actinide-chalcogenide multiple bond. The newly synthesized species H{sub 2}ThSe and H{sub 2}USe are characterized by their symmetric and
Project-90 Near-field calculations using CALIBRE
International Nuclear Information System (INIS)
Worgan, K.; Robinson, P.
1992-02-01
A comprehensive set of near-field calculations for the Swedish Nuclear Power Inspectorate's Project-90 safety assessment has been performed using the CALIBRE model. In the majority of cases considered, the redox front migrates through the bentonite buffer and into the rock, where it becomes effectively immobilised. The fracture remains in a reducing state, which means that for solubility-limited nuclides the concentration at the bentonite/fracture interface can never exceed the reducing solubility limit. The calculations also show that significant retardation occurs for nuclides which are even moderately sorbed. The effect is less pronounced in the wide-fracture and high-flow cases, as the opportunity for diffusion from the fracture to the rock matrix is reduced. In contrast, the release from the near field of poorly sorbed nuclides which are not solubility limited is governed by the release rate from the fuel, the diffusive mass-transfer resistance of the buffer, rock matrix and fracture, the initial inventories and the nuclide half-lives. In the reference case, the maximum dose potential of nuclides emerging from the near field occurred for I-129 and was 3.2 x 10^-7 Sv per canister-year, assuming the flux to be discharged directly into the well receptor biosphere. The parameters with the most impact on the reference case results are high flow, wide aperture and poor chemistry (i.e. high solubility limits and low sorption distribution coefficients). The effects of combining extreme values of parameters are not in proportion to their effects when applied in isolation. In the worst-case variant (early canister failure, high flow, wide aperture and poor chemistry) the maximum dose potential is 1.0 x 10^-4 Sv per canister-year, compared with 8.9 x 10^-6 Sv in the high-flow case, 4.5 x 10^-7 in the wide-aperture case, 2.3 x 10^-6 in the poor-chemistry case and 3.9 x 10^-6 in the early-failure, wide-aperture and high-flow case. (au)
González, Mariela Natacha; de Mello, Wallace; Butler-Browne, Gillian S; Silva-Barbosa, Suse Dayse; Mouly, Vincent; Savino, Wilson; Riederer, Ingo
2017-10-10
The hepatocyte growth factor (HGF) is required for the activation of muscle progenitor cells called satellite cells (SC), plays a role in the migration of proliferating SC (myoblasts), and is present as a soluble factor during muscle regeneration, along with extracellular matrix (ECM) molecules. In this study, we aimed at determining whether HGF is able to interact with ECM proteins, particularly laminin 111 and fibronectin, and to modulate human myoblast migration. We evaluated the expression of the HGF-receptor c-Met, laminin, and fibronectin receptors by immunoblotting, flow cytometry, or immunofluorescence and used Transwell assays to analyze myoblast migration on laminin 111 and fibronectin in the absence or presence of HGF. Zymography was used to check whether HGF could modulate the production of matrix metalloproteinases by human myoblasts, and the activation of MAPK/ERK pathways was evaluated by immunoblotting. We demonstrated that human myoblasts express c-Met, together with laminin and fibronectin receptors. We observed that human laminin 111 and fibronectin have a chemotactic effect on myoblast migration, and this was synergistically increased when low doses of HGF were added. We detected an increase in MMP-2 activity in myoblasts treated with HGF. Conversely, MMP-2 inhibition decreased the HGF-associated stimulation of cell migration triggered by laminin or fibronectin. HGF treatment also induced in human myoblasts activation of MAPK/ERK pathways, whose specific inhibition decreased the HGF-associated stimulus of cell migration triggered by laminin 111 or fibronectin. We demonstrate that HGF induces ERK phosphorylation and MMP production, thus stimulating human myoblast migration on ECM molecules. Conceptually, these data state that the mechanisms involved in the migration of human myoblasts comprise both soluble and insoluble moieties. This should be taken into account to optimize the design of therapeutic cell transplantation strategies by improving
Matrix light and pixel light: optical system architecture and requirements to the light source
Spinger, Benno; Timinger, Andreas L.
2015-09-01
Modern automotive headlamps enable improved functionality for more driving comfort and safety. Matrix or pixel light headlamps are not restricted to either pure low-beam or pure high-beam functionality: light in the direction of oncoming traffic is selectively switched off, a potential hazard can be marked with an isolated beam, and the illumination on the road can even follow a bend. The optical architectures that enable these advanced functionalities are diverse. Electromechanical shutters and lens units moved by electric motors were the first ways to realize these systems. Switching multiple LED light sources is a more elegant and mechanically robust solution. While many basic functionalities can already be realized with a limited number of LEDs, an increasing number of pixels will lead to more driving comfort and better visibility. The required optical system must not only generate a desired beam distribution with a high angular dynamic range, but also guarantee minimal stray light and crosstalk between the different pixels. Direct projection of the LED array via a lens is a simple but not very efficient optical system. We discuss different optical elements for pre-collimating the light with minimal crosstalk and improved contrast between neighboring pixels. Depending on the selected optical system, we derive the basic light source requirements: luminance, surface area, contrast, flux and color homogeneity.
Directory of Open Access Journals (Sweden)
J. S. Han
2006-01-01
Full Text Available Size- and time-resolved aerosol samples were collected using an eight-stage Davis rotating unit for monitoring (DRUM sampler from 29 March to 29 May in 2002 at Gosan, Jeju Island, Korea, which is one of the representative background sites in East Asia. These samples were analyzed using synchrotron X-ray fluo