WorldWideScience

Sample records for affinity calculation based

  1. Free energy calculations to estimate ligand-binding affinities in structure-based drug design.

    Science.gov (United States)

    Reddy, M Rami; Reddy, C Ravikumar; Rathore, R S; Erion, Mark D; Aparoy, P; Reddy, R Nageswara; Reddanna, P

    2014-01-01

    The post-genomic era has led to the discovery of several new targets, posing challenges for structure-based drug design efforts to identify lead compounds. Multiple computational methodologies exist to predict high-ranking hit/lead compounds. Among them, free energy methods provide the most accurate estimate of predicted binding affinity. Pathway-based Free Energy Perturbation (FEP), Thermodynamic Integration (TI) and Slow Growth (SG), as well as less rigorous end-point methods such as Linear Interaction Energy (LIE), Molecular Mechanics-Poisson-Boltzmann/Generalized Born Surface Area (MM-PBSA/GBSA) and λ-dynamics, have been applied to a variety of biologically relevant problems. The recent advances in free energy methods and their applications, including the prediction of protein-ligand binding affinity for some important drug targets, have been elaborated. Results using a recently developed Quantum Mechanics (QM)/Molecular Mechanics (MM) based Free Energy Perturbation (FEP) method, which has the potential to provide the most accurate estimation of binding affinities to date, have been discussed. A case study on the optimization of fructose 1,6-bisphosphatase inhibitors has been described. PMID:23947646
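
    For orientation, the block below sketches the Zwanzig exponential-averaging estimator that underlies single-step FEP, ΔG = -kT·ln⟨exp(-ΔU/kT)⟩. This is a minimal sketch: the per-snapshot energy differences are synthetic placeholders, not data from the paper.

```python
import numpy as np

kT = 0.593  # kcal/mol at ~298 K

def fep_delta_g(delta_u, kT=kT):
    """Zwanzig (exponential averaging) FEP estimator.

    delta_u: per-snapshot energy differences U_B(x) - U_A(x), in
    kcal/mol, evaluated on configurations sampled from state A.
    """
    delta_u = np.asarray(delta_u)
    return -kT * np.log(np.mean(np.exp(-delta_u / kT)))

# Synthetic stand-in for energy differences harvested from a trajectory:
rng = np.random.default_rng(0)
dU = rng.normal(1.0, 0.5, size=5000)
print(f"estimated dG = {fep_delta_g(dU):.3f} kcal/mol")
```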

  2. Ga(+) Basicity and Affinity Scales Based on High-Level Ab Initio Calculations.

    Science.gov (United States)

    Brea, Oriana; Mó, Otilia; Yáñez, Manuel

    2015-10-26

    The structure, relative stability and bonding of complexes formed by the interaction between Ga(+) and a large set of compounds, including hydrocarbons, aromatic systems, and oxygen-, nitrogen-, fluorine- and sulfur-containing Lewis bases, have been investigated through the use of the high-level composite ab initio Gaussian-4 theory. This allowed us to establish rather accurate Ga(+) cation affinity (GaCA) and Ga(+) cation basicity (GaCB) scales. The bonding analysis of the complexes under scrutiny shows that, even though one of the main ingredients of the Ga(+)-base interaction is electrostatic, it exhibits a non-negligible covalent character triggered by the presence of the low-lying empty 4p orbital of Ga(+), which favors a charge donation from occupied orbitals of the base to the metal ion. This partial covalent character, also observed in AlCA scales, is behind the dissimilarities observed when GaCAs are compared with Li(+) cation affinities, where these covalent contributions are practically nonexistent. Quite unexpectedly, there are some dissimilarities between several Ga(+) complexes and the corresponding Al(+) analogues, mainly affecting the relative stability of π-complexes involving aromatic compounds. PMID:26269224

  3. Calculation of Host-Guest Binding Affinities Using a Quantum-Mechanical Energy Model

    OpenAIRE

    Muddana, Hari S.; Gilson, Michael K.

    2012-01-01

    The prediction of protein-ligand binding affinities is of central interest in computer-aided drug discovery, but it is still difficult to achieve a high degree of accuracy. Recent studies suggesting that available force fields may be a key source of error motivate the present study, which reports the first mining minima (M2) binding affinity calculations based on a quantum mechanical energy model, rather than an empirical force field. We apply a semi-empirical quantum-mechanical energy functi...

  4. Protein-protein binding affinities calculated using the LIE method

    OpenAIRE

    Andberg, Tor Arne Heim

    2011-01-01

    Absolute binding free energies for the third domain of the turkey ovomucoid inhibitor in complex with Streptomyces griseus proteinase B and porcine pancreatic elastase have been calculated using the linear interaction energy method.

  5. Ab initio calculation of ionization potential and electron affinity in solid-state organic semiconductors

    Science.gov (United States)

    Kang, Youngho; Jeon, Sang Ho; Cho, Youngmi; Han, Seungwu

    2016-01-01

    We investigate the vertical ionization potential (IP) and electron affinity (EA) of organic semiconductors in the solid state, which govern the optoelectrical properties of organic devices, in a fully ab initio way. The present method combines density functional theory and many-body perturbation theory based on the GW approximation. To demonstrate the accuracy of this approach, we carry out calculations on several prototypical organic molecules. Since IP and EA depend on the molecular orientation at the surface, the molecular geometry of the surface is explicitly considered through the slab model. The computed IP and EA are in reasonable and consistent agreement with spectroscopic data on organic surfaces with various molecular arrangements. However, the transport gaps are slightly underestimated in the calculations, which can be explained by different screening effects between surface and bulk regions.
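
    The IP/EA bookkeeping behind such calculations reduces to total-energy differences at fixed geometry; a minimal sketch with placeholder energies (not values from the paper) follows.

```python
# Vertical IP and EA as total-energy differences at the neutral geometry
# (generic bookkeeping; the numbers below are illustrative placeholders).
E_N = -1523.472          # neutral N-electron system, eV
E_N_minus_1 = -1517.205  # cation at the neutral geometry, eV
E_N_plus_1 = -1525.318   # anion at the neutral geometry, eV

IP = E_N_minus_1 - E_N   # energy to remove an electron
EA = E_N - E_N_plus_1    # energy gained by adding an electron
transport_gap = IP - EA  # the gap the abstract reports as underestimated
print(f"IP = {IP:.3f} eV, EA = {EA:.3f} eV, gap = {transport_gap:.3f} eV")
```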

  6. Electron affinity of (7)Li calculated with the inclusion of nuclear motion and relativistic corrections.

    Science.gov (United States)

    Stanke, Monika; Kedziera, Dariusz; Bubin, Sergiy; Adamowicz, Ludwik

    2007-10-01

    Explicitly correlated Gaussian functions have been used to perform very accurate variational calculations for the ground states of (7)Li and (7)Li(-). The nuclear motion has been explicitly included in the calculations (i.e., they have been done without assuming the Born-Oppenheimer (BO) approximation). An approach based on the analytical energy gradient calculated with respect to the Gaussian exponential parameters was employed. This led to a noticeable improvement of the previously determined variational upper bound to the nonrelativistic energy of Li(-). The Li energy obtained in the calculations matches the most accurate results obtained with Hylleraas functions. The finite-mass (non-BO) wave functions were used to calculate the alpha(2) relativistic corrections (alpha=1/c). With those corrections and the alpha(3) and alpha(4) corrections taken from Pachucki and Komasa [J. Chem. Phys. 125, 204304 (2006)], the electron affinity (EA) of (7)Li was determined. It agrees very well with the most recent experimental EA. PMID:17919011

  7. Affine invariant texture analysis based on structural properties

    OpenAIRE

    Zhang, Jianguo; Tan, Tieniu

    2002-01-01

    This paper presents a new texture analysis method based on structural properties. The texture features extracted using this algorithm are invariant to affine transforms (including rotation, translation, scaling, and skewing). Affine-invariant structural properties are derived based on texel areas. An area-ratio map utilizing these properties is introduced to characterize texture images. A histogram based on this map is constructed for texture classification. Efficiency of this algorithm for affi...

  8. Genetic Algorithm-based Affine Parameter Estimation for Shape Recognition

    Directory of Open Access Journals (Sweden)

    Yuxing Mao

    2014-06-01

    Shape recognition is a classically difficult problem because of the affine transformation between two shapes. The current study proposes an affine parameter estimation method for shape recognition based on a genetic algorithm (GA). The contributions of this study are focused on the extraction of affine-invariant features, the individual encoding scheme, and the fitness function construction policy for a GA. First, the affine-invariant characteristics of the centroid distance ratios (CDRs) of any two opposite contour points to the barycentre are analysed. Using different intervals along the azimuth angle, the different numbers of CDRs of two candidate shapes are computed as representations of the shapes, respectively. Then, the CDRs are selected based on predesigned affine parameters to construct the fitness function. After that, a GA is used to search for the affine parameters with optimal matching between candidate shapes, which serve as actual descriptions of the affine transformation between the shapes. Finally, the CDRs are resampled based on the estimated parameters to evaluate the similarity of the shapes for classification. The experimental results demonstrate the robust performance of the proposed method in shape recognition with translation, scaling, rotation and distortion.
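
    A sketch of the centroid distance ratio (CDR) feature described above, assuming the contour is given as ordered boundary points; the opposite-point lookup by nearest azimuth is a simplification of the paper's interval scheme.

```python
import numpy as np

def centroid_distance_ratios(contour, n_angles=180):
    """CDRs of opposite contour points about the barycentre.

    contour: (N, 2) array of boundary points ordered along the shape.
    Returns one radius ratio per sampled azimuth angle.
    """
    rel = contour - contour.mean(axis=0)       # shift to the barycentre
    ang = np.arctan2(rel[:, 1], rel[:, 0])     # azimuth of each point
    rad = np.linalg.norm(rel, axis=1)
    ratios = []
    for theta in np.linspace(-np.pi, 0.0, n_angles, endpoint=False):
        d1 = np.abs(np.angle(np.exp(1j * (ang - theta))))         # nearest at theta
        d2 = np.abs(np.angle(np.exp(1j * (ang - theta - np.pi)))) # nearest opposite
        ratios.append(rad[d1.argmin()] / rad[d2.argmin()])
    return np.asarray(ratios)

# Ratios of collinear lengths through the barycentre are preserved by
# affine maps, which is what makes this feature affine invariant.
t = np.linspace(0, 2 * np.pi, 400, endpoint=False)
ellipse = np.c_[3 * np.cos(t), np.sin(t)]
print(centroid_distance_ratios(ellipse)[:5])   # all ~1 for a centred ellipse
```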

  9. Capillary affinity electrophoresis and ab initio calculation studies on complexation of valinomycin with Na+ ion

    Czech Academy of Sciences Publication Activity Database

    Ehala, Sille; Dybal, Jiří; Makrlík, E.; Kašička, Václav

    2009-01-01

    Vol. 32, No. 4 (2009), pp. 597-604. ISSN 1615-9306. R&D Projects: GA ČR(CZ) GA203/06/1044; GA ČR(CZ) GA203/08/1428; GA AV ČR 1ET400500402. Institutional research plan: CEZ:AV0Z40550506; CEZ:AV0Z40500505. Keywords: capillary affinity electrophoresis * valinomycin * ab initio calculation. Subject RIV: CB - Analytical Chemistry, Separation. Impact factor: 2.551, year: 2009

  10. Affinity based and molecularly imprinted cryogels: Applications in biomacromolecule purification.

    Science.gov (United States)

    Andaç, Müge; Galaev, Igor Yu; Denizli, Adil

    2016-05-15

    Publications on macro-molecularly imprinted polymers have increased drastically in recent years with the development of water-based polymer systems. The macroporous structure of cryogels has allowed the use of these materials in different applications, particularly in affinity purification and molecular imprinting based methods. Due to their high selectivity, specificity, efficient mass transfer and good reproducibility, molecularly imprinted cryogels (MICs) have become attractive to researchers in the separation and purification of proteins. In this review, the recent developments in affinity based cryogels and molecularly imprinted cryogels for protein purification are reviewed comprehensively. PMID:26454622

  11. Enhancing Community Detection By Affinity-based Edge Weighting Scheme

    Energy Technology Data Exchange (ETDEWEB)

    Yoo, Andy [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Sanders, Geoffrey [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Henson, Van [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Vassilevski, Panayot [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2015-10-05

    Community detection refers to an important graph analytics problem of finding a set of densely-connected subgraphs in a graph and has gained a great deal of interest recently. The performance of current community detection algorithms is limited by an inherent constraint of unweighted graphs that offer very little information on their internal community structures. In this paper, we propose a new scheme to address this issue that weights the edges in a given graph based on recently proposed vertex affinity. The vertex affinity quantifies the proximity between two vertices in terms of their clustering strength, and therefore, it is ideal for graph analytics applications such as community detection. We also demonstrate that the affinity-based edge weighting scheme can improve the performance of community detection algorithms significantly.
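
    In the same spirit, the sketch below weights each edge by a neighbourhood-overlap (Jaccard) affinity, a simple stand-in for the paper's vertex affinity measure, and hands the weighted graph to a weighted community detector.

```python
import networkx as nx
from networkx.algorithms.community import louvain_communities

def jaccard_affinity(G, u, v):
    """Overlap of the closed neighbourhoods of u and v; a simple proxy
    for the clustering strength between two adjacent vertices."""
    nu, nv = set(G[u]) | {u}, set(G[v]) | {v}
    return len(nu & nv) / len(nu | nv)

def weight_by_affinity(G):
    for u, v in G.edges():
        G[u][v]["weight"] = jaccard_affinity(G, u, v)
    return G

G = weight_by_affinity(nx.karate_club_graph())
parts = louvain_communities(G, weight="weight", seed=1)
print([sorted(p) for p in parts])
```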

  12. Semisupervised Clustering for Networks Based on Fast Affinity Propagation

    Directory of Open Access Journals (Sweden)

    Mu Zhu

    2013-01-01

    Most of the existing clustering algorithms for networks are unsupervised and cannot improve the clustering quality by utilizing small amounts of prior knowledge. We propose a semisupervised clustering algorithm for networks based on fast affinity propagation (SCAN-FAP), which is essentially a kind of similarity metric learning method. Firstly, we define a new constraint similarity measure integrating structural information and pairwise constraints, which reflects the effective similarities between nodes in networks. Then, taking the constraint similarities as input, we propose a fast affinity propagation algorithm which keeps the advantages of the original affinity propagation algorithm while increasing the time efficiency by passing messages only between certain nodes. Finally, through extensive experimental studies, we demonstrate that the proposed algorithm can take full advantage of the prior knowledge and improve the clustering quality significantly. Furthermore, our algorithm has superior performance to some of the state-of-the-art approaches.
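
    As a small illustration of affinity propagation on a precomputed, constraint-adjusted similarity matrix (scikit-learn's implementation standing in for the paper's SCAN-FAP; all values are illustrative):

```python
import numpy as np
from sklearn.cluster import AffinityPropagation

# Structural similarity for a tiny 6-node network: two tight groups
# joined by one weak edge. Values are made up for the demonstration.
S = np.array([
    [0.0, 0.9, 0.8, 0.1, 0.0, 0.0],
    [0.9, 0.0, 0.9, 0.2, 0.1, 0.0],
    [0.8, 0.9, 0.0, 0.4, 0.1, 0.1],
    [0.1, 0.2, 0.4, 0.0, 0.9, 0.8],
    [0.0, 0.1, 0.1, 0.9, 0.0, 0.9],
    [0.0, 0.0, 0.1, 0.8, 0.9, 0.0],
])
# A pairwise constraint adjusts the similarity: here a cannot-link
# between nodes 0 and 3 pushes their similarity strongly negative.
S[0, 3] = S[3, 0] = -1.0

ap = AffinityPropagation(affinity="precomputed", random_state=0).fit(S)
print(ap.labels_)   # cluster assignment per node
```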

  13. Canonical bases and affine Hecke algebras of type B

    CERN Document Server

    Varagnolo, Michela

    2009-01-01

    We prove a series of conjectures of Enomoto and Kashiwara on canonical bases and branching rules of affine Hecke algebras of type B. The main ingredient of the proof is a new graded Ext-algebra associated with quivers with involutions that we compute explicitly.

  14. Hyperspherical Calculations on Electron Affinity and Geometry for Li- and Na-

    Institute of Scientific and Technical Information of China (English)

    HAN Hui-Li; ZHANG Xian-zhou; SHI Ting-Yun

    2007-01-01

    Using a model potential to describe the interaction between the core and the valence electron, we perform hyperspherical calculations for the electron affinity and geometry of the weakly bound Li- and Na- systems. In our calculation, channel functions are expanded in terms of B-splines. Using the special properties of B-splines, we choose the knot distributions more precisely to characterize the behaviour of the channel functions, which improves the convergence greatly. Our results are in good agreement with other theoretical and experimental values.

  15. DFT Calculations of the Ionization Potential and Electron Affinity of Alaninamide

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    Adiabatic and vertical ionization potentials (IPs) and valence electron affinities (EAs) of alaninamide in the gas phase have been determined using density functional theory (BLYP, B3LYP, B3P86) methods with the 6-311++G(d, p) basis set. IPs and EAs of alaninamide in solution have been calculated at the B3LYP/6-311++G(d, p) level. Five possible conformers of alaninamide and their charged states have been optimized employing the density functional theory B3LYP method with the 6-311++G(d, p) basis set.

  16. Electron Affinity Calculations for Atoms: Sensitive Probe of Many-Body Effects

    Science.gov (United States)

    Felfli, Z.; Msezane, A. Z.

    2016-05-01

    Electron-electron correlations and core-polarization interactions are crucial for the existence and stability of most negative ions. Therefore, they can be used as a sensitive probe of many-body effects in the calculation of the electron affinities (EAs) of atoms. The importance of relativistic effects in the calculation of the EAs of atoms has recently been assessed to be insignificant up to Z of 85. Here we use the complex angular momentum (CAM) methodology, wherein the electron-electron correlations are fully embedded, to investigate core-polarization interactions in low-energy electron elastic scattering from the atoms In, Sn, Eu, Au and At through the calculation of their EAs. For the core-polarization interaction we use the rational function approximation of the Thomas-Fermi potential, which can be analytically continued into the complex plane. The EAs are extracted from the large resonance peaks in the CAM-calculated low-energy electron-atom scattering total cross sections and compared with those from measurements and sophisticated theoretical methods. It is concluded that when the electron-electron correlations and core-polarization interactions (both major many-body effects) are accounted for adequately, the importance of relativity in the calculation of the EAs of atoms can be assessed. Even for the high-Z (85) At atom, relativistic effects are estimated to contribute a maximum of 3.6% to its EA calculation.

  17. Evolution based on chromosome affinity from a network perspective

    Science.gov (United States)

    Monteiro, R. L. S.; Fontoura, J. R. A.; Carneiro, T. K. G.; Moret, M. A.; Pereira, H. B. B.

    2014-06-01

    Recent studies have focused on models to simulate the complex phenomenon of evolution of species. Several studies have been performed with theoretical models based on Darwin's theories to associate them with the actual evolution of species. However, none of the existing models include the affinity between individuals using network properties. In this paper, we present a new model based on the concept of affinity. The model is used to simulate the evolution of species in an ecosystem composed of individuals and their relationships. We propose an evolutive algorithm that incorporates the degree centrality and efficiency network properties to perform the crossover process and to obtain the network topology objective, respectively. Using a real network as a starting point, we simulate its evolution and compare its results with the results of 5788 computer-generated networks.

  18. Studying protein–protein affinity and immobilized ligand–protein affinity interactions using MS-based methods

    OpenAIRE

    Kool, J.; N. Jonker; Irth, H.; Niessen, W.M.A.

    2011-01-01

    This review discusses the most important current methods employing mass spectrometry (MS) analysis for the study of protein affinity interactions. The methods are discussed in depth with particular reference to MS-based approaches for analyzing protein–protein and protein–immobilized ligand interactions, analyzed either directly or indirectly. First, we introduce MS methods for the study of intact protein complexes in the gas phase. Next, pull-down methods for affinity-based analysis of prote...

  19. Calculation of the enthalpies of formation and proton affinities of some isoquinoline derivatives

    Energy Technology Data Exchange (ETDEWEB)

    Namazian, Mansoor [ARC Centre of Excellence for Free-Radical Chemistry and Biotechnology, Research School of Chemistry, Australian National University, Canberra ACT 0200 (Australia)], E-mail: namazian@rsc.anu.edu.au; Coote, Michelle L. [ARC Centre of Excellence for Free-Radical Chemistry and Biotechnology, Research School of Chemistry, Australian National University, Canberra ACT 0200 (Australia)], E-mail: mcoote@rsc.anu.edu.au

    2008-12-15

    Ab initio molecular orbital theory has been used to calculate enthalpies of formation of isoquinoline, 1-hydroxyisoquinoline, 5-hydroxyisoquinoline, and 1,5-dihydroxyisoquinoline as well as some pyridine and quinoline derivatives. The proton affinities of the four isoquinoline derivatives were also obtained. The high-level composite methods G3(MP2), G3(MP2)//B3LYP, G3//B3LYP, and CBS-QB3 have been used for this study, and the results have been compared with available experimental values. For six of the eight studied compounds, the theoretical enthalpies of formation were very close to the experimental values (to within 4.3 kJ·mol⁻¹); where comparison was possible, the theoretical and experimental proton affinities were also in excellent agreement with one another. However, there is an extraordinary discrepancy between theory and experiment for the enthalpies of formation of 1-hydroxyisoquinoline and 1,5-dihydroxyisoquinoline, suggesting that the experimental values for these two compounds should perhaps be re-examined. We also show that popular low-cost computational methods such as B3LYP and MP2 show very large deviations from the benchmark values.

  20. Affinity approaches in RNAi-based therapeutics purification.

    Science.gov (United States)

    Pereira, Patrícia; Queiroz, João A; Figueiras, Ana; Sousa, Fani

    2016-05-15

    The recent investigation on RNA interference (RNAi) related mechanisms and applications has led to an increased awareness of the importance of RNA in biology. Nowadays, RNAi-based technology has emerged as a potentially powerful tool for silencing gene expression, being exploited to develop new therapeutics for treating a vast number of human disease conditions, as it is expected that this technology can be translated into clinical applications in the near future. This approach makes use of a large number of small (namely short interfering RNAs, microRNAs and PIWI-interacting RNAs) and long non-coding RNAs (ncRNAs), which are likely to have a crucial role as the next generation of therapeutics. The commercial and biomedical interest in these RNAi-based therapy applications has fostered the need to develop innovative procedures to easily and efficiently purify RNA, aiming to obtain the final product with a high degree of purity, good quality and biological activity. Recently, affinity chromatography has been applied to ncRNA purification, in view of its high specificity. Therefore, this article intends to review the biogenesis pathways of regulatory ncRNAs and also to discuss the most significant and recent developments as well as applications of affinity chromatography in the challenging task of purifying ncRNAs. In addition, the importance of affinity chromatography in ncRNA purification is addressed and prospects for what is forthcoming are presented. PMID:26830537

  1. Realization of parking task based on affine system modeling

    International Nuclear Information System (INIS)

    This paper presents a motion control system for an unmanned vehicle, in which a parallel parking task is realized based on self-organizing affine system modeling and a quadratic programming based robust controller. Because of the non-linearity of the vehicle system and the complexity of the task, the control objective is not always realized with a single algorithm or control mode. This paper presents a hybrid model for the parallel parking task in which seven modes describing sub-tasks constitute the entire model.

  2. Flexible Molybdenum Electrodes towards Designing Affinity Based Protein Biosensors.

    Science.gov (United States)

    Kamakoti, Vikramshankar; Panneer Selvam, Anjan; Radha Shanmugam, Nandhinee; Muthukumar, Sriram; Prasad, Shalini

    2016-01-01

    A molybdenum electrode based flexible biosensor on porous polyamide substrates has been fabricated and tested for its functionality as a protein affinity based biosensor. The biosensor performance was evaluated using a key cardiac biomarker, cardiac Troponin-I (cTnI). Molybdenum is a transition metal and demonstrates electrochemical behavior upon interaction with an electrolyte. We have leveraged this property of molybdenum for designing an affinity based biosensor using electrochemical impedance spectroscopy. We have evaluated the feasibility of detection of cTnI in phosphate-buffered saline (PBS) and human serum (HS) by measuring impedance changes over a frequency window from 100 mHz to 1 MHz. Increasing changes to the measured impedance were correlated with increasing doses of cTnI molecules binding to the cTnI antibody functionalized molybdenum surface. We achieved a cTnI detection limit of 10 pg/mL in PBS and 1 ng/mL in HS medium. The use of flexible substrates for designing the biosensor demonstrates promise for integration with a large-scale batch manufacturing process. PMID:27438863

  3. Affinity sensor based on immobilized molecular imprinted synthetic recognition elements.

    Science.gov (United States)

    Lenain, Pieterjan; De Saeger, Sarah; Mattiasson, Bo; Hedström, Martin

    2015-07-15

    An affinity sensor based on capacitive transduction was developed to detect a model compound, metergoline, in a continuous flow system. This system simulates the monitoring of low-molecular weight organic compounds in natural flowing waters, i.e. rivers and streams. During operation in such scenarios, control of the experimental parameters is not possible, which poses a true analytical challenge. A two-step approach was used to produce a sensor for metergoline. Submicron spherical molecularly imprinted polymers, used as recognition elements, were obtained through emulsion polymerization and subsequently coupled to the sensor surface by electropolymerization. This way, a robust and reusable sensor was obtained that regenerated spontaneously under the natural conditions in a river. Small organic compounds could be analyzed in water without manipulating the binding or regeneration conditions, thereby offering a viable tool for on-site application. PMID:25703726

  4. Binding affinity prediction of novel estrogen receptor ligands using receptor-based 3-D QSAR methods.

    Science.gov (United States)

    Sippl, Wolfgang

    2002-12-01

    We have recently reported the development of a 3-D QSAR model for estrogen receptor ligands showing a significant correlation between calculated molecular interaction fields and experimentally measured binding affinity. The ligand alignment obtained from docking simulations was taken as the basis for a comparative field analysis applying the GRID/GOLPE program. Using the interaction field derived with a water probe and applying the smart region definition (SRD) variable selection procedure, a significant and robust model was obtained (q2(LOO) = 0.921, SDEP = 0.345). To further analyze the robustness and the predictivity of the established model, several recently developed estrogen receptor ligands were selected as an external test set. An excellent agreement between predicted and experimental binding data was obtained, indicated by an external SDEP of 0.531. Two other traditionally used prediction techniques were applied in order to check the performance of the receptor-based 3-D QSAR procedure. The interaction energies calculated on the basis of receptor-ligand complexes were correlated with experimentally observed affinities. Also, ligand-based 3-D QSAR models were generated using the program FlexS. The interaction energy-based model, as well as the ligand-based 3-D QSAR models, yielded lower predictivity. The comparison with the interaction energy-based model and with the ligand-based 3-D QSAR models indicates that the combination of receptor-based and 3-D QSAR methods is able to improve the quality of prediction. PMID:12413831
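
    For reference, the two figures of merit quoted above have the standard definitions below (a small helper sketch with toy numbers, not code from the paper):

```python
import numpy as np

def q2_loo(y_true, y_pred_loo):
    """Leave-one-out cross-validated q^2 as used in 3-D QSAR:
    q^2 = 1 - PRESS / SS_tot, with each prediction from a model
    trained without that observation."""
    y_true, y_pred_loo = np.asarray(y_true), np.asarray(y_pred_loo)
    press = np.sum((y_true - y_pred_loo) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1.0 - press / ss_tot

def sdep(y_true, y_pred):
    """Standard deviation of the errors of prediction, sqrt(PRESS/N)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return np.sqrt(np.mean((y_true - y_pred) ** 2))

y, yp = [7.1, 8.3, 6.5, 9.0], [7.0, 8.6, 6.2, 8.7]
print(q2_loo(y, yp), sdep(y, yp))
```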

  5. Core-Polarization and Relativistic Effects in Electron Affinity Calculations for Atoms: A Complex Angular Momentum Investigation

    CERN Document Server

    Felfli, Z

    2015-01-01

    Core-polarization interactions are investigated in low-energy electron elastic scattering from the atoms In, Sn, Eu, Au and At through the calculation of their electron affinities. The complex angular momentum method, wherein the vital electron-electron correlations are embedded, is used. The core-polarization effects are studied through the well-investigated rational function approximation of the Thomas-Fermi potential, which can be analytically continued into the complex plane. The EAs are extracted from the large resonance peaks in the calculated low-energy electron-atom scattering total cross sections and compared with those from measurements and sophisticated theoretical methods. It is concluded that when the electron-electron correlation effects and core-polarization interactions are accounted for adequately, the importance of relativity on the calculation of the electron affinities of atoms can be assessed. For At, relativistic effects are estimated to contribute a maximum of about 3.6 percent to its (non-rela...

  6. PBSA_E: A PBSA-Based Free Energy Estimator for Protein-Ligand Binding Affinity.

    Science.gov (United States)

    Liu, Xiao; Liu, Jinfeng; Zhu, Tong; Zhang, Lujia; He, Xiao; Zhang, John Z H

    2016-05-23

    Improving the accuracy of scoring functions for estimating protein-ligand binding affinity is of significant interest as well as practical utility in drug discovery. In this work, PBSA_E, a new free energy estimator based on the molecular mechanics/Poisson-Boltzmann surface area (MM/PBSA) descriptors, has been developed. This free energy estimator was optimized using high-quality experimental data from a training set consisting of 145 protein-ligand complexes. The method was validated on two separate test sets containing 121 and 130 complexes. Comparison of the binding affinities predicted using the present method with those obtained using three popular scoring functions, i.e., GlideXP, GlideSP, and SYBYL_F, demonstrated that the PBSA_E method is more accurate. This new energy estimator requires a MM/PBSA calculation of the protein-ligand binding energy for a single complex configuration, which is typically obtained by optimizing the crystal structure. The present study shows that PBSA_E has the potential to become a robust tool for more reliable estimation of protein-ligand binding affinity in structure-based drug design. PMID:27088302
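
    The general recipe, a free-energy estimator fit as a weighted sum of MM/PBSA descriptors over a training set, can be sketched as below; the descriptors, weights and data are random placeholders, not the published PBSA_E parameterization.

```python
import numpy as np

# Fit dG ~ w . descriptors + b on per-complex MM/PBSA terms
# (e.g. electrostatics, van der Waals, polar/nonpolar solvation).
rng = np.random.default_rng(0)
X = rng.normal(size=(145, 4))                      # placeholder descriptors
y = X @ np.array([0.1, 0.3, 0.2, 0.4]) + rng.normal(0, 0.5, 145)

w, *_ = np.linalg.lstsq(np.c_[X, np.ones(len(X))], y, rcond=None)

def predict(x):
    """Weighted sum of descriptors plus intercept."""
    return np.r_[x, 1.0] @ w

print(predict(X[0]), y[0])   # fitted vs. "experimental" affinity
```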

  7. Affinity chromatography based on a combinatorial strategy for rErythropoietin purification.

    Science.gov (United States)

    Martínez-Ceron, María C; Marani, Mariela M; Taulés, Marta; Etcheverrigaray, Marina; Albericio, Fernando; Cascone, Osvaldo; Camperi, Silvia A

    2011-05-01

    Small peptides containing fewer than 10 amino acids are promising ligand candidates with which to build affinity chromatographic systems for industrial protein purification. The application of combinatorial peptide synthesis strategies greatly facilitates the discovery of suitable ligands for any given protein of interest. Here we sought to identify peptide ligands with affinity for recombinant human erythropoietin (rhEPO), which is used for the treatment of anemia. A combinatorial library containing the octapeptides X-X-X-Phe-X-X-Ala-Gly, where X = Ala, Asp, Glu, Phe, His, Leu, Asn, Pro, Ser, or Thr, was synthesized on HMBA-ChemMatrix resin by the divide-couple-recombine method. For the library screening, rhEPO was coupled to either Texas Red or biotin. Fluorescent beads or beads showing a positive reaction with streptavidin-peroxidase were isolated. After cleavage, peptides were sequenced by matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF MS). Fifty-seven beads showed a positive reaction. The peptides showing the most consensus were synthesized, and their affinity for rhEPO was assessed using a surface plasmon resonance biosensor. Dissociation constant values in the range of 1-18 μM were obtained. The best two peptides were immobilized on Sepharose, and the resultant chromatographic matrices showed affinity for rhEPO, with dissociation constant values between 1.8 and 2.7 μM. Chinese hamster ovary (CHO) cell culture supernatant was spiked with rhEPO, and the artificial mixture was loaded on the peptide-Sepharose columns. The rhEPO was recovered in the elution fraction with a yield of 90% and a purity of 95% and 97% for P1-Sepharose and P2-Sepharose, respectively. PMID:21495625

  8. Standard Bases for Affine SL(n)-Modules

    OpenAIRE

    Kreiman, V.; Lakshmibai, V.; Magyar, P.; Weyman, J.

    2004-01-01

    We give an elementary and easily computable basis for the Demazure modules in the basic representation of the affine Lie algebra sl(n)-hat (and the loop group SL(n)-hat). A novel feature is that we define our basis ``bottom-up'' by raising each extremal weight vector, rather than ``top-down'' by lowering the highest weight vector. Our basis arises naturally from the combinatorics of its indexing set, which consists of certain subsets of the integers first specified by the Kyoto school in term...

  9. New ultra-high affinity host-guest complexes of cucurbit[7]uril with bicyclo[2.2.2]octane and adamantane guests: Thermodynamic analysis and evaluation of M2 affinity calculations

    OpenAIRE

    Moghaddam, Sarvin; Yang, Cheng; Rekharsky, Mikhail; Ko, Young Ho; Kim, Kimoon; Inoue, Yoshihisa; Gilson, Michael K.

    2011-01-01

    A dicationic ferrocene derivative has previously been shown to bind cucurbit[7]uril (CB[7]) in water with ultra-high affinity (ΔGo= −21 kcal/mol). Here, we describe new compounds that bind aqueous CB[7] equally well, validating our prior suggestion that they, too, would be ultra-high affinity CB[7] guests. The present guests, which are based upon either a bicyclo[2.2.2]octane or adamantane core, have no metal atoms, so these results also confirm that the remarkably high affinities of the ferr...

  10. A novel protein complex identification algorithm based on Connected Affinity Clique Extension (CACE).

    Science.gov (United States)

    Li, Peng; He, Tingting; Hu, Xiaohua; Zhao, Junmin; Shen, Xianjun; Zhang, Ming; Wang, Yan

    2014-06-01

    A novel algorithm based on Connected Affinity Clique Extension (CACE) for mining overlapping functional modules in protein interaction networks is proposed in this paper. In this approach, the value of protein connected affinity, which is inferred from protein complexes, is interpreted as the reliability and possibility of interaction. The protein interaction network is constructed as a weighted graph, and the weight is dependent on the connected affinity coefficient. The experimental results of CACE on two test data sets show that CACE can detect functional modules much more effectively and accurately when compared with the state-of-the-art algorithms CPM and IPC-MCE. PMID:24803142

  11. Second-Order Perturbation Theory for Fractional Occupation Systems: Applications to Ionization Potential and Electron Affinity Calculations.

    Science.gov (United States)

    Su, Neil Qiang; Xu, Xin

    2016-05-10

    Recently, we have developed an integration approach for the calculations of ionization potentials (IPs) and electron affinities (EAs) of molecular systems at the level of second-order Møller-Plesset (MP2) (Su, N. Q.; Xu, X. J. Chem. Theory Comput. 11, 4677, 2015), where the full MP2 energy gradient with respect to the orbital occupation numbers was derived but only at integer occupations. The theory is completed here to cover the fractional occupation systems, such that Slater's transition state concept can be used to have accurate predictions of IPs and EAs. Antisymmetrized Goldstone diagrams have been employed for interpretations and better understanding of the derived equations, where two additional rules were introduced in the present work specifically for hole or particle lines with fractional occupation numbers. PMID:27010405

  12. Experimental Immunization Based on Plasmodium Antigens Isolated by Antibody Affinity

    Science.gov (United States)

    Kamali, Ali N.; Marín-García, Patricia; Azcárate, Isabel G.; Puyet, Antonio; Diez, Amalia; Bautista, José M.

    2015-01-01

    Vaccines blocking malaria parasites in the blood stage diminish mortality and morbidity caused by the disease. Here, we isolated antigens from total parasite proteins by antibody affinity chromatography to test an immunization against lethal malaria infection in a murine model. We used the sera of malaria self-resistant ICR mice to lethal Plasmodium yoelii yoelii 17XL for purification of their IgGs, which were subsequently employed to isolate blood-stage parasite antigens that were inoculated to immunize BALB/c mice. The presence of specific antibodies in vaccinated mice serum was studied by immunoblot analysis at different days after vaccination and showed an intensive immune response to a wide range of antigens with molecular weights ranging between 22 and 250 kDa. The humoral response allowed a delay of the infection after inoculation with high lethal doses of P. yoelii yoelii 17XL, resulting in partial protection against malaria disease, although final survival was achieved in only a low proportion of challenged mice. This approach shows the potential to prevent malaria disease with a set of antigens isolated from blood-stage parasites. PMID:26539558

  13. Partial filling affinity capillary electrophoresis including adsorption energy distribution calculations--towards reliable and feasible biomolecular interaction studies.

    Science.gov (United States)

    Witos, Joanna; Samuelsson, Jörgen; Cilpa-Karhu, Geraldine; Metso, Jari; Jauhiainen, Matti; Riekkola, Marja-Liisa

    2015-05-01

    In this work, a method to study and analyze interaction data in free solution by exploiting partial filling affinity capillary electrophoresis (PF-ACE) followed by adsorption energy distribution (AED) calculations prior to model fitting to adsorption isotherms is demonstrated. The PF-ACE-AED approach made it possible to distinguish weak and strong interactions in the binding processes between the most common apolipoprotein E isoforms (apoE2, apoE3, apoE4) of high density lipoprotein (HDL) and apoE-containing HDL2 with the major glycosaminoglycan (GAG) chain of proteoglycans (PGs), chondroitin-6-sulfate (C6S). The AED analysis clearly revealed the heterogeneity of the binding processes. The major difference was that the binding was heterogeneous, with two different adsorption sites, for the apoE2 and apoE4 isoforms, whereas interestingly for apoE3 and apoE-containing HDL2 the binding was a homogeneous (one-site) adsorption process. Moreover, our results allowed the evaluation of differences in the binding process strengths, giving the following order with C6S: apoE-containing HDL2 > apoE2 > apoE4 > apoE3. In addition, the affinity constant values determined could be compared with those obtained in our previous studies for the interactions between apoE isoforms and another important GAG chain of PGs, dermatan sulfate (DS). The success of the combination of AED calculations with PF-ACE prior to non-linear adsorption isotherm model fitting when the concentration range was extended confirmed the power of the system in the clarification of the heterogeneity of the biological processes studied. PMID:25751597

  14. Engineering protein therapeutics: predictive performances of a structure-based virtual affinity maturation protocol.

    Science.gov (United States)

    Oberlin, Michael; Kroemer, Romano; Mikol, Vincent; Minoux, Hervé; Tastan, Erdogan; Baurin, Nicolas

    2012-08-27

    The implementation of a structure-based virtual affinity maturation protocol and an evaluation of its predictivity are presented. The in silico protocol is based on conformational sampling of the interface residues (using the Dead-End Elimination/A* algorithm), followed by the estimation of the change in free energy of binding due to a point mutation, applying MM/PBSA calculations. Several implementations of the protocol have been evaluated for 173 mutations in 7 different protein complexes for which experimental data were available: the use of the Boltzmann-averaged predictor based on the free energy of binding (ΔΔG(*)) combined with the one based on its polar component only (ΔΔE(pol*)) led to the proposal of a subset of mutations out of which 45% would have successfully enhanced the binding. When focusing on those mutations that are less likely to be introduced by natural in vivo maturation methods (99 mutations with at least two base changes in the codon), the success rate increased to 63%. In another evaluation, focusing on 56 alanine scanning mutations, the in silico protocol was able to detect 89% of the hot-spots. PMID:22788756

  15. Preparation of Affinity Column Based on Zr4+ Ion for Phosphoproteins Isolation

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Seon Mi; Bae, In Ae; Park, Jung Hyen; Kim, Tae Dong; Choi, Seong Ho [Hannam University, Daejeon (Korea, Republic of)

    2009-06-15

    This paper describes the preparation of a Zr4+ affinity column based on poly(styrene-co-glycidyl methacrylate) prepared by emulsion polymerization of styrene and glycidyl methacrylate in order to isolate phosphopeptides. The Zr4+ ions were introduced after the phosphonation of an epoxy group on the polymeric microspheres. The successful preparation of the Zr4+-immobilized polymeric microsphere stationary phase was confirmed through Fourier transform infrared spectra, optical microscopy, scanning electron microscopy, X-ray photoelectron spectra and inductively coupled plasma-atomic emission spectrometry. The separation efficiency of the Zr4+ affinity column prepared by slurry packing was tested with phosphonated casein and dephosphonated casein. The retention time (min) of the phosphonated casein was higher than that of the dephosphonated casein on the Zr4+ affinity polymeric microsphere column in liquid chromatography. This Zr4+ affinity column can be used for the isolation of phosphonated casein from casein using liquid chromatography.

  16. Quantum image encryption based on generalized affine transform and logistic map

    Science.gov (United States)

    Liang, Hao-Ran; Tao, Xiang-Yang; Zhou, Nan-Run

    2016-07-01

    Quantum circuits of the generalized affine transform are devised based on the novel enhanced quantum representation of digital images. A novel quantum image encryption algorithm combining the generalized affine transform with logistic map is suggested. The gray-level information of the quantum image is encrypted by the XOR operation with a key generator controlled by the logistic map, while the position information of the quantum image is encoded by the generalized affine transform. The encryption keys include the independent control parameters used in the generalized affine transform and the logistic map. Thus, the key space is large enough to frustrate the possible brute-force attack. Numerical simulations and analyses indicate that the proposed algorithm is realizable, robust and has a better performance than its classical counterpart in terms of computational complexity.
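
    A classical analogue of this scheme, grey levels XORed with a logistic-map keystream and pixel positions permuted by an invertible affine map modulo the image size, can be sketched as follows (parameters illustrative; the quantum-circuit construction itself is not reproduced):

```python
import numpy as np

def logistic_keystream(x0, r, n):
    """Iterate the logistic map x <- r*x*(1-x) and quantize to bytes."""
    x, out = x0, np.empty(n, dtype=np.uint8)
    for i in range(n):
        x = r * x * (1.0 - x)
        out[i] = int(x * 256) % 256
    return out

def affine_coords(h, w, a, b, c, d, e, f, N):
    """Generalized affine map on pixel coordinates, modulo N.
    Invertible iff gcd(a*d - b*c, N) == 1, so it permutes the pixels."""
    ys, xs = np.mgrid[0:h, 0:w]
    return (a * xs + b * ys + e) % N, (c * xs + d * ys + f) % N

img = np.arange(64, dtype=np.uint8).reshape(8, 8)   # toy 8x8 image
ks = logistic_keystream(x0=0.3, r=3.99, n=img.size).reshape(img.shape)
new_x, new_y = affine_coords(8, 8, a=1, b=1, c=1, d=2, e=0, f=0, N=8)
cipher = np.empty_like(img)
cipher[new_y, new_x] = img ^ ks   # substitute grey levels, then scramble
# Decryption inverts the affine permutation and XORs the same keystream.
```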

  17. G4MP2, DFT and CBS-Q calculation of proton and electron affinities, gas phase basicities and ionization energies of hydroxylamines and alkanolamines

    Indian Academy of Sciences (India)

    Younes Valadbeigi; Hossein Farrokhpour; Mahmoud Tabrizchi

    2014-07-01

    The proton affinities, gas phase basicities, and adiabatic ionization energies and electron affinities of some important hydroxylamines and alkanolamines were calculated using the B3LYP, CBS-Q and G4MP2 methods. The B3LYP method was also used to calculate vertical ionization energies and electron affinities of the molecules. The calculated ionization energies are in the range of 8-10.5 eV, and they decrease as the number of carbon atoms increases. Computational results and an ion mobility spectrometry study confirm that some alkanolamines lose a water molecule upon protonation at the oxygen site and form cationic cyclic compounds. The effect of different substitutions on the cyclization of ethanolamine was studied theoretically.
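
    The proton affinity bookkeeping behind such composite-method values reduces to an enthalpy difference, PA(B) = H(B) + H(H+) - H(BH+); a worked sketch with placeholder enthalpies (not values from the paper):

```python
# Proton affinity from composite-method enthalpies at 298 K.
# The enthalpies below are illustrative placeholders, in hartree.
HARTREE_TO_KJ = 2625.4996
H_B = -210.53421    # H(298 K) of the neutral base B
H_BH = -210.89012   # H(298 K) of the protonated base BH+
H_Hp = 0.00236      # H(298 K) of H+: 5/2 RT, ~6.2 kJ/mol

PA = (H_B + H_Hp - H_BH) * HARTREE_TO_KJ
print(f"proton affinity ~ {PA:.1f} kJ/mol")
```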

  18. The theoretical advantage of affinity membrane-based immunoadsorption therapy of hypercholesterolemia

    International Nuclear Information System (INIS)

    Full text: Therapy of hypercholesterolemia using immunoadsorption of Low Density Lipoprotein (LDL) to a gel substrate is a current clinical technique (Bosch T., Biomat., Art. Cells and Immob. Biotech, 20: 1165-1169, 1992). Recently, affinity membranes have been proposed as an alternate substrate for immunoadsorption (Brandt S and others, Bio/Technology, 6: 779-782, 1988). Potentially, the overall rate of adsorption to a membrane may be faster than to a gel because of the different geometry (ibid). This implies that for the same conditions, a membrane-based device will have a higher Number of Transfer Units, more efficient adsorption and a smaller device size than a gel. To test this hypothesis, we calculated two key theoretical design parameters, the Separation Factor R and the Number of Transfer Units N, for a functioning clinical-scale affinity membrane device: R = Kd/(Kd + C0), where Kd is the equilibrium dissociation constant (M) and C0 the feed concentration (M); N = ka·Qmax·Vm/F, where ka is the intrinsic reaction rate constant (M⁻¹ min⁻¹), Qmax the substrate capacity (M), Vm the membrane volume (mL) and F the flow rate (mL min⁻¹). We assumed a 1 h treatment time during which 1 plasma volume (3 L) is treated, hence F = 50 mL min⁻¹. If we assume 2/3 of the LDL is removed from an initial level of 3 g/L, we can calculate an average feed concentration C0 = 2 g/L. There is some data available in the literature for typical values of Kd (10⁻⁸ M) and ka (10³ M⁻¹ s⁻¹ to 3 × 10⁵ M⁻¹ s⁻¹) (Olsen WC and others, Molec. Immun. 26: 129-136, 1989). Since the intrinsic reaction kinetics may vary from very slow (10³ M⁻¹ s⁻¹) to very fast (3 × 10⁵ M⁻¹ s⁻¹), the Number of Transfer Units N may vary from small (2) to large (650). Hence for a membrane device, we must select the antibody with the fastest reaction rate ka and the highest capacity Qmax; otherwise, there may be no advantage of a membrane-based device over a gel-based device.
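
    A worked version of these two design equations is sketched below; Kd, C0 and F follow the abstract, while the LDL molar mass, Qmax and Vm are assumed values for illustration only.

```python
# R = Kd/(Kd + C0) and N = ka*Qmax*Vm/F from the abstract above.
Kd = 1e-8            # equilibrium dissociation constant, M (from abstract)
C0 = 2.0 / 3.0e6     # 2 g/L over an assumed ~3 MDa LDL particle mass -> M
F = 50.0             # flow rate, mL/min (from abstract)
ka = 3e5 * 60        # M^-1 s^-1 (fast end of the range) -> M^-1 min^-1
Qmax = 1e-5          # substrate capacity, M (assumed)
Vm = 10.0            # membrane volume, mL (assumed)

R = Kd / (Kd + C0)           # small R means strongly favorable adsorption
N = ka * Qmax * Vm / F       # transfer units for the assumed device
print(f"R = {R:.3f}, N = {N:.0f}")
```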

  1. Architecture of high-affinity unnatural-base DNA aptamers toward pharmaceutical applications

    OpenAIRE

    Ken-ichiro Matsunaga; Michiko Kimoto; Charlotte Hanson; Michael Sanford; Young, Howard A.; Ichiro Hirao

    2015-01-01

    We present a remodeling method for high-affinity unnatural-base DNA aptamers to augment their thermal stability and nuclease resistance, for use as drug candidates targeting specific proteins. Introducing a unique mini-hairpin DNA provides robust stability to unnatural-base DNA aptamers generated by SELEX using genetic alphabet expansion, without reducing their high affinity. By this method, >80% of the remodeled DNA aptamer targeting interferon-γ (KD of 33 pM) survived in human serum at 37 ...

  2. Gaussian Affine Feature Detector

    OpenAIRE

    Xu, Xiaopeng; Zhang, Xiaochun

    2011-01-01

    A new method is proposed to get image features' geometric information. Using Gaussian as an input signal, a theoretical optimal solution to calculate feature's affine shape is proposed. Based on analytic result of a feature model, the method is different from conventional iterative approaches. From the model, feature's parameters such as position, orientation, background luminance, contrast, area and aspect ratio can be extracted. Tested with synthesized and benchmark data, the method achieve...

  3. Affinity-based constraint optimization for nearly-automatic vessel segmentation

    Science.gov (United States)

    Cooper, O.; Freiman, M.; Joskowicz, L.; Lischinski, D.

    2010-03-01

    We present an affinity-based optimization method for nearly-automatic vessel segmentation in CTA scans. The desired segmentation is modeled as a function that minimizes a quadratic affinity-based functional. The functional incorporates intensity and geometrical vessel shape information and a smoothing constraint. Given a few user-defined seeds, the minimum of the functional is obtained by solving a single set of linear equations. The binary segmentation is then obtained by applying a user-selected threshold. The advantages of our method are that it requires fewer initialization seeds, is robust, and yields better results than existing graph-based interactive segmentation methods. Experimental results on 20 vessel segments, including the carotid artery bifurcation and noisy parts of the carotid, yield a mean symmetric surface error of 0.54 mm (std = 0.28 mm).
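
    A minimal 1-D sketch of this "few seeds plus one linear solve" idea, with intensity-based affinities standing in for the paper's vesselness and geometry terms:

```python
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.linalg import spsolve

# Minimize sum_ij w_ij (x_i - x_j)^2 with seed values held fixed; the
# free values then solve one sparse system L_ff x_f = -L_fs x_s.
n = 10
intensity = np.array([0, 0, 0, 5, 5, 5, 5, 0, 0, 0], float)
w = np.exp(-np.diff(intensity) ** 2)      # affinity between neighbours

L = lil_matrix((n, n))                    # graph Laplacian of the chain
for i, wi in enumerate(w):
    L[i, i] += wi; L[i + 1, i + 1] += wi
    L[i, i + 1] -= wi; L[i + 1, i] -= wi
L = L.tocsr()

seeds = {0: 0.0, 5: 1.0, 9: 0.0}          # user-defined seed labels
seed_idx = list(seeds)
free = [i for i in range(n) if i not in seeds]
x = np.zeros(n)
x[seed_idx] = list(seeds.values())
x[free] = spsolve(L[free][:, free], -L[free][:, seed_idx] @ x[seed_idx])
segmentation = x > 0.5                    # user-selected threshold
print(segmentation.astype(int))           # -> [0 0 0 1 1 1 1 0 0 0]
```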

  4. Data base to compare calculations and observations

    International Nuclear Information System (INIS)

    Meteorological and climatological data bases were compared with known tritium release points and diffusion calculations to determine if calculated concentrations could replace measured concentrations at the monitoring stations. Daily tritium concentrations were monitored at 8 stations and 16 possible receptors. Automated data retrieval strategies are listed.

  5. Continuous Equilibrium in Affine and Information-Based Capital Asset Pricing Models

    OpenAIRE

    Horst, U.; Kupper, M.; Macrina, A; Mainberger, C.

    2012-01-01

    We consider a class of generalized capital asset pricing models in continuous time with a finite number of agents and tradable securities. The securities may not be sufficient to span all sources of uncertainty. If the agents have exponential utility functions and the individual endowments are spanned by the securities, an equilibrium exists and the agents' optimal trading strategies are constant. Affine processes, and the theory of information-based asset pricing are used to model the endoge...

  6. Novel cyclen-based linear polymer as a high-affinity binding material for DNA condensation

    Institute of Scientific and Technical Information of China (English)

    XIANG YongZhe; WANG Na; ZHANG Ji; LI Kun; ZHANG ZhongWei; LIN HongHui; YU XiaoQi

    2009-01-01

    A novel cyclen-based linear polyamine (POGEC) was designed and synthesized from the reaction between 1,3-propanediol diglycidyl ether and 1,7-bis(diethoxyphosphoryl)-1,4,7,10-tetraazacyclododecane. High-affinity binding between POGEC and DNA was demonstrated by agarose gel electrophoresis and scanning electron microscopy (SEM). Moreover, the formed POGEC/DNA complex (termed polyplex) could be dissociated to release the free DNA through addition of a physiological concentration of NaCl solution. Fluorescence spectroscopy was used to measure the high-affinity binding and DNA condensation capability of POGEC. Circular dichroism (CD) spectra indicate that the DNA conformation did not change after binding to POGEC.

  7. Large scale affinity calculations of cyclodextrin host-guest complexes: Understanding the role of reorganization in the molecular recognition process

    OpenAIRE

    Wickstrom, Lauren; He, Peng; Gallicchio, Emilio; Ronald M Levy

    2013-01-01

    Host-guest inclusion complexes are useful models for understanding the structural and energetic aspects of molecular recognition. Due to their small size relative to much larger protein-ligand complexes, converged results can be obtained rapidly for these systems thus offering the opportunity to more reliably study fundamental aspects of the thermodynamics of binding. In this work, we have performed a large scale binding affinity survey of 57 β-cyclodextrin (CD) host guest systems using the b...

  8. Upper Subcritical Calculations Based on Correlated Data

    Energy Technology Data Exchange (ETDEWEB)

    Sobes, Vladimir [ORNL; Rearden, Bradley T [ORNL; Mueller, Don [ORNL; Marshall, William BJ J [ORNL; Scaglione, John M [ORNL; Dunn, Michael E [ORNL

    2015-01-01

    The American National Standards Institute and American Nuclear Society standard for Validation of Neutron Transport Methods for Nuclear Criticality Safety Calculations defines the upper subcritical limit (USL) as “a limit on the calculated k-effective value established to ensure that conditions calculated to be subcritical will actually be subcritical.” Often, USL calculations are based on statistical techniques that infer information about a nuclear system of interest from a set of known/well-characterized similar systems. The work in this paper is part of an active area of research to investigate the way traditional trending analysis is used in the nuclear industry, and in particular, the research is assessing the impact of the underlying assumption that the experimental data being analyzed for USL calculations are statistically independent. In contrast, the multiple experiments typically used for USL calculations can be correlated because they are often performed at the same facilities using the same materials and measurement techniques. This paper addresses this issue by providing a set of statistical inference methods to calculate the bias and bias uncertainty based on the underlying assumption that the experimental data are correlated. Methods to quantify these correlations are the subject of a companion paper and will not be discussed here. The newly proposed USL methodology is based on the assumption that the integral experiments selected for use in the establishment of the USL are sufficiently applicable and that experimental correlations are known. Under the assumption of uncorrelated data, the new methods collapse directly to familiar USL equations currently used. We will demonstrate our proposed methods on real data and compare them to calculations of currently used methods such as USLSTATS and NUREG/CR-6698. Lastly, we will also demonstrate the effect experiment correlations can have on USL calculations.

  9. Lipid A-based affinity biosensor for screening anti-sepsis components from herbs

    Directory of Open Access Journals (Sweden)

    Jie Yao

    2014-05-01

    LPS (lipopolysaccharide), an outer membrane component of Gram-negative bacteria, plays an important role in the pathogenesis of sepsis, and lipid A is known to be essential for its toxicity. Therefore it could be an effective measure to prevent sepsis by neutralizing or destroying LPS. Numerous studies have indicated that many traditional Chinese medicines are natural antagonists of LPS in vitro and in vivo. The goal of this study is to develop a rapid method to screen anti-sepsis components from Chinese herbs by use of a direct lipid A-based affinity biosensor technology based on a resonant mirror. The detergent OG (n-octyl β-D-glucopyranoside) was immobilized on a planar non-derivatized cuvette, which provided an alternative surface to bind the terminal hydrophilic group of lipid A. A total of 78 herbs were screened with the affinity biosensor against the lipid A target. The aqueous extract of PSA (Paeonia suffruticosa Andr.) was found to possess the highest capability of binding lipid A. Therefore an aqueous extract of this plant was investigated further by our affinity biosensor, polyamide chromatography and IEC-HPLC. Finally, we obtained a component (PSA-I-3) from Paeonia suffruticosa Andr. that was evaluated with the affinity biosensor. We also studied the biological activities of PSA-I-3 against sepsis in vitro and in vivo to further confirm the component we screened with the biosensor. In vitro, we found that PSA-I-3 could decrease TNFα (tumour necrosis factor α) release from RAW264.7 cells induced by LPS in a dose-dependent manner. In vivo, it remarkably increased the survival of KM (KunMing) mice challenged with both lethal-dose LPS and heat-killed Escherichia coli compared with control groups. Our results suggest that the constructed affinity biosensor can successfully screen anti-sepsis components from Chinese herbs.

  10. Simulation of fires based on flow calculation

    International Nuclear Information System (INIS)

    The publication describes fire simulation based on flow calculation and compares the calculated results with results obtained from fire tests. The tests were made in Germany in a nuclear power plant removed from service. The simulation describes the flow field of the entire building, the main structural features affecting it, and the boundary conditions. The fire is described as a given source whose value varies as a function of time. Heat transfer into the structures is described using a separate heat transfer program. The calculated results describe the flow and temperature fields formed in a fire correctly in general terms; due to the coarse computational grid used, the results contain locally large deviations. The discrete-transfer radiation method used for the calculation of combustion and heat transfer, and its testing, are described in the appendix. The method describes heat radiation propagating obliquely to the computational grid better than the six-flux method used previously.

  11. Color-weak compensation using local affine isometry based on discrimination threshold matching

    OpenAIRE

    Mochizuki, Rika; Kojima, Takanori; Lenz, Reiner; Chao, Jinhui

    2015-01-01

    We develop algorithms for color-weak compensation and color-weak simulation based on Riemannian geometry models of color spaces. The objective function introduced measures the match of color discrimination thresholds of average normal observers and a color-weak observer. The developed matching process makes use of local affine maps between color spaces of color-normal and color-weak observers. The method can be used to generate displays of images that provide color-normal and color-weak obser...

  12. Fractal-based exponential distribution of urban density and self-affine fractal forms of cities

    International Nuclear Information System (INIS)

    Highlights: ► The model of urban population density differs from the common exponential function. ► Fractal landscape influences the exponential distribution of urban density. ► The exponential distribution of urban population suggests a self-affine fractal. ► Urban space can be divided into three layers with scaling and non-scaling regions. ► The dimension of urban form with a characteristic scale can be treated as 2. - Abstract: Urban population density always follows an exponential distribution and can be described with Clark's model. Because of this, the spatial distribution of urban population used to be regarded as a non-fractal pattern. However, Clark's model differs mathematically from a plain exponential function because urban population is distributed on the fractal support of landform and land-use form. Using mathematical transforms and empirical evidence, we argue that there are self-affine scaling relations and local power laws behind the exponential distribution of urban density. The scale parameter of Clark's model, indicating the characteristic radius of cities, is not really a constant but depends on how the urban field is defined. The exponential model therefore suggests a local fractal structure with two kinds of fractal parameters. The parameters can be used to characterize urban space filling, spatial correlation, self-affine properties, and self-organized evolution. A case study of the city of Hangzhou, China, is employed to verify the theoretical inference. Based on the empirical analysis, a three-ring model of cities is presented, in which a city is conceptually divided into three layers from core to periphery. Scaling and non-scaling regions appear alternately in the city. This model may be helpful for future urban studies and city planning.
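
    For reference, Clark's model referred to here is the classical negative-exponential density law (standard form; notation ours):

        \rho(r) = \rho_{0} \, e^{-r/r_{0}}

    where ρ(r) is the population density at radial distance r from the city center, ρ0 is the central density, and r0 is the characteristic radius whose non-constancy is discussed above.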

  13. Providing affinity

    DEFF Research Database (Denmark)

    Guglielmi, Michel; Johannesen, Hl

    … Essex, Hertfordshire, Norfolk and Suffolk. Research found that there was a lack of identity or sense of belonging and nothing anchoring people to the region as a whole. Common affinity is somehow forced upon the people of East England, and thereby we came to the conclusion that a single landmark or a … a sense of belonging to people sharing deterritorialized synchronic experiences. But at the same time, the immersion experience is highly low-tech and desperately analog, mainly based on fabulation, cartoons, and mushrooms growing in local forests. It ultimately appeals to the experienced sense of…

  14. A multiobjective evolutionary algorithm to find community structures based on affinity propagation

    Science.gov (United States)

    Shang, Ronghua; Luo, Shuang; Zhang, Weitong; Stolkin, Rustam; Jiao, Licheng

    2016-07-01

    Community detection plays an important role in reflecting and understanding the topological structure of complex networks, and can be used to help mine the potential information in networks. This paper presents a Multiobjective Evolutionary Algorithm based on Affinity Propagation (APMOEA) which improves the accuracy of community detection. Firstly, APMOEA uses affinity propagation (AP) to obtain a preliminary division of the network. To accelerate convergence, the multiobjective evolutionary algorithm selects nondominated solutions from these preliminary partitioning results as its initial population. Secondly, the multiobjective evolutionary algorithm finds solutions approximating the true Pareto-optimal front by repeatedly selecting nondominated solutions from the population after crossover and mutation, which overcomes the tendency of data clustering methods to fall into local optima. Finally, APMOEA uses an elitist strategy, called "external archive", to prevent degeneration during the evolutionary search. According to this strategy, the preliminary partitioning results obtained by AP are archived and participate in the final selection of Pareto-optimal solutions. Experiments on benchmark test data, including both computer-generated networks and eight real-world networks, show that the proposed algorithm achieves more accurate results and has faster convergence than seven other state-of-the-art algorithms.
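
    A minimal sketch of the AP-based initialization step described above, using scikit-learn's AffinityPropagation on a precomputed similarity matrix (the shared-neighbour similarity is a generic stand-in, not necessarily the measure used by APMOEA):

        import numpy as np
        from sklearn.cluster import AffinityPropagation

        def ap_initial_partition(adj):
            """Preliminary community partition of a network via affinity
            propagation, of the kind APMOEA uses to seed its population.
            adj : dense 0/1 adjacency matrix (symmetric, zero diagonal)."""
            sim = (adj @ adj).astype(float)   # similarity: shared-neighbour counts
            ap = AffinityPropagation(affinity="precomputed", random_state=0)
            return ap.fit_predict(sim)        # community label per node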

  15. Gaussian Affine Feature Detector

    CERN Document Server

    Xu, Xiaopeng

    2011-01-01

    A new method is proposed for extracting the geometric information of image features. Using a Gaussian as the input signal, a theoretically optimal solution for calculating a feature's affine shape is derived. Based on the analytic result of a feature model, the method differs from conventional iterative approaches. From the model, feature parameters such as position, orientation, background luminance, contrast, area and aspect ratio can be extracted. Tested with synthesized and benchmark data, the method matches or outperforms existing approaches in terms of accuracy, speed and stability. The method can detect small, long or thin objects precisely, and works well under general conditions, such as for low-contrast, blurred or noisy images.

  16. Improving Network Performance with Affinity based Mobility Model in Opportunistic Network

    CERN Document Server

    Batabyal, Suvadip; 10.5121/ijwmn.2012.4213

    2012-01-01

    An opportunistic network is a type of delay-tolerant network characterized by intermittent connectivity amongst the nodes, in which communication largely depends upon the mobility of the participating nodes. The network being highly dynamic, traditional MANET protocols cannot be applied, and the nodes must adhere to a store-carry-forward mechanism. Nodes do not have information about the network topology, the number of participating nodes, or the location of the destination node. Hence, message transfer reliability largely depends upon the mobility pattern of the nodes. In this paper we investigate the impact of RWP (Random Waypoint) mobility on packet delivery ratio. We estimate mobility factors like the number of node encounters, contact duration (link time) and inter-contact time, which in turn depend upon parameters like playfield area (total network area), number of nodes, node velocity, bit-rate and RF range of the nodes. We also propose a restricted form of the RWP mobility model, called the affinity based ...

  17. Electrochemical immobilization of fluorescently labelled probe molecules on an FTO surface for affinity detection based on photo-excited current

    Energy Technology Data Exchange (ETDEWEB)

    Haruyama, Tetsuya; Wakabayashi, Ryo; Cho, Takeshi; Matsuyama, Sho-taro, E-mail: haruyama@life.kyutech.as.jp [Kyushu Institute of Technology, Department of Biological Functions and Engineering, Kitakyushu Science and Research Park, Hibikino, Kitakyushu, Fukuoka 808-0196 (Japan)

    2011-10-29

    Photo-excited current can be generated at the molecular interface between photo-excited molecules and a semiconductive material under appropriate conditions. Such systems are well known from photo-energy devices such as organic dye-sensitized solar cells. The photocurrent-generating reactions depend entirely on interfacial energy reactions, which take place in a highly fluctuating interfacial environment. The authors investigated the photo-excited current reaction in order to develop a smart affinity detection method. However, in order to perform both an affinity reaction and a photo-excited current reaction at a molecular interface, ordered fabrication of a layer of functional (affinity, photo-excitation, etc.) molecules on a semiconductive surface is required. In the present research, we present the fabrication and functional performance of a photo-excited current-based affinity assay device and its application to the detection of endocrine-disrupting chemicals. On the FTO surface, a fluorescent pigment-labelled affinity peptide was immobilized through the EC-tag (electrochemical-tag) method. The modified FTO produced a current when it was irradiated with diode laser light. However, the photocurrent decreased drastically when estrogen (ES) coexisted in the reaction solution. In this case, immobilized affinity probe molecules formed a complex with ES and estrogen receptor (ER). The result strongly suggests that the photo-excited current transduction between the cyanine pigment labelling the probe molecule and the FTO surface was partly inhibited by the complex formed at the affinity oligo-peptide region of the probe molecule on the FTO electrode. The bound bulky complex may impede smooth transduction of the photo-excited current at the molecular interface. The present system is a new type of photo-reaction-based analysis and can be used to perform simple, highly sensitive homogeneous assays.

  18. New Mathematical Model Based on Affine Transformation for Remote Sensing Image with High Resolution

    Institute of Scientific and Technical Information of China (English)

    2003-01-01

    This paper calculates the parameters of image position and orientation, and proposes a mathematical model adopting a new method with three steps of transformation based on parallel ray projection. Every step of the model is strict, and the mapping function of each transformation is a first-order polynomial or another simple function. The final calculation of the parameters reduces to well-conditioned linear equations. As a result, the problem of correlation among the image parameters in the calculation is solved completely. Some experiments are carried out.
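
    The first-order polynomial mapping mentioned above can be recovered from ground control points by ordinary least squares; a minimal generic sketch of a 2-D affine fit (not the paper's full three-step parallel-ray model):

        import numpy as np

        def fit_affine(src, dst):
            """Least-squares 2-D affine transform (first-order polynomial)
            mapping control points src -> dst, each of shape (n, 2)."""
            n = src.shape[0]
            A = np.hstack([src, np.ones((n, 1))])          # design matrix [x, y, 1]
            coef, *_ = np.linalg.lstsq(A, dst, rcond=None)
            return coef                                    # (3, 2) parameters; dst ~ A @ coef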

  19. Computational prediction of binding affinity for CYP1A2-ligand complexes using empirical free energy calculations

    DEFF Research Database (Denmark)

    Poongavanam, Vasanthanathan; Olsen, Lars; Jørgensen, Flemming Steen;

    2010-01-01

    … and methods based on statistical mechanics. In the present investigation, we started from an LIE model to predict the binding free energy of structurally diverse ligands of cytochrome P450 1A2, one of the important human metabolizing isoforms of the cytochrome P450 family. The data set … includes both substrates and inhibitors. It appears that the electrostatic contribution to the binding free energy becomes negligible in this particular protein, and a simple empirical model was derived based on a training set of eight compounds. The root mean square error for the training set was 3.7 k…
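
    For reference, the LIE ansatz that such models start from is commonly written as (a standard form from the literature, not necessarily the exact parametrization used in this work):

        \Delta G_{\mathrm{bind}} = \alpha \, \Delta\langle V_{\mathrm{vdW}} \rangle + \beta \, \Delta\langle V_{\mathrm{el}} \rangle + \gamma

    where Δ⟨·⟩ denotes bound-minus-free ensemble averages of the ligand's van der Waals and electrostatic interaction energies; the negligible electrostatic contribution reported above amounts to dropping the β term.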

  20. Dimension theory and fractal constructions based on self-affine carpets

    OpenAIRE

    Fraser, Jonathan M.

    2013-01-01

    The aim of this thesis is to develop the dimension theory of self-affine carpets in several directions. Self-affine carpets are an important class of planar self-affine sets which have received a great deal of attention in the literature on fractal geometry over the last 30 years. These constructions are important for several reasons. In particular, they provide a bridge between the relatively well-understood world of self-similar sets and the far from understood world of general self-affi...

  1. Affinity-Based Network Interfaces for Efficient Communication on Multicore Architectures

    Institute of Scientific and Technical Information of China (English)

    Andrés Ortiz; Julio Ortega; Antonio F.Díaz; Alberto Prieto

    2013-01-01

    Improving network interface performance is needed to meet the demands of applications with high communication requirements (for example, some multimedia, real-time, and high-performance computing applications) and the availability of network links providing bandwidths of multiple gigabits per second, which could require many processor cycles for communication tasks. Multicore architectures, the current trend in microprocessor development to cope with the difficulty of further increasing clock frequencies and microarchitecture efficiencies, provide new opportunities to exploit the parallelism available in the nodes for designing efficient communication architectures. Nevertheless, although present OS network stacks include multiple threads that make it possible to execute network tasks concurrently in the kernel, implementations of packet-based or connection-based parallelism are not trivial, as they have to take into account issues related to the cost of synchronization in the access to shared resources and the efficient use of caches. Therefore, a common trend in many recent studies on this topic is to assign network interrupts and the corresponding protocol and network application processing to the same core, as with this affinity scheduling it would be possible to reduce the contention for shared resources and the cache misses. In this paper we propose and analyze several configurations to distribute the network interface among the different cores available in the server. These alternatives have been devised according to the affinity of the corresponding communication tasks with the location (proximity to the memories where the different data structures are stored) and characteristics of the processing core. As this approach uses several cores to accelerate the communication path of a given connection, it can be seen as complementary to those that use several cores to simultaneously process packets belonging to either the same or different connections. Message passing
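
    As an operating-system-level illustration of affinity scheduling (a generic Linux sketch using Python's standard library, not the interrupt-steering mechanism evaluated in the paper):

        import os

        def pin_to_cores(cores):
            """Pin the calling process to the given CPU cores (Linux only),
            analogous in spirit to keeping interrupt, protocol and
            application processing of one connection on nearby cores."""
            os.sched_setaffinity(0, set(cores))   # 0 = current process
            return os.sched_getaffinity(0)        # effective affinity mask

        # e.g. pin_to_cores([0, 1]) keeps a network-handling process close to
        # the cores (and caches) servicing its NIC interrupts.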

  2. Development of an aptamer-based affinity purification method for vascular endothelial growth factor

    Directory of Open Access Journals (Sweden)

    Maren Lönne

    2015-12-01

    Full Text Available Since aptamers bind their targets with high affinity and specificity, they are promising alternative ligands for protein affinity purification. As aptamers are chemically synthesized oligonucleotides, they can easily be produced in large quantities under GMP conditions, allowing their application in protein production for therapeutic purposes. Several general advantages of aptamers compared to antibodies are described in this paper. Here, an aptamer directed against human Vascular Endothelial Growth Factor (VEGF) was used as the affinity ligand for establishing a small-scale purification platform for VEGF. The aptamer was covalently immobilized on magnetic beads in a controlled orientation, resulting in a functionally active affinity matrix. Target binding was optimized by the introduction of spacer molecules and variation of the aptamer density. Further, salt-induced target elution was demonstrated, as well as VEGF purification from a complex protein mixture, proving the specificity of the protein-aptamer binding.

  3. Dioxygen Affinities and Biomimetic Catalytic Performance of Transition-metal Complexes with Crowned Bis-Schiff Bases

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    The dioxygen affinities and biomimetic catalytic performance of transition-metal complexes with (15-crown-5) salophen and its substituted derivatives were examined. The oxygenation constants of the Co(II) complexes with crowned bis-Schiff bases were measured, and their Mn(III) complexes were employed as models to mimic monooxygenase in the catalytic epoxidation of styrene. The highest conversion and selectivity were up to 57.2% and 100%, respectively, at ambient temperature and pressure. The effects of the crown ether ring and the substituents R on the dioxygen affinities and catalytic activities were also investigated by comparison with the uncrowned analogues.

  4. GPU-based calculations in digital holography

    Science.gov (United States)

    Madrigal, R.; Acebal, P.; Blaya, S.; Carretero, L.; Fimia, A.; Serrano, F.

    2013-05-01

    In this work we apply GPUs (graphics processing units) with the CUDA environment to scientific calculations, specifically high-cost computations in the field of digital holography. We have studied three typical problems in digital holography: Fourier transforms, Fresnel reconstruction of the hologram, and calculation of the vectorial diffraction integral. In all cases the runtime at different image sizes and the corresponding accuracy were compared with those obtained by traditional calculation systems. The programs were run on a computer with a latest-generation graphics card, an Nvidia GTX 680, which is optimized for integer calculations. As a result, a large reduction of runtime has been obtained, allowing a significant improvement: 15-fold shorter times for Fresnel approximation calculations and 600-fold for the vectorial diffraction integral. These initial results open the possibility of applying such calculations to real-time digital holography.
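
    For orientation, a minimal single-FFT Fresnel reconstruction in NumPy (a generic sketch, not the authors' code; constant phase prefactors are dropped, and swapping numpy for the API-compatible cupy would move the FFTs onto the GPU):

        import numpy as np

        def fresnel_reconstruct(hologram, wavelength, distance, dx):
            """Single-FFT Fresnel reconstruction of a digital hologram.
            hologram : 2-D array of recorded intensities
            wavelength, distance, dx : metres (dx = detector pixel pitch)
            Constant phase prefactors are omitted; only |field|^2 is returned."""
            ny, nx = hologram.shape
            x = np.arange(-(nx // 2), nx - nx // 2) * dx
            y = np.arange(-(ny // 2), ny - ny // 2) * dx
            X, Y = np.meshgrid(x, y)
            chirp = np.exp(1j * np.pi / (wavelength * distance) * (X**2 + Y**2))
            field = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(hologram * chirp)))
            return np.abs(field) ** 2   # reconstructed intensity in the image plane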

  5. Multicarrier Communications Based on the Affine Fourier Transform in Doubly-Dispersive Channels

    Directory of Open Access Journals (Sweden)

    Djurović Igor

    2010-01-01

    Full Text Available The affine Fourier transform (AFT), a general formulation of chirp transforms, has recently been proposed for use in multicarrier communications. The AFT-based multicarrier (AFT-MC) system can be considered a generalization of orthogonal frequency division multiplexing (OFDM), frequently used in modern wireless communications. AFT-MC keeps all the important properties of OFDM and, in addition, gives a new degree of freedom in suppressing interference caused by Doppler spreading in time-varying multipath channels. We present a general interference analysis of the AFT-MC system that models both time and frequency dispersion effects. Upper and lower bounds on interference power are given, followed by an interference power approximation that significantly simplifies interference analysis. The optimal parameters are obtained in closed form, followed by an analysis of the effects of synchronization errors and of the optimal symbol period. A detailed interference analysis and optimal parameters are given for different aeronautical and land-mobile satellite (LMS) channel scenarios. It is shown that the AFT-MC system is able to match changes in these channels and efficiently reduce interference with high spectral efficiency.
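
    For reference, one common convention for the AFT (equivalently, the linear canonical transform) of a signal x(t), with parameters (a, b, c, d) satisfying ad − bc = 1 and b ≠ 0, is (notation may differ from the paper's):

        F_{(a,b,c,d)}(u) = \sqrt{\frac{1}{i\,2\pi b}} \int_{-\infty}^{\infty} x(t)\, \exp\!\left( \frac{i}{2b} \left( a\,t^{2} - 2\,u\,t + d\,u^{2} \right) \right) dt

    The ordinary Fourier transform underlying OFDM is recovered at (a, b, c, d) = (0, 1, −1, 0).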

  6. The Lectin Frontier Database (LfDB), and Data Generation Based on Frontal Affinity Chromatography

    Directory of Open Access Journals (Sweden)

    Jun Hirabayashi

    2015-01-01

    Full Text Available Lectins are a large group of carbohydrate-binding proteins, shown to comprise at least 48 protein scaffolds or protein family entries. They occur ubiquitously in living organisms, from humans to microorganisms (including viruses), and while their functions are yet to be fully elucidated, their main underlying actions are thought to mediate cell-cell and cell-glycoconjugate interactions, which play important roles in an extensive range of biological processes. The basic feature of each lectin's function resides in its specific sugar-binding properties. In this regard, it is beneficial for researchers to have access to fundamental information about the detailed oligosaccharide specificities of diverse lectins. In this review, the authors describe a publicly available lectin database named the "Lectin Frontier DataBase (LfDB)", which undertakes the continuous publication and updating of comprehensive data for lectin-standard oligosaccharide interactions in terms of dissociation constants (Kd's). For Kd determination, an advanced system of frontal affinity chromatography (FAC) is used, with which quantitative datasets of interactions between immobilized lectins and >100 fluorescently labeled standard glycans have been generated. The FAC system is unique in its clear principle, simple procedure and high sensitivity, with an increasing number (>67) of associated publications that attest to its reliability. Thus, LfDB is expected to play an essential role in lectin research, not only in basic but also in applied fields of glycoscience.

  7. Clustering Protein Sequences Using Affinity Propagation Based on an Improved Similarity Measure

    Directory of Open Access Journals (Sweden)

    Fan Yang

    2010-01-01

    Full Text Available The sizes of protein databases are growing rapidly nowadays, so it becomes increasingly important to cluster protein sequences based on sequence information alone. In this paper we improve the similarity measure proposed by Kelil et al., cluster sequences using the affinity propagation (AP) algorithm, and provide a method to decide the input preference of the AP algorithm. We tested our method extensively and compared its performance with four other methods on several datasets from the COG, G protein, CAZy and SCOP databases. We consistently observed that the number of clusters we obtained for a given set of proteins approximates the correct number of clusters in that set. Moreover, in our experiments the quality of the clusters, as quantified by F-measure, was better than that of the other algorithms (on average, 15% better than BlastClust, 56% better than TribeMCL, 23% better than CLUSS, and 42% better than spectral clustering).

  8. Design of Bcl-2 and Bcl-xL Inhibitors with Subnanomolar Binding Affinities Based upon a New Scaffold

    Energy Technology Data Exchange (ETDEWEB)

    Zhou, Haibin; Chen, Jianfang; Meagher, Jennifer L.; Yang, Chao-Yie; Aguilar, Angelo; Liu, Liu; Bai, Longchuan; Cong, Xin; Cai, Qian; Fang, Xueliang; Stuckey, Jeanne A.; Wang, Shaomeng (Michigan)

    2014-10-02

    Employing a structure-based strategy, we have designed a new class of potent small-molecule inhibitors of the anti-apoptotic proteins Bcl-2 and Bcl-xL. An initial lead compound with a new scaffold was designed based upon the crystal structure of Bcl-xL and U.S. Food and Drug Administration (FDA) approved drugs and was found to have an affinity of 100 µM for both Bcl-2 and Bcl-xL. Linking this weak lead to another weak-affinity fragment derived from Abbott's ABT-737 led to an improvement of the binding affinity by a factor of >10,000. Further optimization ultimately yielded compounds with subnanomolar binding affinities for both Bcl-2 and Bcl-xL and potent cellular activity. The best compound (21) binds to Bcl-xL and Bcl-2 with Ki < 1 nM, inhibits cell growth in the H146 and H1417 small-cell lung cancer cell lines with IC50 values of 60-90 nM, and induces robust cell death in the H146 cancer cell line at 30-100 nM.

  9. Image-Moment Based Affine Invariant Watermarking Scheme Utilizing Neural Networks

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    A new image watermarking scheme is proposed to resist rotation, scaling and translation (RST) attacks. Six combined low-order image moments are utilized to represent image information under rotation, scaling and translation. Affine transform parameters are registered by feedforward neural networks. The watermark is adaptively embedded in the discrete wavelet transform (DWT) domain, while watermark extraction is carried out without the original image after the attacked watermarked image has been synchronized by inverse transformation through the parameters learned by the neural networks. Experimental results show that the proposed scheme can effectively register affine transform parameters, embed the watermark robustly, and resist geometric attacks as well as JPEG2000 compression.

  10. An Activation Force-based Affinity Measure for Analyzing Complex Networks

    OpenAIRE

    Jun Guo; Hanliang Guo; Zhanyi Wang

    2011-01-01

    Affinity measure is a key factor that determines the quality of the analysis of a complex network. Here, we introduce a type of statistics, activation forces, to weight the links of a complex network and thereby develop a desired affinity measure. We show that the approach is superior in facilitating the analysis through experiments on a large-scale word network and a protein-protein interaction (PPI) network consisting of ∼5,000 human proteins. The experiment on the word network verifies tha...

  11. Linear Interaction Energy Based Prediction of Cytochrome P450 1A2 Binding Affinities with Reliability Estimation.

    Directory of Open Access Journals (Sweden)

    Luigi Capoferri

    Full Text Available Prediction of human Cytochrome P450 (CYP) binding affinities of small ligands, i.e., substrates and inhibitors, represents an important task for predicting drug-drug interactions. A quantitative assessment of the ligand binding affinity towards different CYPs can provide an estimate of inhibitory activity or an indication of isoforms prone to interact with the substrate or inhibitor. However, the accuracy of global quantitative models for CYP substrate binding or inhibition based on traditional molecular descriptors can be limited because of the lack of information on the structure and flexibility of the catalytic site of CYPs. Here we describe the application of a method that combines protein-ligand docking, Molecular Dynamics (MD) simulations and Linear Interaction Energy (LIE) theory to allow for quantitative CYP affinity prediction. Using this combined approach, a LIE model for human CYP 1A2 was developed and evaluated, based on a structurally diverse dataset for which the estimated experimental uncertainty was 3.3 kJ mol-1. For the computed CYP 1A2 binding affinities, the model showed a root mean square error (RMSE) of 4.1 kJ mol-1 and a standard error in prediction (SDEP) in cross-validation of 4.3 kJ mol-1. A novel approach that includes information on both structural ligand description and protein-ligand interaction was developed for estimating the reliability of predictions, and it was able to identify compounds from an external test set with an SDEP for the predicted affinities of 4.6 kJ mol-1 (corresponding to 0.8 pKi units).

  12. Experimental study of methane hydrate formation kinetics with or without additives and modeling based on chemical affinity

    International Nuclear Information System (INIS)

    Highlights: • Applying chemical affinity to investigate the effects of additives. • Effects of thermodynamic additives on methane hydrate formation kinetics. • Determining kinetic parameters for methane hydrate formation with additives. • A unique path for methane hydrate formation with aqueous solutions of additives. - Abstract: In this work, the methane hydrate formation process (a process for energy conversion and cool-energy storage) with and without additives was investigated. First, the effects of initial pressure, three surfactants (sodium dodecyl sulfate (SDS), dodecyltrimethyl ammonium bromide (DTAB) and Triton X-100 (TX-100)) and two thermodynamic additives (tetrahydrofuran (THF) and tetrabutyl ammonium bromide (TBAB)) on methane hydrate formation kinetics were studied experimentally. Then, macroscopic modeling of methane hydrate formation kinetics with and without additives based on chemical affinity was carried out. The kinetic parameters of the chemical affinity model were determined for methane hydrate formation with and without additives. The effects of initial pressure and additives on the chemical affinity model parameters were also investigated. In addition, the results of the model were in good agreement with the experimental data.

  13. Bioanalytical applications of affinity-based nanotube membranes for sensing and separations

    Science.gov (United States)

    Caicedo, Hector Mario

    2008-11-01

    Nanotechnology has played an important role in the development of research and technology during the last two decades. The contribution of nanotechnology in different fields, along with the versatility of the constructed nanoscale materials, has made nanotechnology one of the most suitable tools for developing particular nanostructures that realize a desired function and application. A nanostructure is simply an entity at the nanometer scale with one-, two- or three-dimensional features. Since nanotechnology covers a broad range of nanoscale materials, it can be classified into two categories based on how the nanostructures are prepared: top-down and bottom-up. In top-down methods, the nanostructures are constructed by chiseling larger bulk materials into entities of smaller size. Conversely, in the bottom-up case, small units are grown or assembled into their desired size and shape. Nanoporous materials in particular have attracted much attention because they can be used for the synthesis of a variety of functional nanostructures of great usefulness in technology. These porous nanostructures usually combine many of the advantages of the top-down and bottom-up methodologies, such as flexibility, size controllability, and cost. The research presented in this work utilizes nanoporous membranes to develop porous nanostructured platforms with potential applications in sensing and separations. In particular, this work centers on fundamental studies for bioanalytical applications of affinity-based nanotube membranes for sensing and separations. A bottom-up methodology, template synthesis, was used to produce silica nanotubes inside the pores of an alumina membrane. Functionalization of the inside walls of these silica nanotube membranes allowed control of the functional behavior and properties of the nanostructured membrane during membrane-based separations and sensing. The general scheme of the work presented here is

  14. Nonlinear scoring functions for similarity-based ligand docking and binding affinity prediction.

    Science.gov (United States)

    Brylinski, Michal

    2013-11-25

    A common strategy for virtual screening considers a systematic docking of a large library of organic compounds into the target sites in protein receptors, with promising leads selected based on favorable intermolecular interactions. Despite continuous progress in the modeling of protein-ligand interactions for pharmaceutical design, important challenges still remain, so the development of novel techniques is required. In this communication, we describe eSimDock, a new approach to ligand docking and binding affinity prediction. eSimDock employs nonlinear machine-learning-based scoring functions to improve the accuracy of ligand ranking and similarity-based binding pose prediction, and to increase the tolerance to structural imperfections in the target structures. In large-scale benchmarking using the Astex/CCDC data set, we show that 53.9% (67.9%) of the predicted ligand poses have RMSD of <2 Å (<3 Å). Moreover, using binding sites predicted by the recently developed eFindSite, eSimDock models ligand binding poses with an RMSD of 4 Å for 50.0-39.7% of the complexes at protein homology levels limited to 80-40%. Simulations against non-native receptor structures, whose mean backbone rearrangements vary from 0.5 to 5.0 Å Cα-RMSD, show that the ratio of docking accuracy to the estimated upper bound stays at a constant level of ∼0.65. The Pearson correlation coefficient between experimental Ki values and those predicted by eSimDock for a large data set of crystal structures of protein-ligand complexes from BindingDB is 0.58, which decreases only to 0.46 when target structures distorted to 3.0 Å Cα-RMSD are used. Finally, two case studies demonstrate that eSimDock can be customized to specific applications as well. These encouraging results show that the performance of eSimDock is largely unaffected by deformations of ligand binding regions, and it thus represents a practical strategy for across-proteome virtual screening using protein models. eSimDock is freely

  15. Writing in the Wild: Writers' Motivation in Fan-Based Affinity Spaces

    Science.gov (United States)

    Curwood, Jen Scott; Magnifico, Alecia Marie; Lammers, Jayne C.

    2013-01-01

    In order to understand the culture of the physical, virtual, and blended spheres that adolescents inhabit, we build on Gee's concept of affinity spaces. Drawing on our ethnographic research of adolescent literacies related to The Hunger Games novels, the Neopets online game, and The Sims videogames, this article explores the nature of…

  16. Reduction of false sharing by using process affinity in page-based distributed shared memory multiprocessor systems

    OpenAIRE

    Hung, KP; Cheung, PYS; Yung, NHC

    1996-01-01

    In page-based distributed shared memory systems, a large page size makes efficient use of the interconnection network but increases the chance of false sharing, while a small page size reduces the level of false sharing but results in inefficient use of the network. This paper proposes a technique that uses process affinity to achieve data-page clustering so as to optimize the temporal data locality of DSM systems, thereby reducing the chance of false sharing and improving the data local...

  17. Fractal-Based Exponential Distribution of Urban Density and Self-Affine Fractal Forms of Cities

    CERN Document Server

    Chen, Yanguang

    2016-01-01

    Urban population density always follows an exponential distribution and can be described with Clark's model. Because of this, the spatial distribution of urban population used to be regarded as a non-fractal pattern. However, Clark's model differs mathematically from a plain exponential function because urban population is distributed on the fractal support of landform and land-use form. Using mathematical transforms and empirical evidence, we argue that there are self-affine scaling relations and local power laws behind the exponential distribution of urban density. The scale parameter of Clark's model, indicating the characteristic radius of cities, is not really a constant but depends on how the urban field is defined. The exponential model therefore suggests a local fractal structure with two kinds of fractal parameters. The parameters can be used to characterize urban space filling, spatial correlation, self-affine properties, and self-organized evolution. The case study of the city of Hangzhou, China, is employed to ...

  18. Modification of silica-based monolithic capillary columns for boronate affinity chromatography

    Czech Academy of Sciences Publication Activity Database

    Moravcová, Dana; Planeta, Josef; Kahle, Vladislav; Roth, Michal

    2011. P2-G-471-WE. ISBN 978-963-89335-0-8. [International Symposium on High-Performance Liquid Phase Separations and Related Techniques /36./. 19.06.2011-23.06.2011, Budapest] R&D Projects: GA AV ČR IAAX00310701; GA MŠk LC06023 Institutional research plan: CEZ:AV0Z40310501 Keywords: silica gel monolithic column * boronate affinity chromatography Subject RIV: CB - Analytical Chemistry, Separation

  19. Nanotechnology-Based Surface Plasmon Resonance Affinity Biosensors for In Vitro Diagnostics.

    Science.gov (United States)

    Antiochia, Riccarda; Bollella, Paolo; Favero, Gabriele; Mazzei, Franco

    2016-01-01

    In the last decades, in vitro diagnostic devices (IVDDs) have become a very important tool in medicine for early and correct diagnosis, proper screening of targeted populations, and also assessing the efficiency of a specific therapy. In this review, the most recent developments regarding different configurations of surface plasmon resonance affinity biosensors modified using several nanostructured materials for in vitro diagnostics are critically discussed. Both the assembly and the performance of the IVDDs tested in biological samples are reported and compared. PMID:27594884

  20. Passive Fault Tolerant Control of Piecewise Affine Systems Based on H Infinity Synthesis

    DEFF Research Database (Denmark)

    Gholami, Mehdi; Cocquempot, vincent; Schiøler, Henrik; Bak, Thomas

    2011-01-01

    In this paper we design a passive fault-tolerant controller against actuator faults for discrete-time piecewise affine (PWA) systems. By using dissipativity theory and H-infinity analysis, fault-tolerant state-feedback controller design is expressed as a set of Linear Matrix Inequalities (LMIs). In the current paper, the PWA system switches not only due to the state but also due to the control input. The method is applied to a large-scale livestock ventilation model.

  1. Improved efficient proportionate affine projection algorithm based on l0-norm for sparse system identification

    Directory of Open Access Journals (Sweden)

    Haiquan Zhao

    2014-01-01

    Full Text Available A new improved memorised improved proportionate affine projection algorithm (IMIPAPA) is proposed to improve the convergence performance of sparse system identification; it incorporates the l0-norm as a measure of sparseness into the recently proposed MIPAPA algorithm. In addition, a simplified implementation of the IMIPAPA (SIMIPAPA) with low computational burden is presented that maintains the same convergence performance. The simulation results demonstrate that the IMIPAPA and SIMIPAPA algorithms outperform the MIPAPA algorithm for sparse system identification.
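
    For background, one iteration of the basic affine projection algorithm that these proportionate variants build on (a generic NumPy sketch; IMIPAPA additionally applies per-tap proportionate gains and an l0-norm sparseness term):

        import numpy as np

        def apa_update(w, X, d, mu=0.5, delta=1e-4):
            """One update of the basic affine projection algorithm.
            w : current filter taps, shape (L,)
            X : the P most recent input vectors, shape (P, L)
            d : corresponding desired samples, shape (P,)"""
            e = d - X @ w                                # a-priori error vector
            G = X @ X.T + delta * np.eye(X.shape[0])     # regularized Gram matrix
            w = w + mu * X.T @ np.linalg.solve(G, e)     # projection-based update
            return w, e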

  2. Automated evaluation of protein binding affinity of anti-inflammatory choline based ionic liquids.

    Science.gov (United States)

    Ribeiro, Rosa; Pinto, Paula C A G; Azevedo, Ana M O; Bica, Katharina; Ressmann, Anna K; Reis, Salette; Saraiva, M Lúcia M F S

    2016-04-01

    In this work, an automated system for the study of the interaction of drugs with human serum albumin (HSA) was developed. The methodology is based on the quenching of the intrinsic fluorescence of HSA upon binding of a drug to one of its binding sites. The fluorescence quenching assay was implemented in a sequential injection analysis (SIA) system, and the optimized assay was applied to ionic liquids based on the association of non-steroidal anti-inflammatory drugs with choline (IL-API). In each cycle, 100 µL of HSA and 100 µL of IL-API (variable concentration) were aspirated at a flow rate of 1 mL min(-1) and then sent through the reaction coil to the detector, where the fluorescence intensity was measured. Under the optimized conditions, the effect of increasing concentrations of choline ketoprofenate and choline naproxenate (and the respective starting materials, ketoprofen and naproxen) on the intrinsic fluorescence of HSA was studied, and the dissociation constants (Kd) were calculated by means of models of drug-protein binding at equilibrium. The calculated Kd values showed that all the compounds bind strongly to HSA (Kd < 100 µmol L(-1)) and that use of the drugs in the IL format does not affect, or can even improve, their HSA binding. The results were compared with those provided by a conventional batch assay, and the relative errors were lower than 4.5%. The developed SIA methodology proved robust and exhibited good repeatability under all assay conditions (RSD < 6.5%). PMID:26838377
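
    A minimal sketch of how a dissociation constant can be extracted from such quenching data with a generic single-site binding isotherm (illustrative only; the paper fits its own equilibrium drug-protein binding models):

        import numpy as np
        from scipy.optimize import curve_fit

        def one_site(conc, f0, f_inf, kd):
            """Single-site binding isotherm: fluorescence relaxes from f0
            (no drug) towards f_inf (saturation) with dissociation constant kd."""
            return f0 + (f_inf - f0) * conc / (kd + conc)

        # conc: drug concentrations, fluo: measured HSA fluorescence intensities
        # popt, _ = curve_fit(one_site, conc, fluo, p0=[fluo.max(), fluo.min(), 10.0])
        # kd = popt[2]   # in the same units as conc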

  3. A note on geometric method-based procedures to calculate the Hurst exponent

    Science.gov (United States)

    Trinidad Segovia, J. E.; Fernández-Martínez, M.; Sánchez-Granero, M. A.

    2012-03-01

    Geometric method-based procedures, which we will call GM algorithms hereafter, were introduced in M.A. Sánchez-Granero, J.E. Trinidad Segovia, J. García Pérez, Some comments on Hurst exponent and the long memory processes on capital markets, Phys. A 387 (2008) 5543-5551, to calculate the Hurst exponent of a time series. The authors proved that GM algorithms, based on a geometrical approach, are more accurate than classical algorithms, especially for short time series. The main contribution of this paper is to provide a mathematical background for the validity of these two algorithms for calculating the Hurst exponent H of random processes with stationary and self-affine increments. In particular, we show that these procedures are valid not only for exploring long memory in classical processes such as (fractional) Brownian motions, but also for estimating the Hurst exponent of (fractional) Lévy stable motions.
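
    For orientation, a crude estimator that exploits the same self-affine increment scaling analyzed in the paper (the standard deviation of lag-τ increments grows like τ^H for a self-affine process with stationary increments); this is a naive scaling fit, not one of the GM algorithms themselves:

        import numpy as np

        def hurst_by_scaling(x, max_lag=32):
            """Estimate H from std(x[t+lag] - x[t]) ~ lag**H.
            x : 1-D array holding the process itself (e.g. a price path or
            fractional Brownian motion), not its increments."""
            lags = np.arange(2, max_lag)
            s = [np.std(x[lag:] - x[:-lag]) for lag in lags]
            H, _ = np.polyfit(np.log(lags), np.log(s), 1)   # slope = H
            return H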

  4. Affine dynamics with torsion

    Energy Technology Data Exchange (ETDEWEB)

    Gueltekin, Kemal [Izmir Institute of Technology, Department of Physics, Izmir (Turkey)

    2016-03-15

    In this study, we give a thorough analysis of a general affine gravity with torsion. After a brief exposition of the affine gravities considered by Eddington and Schroedinger, we construct and analyze different affine gravities based on the determinants of the Ricci tensor, the torsion tensor, the Riemann tensor, and their combinations. In each case we reduce equations of motion to their simplest forms and give a detailed analysis of their solutions. Our analyses lead to the construction of the affine connection in terms of the curvature and torsion tensors. Our solutions of the dynamical equations show that the curvature tensors at different points are correlated via non-local, exponential rescaling factors determined by the torsion tensor. (orig.)

  5. Adjoint affine fusion and tadpoles

    Science.gov (United States)

    Urichuk, Andrew; Walton, Mark A.

    2016-06-01

    We study affine fusion with the adjoint representation. For simple Lie algebras, elementary and universal formulas determine the decomposition of a tensor product of an integrable highest-weight representation with the adjoint representation. Using the (refined) affine depth rule, we prove that equally striking results apply to adjoint affine fusion. For diagonal fusion, a coefficient equals the number of nonzero Dynkin labels of the relevant affine highest weight, minus 1. A nice lattice-polytope interpretation follows and allows the straightforward calculation of the genus-1 1-point adjoint Verlinde dimension, the adjoint affine fusion tadpole. Explicit formulas, (piecewise) polynomial in the level, are written for the adjoint tadpoles of all classical Lie algebras. We show that off-diagonal adjoint affine fusion is obtained from the corresponding tensor product by simply dropping non-dominant representations.

  6. Adjoint affine fusion and tadpoles

    CERN Document Server

    Urichuk, Andrew

    2016-01-01

    We study affine fusion with the adjoint representation. For simple Lie algebras, elementary and universal formulas determine the decomposition of a tensor product of an integrable highest-weight representation with the adjoint representation. Using the (refined) affine depth rule, we prove that equally striking results apply to adjoint affine fusion. For diagonal fusion, a coefficient equals the number of nonzero Dynkin labels of the relevant affine highest weight, minus 1. A nice lattice-polytope interpretation follows, and allows the straightforward calculation of the genus-1 1-point adjoint Verlinde dimension, the adjoint affine fusion tadpole. Explicit formulas, (piecewise) polynomial in the level, are written for the adjoint tadpoles of all classical Lie algebras. We show that off-diagonal adjoint affine fusion is obtained from the corresponding tensor product by simply dropping non-dominant representations.

  7. Local release from affinity-based polymers increases urethral concentration of the stem cell chemokine CCL7 in rats.

    Science.gov (United States)

    Rivera-Delgado, Edgardo; Sadeghi, Zhina; Wang, Nick X; Kenyon, Jonathan; Satyanarayan, Sapna; Kavran, Michael; Flask, Chris; Hijaz, Adonis Z; von Recum, Horst A

    2016-01-01

    The protein chemokine (C-C motif) ligand 7 (CCL7) is significantly over-expressed in urethral and vaginal tissues immediately following vaginal distention in a rat model of stress urinary incontinence. Further evidence, in this and other clinical scenarios, indicates that CCL7 stimulates stem cell homing for regenerative repair. This CCL7 gradient is likely absent or compromised in the natural repair process of women who continue to suffer from SUI into advanced age. We evaluated the feasibility of locally providing this missing CCL7 gradient by means of an affinity-based implantable polymer. To engineer these polymers, we screened different proteoglycans for their affinity as CCL7-binding hosts. We found heparin to be the strongest binding host for CCL7, with a 0.323 nM dissociation constant. Our experimental approach indicates that conjugation of heparin to a polymer backbone (using either bovine serum albumin or poly(ethylene glycol) as the base polymer) can serve as a delivery system capable of providing sustained concentrations of CCL7 in a therapeutically useful range for up to a month in vitro. With this approach we are able to detect, after polymer implantation, a significant increase of CCL7 in the urethral tissue directly surrounding the polymer implants, with only trace amounts of human CCL7 present in the blood of the animals. Whole-animal serial sectioning shows evidence of retention of locally injected human mesenchymal stem cells (hMSCs) only in animals with sustained CCL7 delivery, 2 weeks after the affinity polymers were implanted. PMID:27097800

  8. Structure-based rational design of a Toll-like receptor 4 (TLR4) decoy receptor with high binding affinity for a target protein.

    Directory of Open Access Journals (Sweden)

    Jieun Han

    Full Text Available Repeat proteins are increasingly attracting much attention as alternative scaffolds to immunoglobulin antibodies due to their unique structural features. Nonetheless, engineering the interaction interface and understanding the molecular basis of affinity maturation of repeat proteins still remain a challenge. Here, we present a structure-based rational design of a repeat protein with high binding affinity for a target protein. As a model repeat protein, a Toll-like receptor 4 (TLR4) decoy receptor composed of leucine-rich repeat (LRR) modules was used, and its interaction interface was rationally engineered to increase the binding affinity for myeloid differentiation protein 2 (MD2). Based on the complex crystal structure of the decoy receptor with MD2, we first designed single amino acid substitutions in the decoy receptor and obtained three variants showing a binding affinity (KD) one order of magnitude higher than the wild-type decoy receptor. The interacting modes and contributions of individual residues were elucidated by analyzing the crystal structures of the single variants. To further increase the binding affinity, single positive mutations were combined, and two double mutants were shown to have about 3000- and 565-fold higher binding affinities than the wild-type decoy receptor. Molecular dynamics simulations and energetic analysis indicate that an additive effect of two mutations occurring at nearby modules was the major contributor to the remarkable increase in binding affinity.

  9. Prediction of binding modes and affinities of 4-substituted-2,3,5,6-tetrafluorobenzenesulfonamide inhibitors to the carbonic anhydrase receptor by docking and ONIOM calculations.

    Science.gov (United States)

    Samanta, Pabitra Narayan; Das, Kalyan Kumar

    2016-01-01

    Inhibition activities of a series of 4-substituted-2,3,5,6-tetrafluorobenzenesulfonamides against the human carbonic anhydrase II (HCAII) enzyme have been explored by employing molecular docking and hybrid QM/MM methods. The docking protocol has been employed to assess the best pose of each ligand in the active-site cavity of the enzyme and to probe the interactions with the amino acid residues. The docking calculations reveal that the inhibitor binds to the catalytic Zn(2+) site through the deprotonated sulfonamide nitrogen atom, making several hydrophobic and hydrogen-bond interactions with the side-chain residues depending on the substituted moiety. A cross-docking approach has been adopted prior to the hybrid QM/MM calculation to validate the docked poses. A correlation between the experimental dissociation constants and the docked free energies for the enzyme-inhibitor complexes has been established. Two-layered ONIOM calculations based on the QM/MM approach have been performed to evaluate the binding efficacy of the inhibitors. The inhibitor potency has been predicted from the computed binding energies after taking into account the electronic phenomena associated with enzyme-inhibitor interactions. Both hybrid (B3LYP) and meta-hybrid (M06-2X) functionals are used for the description of the QM region. To improve the correlation between the experimental biological activity and the theoretical results, a three-layered ONIOM calculation has been carried out and verified for some of the selected inhibitors. The charge-transfer stabilization energies are calculated via natural bond orbital analysis to recognize the donor-acceptor interactions in the binding pocket of the enzyme. The nature of binding between the inhibitors and the HCAII active site is further analyzed from the electron density distribution maps. PMID:26619075

  10. Self-Powered Wireless Affinity-Based Biosensor Based on Integration of Paper-Based Microfluidics and Self-Assembled RFID Antennas.

    Science.gov (United States)

    Yuan, Mingquan; Alocilja, Evangelyn C; Chakrabartty, Shantanu

    2016-08-01

    This paper presents a wireless, self-powered, affinity-based biosensor based on the integration of paper-based microfluidics with our previously reported method for self-assembling radio-frequency (RF) antennas. At the core of the proposed approach is a silver-enhancement technique that grows portions of an RF antenna in regions where target antigens hybridize with target-specific affinity probes. The hybridization regions are defined by a network of nitrocellulose-based microfluidic channels which implement a self-powered approach to sample the reagent and control its flow and mixing. The integration substrate for the biosensor has been constructed using polyethylene, and the patterning of the antenna on the substrate has been achieved using a low-cost ink-jet printing technique. The substrate has been integrated with passive radio-frequency identification (RFID) tags to demonstrate that the resulting sensor-tag can be used for continuous monitoring in a food supply chain, where direct measurement of analytes is typically considered impractical. We validate the proof-of-concept operation of the proposed sensor-tag using IgG as a model analyte and a 915 MHz ultra-high-frequency (UHF) RFID tagging technology. PMID:27214914

  11. Affinity chromatography of GroEL chaperonin based on denatured proteins: role of electrostatic interactions in regulation of GroEL affinity for protein substrates.

    Science.gov (United States)

    Marchenko, N Iu; Marchenkov, V V; Kaĭsheva, A L; Kashparov, I A; Kotova, N V; Kaliman, P A; Semisotnov, G V

    2006-12-01

    The chaperonin GroEL of the heat shock protein family from Escherichia coli cells can bind various polypeptides lacking rigid tertiary structure, thus preventing their nonspecific association and providing for acquisition of the native conformation. In the present work we studied the interaction of GroEL with six denatured proteins (alpha-lactalbumin, ribonuclease A, egg lysozyme in the presence of dithiothreitol, pepsin, beta-casein, and apocytochrome c) possessing negative or positive total charge at neutral pH and differing in hydrophobicity (affinity for the hydrophobic probe ANS). To prevent nonspecific association of the non-native proteins from influencing their interaction with GroEL, and to make recording of complex formation easier, the proteins were covalently attached to BrCN-activated Sepharose. At low ionic strength (below 60 mM), tight binding of the negatively charged denatured proteins to GroEL (which is also negatively charged) required relatively low concentrations (approximately 10 mM) of the bivalent cations Mg2+ or Ca2+. At high ionic strength (approximately 600 mM), a tight complex was also formed in the absence of bivalent cations. In contrast, positively charged denatured proteins interacted tightly with GroEL irrespective of the presence of bivalent cations and the ionic strength of the solution (from 20 to 600 mM). These features of the GroEL interaction with positively and negatively charged denatured proteins were confirmed by polarized fluorescence (fluorescence anisotropy). The findings suggest that the affinity of GroEL for denatured proteins can be determined by the balance of hydrophobic and electrostatic interactions. PMID:17223789

  12. Self-training-based face recognition using semi-supervised linear discriminant analysis and affinity propagation.

    Science.gov (United States)

    Gan, Haitao; Sang, Nong; Huang, Rui

    2014-01-01

    Face recognition is one of the most important applications of machine learning and computer vision. Traditional supervised learning methods require a large amount of labeled face images to achieve good performance. In practice, however, labeled images are usually scarce while unlabeled ones may be abundant. In this paper, we introduce a semi-supervised face recognition method in which semi-supervised linear discriminant analysis (SDA) and affinity propagation (AP) are integrated into a self-training framework. In particular, SDA is employed to compute the face subspace using both labeled and unlabeled images, and AP is used to identify the exemplars of the different face classes in the subspace. The unlabeled data can then be classified according to the exemplars, the newly labeled data with the highest confidence are added to the labeled data, and the whole procedure iterates until convergence. A series of experiments on four face datasets is carried out to evaluate the performance of our algorithm. Experimental results illustrate that our algorithm outperforms the other unsupervised, semi-supervised, and supervised methods. PMID:24561932

  13. Affinity improvement of a therapeutic antibody by structure-based computational design: generation of electrostatic interactions in the transition state stabilizes the antibody-antigen complex.

    Directory of Open Access Journals (Sweden)

    Masato Kiyoshi

    Full Text Available The optimization of antibodies is a desirable goal towards the development of better therapeutic strategies. The antibody 11K2 was previously developed as a therapeutic tool for inflammatory diseases and displays very high affinity (4.6 pM) for its antigen, the chemokine MCP-1 (monocyte chemo-attractant protein-1). We have employed a virtual library of mutations of 11K2 to identify antibody variants of potentially higher affinity, and to establish benchmarks in the engineering of a mature therapeutic antibody. The most promising candidates identified in the virtual screening were examined by surface plasmon resonance to validate the computational predictions and to characterize their binding affinity and key thermodynamic properties in detail. Only mutations in the light chain of the antibody are effective at enhancing its affinity for the antigen in vitro, suggesting that the interaction surface of the heavy chain (dominated by the hot-spot residue Phe101) is not amenable to optimization. The single mutation with the highest affinity is L-N31R (4.6-fold higher affinity than the wild-type antibody). Importantly, all the single mutations showing increased affinity incorporate a charged residue (Arg, Asp, or Glu). The characterization of the relevant thermodynamic parameters clarifies the energetic mechanism. Essentially, the formation of new electrostatic interactions early in the binding reaction coordinate (transition state or earlier) benefits the durability of the antibody-antigen complex. The combination of in silico calculations and thermodynamic analysis is an effective strategy to improve the affinity of a matured therapeutic antibody.

  14. Polymer-conjugated albumin and fibrinogen composite hydrogels as cell scaffolds designed for affinity-based drug delivery.

    Science.gov (United States)

    Oss-Ronen, Liat; Seliktar, Dror

    2011-01-01

    Serum albumin was conjugated to poly(ethylene glycol) (PEG) and cross-linked to form mono-PEGylated albumin hydrogels. These hydrogels were used as the basis for drug-carrying tissue engineering scaffold materials, exploiting the natural affinity of various drugs and compounds for the tethered albumin in the polymer network. The results of the drug release validation experiments showed that the release kinetics of the drugs from the mono-PEGylated albumin hydrogels were controlled by the molecular weight (MW) of the PEG conjugated to the albumin protein, the drug MW and its inherent affinity for albumin. Composite hydrogels containing both mono-PEGylated albumin and PEGylated fibrinogen were used specifically for three-dimensional (3D) cell culture scaffolds, with inherent bioactivity, proteolytic biodegradability and controlled drug release properties. The specific characteristics of these complex hydrogels were governed by the ratio between the concentrations of the two proteins, the addition of free PEG diacrylate (PEG-DA) molecules to the hydrogel matrix and the MW of the PEG conjugated to each protein. Comprehensive characterization of the drug release and degradation properties, as well as 3D cell culture experiments using these composite materials, demonstrated the effectiveness of this combined approach in creating a tissue engineering scaffold material with controlled drug release features. PMID:20643230

  15. Affinity Purification of Insulin by Peptide-Ligand Affinity Chromatography

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    The affinity heptapeptide (HWWWPAS) for insulin, selected from a phage display library, was coupled to EAH Sepharose 4B gel and packed into a 1-mL column. The column was used for the affinity purification of insulin from a protein mixture and a commercial insulin preparation. It was observed that the minor impurity in the commercial insulin was removed by the affinity chromatography. Nearly 40 mg of insulin could be purified with the 1-mL affinity column. The results revealed the high specificity and capacity of the affinity column for insulin purification. Moreover, based on analysis of the amino acids in the peptide sequence, shorter peptides were designed and synthesized for insulin chromatography. As a result, HWWPS was found to be a good alternative to HWWWPAS, while the other two peptides, with three or four amino acids, showed only weak affinity for insulin. The results indicated that most of the HWWWPAS sequence is required for specific binding of insulin.

  16. A Mixed Approach to Similarity Metric Selection in Affinity Propagation-Based WiFi Fingerprinting Indoor Positioning.

    Science.gov (United States)

    Caso, Giuseppe; de Nardis, Luca; di Benedetto, Maria-Gabriella

    2015-01-01

    The weighted k-nearest neighbors (WkNN) algorithm is by far the most popular choice in the design of fingerprinting indoor positioning systems based on WiFi received signal strength (RSS). WkNN estimates the position of a target device by selecting k reference points (RPs) based on the similarity of their fingerprints with the measured RSS values. The position of the target device is then obtained as a weighted sum of the positions of the k RPs. Two-step WkNN positioning algorithms were recently proposed, in which RPs are divided into clusters using the affinity propagation clustering algorithm, and one representative for each cluster is selected. Only cluster representatives are then considered during the position estimation, leading to a significant computational complexity reduction compared to traditional, flat WkNN. Flat and two-step WkNN share the issue of properly selecting the similarity metric so as to guarantee good positioning accuracy: in two-step WkNN, in particular, the metric impacts three different steps in the position estimation, namely cluster formation, cluster selection, and RP selection and weighting. So far, however, the only similarity metric considered in the literature was the one proposed in the original formulation of the affinity propagation algorithm. This paper fills this gap by comparing different metrics and, based on this comparison, proposes a novel mixed approach in which different metrics are adopted in the different steps of the position estimation procedure. The analysis is supported by an extensive experimental campaign carried out in a multi-floor 3D indoor positioning testbed. The impact of similarity metrics and their combinations on the structure and size of the resulting clusters, 3D positioning accuracy and computational complexity are investigated. Results show that the adoption of metrics different from the one proposed in the original affinity propagation algorithm and, in particular, the combination of different metrics in the different steps of the estimation procedure can improve positioning performance.
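
    For orientation, a compact sketch of the flat WkNN estimate described above, using plain Euclidean distance in RSS space with inverse-distance weights; the fingerprint data and the choice of metric are illustrative assumptions, not the paper's mixed approach.

    ```python
    import numpy as np

    def wknn_position(rss, fingerprints, positions, k=4):
        """rss: measured RSS vector; fingerprints: (n_RP, n_AP) stored RSS
        fingerprints; positions: (n_RP, 3) reference-point coordinates."""
        d = np.linalg.norm(fingerprints - rss, axis=1)  # distance in RSS space
        top = np.argsort(d)[:k]                         # k most similar RPs
        w = 1.0 / (d[top] + 1e-9)                       # inverse-distance weights
        return (positions[top] * w[:, None]).sum(axis=0) / w.sum()
    ```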

  17. Calculation of electromagnetic parameter based on interpolation algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Wenqiang, E-mail: zwqcau@gmail.com [College of Engineering, China Agricultural University, Beijing 100083 (China); Bionic and Micro/Nano/Bio Manufacturing Technology Research Center, Beihang University, Beijing 100191 (China); Yuan, Liming; Zhang, Deyuan [Bionic and Micro/Nano/Bio Manufacturing Technology Research Center, Beihang University, Beijing 100191 (China)

    2015-11-01

    Wave-absorbing material is an important functional material for electromagnetic protection. The wave-absorbing characteristics depend on the electromagnetic parameters of the mixed media. In order to accurately predict the electromagnetic parameters of mixed media and facilitate the design of wave-absorbing materials, this paper studied two different interpolation methods for the electromagnetic parameters, Lagrange interpolation and Hermite interpolation, based on the electromagnetic parameters of mixtures of spherical and flaky carbonyl iron in a paraffin base. The results showed that Hermite interpolation is more accurate than Lagrange interpolation, and the reflectance calculated with the electromagnetic parameters obtained by interpolation is on the whole consistent with that obtained through experiment. - Highlights: • We use an interpolation algorithm to calculate EM parameters from limited samples. • Interpolation can predict EM parameters well with different particles added. • Hermite interpolation is more accurate than Lagrange interpolation. • Calculating RL based on interpolation is consistent with calculating RL from experiment.
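
    The comparison above can be reproduced in outline with standard routines; the sample points below are placeholders, not the paper's measured permittivity data, and Hermite interpolation additionally requires derivative estimates at the nodes.

    ```python
    import numpy as np
    from scipy.interpolate import lagrange, CubicHermiteSpline

    x = np.array([0.0, 1.0, 2.0, 3.0])      # e.g. filler volume fractions (made up)
    y = np.array([2.1, 2.6, 3.4, 4.9])      # e.g. measured permittivity (made up)
    dydx = np.gradient(y, x)                # crude derivative estimates at the nodes

    p_lag = lagrange(x, y)                  # single global Lagrange polynomial
    p_her = CubicHermiteSpline(x, y, dydx)  # piecewise Hermite interpolant

    xq = 1.5
    print(p_lag(xq), p_her(xq))             # interpolated values at a query point
    ```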

  19. Innovative Product Design Based on Customer Requirement Weight Calculation Model

    Institute of Scientific and Technical Information of China (English)

    Chen-Guang Guo; Yong-Xian Liu; Shou-Ming Hou; Wei Wang

    2010-01-01

    In the processes of product innovation and design, it is important for designers to find and capture the customers' focus through customer requirement weight calculation and ranking. Based on fuzzy set theory and the Euclidean space distance, this paper puts forward a method for customer requirement weight calculation called the Euclidean space distance weighting ranking method. This method is used in the fuzzy analytic hierarchy process that satisfies the additive consistent fuzzy matrix. A model for the weight calculation steps is constructed; meanwhile, a product innovation design module on the basis of the customer requirement weight calculation model is developed. Finally, using the example of titanium sponge production, the customer requirement weight calculation model is validated. Using the innovation design module, the structure of the titanium sponge reactor was improved and made innovative.

  20. Programmable calculator: alternative to minicomputer-based analyzer

    International Nuclear Information System (INIS)

    Described are a number of typical field and laboratory counting systems that use standard stand-alone multichannel analyzers (MCAs) interfaced to a Hewlett-Packard Company (HP 9830) programmable calculator. Such systems can offer significant advantages in cost and flexibility over a minicomputer-based system. Because most laboratories tend to accumulate MCAs over the years, the programmable calculator also offers an easy way to upgrade the laboratory while making optimum use of existing systems. Software programs are easily tailored to fit a variety of general or specific applications. The only disadvantage of the calculator versus a computer-based system is the speed of analyses; however, for most applications this handicap is minimal. The applications discussed give a brief overview of the power and flexibility of the MCA-calculator approach to automated counting and data reduction.

  1. Calculating Track-Based Observables for the LHC

    OpenAIRE

    Chang, Hsi-Ming (Department of Physics, University of California at San Diego, La Jolla, CA 92093, USA); Procura, Massimiliano; Thaler, Jesse; Waalewijn, Wouter J.

    2013-01-01

    By using observables that only depend on charged particles (tracks), one can efficiently suppress pile-up contamination at the LHC. Such measurements are not infrared safe in perturbation theory, so any calculation of track-based observables must account for hadronization effects. We develop a formalism to perform these calculations in QCD, by matching partonic cross sections onto new non-perturbative objects called track functions which absorb infrared divergences. The track function T_i(x) ...

  2. On the calculation of percentile-based bibliometric indicators

    CERN Document Server

    Waltman, Ludo

    2012-01-01

    A percentile-based bibliometric indicator is an indicator that values publications based on their position within the citation distribution of their field. The most straightforward percentile-based indicator is the proportion of frequently cited publications, for instance the proportion of publications that belong to the top 10% most frequently cited of their field. Recently, more complex percentile-based indicators were proposed. A difficulty in the calculation of percentile-based indicators is caused by the discrete nature of citation distributions combined with the presence of many publications with the same number of citations. We introduce an approach to calculating percentile-based indicators that deals with this difficulty in a more satisfactory way than earlier approaches suggested in the literature. We show in a formal mathematical framework that our approach leads to indicators that do not suffer from biases in favor of or against particular fields of science.
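
    One way to handle the tie problem just described is fractional counting at the citation threshold, so that the per-publication scores sum to exactly the target share; the sketch below follows that idea in spirit only, and the paper's exact prescription may differ.

    ```python
    import numpy as np

    def top_share_scores(citations, top=0.10):
        """Score each publication's membership of the top-`top` segment,
        giving publications tied at the threshold fractional credit so the
        scores sum to exactly top * n."""
        c = np.asarray(citations, dtype=float)
        k = top * c.size                              # target size of the top segment
        thresh = np.sort(c)[::-1][int(np.ceil(k)) - 1]  # citation count at the cutoff
        above = np.sum(c > thresh)
        credit = (k - above) / np.sum(c == thresh)    # fractional credit for ties
        return np.where(c > thresh, 1.0, np.where(c == thresh, credit, 0.0))
    ```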

  3. Affine General Equilibrium Models

    OpenAIRE

    Bjørn Eraker

    2008-01-01

    No-arbitrage models are extremely flexible modelling tools but often lack economic motivation. This paper describes an equilibrium consumption-based CAPM framework based on Epstein-Zin preferences, which produces analytic pricing formulas for stocks and bonds under the assumption that macro growth rates follow affine processes. This allows the construction of equilibrium pricing formulas while maintaining the same flexibility of state dynamics as in no-arbitrage models. In demonstrating the a...

  4. Structure-based virtual screening of novel, high-affinity BRD4 inhibitors.

    Science.gov (United States)

    Muvva, Charuvaka; Singam, E R Azhagiya; Raman, S Sundar; Subramanian, V

    2014-07-29

    Bromodomains (BRDs) are a diverse family of evolutionarily conserved protein-interaction modules. Among the various members of the bromodomain and extra-terminal domain family, BRD4 is an important target for many diseases such as cancer, acute myeloid leukemia, multiple myeloma, Burkitt's lymphoma, etc. Therefore, in this study an attempt has been made to screen compounds from the NCI Diversity, DrugBank and Toslab databases targeting the Kac binding site of BRD4, using molecular docking, molecular dynamics simulations, MM-PB/GBSA binding free energy calculations and steered molecular dynamics (SMD) simulations. Using virtual screening and docking, we have identified 11 inhibitors. These new inhibitors exhibit binding energy values higher than that of the (+)JQ1 inhibitor, which is effective against BRD4; however, due to the toxicity of (+)JQ1, the design of new inhibitors remains important. These 11 new ligands were therefore systematically analyzed using further computational investigations. The results reveal that the compounds ZINC01411240, ZINC19632618 and ZINC04818522 could be potential drug candidates for targeting BRD4. The results also show a linear relationship between the results obtained from the SMD simulations and the free energies obtained from the MM-PBSA/GBSA approach. This study clearly illustrates that steered molecular dynamics can be effectively used for the design of new inhibitors. PMID:24976024

  5. Enhancement of affinity-based biosensors: effect of sensing chamber geometry on sensitivity

    Czech Academy of Sciences Publication Activity Database

    Lynn, Nicholas Scott; Šípová, Hana; Adam, Pavel; Homola, Jiří

    2013-01-01

    Vol. 13, No. 7 (2013), pp. 1413-1421. ISSN 1473-0197 R&D Projects: GA ČR GBP205/12/G118 Institutional support: RVO:67985882 Keywords: SURFACE-BASED BIOSENSORS * DIFFUSION * PLASMON RESONANCE BIOSENSOR Subject RIV: BH - Optics, Masers, Lasers Impact factor: 5.748, year: 2013

  6. Novel and high-affinity fluorescent ligands for the serotonin transporter based on (S)-citalopram

    DEFF Research Database (Denmark)

    Kumar, Vivek; Rahbek-Clemmensen, Troels; Billesbølle, Christian B;

    2014-01-01

    Novel rhodamine-labeled ligands, based on (S)-citalopram, were synthesized and evaluated for uptake inhibition at the human serotonin, dopamine, and norepinephrine transporters (hSERT, hDAT, and hNET, respectively) and for binding at SERT, in transiently transfected COS7 cells. Compound 14 demons...

  7. Highly sensitive voltammetric biosensor for nitric oxide based on its high affinity with hemoglobin

    International Nuclear Information System (INIS)

    Although heme protein-based, amperometric nitric oxide (NO) biosensors have been well documented in previous studies, most have been operated under anaerobic conditions. Herein we report a novel hemoglobin-based NO biosensor that is not only very sensitive but also usable in air. The heme protein was entrapped in a sodium montmorillonite film, which was immobilized on a pyrolytic graphite electrode surface. Film-entrapped hemoglobin can directly exchange electrons with the electrode, and this process has proven to favor the catalytic reduction of oxygen. In addition, NO induced a cathodic potential shift of the catalytic reduction peak of oxygen. This potential shift was proportional to the logarithm of the NO concentration over the range from 4.0 × 10^-11 to 5.0 × 10^-6 mol/L. The detection limit has been estimated to be 20 pM, approximately four orders of magnitude lower than that of previously reported amperometric detectors.

  8. The role of taste affinity in agent-based models for social recommendation

    CERN Document Server

    Cimini, Giulio; Medo, Matus; Chen, Duanbing

    2013-01-01

    In the Internet era, online social media emerged as the main tool for sharing opinions and information among individuals. In this work we study an adaptive model of a social network where directed links connect users with similar tastes, and over which information propagates through social recommendation. Agent-based simulations of two different artificial settings for modeling user tastes are compared with patterns seen in real data, suggesting that users differing in their scope of interests is a more realistic assumption than users differing only in their particular interests. We further introduce an extensive set of similarity metrics based on users' past assessments, and evaluate their use in the given social recommendation model with both artificial simulations and real data. Superior recommendation performance is observed for similarity metrics that give preference to users with small scope, who thus act as selective filters in social recommendation.

  9. Investigations on Monte Carlo based coupled core calculations

    International Nuclear Information System (INIS)

    The present trend in advanced and next generation nuclear reactor core designs is towards increased material heterogeneity and geometry complexity. The continuous energy Monte Carlo method has the capability of modeling such core environments with high accuracy. This paper presents results from feasibility studies being performed at the Pennsylvania State University (PSU) on both accelerating Monte Carlo criticality calculations by using hybrid nodal diffusion Monte Carlo schemes and thermal-hydraulic feedback modeling in Monte Carlo core calculations. The computation process is greatly accelerated by calculating the three-dimensional (3D) distributions of fission source and thermal-hydraulics parameters with the coupled NEM/COBRA-TF code and then using the coupled MCNP5/COBRA-TF code to fine-tune the results to obtain increased accuracy. The PSU NEM code employs cross-sections generated by MCNP5 for pin-cell based nodal compositions. The implementation of the different code modifications facilitating coupled calculations is presented first. Then the coupled hybrid Monte Carlo based code system is applied to a 3D 2×2 pin array extracted from a Boiling Water Reactor (BWR) assembly with reflective radial boundary conditions. The obtained results are discussed and it is shown that performing Monte Carlo based coupled core steady-state calculations is feasible. (authors)

  10. Alternative affinity tools: more attractive than antibodies?

    NARCIS (Netherlands)

    Ruigrok, V.J.B.; Levisson, M.; Eppink, M.H.M.; Smidt, H.; Oost, van der J.

    2011-01-01

    Antibodies are the most successful affinity tools used today, in both fundamental and applied research (diagnostics, purification and therapeutics). Nonetheless, antibodies do have their limitations, including high production costs and low stability. Alternative affinity tools based on nucleic acids

  11. DFT study on the effect of exocyclic substituents on the proton affinity of 1-methylimidazole

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Haining; Bara, Jason E.; Turner, C. Heath, E-mail: hturner@eng.ua.edu

    2013-04-18

    Highlights: • DFT calculations are used to predict the proton affinity of 1-methylimidazoles. • The electron-withdrawing groups dominate the predicted proton affinity. • The effects of multiple substituents on the proton affinity can be accurately predicted. • Large compound libraries can be screened for imidazoles with tailored reactivity. - Abstract: A deeper understanding of the acid/base properties of imidazole derivatives will aid the development of solvents, polymer membranes and other materials that can be used for CO2 capture and acid gas removal. In this study, we employ density functional theory calculations to investigate the effect of various electron-donating and electron-withdrawing groups on the proton affinity of 1-methylimidazole. We find that electron-donating groups are able to increase the proton affinity relative to 1-methylimidazole, i.e., making the molecule more basic. In contrast, electron-withdrawing groups cause a decrease of the proton affinity. When multiple substituents are present, their effects on the proton affinity were found to be additive. This finding offers a quick approach for predicting and targeting the proton affinities of this series of molecules, and we show the strong correlation between the calculated proton affinities and experimental pKa values.
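
    The additivity finding lends itself to a very simple screening scheme: estimate the proton affinity (PA) of a multiply substituted ring as the parent PA plus per-substituent shifts. A hedged sketch follows; all numerical values are placeholders, not results from the paper.

    ```python
    PA_PARENT = 942.8   # kJ/mol, illustrative value for 1-methylimidazole

    SHIFTS = {          # hypothetical substituent increments, kJ/mol
        "CH3": +8.0,    # electron-donating -> more basic
        "NH2": +25.0,
        "F":  -20.0,    # electron-withdrawing -> less basic
        "NO2": -45.0,
    }

    def estimate_pa(substituents):
        """Additive estimate of the proton affinity of a substituted ring."""
        return PA_PARENT + sum(SHIFTS[s] for s in substituents)

    print(estimate_pa(["CH3", "F"]))  # additive estimate for a disubstituted ring
    ```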

  12. Algorithm for calculating torque base in vehicle traction control system

    Science.gov (United States)

    Li, Hongzhi; Li, Liang; Song, Jian; Wu, Kaihui; Qiao, Yanjuan; Liu, Xingchun; Xia, Yongguang

    2012-11-01

    Existing research on the traction control system (TCS) mainly focuses on control methods, such as PID control, fuzzy logic control, etc., aiming at achieving an ideal slip rate of the drive wheel over long control periods. The initial output of the TCS (referred to as the torque base in this paper), which has a great impact on the driving performance of the vehicle in the early cycles, remains to be investigated. In order to improve the control performance of the TCS in the first several cycles, an algorithm is proposed to determine the torque base. First, torque bases are calculated by two different methods, one based on state judgment and the other based on vehicle dynamics. The confidence level of the torque base calculated from the vehicle dynamics is also obtained. The final torque base is then determined from the two torque bases and the confidence level. Hardware-in-the-loop (HIL) simulation and vehicle tests emulating sudden starts on low-friction roads have been conducted to verify the proposed algorithm. The control performance of a PID-controlled TCS with and without the proposed torque base algorithm is compared, showing that the proposed algorithm improves the performance of the TCS over the first several cycles and increases vehicle speed by about 5% in comparison. The proposed research provides a more proper initial value for TCS control, and improves the performance of the first several control cycles of the TCS.
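
    A schematic of the final combination step, assuming a simple convex blend governed by the confidence level; the abstract does not spell out the exact fusion rule, so the function and names below are hypothetical.

    ```python
    def torque_base(t_states, t_dynamics, confidence):
        """Blend the state-judgment torque base with the vehicle-dynamics one,
        trusting the dynamics estimate in proportion to its confidence in [0, 1]."""
        c = min(max(confidence, 0.0), 1.0)
        return c * t_dynamics + (1.0 - c) * t_states
    ```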

  13. Data base for terrestrial food pathways dose commitment calculations

    International Nuclear Information System (INIS)

    A computer program is under development to allow calculation of the dose-to-man in Georgia and South Carolina from ingestion of radionuclides in terrestrial foods resulting from deposition of airborne radionuclides. This program is based on models described in Regulatory Guide 1.109 (USNRC, 1977). The data base describes the movement of radionuclides through the terrestrial food chain, growth and consumption factors for a variety of radionuclides

  14. Generalized Hamilton-Jacobi-Bellman formulation-based neural network control of affine nonlinear discrete-time systems.

    Science.gov (United States)

    Chen, Zheng; Jagannathan, Sarangapani

    2008-01-01

    In this paper, we consider the use of nonlinear networks towards obtaining nearly optimal solutions to the control of nonlinear discrete-time (DT) systems. The method is based on least squares successive approximation solution of the generalized Hamilton-Jacobi-Bellman (GHJB) equation which appears in optimization problems. Successive approximation using the GHJB has not been applied for nonlinear DT systems. The proposed recursive method solves the GHJB equation in DT on a well-defined region of attraction. The definition of GHJB, pre-Hamiltonian function, HJB equation, and method of updating the control function for the affine nonlinear DT systems under small perturbation assumption are proposed. A neural network (NN) is used to approximate the GHJB solution. It is shown that the result is a closed-loop control based on an NN that has been tuned a priori in offline mode. Numerical examples show that, for the linear DT system, the updated control laws will converge to the optimal control, and for nonlinear DT systems, the updated control laws will converge to the suboptimal control. PMID:18269941

  15. Neural network-based finite-horizon optimal control of uncertain affine nonlinear discrete-time systems.

    Science.gov (United States)

    Zhao, Qiming; Xu, Hao; Jagannathan, Sarangapani

    2015-03-01

    In this paper, the finite-horizon optimal control design for nonlinear discrete-time systems in affine form is presented. In contrast with the traditional approximate dynamic programming methodology, which requires at least partial knowledge of the system dynamics, in this paper, the complete system dynamics are relaxed utilizing a neural network (NN)-based identifier to learn the control coefficient matrix. The identifier is then used together with the actor-critic-based scheme to learn the time-varying solution, referred to as the value function, of the Hamilton-Jacobi-Bellman (HJB) equation in an online and forward-in-time manner. Since the solution of the HJB equation is time-varying, NNs with constant weights and time-varying activation functions are considered. To properly satisfy the terminal constraint, an additional error term is incorporated in the novel update law such that the terminal constraint error is also minimized over time. Policy and/or value iterations are not needed and the NN weights are updated once per sampling instant. The uniform ultimate boundedness of the closed-loop system is verified by standard Lyapunov stability theory under nonautonomous analysis. Numerical examples are provided to illustrate the effectiveness of the proposed method. PMID:25720005

  16. A Novel Clustering Methodology Based on Modularity Optimisation for Detecting Authorship Affinities in Shakespearean Era Plays.

    Science.gov (United States)

    Naeni, Leila M; Craig, Hugh; Berretta, Regina; Moscato, Pablo

    2016-01-01

    In this study we propose a novel, unsupervised clustering methodology for analyzing large datasets. This new, efficient methodology converts the general clustering problem into the community detection problem in graph by using the Jensen-Shannon distance, a dissimilarity measure originating in Information Theory. Moreover, we use graph theoretic concepts for the generation and analysis of proximity graphs. Our methodology is based on a newly proposed memetic algorithm (iMA-Net) for discovering clusters of data elements by maximizing the modularity function in proximity graphs of literary works. To test the effectiveness of this general methodology, we apply it to a text corpus dataset, which contains frequencies of approximately 55,114 unique words across all 168 plays written in the Shakespearean era (16th and 17th centuries), to analyze and detect clusters of similar plays. Experimental results and comparison with state-of-the-art clustering methods demonstrate the remarkable performance of our new method for identifying high quality clusters which reflect the commonalities in the literary style of the plays. PMID:27571416

  18. Compressed images for affinity prediction-2 (CIFAP-2): an improved machine learning methodology on protein-ligand interactions based on a study on caspase 3 inhibitors.

    Science.gov (United States)

    Erdas, Ozlem; Andac, Cenk A; Gurkan-Alp, A Selen; Alpaslan, Ferda Nur; Buyukbingol, Erdem

    2015-01-01

    The aim of this study is to propose an improved computational methodology, called Compressed Images for Affinity Prediction-2 (CIFAP-2), to predict binding affinities of structurally related protein-ligand complexes. The CIFAP-2 method is established based on a protein-ligand model from which computational affinity information is obtained by utilizing 2D electrostatic potential images determined for the binding site of protein-ligand complexes. The quality of the prediction of the CIFAP-2 algorithm was tested using partial least squares regression (PLSR) as well as support vector regression (SVR) and the adaptive neuro-fuzzy inference system (ANFIS), which are highly promising prediction methods in drug design. CIFAP-2 was applied on a protein-ligand complex system involving Caspase 3 (CASP3) and its 35 inhibitors possessing a common isatin sulfonamide pharmacophore. As a result, PLSR affinity prediction for the CASP3-ligand complexes gave rise to the most consistent information with the reported empirical binding affinities (pIC50) of the CASP3 inhibitors. PMID:25578823

  19. Software-Based Visual Loan Calculator For Banking Industry

    Science.gov (United States)

    Isizoh, A. N.; Anazia, A. E.; Okide, S. O.; Onyeyili, T. I.; Okwaraoka, C. A. P.

    2012-03-01

    A loan calculator for the banking industry is very necessary in a modern banking system that uses many design techniques for security reasons. This paper thus presents the software-based design and implementation of a visual loan calculator for the banking industry using Visual Basic .NET (VB.NET). The fundamental approach is to develop a graphical user interface (GUI) using VB.NET operating tools, and then to develop a working program which calculates the interest on any loan obtained. The VB.NET program was written and implemented, and the software proved satisfactory.
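
    The core of such a loan calculator is the standard amortization formula; a generic Python rendering (not the paper's VB.NET code) is:

    ```python
    def monthly_payment(principal, annual_rate, months):
        """Fixed monthly payment for an amortized loan."""
        r = annual_rate / 12.0                    # periodic interest rate
        if r == 0.0:
            return principal / months
        f = (1.0 + r) ** months
        return principal * r * f / (f - 1.0)

    pay = monthly_payment(10_000.0, 0.12, 24)     # e.g. 12% APR over two years
    interest = pay * 24 - 10_000.0                # total interest on the loan
    ```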

  20. Cross section library based discrepancies in MCNP criticality calculations

    International Nuclear Information System (INIS)

    In nuclear engineering several reactor physics problems can be approached using Monte Carlo neutron transport techniques, which usually give reliable results when properly used. The quality of the results is largely determined by the accuracy of the geometry model and the statistical uncertainty of the Monte Carlo calculation. There is, however, another potential source of error, namely the cross section data used with the Monte Carlo codes. It has been shown in several studies that there may be significant discrepancies between results calculated using cross section libraries based on different evaluated nuclear data files. These discrepancies are well known to the evaluators of nuclear data but less acknowledged by reactor physicists, who often rely on a single cross section library in their calculations. In this study, discrepancies originating from base nuclear data were investigated in a systematic manner using the MCNP4C code. Calculations on simplified UOX and MOX fuelled LWR lattices were carried out using cross section libraries based on ENDF/B-VI.8, JEFF-3.0, JENDL-3.3, JEF-2.2 and JENDL-3.2 evaluated data files. The neutron spectrum of the system was varied over a wide range by changing the ratio of hydrogen to heavy metal atoms. The essential isotopes underlying the discrepancies were identified and the roles of fission and absorption cross sections of the most important nuclides assessed. The results confirm that there are large systematic differences up to a few per cent in the multiplication factors of LWR lattices. The discrepancies are strongly dependent on material compositions and neutron spectra, and largely originate from U-238 and the primary fissile isotopes. It is concluded that these discrepancies should be taken into account in all reactor physics calculations, and that reactor physicists should not rely on results based on a single cross section library. (author)

  1. A Comparative Study of Lectin Affinity Based Plant N-Glycoproteome Profiling Using Tomato Fruit as a Model*

    OpenAIRE

    Ruiz-May, Eliel; Hucko, Simon; Kevin J. Howe; Zhang, Sheng; Sherwood, Robert W.; Thannhauser, Theodore W; Rose, Jocelyn K. C.

    2013-01-01

    Lectin affinity chromatography (LAC) can provide a valuable front-end enrichment strategy for the study of N-glycoproteins and has been used to characterize a broad range eukaryotic N-glycoproteomes. Moreover, studies with mammalian systems have suggested that the use of multiple lectins with different affinities can be particularly effective. A multi-lectin approach has also been reported to provide a significant benefit for the analysis of plant N-glycoproteins; however, it has yet to be de...

  2. Switching strategy based on homotopy continuation for non-regular affine systems with application in induction motor control

    OpenAIRE

    Borisevich, Alex; Schullerus, Gernot

    2012-01-01

    In this article the problem of output setpoint tracking for affine nonlinear systems is considered. The presented approach combines state feedback linearization and homotopy numerical continuation in the subspaces of the phase space where feedback linearization fails. The method of numerical parameter continuation for solving systems of nonlinear equations is generalized to the control of affine nonlinear dynamical systems. The illustrative example is the control of a MIMO system which is not static feedback lineariz...

  3. A mix-and-read drop-based in vitro two-hybrid method for screening high-affinity peptide binders.

    Science.gov (United States)

    Cui, Naiwen; Zhang, Huidan; Schneider, Nils; Tao, Ye; Asahara, Haruichi; Sun, Zhiyi; Cai, Yamei; Koehler, Stephan A; de Greef, Tom F A; Abbaspourrad, Alireza; Weitz, David A; Chong, Shaorong

    2016-01-01

    Drop-based microfluidics have recently become a novel tool by providing a stable linkage between phenotype and genotype for high throughput screening. However, use of drop-based microfluidics for screening high-affinity peptide binders has not been demonstrated due to the lack of a sensitive functional assay that can detect single DNA molecules in drops. To address this sensitivity issue, we introduced in vitro two-hybrid system (IVT2H) into microfluidic drops and developed a streamlined mix-and-read drop-IVT2H method to screen a random DNA library. Drop-IVT2H was based on the correlation between the binding affinity of two interacting protein domains and transcriptional activation of a fluorescent reporter. A DNA library encoding potential peptide binders was encapsulated with IVT2H such that single DNA molecules were distributed in individual drops. We validated drop-IVT2H by screening a three-random-residue library derived from a high-affinity MDM2 inhibitor PMI. The current drop-IVT2H platform is ideally suited for affinity screening of small-to-medium-sized libraries (10^3-10^6). It can obtain hits within a single day while consuming minimal amounts of reagents. Drop-IVT2H simplifies and accelerates the drop-based microfluidics workflow for screening random DNA libraries, and represents a novel alternative method for protein engineering and in vitro directed protein evolution. PMID:26940078

  4. Binding energies and electron affinities of small silicon clusters (n=2--5)

    International Nuclear Information System (INIS)

    The Gaussian-2 (G2) theoretical procedure, based on ab initio molecular orbital theory, is used to calculate the energies of Si_n and Si_n^- (n = 1-5) clusters. The G2 energies are used to derive accurate binding energies and electron affinities of these clusters. The calculated electron affinities of Si_2-Si_4 are in agreement to within 0.1 eV with results from recent photoelectron spectroscopic measurements.

  5. Electric field calculations in brain stimulation based on finite elements

    DEFF Research Database (Denmark)

    Windhoff, Mirko; Opitz, Alexander; Thielscher, Axel

    2013-01-01

    The need for realistic electric field calculations in human noninvasive brain stimulation is undisputed to more accurately determine the affected brain areas. However, using numerical techniques such as the finite element method (FEM) is methodologically complex, starting with the creation … high-quality head models from magnetic resonance images and their usage in subsequent field calculations based on the FEM. The pipeline starts by extracting the borders between skin, skull, cerebrospinal fluid, gray and white matter. The quality of the resulting surfaces is subsequently improved … the successful usage of the pipeline in six subjects, including field calculations for transcranial magnetic stimulation and transcranial direct current stimulation. The quality of the head volume meshes is validated both in terms of capturing the underlying anatomy and of the well-shapedness of the mesh...

  6. Motion compensation for interventional navigation on 3D static roadmaps based on an affine model and gating

    International Nuclear Information System (INIS)

    Current cardiac interventions are performed under 2D fluoroscopy, which comes along with well-known burdens to patients and physicians, such as x-ray exposure and the use of contrast agent. Furthermore, the navigation on complex structures such as the coronaries is complicated by the use of 2D images in which the catheter position is only visible while the contrast agent is introduced. In this work, a new method is presented, which circumvents these drawbacks and enables the cardiac interventional navigation on motion-compensated 3D static roadmaps. For this, the catheter position is continuously reconstructed within a previously acquired 3D roadmap of the coronaries. The motion compensation makes use of an affine motion model for compensating the respiratory motion and compensates the motion due to cardiac contraction by gating the catheter position. In this process, only those positions which have been acquired during the rest phase of the heart are used for the reconstruction. The method necessitates the measurement of the catheter position, which is done by using a magnetic tracking system. Nevertheless, other techniques, such as image-based catheter tracking, can be applied. This motion compensation has been tested on a dynamic heart phantom. The evaluation shows that the algorithm can reconstruct the catheter position on the 3D static roadmap precisely with a residual motion of 1.0 mm and less
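
    The respiratory compensation rests on an affine map fitted between tracked positions in different breathing states; a generic least-squares fit of such a map from paired 3D points (a simplification of the paper's model, with hypothetical names) looks like:

    ```python
    import numpy as np

    def fit_affine(src, dst):
        """Least-squares affine map so that dst_i ≈ A @ src_i + t.
        src, dst: (N, 3) arrays of corresponding 3D positions."""
        src_h = np.hstack([src, np.ones((len(src), 1))])  # homogeneous coordinates
        M, *_ = np.linalg.lstsq(src_h, dst, rcond=None)   # (4, 3) parameter block
        return M[:3].T, M[3]                              # A (3x3), t (3,)
    ```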

  7. Real-time label-free affinity biosensors for enumeration of total bacteria based on immobilized concanavalin A.

    Science.gov (United States)

    Jantra, Jongjit; Kanatharana, Proespichaya; Asawatreratanakul, Punnee; Hedström, Martin; Mattiasson, Bo; Thavarungkul, Panote

    2011-01-01

    This work presents the results of the use of flow injection surface plasmon resonance and impedimetric affinity biosensors for detecting and enumerating total bacteria, based on the binding between E. coli and Con A immobilized on a modified gold electrode. The single analysis time for both techniques was less than 20 min. Dissociation of the bound E. coli from the immobilized Con A using 200 mM glucose in HCl at pH 2.00 enabled the sensor to be reused 29-35 times. Impedimetric detection provided a much lower limit of detection (12 CFU mL^-1) than the surface plasmon resonance method (6.1 × 10^7 CFU mL^-1). Using the impedimetric system, real sample analysis was performed and the results were compared to the plate count agar method. Cell concentrations obtained by the biosensor were only slightly different from the result obtained from the plate count agar. The proposed system offers a rapid and useful tool for screening detection and enumeration of total bacteria. PMID:21961522

  8. Fast Estimation of Defect Profiles from the Magnetic Flux Leakage Signal Based on a Multi-Power Affine Projection Algorithm

    Directory of Open Access Journals (Sweden)

    Wenhua Han

    2014-09-01

    Magnetic flux leakage (MFL) inspection is one of the most important and sensitive nondestructive testing approaches. For online MFL inspection of a long-range railway track or oil pipeline, a fast and effective defect profile estimation method based on a multi-power affine projection algorithm (MAPA) is proposed, in which the depth at a sampling point is related not only to the MFL signals before it, but also to the ones after it, and all of the sampling points related to one point appear in series or at multiple powers. Defect profile estimation has two steps: regulating a weight vector in a MAPA filter and estimating a defect profile with the MAPA filter. Both simulation and experimental data are used to test the performance of the proposed method. The results demonstrate that the proposed method exhibits high speed while keeping the estimated profiles close to the desired ones in a noisy environment, thereby meeting the demand of accurate online inspection.
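
    For orientation, one update step of the standard affine projection algorithm that MAPA generalizes; the multi-power weighting of past and future samples is the paper's contribution and is not reproduced here.

    ```python
    import numpy as np

    def apa_step(w, X, d, mu=0.5, delta=1e-6):
        """One standard affine-projection update.
        w: (L,) filter weights; X: (L, P) matrix whose columns are the P most
        recent input vectors; d: (P,) desired outputs; mu: step size."""
        e = d - X.T @ w                               # a priori error vector
        G = X.T @ X + delta * np.eye(X.shape[1])      # regularized Gram matrix
        return w + mu * X @ np.linalg.solve(G, e)
    ```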

  9. Hierarchical Affinity Propagation

    CERN Document Server

    Givoni, Inmar; Frey, Brendan J

    2012-01-01

    Affinity propagation is an exemplar-based clustering algorithm that finds a set of data points that best exemplify the data, and associates each data point with one exemplar. We extend affinity propagation in a principled way to solve the hierarchical clustering problem, which arises in a variety of domains including biology, sensor networks and decision making in operational research. We derive an inference algorithm that operates by propagating information up and down the hierarchy, and is efficient despite the high-order potentials required for the graphical model formulation. We demonstrate that our method outperforms greedy techniques that cluster one layer at a time. We show that on an artificial dataset designed to mimic the HIV-strain mutation dynamics, our method outperforms related methods. For real HIV sequences, where the ground truth is not available, we show our method achieves better results, in terms of the underlying objective function, and show the results correspond meaningfully to geographi...

  10. Affine functors and duality

    OpenAIRE

    J. Navarro; Sancho, C.; Sancho, P.

    2009-01-01

    A functor of sets $\mathbb X$ over the category of $K$-commutative algebras is said to be an affine functor if its functor of functions, $\mathbb A_{\mathbb X}$, is reflexive and $\mathbb X = \Spec \mathbb A_{\mathbb X}$. We prove that affine functors are equal to a direct limit of affine schemes and that affine schemes, formal schemes, the completion of affine schemes along a closed subscheme, etc., are affine functors. Endowing an affine functor $\mathbb X$ with a functor of monoids structure...

  11. Vertical emission profiles for Europe based on plume rise calculations.

    Science.gov (United States)

    Bieser, J; Aulinger, A; Matthias, V; Quante, M; Denier van der Gon, H A C

    2011-10-01

    The vertical allocation of emissions has a major impact on results of Chemistry Transport Models. However, in Europe it is still common to use fixed vertical profiles based on rough estimates to determine the emission height of point sources. This publication introduces a set of new vertical profiles for the use in chemistry transport modeling that were created from hourly gridded emissions calculated by the SMOKE for Europe emission model. SMOKE uses plume rise calculations to determine effective emission heights. Out of more than 40,000 different vertical emission profiles 73 have been chosen by means of hierarchical cluster analysis. These profiles show large differences to those currently used in many emission models. Emissions from combustion processes are released in much lower altitudes while those from production processes are allocated to higher altitudes. The profiles have a high temporal and spatial variability which is not represented by currently used profiles. PMID:21561695

  12. Calculation of the debris flow concentration based on clay content

    Institute of Scientific and Technical Information of China (English)

    CHEN; Ningsheng; CUI; Peng; LIU; Zhonggang; WEI; Fangqiang

    2003-01-01

    The clay content of a debris flow has a tremendous influence on its concentration (γC). It has been reported that the concentration can be calculated by applying a polynomial in the clay content. Here, one polynomial model and one logarithmic model for calculating the concentration from the clay content are obtained, for ordinary debris flows and viscous debris flows respectively. The result derives from statistics and analysis of the relationship between debris flow concentration and clay content at 45 debris flow sites located in southwest China. The models can be applied to calculate the concentration of debris flows that are impossible to observe. The models are applicable because the clay content affects debris flow formation, movement and the suspended particle diameter; the mechanism of the relationship between clay content and concentration is clear and reliable. Analysis of the trend in the relationship between clay content and concentration shows that a debris flow is usually micro-viscous when the clay content is low (<3%); indeed, the lower the clay content, the lower the concentration for most debris flows. As the clay content decreases, the debris flow tends to become a water-rock flow or a hyperconcentrated flow. Statistically, soil is apt to transform into a viscous debris flow when the clay content is in the range 3%-18%. The concentration increases with increasing clay content when the clay content is between 5% and 10%, but decreases with increasing clay content when the clay content is between 10% and 18%. Soil is apt to transform into a mudflow when the clay content exceeds 18%. The concentration of a mudflow usually decreases with increasing clay content, the reverse of the tendency for micro-viscous debris flows. There is
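
    An illustrative fit of the two model forms mentioned above on placeholder data (clay content in %, concentration γC); the numbers are invented for the sketch and are not the paper's 45-site dataset.

    ```python
    import numpy as np

    clay = np.array([2.0, 4.0, 6.0, 8.0, 10.0, 13.0, 16.0])   # clay content, %
    gamma = np.array([1.4, 1.6, 1.9, 2.1, 2.2, 2.1, 1.9])     # concentration

    poly = np.polyfit(clay, gamma, deg=2)                     # polynomial model
    log_a, log_b = np.polyfit(np.log(clay), gamma, deg=1)     # gamma ~ a*ln(clay) + b

    # Predictions from both models at a query clay content of 7%:
    print(np.polyval(poly, 7.0), log_a * np.log(7.0) + log_b)
    ```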

  13. Mining Temporal Protein Complex Based on the Dynamic PIN Weighted with Connected Affinity and Gene Co-Expression.

    Science.gov (United States)

    Shen, Xianjun; Yi, Li; Jiang, Xingpeng; He, Tingting; Hu, Xiaohua; Yang, Jincai

    2016-01-01

    The identification of temporal protein complexes would make a great contribution to our knowledge of the dynamic organization characteristics in protein interaction networks (PINs). Recent studies have focused on integrating gene expression data into a static PIN to construct a dynamic PIN which reveals the dynamic evolutionary procedure of protein interactions, but they fail in practice to recognize the active time points of proteins with low or high expression levels. We construct a Time-Evolving PIN (TEPIN) with a novel method called Deviation Degree, which is designed to identify the active time points of proteins based on the deviation degree of their own expression values. Moreover, owing to the differences between protein interactions, we weight the TEPIN with connected affinity and gene co-expression to quantify the degree of these interactions. To validate the efficiency of our methods, the ClusterONE, CAMSE and MCL algorithms are applied on the TEPIN, a DPIN (a dynamic PIN constructed with the state-of-the-art three-sigma method) and the SPIN (the original static PIN) to detect temporal protein complexes. Each algorithm on our TEPIN outperforms that on the other networks in terms of match degree, sensitivity, specificity, F-measure, function enrichment, etc. In conclusion, our Deviation Degree method successfully eliminates the disadvantages which exist in the previous state-of-the-art dynamic PIN construction methods. Moreover, the biological nature of protein interactions can be well described in our weighted network. The weighted TEPIN is a useful approach for detecting temporal protein complexes and revealing the dynamic protein assembly process for cellular organization. PMID:27100396

  14. Affinity-based enrichment strategies to assay methyl-CpG binding activity and DNA methylation in early Xenopus embryos

    Directory of Open Access Journals (Sweden)

    Bogdanović Ozren

    2011-08-01

    Background: DNA methylation is a widespread epigenetic modification in vertebrate genomes. Genomic sites of DNA methylation can be bound by methyl-CpG-binding domain proteins (MBDs) and specific zinc finger proteins, which can recruit co-repressor complexes to silence transcription at targeted loci. The binding to methylated DNA may be regulated by post-translational MBD modifications. Findings: A methylated DNA affinity precipitation method was implemented to assay binding of proteins to methylated DNA. Endogenous MeCP2 and MBD3 were precipitated from Xenopus oocyte extracts and conditions for methylation-specific binding were optimized. For a reverse experiment, DNA methylation in early Xenopus embryos was assessed by MBD affinity capture. Conclusions: A methylated DNA affinity resin can be applied to probe for MBD activity in extracts. This assay has a broad application potential as it can be coupled to downstream procedures such as western blotting, fluorimetric HDAC assays and quantitative mass spectrometry. Methylated DNA affinity capture by methyl-CpG binding proteins produces fractions highly enriched for methylated DNA, suitable for coupling to next generation sequencing technologies. The two enrichment strategies allow probing of methyl-CpG protein interactions in early vertebrate oocytes and embryos.

  15. A comparative study of lectin affinity based plant n-glycoproteome profiling using tomato fruit as a model

    Science.gov (United States)

    Lectin affinity chromatography (LAC) can provide a valuable front-end enrichment strategy for the study of N-glycoproteins and has been used to characterize a broad range eukaryotic N-glycoproteomes. Moreover, studies with mammalian systems have suggested that the use of multiple lectins with differ...

  16. Representations of affine Hecke algebras

    CERN Document Server

    Xi, Nanhua

    1994-01-01

    Kazhdan and Lusztig classified the simple modules of an affine Hecke algebra H_q (q ∈ C*) provided that q is not a root of 1 (Invent. Math. 1987). Ginzburg had some very interesting work on affine Hecke algebras. Combining these results, simple H_q-modules can be classified provided that the order of q is not too small. These Lecture Notes of N. Xi show that the classification of simple H_q-modules is essentially different from the general case when q is a root of 1 of certain orders. In addition, the based rings of affine Weyl groups are shown to be of interest in understanding irreducible representations of affine Hecke algebras. Basic knowledge of abstract algebra is enough to read one third of the book. Some knowledge of K-theory, algebraic groups, and Kazhdan-Lusztig cells of Coxeter groups is useful for the rest.

  17. Jet identification based on probability calculations using Bayes' theorem

    International Nuclear Information System (INIS)

    The problem of identifying jets at LEP and HERA has been studied. Identification using jet energies and fragmentation properties was treated separately in order to investigate the degree of quark-gluon separation that can be achieved by either of these approaches. In the case of the fragmentation-based identification, a neural network was used, and a test of the dependence on the jet production process and the fragmentation model was done. Instead of working with the separation variables directly, these have been used to calculate probabilities of having a specific type of jet, according to Bayes' theorem. This offers a direct interpretation of the performance of the jet identification and provides a simple means of combining the results of the energy- and fragmentation-based identifications. (orig.)
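
    The probability calculation described above reduces to Bayes' theorem applied to the likelihoods of the separation variables; combining the energy- and fragmentation-based identifications then amounts to multiplying likelihoods under an independence assumption (a sketch of the general idea, not the paper's exact procedure).

    ```python
    def quark_probability(like_q, like_g, prior_q=0.5):
        """Posterior probability that a jet is a quark jet, from the likelihoods
        of the observed variables under the quark and gluon hypotheses."""
        num = prior_q * like_q
        return num / (num + (1.0 - prior_q) * like_g)

    # Combining two identifications assumed independent given the jet type:
    p = quark_probability(like_q=0.8 * 0.6, like_g=0.3 * 0.5)
    ```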

  18. Design, Synthesis, Binding and Docking-Based 3D-QSAR Studies of 2-Pyridylbenzimidazoles—A New Family of High Affinity CB1 Cannabinoid Ligands

    Directory of Open Access Journals (Sweden)

    Patricio Iturriaga-Vásquez

    2013-04-01

    A series of novel 2-pyridylbenzimidazole derivatives was rationally designed and synthesized based on our previous studies on benzimidazole 14, a CB1 agonist used as a template for optimization. In the present series, 21 compounds displayed high affinities, with Ki values in the nanomolar range. JM-39 (compound 39) was the most active of the series (Ki(CB1) = 0.53 nM), while compounds 31 and 44 exhibited affinities similar to that of WIN 55212-2. CoMFA analysis was performed based on the biological data obtained and resulted in a statistically significant CoMFA model with high predictive value (q^2 = 0.710, r^2 = 0.998, r^2(pred) = 0.823).

  19. Density functional calculations of planar DNA base-pairs

    CERN Document Server

    Machado, M V T; Artacho, E; Sánchez-Portál, D; Soler, J M; Machado, Maider; Ordejon, Pablo; Artacho, Emilio; Sanchez-Portal, Daniel; Soler, Jose M.

    1999-01-01

    We present a systematic Density Functional Theory (DFT) study of the geometries and energies of the nucleic acid DNA bases (guanine, adenine, cytosine and thymine) and 30 different DNA base-pairs. We use a recently developed linear-scaling DFT scheme, which is specially suited for systems with large numbers of atoms. As a first step towards the study of large DNA systems, in this work: (i) We establish the reliability of the approximations of our method (including pseudopotentials and basis sets) for the description of the hydrogen-bonded base pairs, by comparing our results with those of former calculations. We show that the interaction energies at Hartree-Fock geometries are in very good agreement with those of second-order Møller-Plesset (MP2) perturbation theory (the most accurate technique that can be applied at present for systems of the sizes of the base-pairs). (ii) We perform DFT structural optimizations for the 30 different DNA base-pairs, only three of which had been previously studied with DFT. Our ...

  20. A graph-cut approach to image segmentation using an affinity graph based on l0-sparse representation of features

    OpenAIRE

    Wang, Xiaofang; Li, Huibin; Bichot, Charles-Edmond; Masnou, Simon; Chen, Liming

    2013-01-01

    We propose a graph-cut based image segmentation method by constructing an affinity graph using l0 sparse representation. Computing first oversegmented images, we associate with all segments, which we call superpixels, a collection of features. We find the sparse representation of each set of features over the dictionary of all features by solving an l0-minimization problem. Then, the connection information between superpixels is encoded as the non-zero representation c...

  1. Design of software for calculation of shielding in radiology rooms based on various standards

    International Nuclear Information System (INIS)

    The aim of this study was to develop a software application that performs shielding calculations for radiology rooms depending on the type of equipment. The calculation is done with the user selecting among the method proposed in Guide 5.11, that of Reports 144 and 147, and the methodology given by the Portuguese Health Ministry. (Author)

  2. The PHREEQE Geochemical equilibrium code data base and calculations

    International Nuclear Information System (INIS)

    Compilation of a thermodynamic data base for actinides and fission products for use with PHREEQE has begun, and a preliminary set of actinide data has been tested with a version of the PHREEQE code run on an IBM XT computer. The work so far has shown that the PHREEQE code mostly gives satisfying results for the speciation of actinides in natural water environments. For U and Np under oxidizing conditions, however, the code has difficulty converging with pH and Eh conserved when a solubility limit is applied. For further calculations of actinide and fission product speciation and solubility in a waste repository and in the surrounding geosphere, more data are needed. It is necessary to evaluate the influence of the large uncertainties of some data. Quality assurance and a check on the consistency of the data base are also needed. Further work with data bases should include: an extension to fission products, an extension to engineering materials, an extension to ligands other than hydroxide and carbonate, inclusion of more mineral phases, inclusion of enthalpy data, a control of primary references in order to decide whether values from different compilations are taken from the same primary reference, and contacts and discussions with other groups working with actinide data bases, e.g. at the OECD/NEA and at the IAEA. (author)

  3. Assessing the binding affinity of a selected class of DPP4 inhibitors using chemical descriptor-based multiple linear regression

    OpenAIRE

    Jose Isagani Janairo; Gerardo Janairo; Frumencio Co; Derrick Ethelbhert Yu

    2011-01-01

    The activity of a selected class of DPP4 inhibitors was preliminarily assessed using chemical descriptors derived from AM1-optimized geometries. Using a multiple linear regression model, it was found that ΔE0, LUMO energy, area, molecular weight and ΔH0 are the significant descriptors that can adequately assess the binding affinity of the compounds. The derived multiple linear regression (MLR) model was validated using rigorous statistical analysis. The preliminary model suggests t...
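
    As an illustration of the approach, a multiple linear regression on a descriptor matrix can be fitted with ordinary least squares; the descriptor columns and affinity values below are placeholders, not the paper's data.

    import numpy as np

    # Hypothetical descriptor matrix X (columns: dE0, E_LUMO, area, MW, dH0)
    # and observed binding affinities y (e.g. pKi) for 20 compounds.
    X = np.random.rand(20, 5)
    y = np.random.rand(20)

    A = np.column_stack([np.ones(len(X)), X])        # add intercept column
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)     # least-squares fit

    y_hat = A @ coef
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    print("R^2 =", 1.0 - ss_res / ss_tot)            # goodness of fit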

  4. Density functional theory study of interaction, bonding and affinity of group IIb transition metal cations with nucleic acid bases

    Science.gov (United States)

    Bagchi, Sabyasachi; Mandal, Debasish; Ghosh, Deepanwita; Das, Abhijit K.

    2012-05-01

    The structure, bonding, and energetics of the complexes obtained from the interaction between the most stable tautomeric forms of the free DNA and RNA bases and the Zn2+, Cd2+ and Hg2+ cations have been studied using the density functional B3LYP method. The 6-311+G(2df,2p) basis set, along with LANL2DZ pseudopotentials for the cations, is used in the calculations. The tautomerization paths of the nucleobases are investigated and the transition states between the tautomeric forms of the free bases are located. The relative stabilities of the complexes and of the tautomers of the free nucleobases are discussed with reference to the metal ion affinity (MIA) and relative energy values. For uracil, thymine and adenine, interaction of the metal cations with the most stable tautomers forms the least stable molecular complexes. For cytosine and guanine, the stability of the metalated complexes differs significantly. The enthalpy (ΔH), entropy (TΔS) and free energy (ΔG) of the complexes at 298 K have also been calculated.

  5. [Calculating method for crop water requirement based on air temperature].

    Science.gov (United States)

    Tao, Guo-Tong; Wang, Jing-Lei; Nan, Ji-Qin; Gao, Yang; Chen, Zhi-Fang; Song, Ni

    2014-07-01

    The importance of accurately estimating crop water requirement for irrigation forecasting and agricultural water management has been widely recognized. Although it has been broadly adopted to determine crop evapotranspiration (ETc) via meteorological data and a crop coefficient, most of the data in weather forecasts are qualitative rather than quantitative, except air temperature. Therefore, this study explored how to estimate ETc precisely using only the air temperature data available in forecasts, and investigated the accuracy of the estimation at different time scales, which is believed to benefit local irrigation forecasting as well as the optimal management of water and soil resources. Three parameters of the Hargreaves equation and two parameters of the McCloud equation were corrected using meteorological data of Xinxiang from 1970 to 2010, and the Hargreaves equation was selected to calculate reference evapotranspiration (ET0) during the growth period of winter wheat. A model for calculating crop water requirement was developed to predict ETc at time scales of 1, 3, and 7 d by combining the Hargreaves equation and an air-temperature-based crop coefficient model. Results showed that the correlation coefficients between measured and predicted values of ETc reached 0.883 (1 d), 0.933 (3 d), and 0.959 (7 d), respectively. The consistency indexes were 0.94, 0.95 and 0.97, respectively, which showed that the forecast error decreased with increasing time scale. The forecast accuracy with an error less than 1 mm·d-1 was more than 80%, and that with an error less than 2 mm·d-1 was greater than 90%. This study provides a sound basis for irrigation forecasting and agricultural management in irrigated areas, since the forecast accuracy at each time scale was relatively high. PMID:25345053
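
    The temperature-only core of such a model is the Hargreaves equation, ET0 = c · Ra · (Tmean + a) · (Tmax − Tmin)^b, whose three parameters (c, a, b) the study recalibrates. A sketch with the standard uncorrected values follows, since the local Xinxiang values are not reproduced here.

    def et0_hargreaves(tmax, tmin, ra, c=0.0023, a=17.8, b=0.5):
        """Reference evapotranspiration (mm/day) from air temperature only.
        ra: extraterrestrial radiation expressed in mm/day of evaporation.
        c, a, b are the three Hargreaves parameters the study recalibrates."""
        tmean = 0.5 * (tmax + tmin)
        return c * ra * (tmean + a) * (tmax - tmin) ** b

    def etc(tmax, tmin, ra, kc):
        """Crop water requirement: ETc = Kc * ET0. The study derives Kc from
        a temperature-based crop coefficient model; here it is passed in."""
        return kc * et0_hargreaves(tmax, tmin, ra)

    print(etc(tmax=26.0, tmin=12.0, ra=12.5, kc=1.05))  # illustrative values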

  6. Goal based mesh adaptivity for fixed source radiation transport calculations

    International Nuclear Information System (INIS)

    Highlights: ► Derives an anisotropic goal based error measure for shielding problems. ► Reduces the error in the detector response by optimizing the finite element mesh. ► Anisotropic adaptivity captures material interfaces using fewer elements than AMR. ► A new residual based on the numerical scheme chosen forms the error measure. ► The error measure also combines the forward and adjoint metrics in a novel way. - Abstract: In this paper, the application of goal based error measures for anisotropic adaptivity applied to shielding problems in which a detector is present is explored. Goal based adaptivity is important when the response of a detector is required to ensure that dose limits are adhered to. To achieve this, a dual (adjoint) problem is solved, which expresses the neutron transport equation in terms of the response variables, in this case the detector response. The methods presented can be applied to general finite element solvers; however, the derivation of the residuals is dependent on the underlying finite element scheme, which is also discussed in this paper. Once error metrics for the forward and adjoint solutions have been formed they are combined using a novel approach. The two metrics are combined by forming the minimum ellipsoid that covers both error metrics, rather than taking the maximum ellipsoid that is contained within the metrics. Another novel approach used within this paper is the construction of the residual. The residual, used to form the goal based error metrics, is calculated from the subgrid scale correction which is inherent in the underlying spatial discretisation employed

  7. Dual Affine invariant points

    OpenAIRE

    Meyer, Mathieu; Schuett, Carsten; Werner, Elisabeth M.

    2013-01-01

    An affine invariant point on the class of convex bodies in R^n, endowed with the Hausdorff metric, is a continuous map p which is invariant under one-to-one affine transformations A on R^n, that is, p(A(K))=A(p(K)). We define here the new notion of dual affine point q of an affine invariant point p by the formula q(K^{p(K)})=p(K) for every convex body K, where K^{p(K)} denotes the polar of K with respect to p(K). We investigate which affine invariant points do have a dual point, whether this ...

  8. Fan affinity laws from a collision model

    International Nuclear Information System (INIS)

    The performance of a fan is usually estimated using hydrodynamical considerations. The calculations are long and involved and the results are expressed in terms of three affinity laws. In this paper we use kinetic theory to attack this problem. A hard sphere collision model is used, and subsequently a correction to account for the flow behaviour of air is incorporated. Our calculations prove the affinity laws and provide numerical estimates of the air delivery, thrust and drag on a rotating fan. (paper)
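
    For reference, the three affinity laws themselves can be stated in a few lines; the sketch below uses the standard similarity form (flow Q ∝ N·D^3, pressure P ∝ N^2·D^2, power W ∝ N^3·D^5) and does not include the paper's kinetic-theory corrections.

    def scale_fan(q1, p1, w1, n1, n2, d1=1.0, d2=1.0):
        """Fan affinity laws: rescale volume flow, pressure and shaft power
        from speed n1, diameter d1 to speed n2, diameter d2."""
        rn, rd = n2 / n1, d2 / d1
        q2 = q1 * rn * rd**3        # flow scales with N and D^3
        p2 = p1 * rn**2 * rd**2     # pressure with N^2 and D^2
        w2 = w1 * rn**3 * rd**5     # power with N^3 and D^5
        return q2, p2, w2

    # Doubling the speed of the same fan: flow x2, pressure x4, power x8.
    print(scale_fan(q1=1.0, p1=1.0, w1=1.0, n1=1450, n2=2900))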

  9. A brachytherapy model-based dose calculation algorithm -AMIGOBrachy

    International Nuclear Information System (INIS)

    Brachytherapy treatments have been performed based on the TG-43U1 water dose formalism, which neglects human tissue density and composition, body interfaces and applicator effects. As these effects can be relevant in the brachytherapy energy range, modern treatment planning systems (TPS) are now available that are based on model-based dose calculation algorithms (MBDCA) enabling heterogeneity corrections, which are needed to replace the TG-43U1 water dose formalism with a more accurate approach. The recently published AAPM TG-186 report is the first step towards a TPS taking into account heterogeneities, applicators and the complexity of the human body. This report presents the current status, gives recommendations for clinical implementation and specifies research areas where considerable effort is necessary to move forward with MBDCA. Monte Carlo (MC) codes are an important part of the current algorithms due to their flexibility and accuracy, although almost all MC codes provide no interface to process the large amount of data necessary to perform clinical case simulations, which may include hundreds of dwell positions, inter-seed attenuation, image processing and other time-consuming issues that can make MC simulation unfeasible without a pre-processing interface. This work presents the AMIGOBrachy interface tool (Algorithm for Medical Image-based Generating Object - Brachytherapy module), which provides all the pre-processing tasks needed for the simulation. This software can import and edit treatment plans from BrachyVision™ (Varian Medical Systems, Inc., Palo Alto, CA) and ONCENTRA™ (Elekta AB, Stockholm, Sweden), and can also create a new plan through contouring resources, needle recognition, HU segmentation, and the combination of voxel phantoms with analytical geometries to define applicators, among other resources used to create MCNP5 input and analyze the results. This work presents some results used to validate the software and to evaluate the impact of heterogeneities in a clinical case

  10. A CNS calculation line based on a Monte Carlo method

    International Nuclear Information System (INIS)

    Full text: The design of the moderator cell of a Cold Neutron Source (CNS) involves many different considerations regarding geometry, location, and materials. Decisions taken in this sense affect not only the neutron flux in the source neighborhood, which can be evaluated by a standard empirical method, but also the neutron flux values at experimental positions far away from the neutron source. At long distances from the neutron source, very time-consuming 3D deterministic methods or Monte Carlo transport methods are necessary in order to obtain accurate figures. Standard and typical magnitudes such as average neutron flux, neutron current, angular flux and luminosity are very difficult to evaluate at positions located several meters away from the neutron source. The Monte Carlo method is a unique and powerful tool for transporting neutrons, and its use in a bootstrap scheme appears to be an appropriate solution for this type of system. The proper use of MCNP as the main tool leads to a fast and reliable method to perform calculations in a relatively short time with low statistical errors. The design goal is to evaluate the performance of the neutron sources, their beam tubes and neutron guides at specific experimental locations in the reactor hall as well as in the neutron or experimental hall. In this work, the calculation methodology used to design Cold, Thermal and Hot Neutron Sources and their associated Neutron Beam Transport Systems, based on the use of the MCNP code, is presented. This work also presents some changes made to the cross section libraries in order to cope with cryogenic moderators such as liquid hydrogen and liquid deuterium. (author)

  12. Structural determinants of sigma receptor affinity

    Energy Technology Data Exchange (ETDEWEB)

    Largent, B.L.; Wikstroem, H.G.; Gundlach, A.L.; Snyder, S.H.

    1987-12-01

    The structural determinants of sigma receptor affinity have been evaluated by examining a wide range of compounds related to opioids, neuroleptics, and phenylpiperidine dopaminergic structures for affinity at sigma receptor-binding sites labeled with (+)-[3H]3-PPP. Among opioid compounds, requirements for sigma receptor affinity differ strikingly from the determinants of affinity for conventional opiate receptors. Sigma sites display reverse stereoselectivity to classical opiate receptors. Multi-ringed opiate-related compounds such as morphine and naloxone have negligible affinity for sigma sites, with the highest sigma receptor affinity apparent for benzomorphans which lack the C ring of opioids. Highest affinity among opioids and other compounds occurs with more lipophilic N-substituents. This feature is particularly striking among the 3-PPP derivatives as well as the opioids. The butyrophenone haloperidol is the most potent drug at sigma receptors we have detected. Among the series of butyrophenones, receptor affinity is primarily associated with the 4-phenylpiperidine moiety. Conformational calculations for various compounds indicate a fairly wide range of tolerance for distances between the aromatic ring and the amine nitrogen, which may account for the potency at sigma receptors of structures of considerable diversity. Among the wide range of structures that bind to sigma receptor-binding sites, the common pharmacophore associated with high receptor affinity is a phenylpiperidine with a lipophilic N-substituent.

  13. Fast calculation of object infrared spectral scattering based on CUDA

    Science.gov (United States)

    Li, Liang-chao; Niu, Wu-bin; Wu, Zhen-sen

    2010-11-01

    Computational Unified Device Architecture (CUDA) is used to parallelize the calculation of spectral scattering from a non-Lambertian object under sky and earth background irradiation. The five-parameter bidirectional reflectance distribution function (BRDF) model is utilized in the surface-element scattering calculation. The calculation is partitioned into many threads running in the GPU kernel; each thread computes the infrared spectral scattering intensity of one visible surface element for a specific incident direction, and the intensities of all visible surface elements are then weighted and averaged to obtain the object surface scattering intensity. A comparison between the CPU calculation and the CUDA parallel calculation for a cylinder shows that the parallel calculation is more than two hundred times faster while meeting the accuracy requirement, which gives it high engineering value.

  14. Seismic response based on transient calculations. Spectral and stochastic methods

    International Nuclear Information System (INIS)

    Further to the recent development in the ASTER code of functionalities enabling random dynamic responses to be calculated, notably a stochastic-type seismic analysis, we propose a combination of three calculation methods to estimate the probabilistic seismic response of an N4 reactor building stick-model. Transient calculations involve time- and cost-consuming repetition. The conventional oscillator response spectrum calculation yields only the expectation of the maximum response. The stochastic approach, on the other hand, gives the response corresponding to selected probabilities. (authors). 12 figs., 3 tabs., 4 refs

  15. Application of CFD based wave loads in aeroelastic calculations

    DEFF Research Database (Denmark)

    Schløer, Signe; Paulsen, Bo Terp; Bredmose, Henrik

    2014-01-01

    Two fully nonlinear irregular wave realizations with different significant wave heights are considered. The wave realizations are both calculated in the potential flow solver Ocean-Wave3D and in a coupled domain decomposed potential-flow CFD solver. The surface elevations of the calculated wave...... realizations compare well with corresponding surface elevations from laboratory experiments. In aeroelastic calculations of an offshore wind turbine on a monopile foundation the hydrodynamic loads due to the potential flow solver and Morison’s equation and the hydrodynamic loads calculated by the coupled...

  16. Efficient Error Calculation for Multiresolution Texture-Based Volume Visualization

    Energy Technology Data Exchange (ETDEWEB)

    LaMar, E; Hamann, B; Joy, K I

    2001-10-16

    Multiresolution texture-based volume visualization is an excellent technique for enabling interactive rendering of massive data sets. Interactive manipulation of a transfer function is necessary for proper exploration of a data set. However, multiresolution techniques require assessing the accuracy of the resulting images, and re-computing the error after each change in a transfer function is very expensive. They extend their existing multiresolution volume visualization method by introducing a method for accelerating error calculations for multiresolution volume approximations. Computing the error for an approximation requires adding individual error terms: one error value must be computed for each original voxel and its corresponding approximating voxel. For byte data, i.e., data sets where integer function values between 0 and 255 are given, they observe that the set of error pairs can be quite large, yet the set of unique error pairs is small. Instead of evaluating the error function for each original voxel, they construct a table of the unique combinations and the number of their occurrences. To evaluate the error, they add the products of the error function for each unique error pair and the frequency of that error pair. This approach dramatically reduces the amount of computation time involved and allows them to re-compute the error associated with a new transfer function quickly.
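
    A sketch of the frequency-table idea for byte data follows; the error function and data are placeholders, and the pair encoding (256·original + approx) is one convenient choice, not necessarily the authors'.

    import numpy as np

    def approximation_error(original, approx, error_fn):
        """Sum per-voxel errors by tabulating unique (original, approx) byte
        pairs and their frequencies, so a new transfer function only needs
        error_fn evaluated once per unique pair, not once per voxel."""
        pairs = original.astype(np.uint16) * 256 + approx.astype(np.uint16)
        unique, counts = np.unique(pairs, return_counts=True)
        o, a = unique // 256, unique % 256
        return np.sum(error_fn(o, a) * counts)

    # Example: squared intensity difference after a transfer function tf
    # (a 256-entry lookup table); only unique pairs are pushed through tf.
    tf = np.linspace(0.0, 1.0, 256) ** 2
    err = approximation_error(
        np.random.randint(0, 256, 10**6, dtype=np.uint8),
        np.random.randint(0, 256, 10**6, dtype=np.uint8),
        lambda o, a: (tf[o] - tf[a]) ** 2,
    )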

  17. Methods for Improving Aptamer Binding Affinity

    OpenAIRE

    Hijiri Hasegawa; Nasa Savory; Koichi Abe; Kazunori Ikebukuro

    2016-01-01

    Aptamers are single stranded oligonucleotides that bind a wide range of biological targets. Although aptamers can be isolated from pools of random sequence oligonucleotides using affinity-based selection, aptamers with high affinities are not always obtained. Therefore, further refinement of aptamers is required to achieve desired binding affinities. The optimization of primary sequences and stabilization of aptamer conformations are the main approaches to refining the binding properties of a...

  18. Affine Grassmann codes

    DEFF Research Database (Denmark)

    Høholdt, Tom; Beelen, Peter; Ghorpade, Sudhir Ramakant

    2010-01-01

    We consider a new class of linear codes, called affine Grassmann codes. These can be viewed as a variant of generalized Reed-Muller codes and are closely related to Grassmann codes.We determine the length, dimension, and the minimum distance of any affine Grassmann code. Moreover, we show that...... affine Grassmann codes have a large automorphism group and determine the number of minimum weight codewords....

  20. Prediction of binding affinity and efficacy of thyroid hormone receptor ligands using QSAR and structure-based modeling methods

    Energy Technology Data Exchange (ETDEWEB)

    Politi, Regina [Laboratory for Molecular Modeling, Division of Chemical Biology and Medicinal Chemistry, University of North Carolina, Chapel Hill, NC 27599 (United States); Department of Environmental Sciences and Engineering, University of North Carolina, Chapel Hill, NC 27599 (United States); Rusyn, Ivan, E-mail: iir@unc.edu [Department of Environmental Sciences and Engineering, University of North Carolina, Chapel Hill, NC 27599 (United States); Tropsha, Alexander, E-mail: alex_tropsha@unc.edu [Laboratory for Molecular Modeling, Division of Chemical Biology and Medicinal Chemistry, University of North Carolina, Chapel Hill, NC 27599 (United States)

    2014-10-01

    The thyroid hormone receptor (THR) is an important member of the nuclear receptor family that can be activated by endocrine disrupting chemicals (EDC). Quantitative Structure–Activity Relationship (QSAR) models have been developed to facilitate the prioritization of THR-mediated EDC for the experimental validation. The largest database of binding affinities available at the time of the study for ligand binding domain (LBD) of THRβ was assembled to generate both continuous and classification QSAR models with an external accuracy of R2 = 0.55 and CCR = 0.76, respectively. In addition, for the first time a QSAR model was developed to predict binding affinities of antagonists inhibiting the interaction of coactivators with the AF-2 domain of THRβ (R2 = 0.70). Furthermore, molecular docking studies were performed for a set of THRβ ligands (57 agonists and 15 antagonists of LBD, 210 antagonists of the AF-2 domain, supplemented by putative decoys/non-binders) using several THRβ structures retrieved from the Protein Data Bank. We found that two agonist-bound THRβ conformations could effectively discriminate their corresponding ligands from presumed non-binders. Moreover, one of the agonist conformations could discriminate agonists from antagonists. Finally, we have conducted virtual screening of a chemical library compiled by the EPA as part of the Tox21 program to identify potential THRβ-mediated EDCs using both QSAR models and docking. We concluded that the library is unlikely to have any EDC that would bind to the THRβ. Models developed in this study can be employed either to identify environmental chemicals interacting with the THR or, conversely, to eliminate the THR-mediated mechanism of action for chemicals of concern. - Highlights: • This is the largest curated dataset for ligand binding domain (LBD) of the THRβ. • We report the first QSAR model for antagonists of AF-2 domain of THRβ. • A combination of QSAR and docking enables

  1. Calculation of hydrodynamics for semi-submersibles based on NURBS

    Institute of Scientific and Technical Information of China (English)

    REN Hui-long; LIU Wen-xi

    2008-01-01

    Accurate hydrodynamic calculations for semi-submersibles are critical to support modern rapid exploration and extraction of ocean resources. In order to speed up hydrodynamic calculations, the lines modeling structures were separated into structural parts and then fitted to Non-Uniform Rational B-Splines (NURBS). In this way, the bow and stern section lines were generated. Modeling of the intersections of the parts was then done with the universal modeling tool MSC.Patran. A mesh was generated on the model in order to obtain the points of intersection on the joints, and these points were then fitted to NURBS. Next, the patch representation method was adopted to generate the meshes of the wetted surfaces and interior free surfaces. Velocity potentials on the surfaces were calculated separately, on the basis of which the irregular frequency effect was dealt with in the calculation of the hydrodynamic coefficients. Finally, the motion response of the semi-submersible was calculated, and in order to improve the calculation of vertical motion, a damping term was added in the vertical direction. The results show that the above methods can generate a fine mesh accurately representing the wetted surface of a semi-submersible and thus improve the accuracy of hydrodynamic calculations.

  2. Neural Stem Cell Affinity of Chitosan and Feasibility of Chitosan-Based Porous Conduits as Scaffolds for Nerve Tissue Engineering

    Institute of Scientific and Technical Information of China (English)

    WANG Aijun; AO Qiang; HE Qing; GONG Xiaoming; GONG Kai; GONG Yandao; ZHAO Nanming; ZHANG Xiufang

    2006-01-01

    Neural stem cells (NSCs) are currently considered as powerful candidate seeding cells for regeneration of both spinal cords and peripheral nerves. In this study, NSCs derived from fetal rat cortices were co-cultured with chitosan to evaluate the cell affinity of this material. The results showed that NSCs grew and proliferated well on chitosan films and most of them differentiated into neuron-like cells after 4 days of culture. Then, molded and braided chitosan conduits were fabricated and characterized for their cytotoxicity, swelling, and mechanical properties. Both types of conduits had no cytotoxic effects on fibroblasts (L929 cells) or neuroblastoma (Neuro-2a) cells. The molded conduits are much softer and more flexible while the braided conduits possess much better mechanical properties, which suggests different potential applications.

  3. Hybrid Electric Vehicle Control Strategy Based on Power Loss Calculations

    OpenAIRE

    Boyd, Steven J

    2006-01-01

    Defining an operation strategy for a Split Parallel Architecture (SPA) Hybrid Electric Vehicle (HEV) is accomplished through calculating powertrain component losses. The results of these calculations define how the vehicle can decrease fuel consumption while maintaining low vehicle emissions. For a HEV, simply operating the vehicle's engine in its regions of high efficiency does not guarantee the most efficient vehicle operation. The results presented are meant only to define a literal str...

  4. Lectures on extended affine Lie algebras

    CERN Document Server

    Neher, Erhard

    2010-01-01

    We give an introduction to the structure theory of extended affine Lie algebras, which provide a common framework for finite-dimensional semisimple, affine and toroidal Lie algebras. The notes are based on a lecture series given during the Fields Institute summer school at the University of Ottawa in June 2009.

  5. Affine Constellations Without Mutually Unbiased Counterparts

    CERN Document Server

    Weigert, Stefan

    2010-01-01

    It has been conjectured that a complete set of mutually unbiased bases in a space of dimension d exists if and only if there is an affine plane of order d. We introduce affine constellations and compare their existence properties with those of mutually unbiased constellations, mostly in dimension six. The observed discrepancies make a deeper relation between the two existence problems unlikely.

  6. Affine constellations without mutually unbiased counterparts

    Energy Technology Data Exchange (ETDEWEB)

    Weigert, Stefan [Department of Mathematics, University of York, York YO10 5DD (United Kingdom); Durt, Thomas, E-mail: slow500@york.ac.u, E-mail: thomdurt@vub.ac.b [IR-TONA, VUB, BE-1050 Brussels (Belgium)

    2010-10-08

    It has been conjectured that a complete set of mutually unbiased bases in a space of dimension d exists if and only if there is an affine plane of order d. We introduce affine constellations and compare their existence properties with those of mutually unbiased constellations. The observed discrepancies make a deeper relation between the two existence problems unlikely. (fast track communication)

  8. Calculation of VPP basing on functional analyzing method

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    VPP can be used to determine the maximum velocities of a sailboard at various sailing routes, by establishing the force and moment balance equations on the sail and board in accordance with the principle of maximal drive force. Selecting the route is the most important issue in upwind sailing, and VPP calculations could provide the basis for determining the optimal routes. VPP calculation of sailboard performance is a complex and difficult research task, and there are few projects in this research field...

  9. Ray-Space Interpolation Based on Affine Projection

    Institute of Scientific and Technical Information of China (English)

    孙季丰; 吴军政

    2013-01-01

    In order to effectively deal with the shape variation and noise pollution of the corresponding areas in different views and, thereby, improve the quality and speed of viewpoint image synthesis, a ray-space interpolation method based on affine projection is proposed. In this method, first, the scale-invariant feature transform is applied to feature detection and image matching to determine the affine transformation matrices among multi-viewpoint images. Then, ray-space interpolation is implemented via projection through the affine transformation matrices. Finally, a virtual viewpoint image is synthesized in the wavelet transform domain. Experimental results show that the proposed method is superior to the traditional block-matching interpolation and DDFI (Disparity Domain Filtering Interpolation) methods in terms of the PSNR and computation time of the virtual viewpoint image.

  10. 3D transient calculations of PGV-1000 based on TRAC

    Energy Technology Data Exchange (ETDEWEB)

    Anatoly A Kazantsev; Andrey N Pozdnyakov [Simulation Systems Ltd., 133-208 Lenin str. 249030, Obninsk city, Kaluga reg. (Russian Federation); Vladimir V Sergeev; Valery A Evstifeev [Institute of Physics and Power Engineering (IPPE) 249020, Bondarenko square 1, Obninsk city, Kaluga reg., Russia, Government research center RF IPPE by A.I. Leipynski (Russian Federation)

    2005-07-01

    Full text of publication follows: When calculating SAR accidents and transients it is necessary to simulate the steam generator, and the best accuracy is achieved with the 3D transient calculations presented in this report. The main outcomes of the work were: 1. Analysis showed the applicability of the TRAC code (Los Alamos laboratory) for thermal-hydraulic calculations of the horizontal steam generator PGV-1000M; a special nodalization scheme was developed for this purpose. 2. Validation and selection of thermal-hydraulic correlations were performed to improve the use of the code for PGV-1000M calculations; as a result, the Labuntsov formula is recommended for horizontal SGs. 3. Calculations of the nominal operating mode of PGV-1000M were performed for cross-verification with the STEG code (Electrogorsk Research and Engineering Center, EREC) during its verification; the TRAC solution was obtained as a transient problem after a stabilization time. 4. A dynamic SG model was developed as a conjugate problem (the thermal hydraulics of the primary and secondary circuits are calculated together) for studying the transient and accident processes stipulated by the safety standards for NPPs with VVER-1000 and VVER-1500. 5. A calculation complex was created on the basis of the TRAC code for design analysis and optimization, together with a graphic pre- and post-processor for the code. 6. The TRAC code allows the Zukauskas correlation for friction factors in tube bundles to be used through the initial data; post-processing calculations and restart-mode iterations allow Kolbasnikov's correlations to be used for two-phase friction factors in tube bundles. The developed nodalization model of PGV-1000M treats the conjugate hydrodynamic problem of the primary and secondary circuits, including thermal coupling through the tube bundles. The primary circuit is considered in a multichannel 1D approximation with hydraulic non-uniformity of the flow rates between pre-set groups of tubes. The hydrodynamics inside the PG shell is presented as 3D

  11. Linking electromagnetic precursors with earthquake dynamics: an approach based on nonextensive fragment and self-affine asperity models

    CERN Document Server

    Minadakis, G; Nomicos, C; Eftaxias, K

    2011-01-01

    EM emissions in a wide frequency spectrum ranging from kHz to MHz are produced by opening cracks, which can be considered as precursors of general fracture. An important feature, observed on both the laboratory and the geophysical scale, is that the MHz radiation systematically precedes the kHz one. Yet the link between an individual EM precursor and a distinctive stage of EQ preparation remains a crucial open question. A recently proposed two-stage model of preseismic EM activity suggests that the MHz EM emission is due to the fracture of the highly heterogeneous system that surrounds the fault, while the finally emerged kHz EM emission is rooted in the final stage of EQ generation, namely the fracture of entities sustaining the system. In this work we try to further elucidate the link of the precursory kHz EM activity with the last stage of EQ generation, building on two theoretical models for EQ dynamics. Firstly, the self-affine model states that an EQ is due to the slipping of two rough and rigid ...

  12. Searching for Si-based spintronics by first principles calculations

    International Nuclear Information System (INIS)

    Density functional theory (DFT) calculations are used to study the epitaxial growth and the magnetic properties of thin films of MnSi on the Si(001) surface. For adsorption of a single Mn atom, we find that binding at the subsurface site below the Si surface dimers is the most stable adsorption site. There is an energy barrier of only 0.3 eV for adsorbed Mn to go subsurface, and an energy barrier of 1.3 eV for penetration to deeper layers. From the calculated potential-energy surface for the Mn adatom we conclude that the most stable site on the surface corresponds to the hollow site where Mn is placed between two Si surface dimers. Despite Si(001) geometrically being an anisotropic surface, the on-surface diffusion in both directions, along and perpendicular to the Si dimer rows, has almost the same diffusion barrier of 0.65 eV. For coverage above 1 ML, the lowest energy structure is a pure Mn subsurface layer, capped by a layer of Si adatoms. We conclude that the Mn-silicide films stabilize in an epitaxial CsCl-like (B2) crystal structure. Such MnSi films are found to have sizable magnetic moments at the Mn atoms near the surface and interface, and ferromagnetic coupling of the Mn moments within the layers. Layer-resolved electronic densities-of-states are presented that show a high degree of spin polarization at the Fermi level, up to 30 and 50% for films with one or two MnSi layers, respectively. In order to clarify the stability of ferromagnetism at finite temperatures we estimate the Curie temperature (Tc) of MnSi films using a multiple-sublattice Heisenberg model with first- and second-nearest-neighbor interactions determined from DFT calculations for various collinear spin configurations. The Curie temperature is calculated both in the mean-field approximation (MFA) and in the random-phase approximation (RPA). In the latter case, we find a weak logarithmic dependence of Tc on the magnetic anisotropy parameter, which was calculated to be 0.4 meV. Large Curie

  13. Affine and Projective Geometry

    CERN Document Server

    Bennett, M K

    1995-01-01

    An important new perspective on AFFINE AND PROJECTIVE GEOMETRY. This innovative book treats math majors and math education students to a fresh look at affine and projective geometry from algebraic, synthetic, and lattice theoretic points of view. Affine and Projective Geometry comes complete with ninety illustrations, and numerous examples and exercises, covering material for two semesters of upper-level undergraduate mathematics. The first part of the book deals with the correlation between synthetic geometry and linear algebra. In the second part, geometry is used to introduce lattice theory

  14. Calculations of NMR chemical shifts with APW-based methods

    Science.gov (United States)

    Laskowski, Robert; Blaha, Peter

    2012-01-01

    We present a full-potential, all-electron augmented plane wave (APW) implementation of first-principles calculations of NMR chemical shifts. In order to obtain the induced current we follow a perturbation approach [Pickard and Mauri, Phys. Rev. B 63, 245101 (2001)] and extend the common APW + local orbital (LO) basis by several LOs at higher energies. The calculated all-electron current is represented in the traditional APW manner, as a Fourier series in the interstitial region and with a spherical harmonics representation inside the nonoverlapping atomic spheres. The current is integrated using a “pseudocharge” technique. The implementation is validated by comparison of the computed chemical shifts with some “exact” results for spherical atoms and for a set of solids and molecules with available published data.

  15. Calculation of VPP basing on functional analyzing method

    Institute of Scientific and Technical Information of China (English)

    Bai Kaixiang; Wang Dexun; Han Jiurui

    2007-01-01

    The establishment and realization of the VPP calculation model based on functional analytic theory are discussed in this paper. The functional analyzing method is a theoretical model of the VPP calculation which can skillfully eliminate the influence of the sail and board sizes, so it can be regarded as a brief standard for sailboard VPP results. As a brief hydrodynamic model, the resistance on the board can be regarded as directly proportional to the square of the boat velocity. The boat velocities at six wind velocities (3 m/s-8 m/s), with angles of 25°-180°, are obtained by calculation, which provides an important basis for selecting the sailing route in upwind sailing.
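
    Under the stated closure (resistance proportional to the square of the boat velocity), the speed prediction reduces to a one-dimensional root find on the force balance; a sketch with a placeholder drive-force model, not the paper's sail model, is shown below.

    import math

    def drive_force(v_boat, v_wind, angle_deg, c=30.0):
        """Hypothetical drive force that decreases with boat speed (a crude
        stand-in for the apparent-wind effect); not the paper's sail model."""
        return max(0.0, c * v_wind * math.sin(math.radians(angle_deg)) - 5.0 * v_boat)

    def boat_speed(v_wind, angle_deg, k=8.0, lo=0.0, hi=30.0, tol=1e-6):
        """Bisection on the force balance drive(v) = k * v**2 (R = k v^2)."""
        f = lambda v: drive_force(v, v_wind, angle_deg) - k * v * v
        while hi - lo > tol:
            mid = 0.5 * (lo + hi)
            lo, hi = (mid, hi) if f(mid) > 0 else (lo, mid)
        return 0.5 * (lo + hi)

    for angle in (25, 60, 90, 135, 180):
        print(angle, round(boat_speed(v_wind=6.0, angle_deg=angle), 2))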

  17. Silver nanoparticles for SERS-based ultrasensitive chemical detection in aqueous solutions: Role of binding affinity and surface oxidation in the detection limit

    Science.gov (United States)

    Erol, Melek

    Surface-enhanced Raman spectroscopy (SERS) in the presence of noble metal nanostructures holds significant promise for sensing and molecular fingerprinting down to the single-molecule level. This dissertation explores the effect of binding affinity and surface oxidation of Ag nanoparticles on the SERS detection sensitivity of SO42-, CN-, SCN-, ClO4- and nitro-aromatic compounds in water. Specifically, positively charged Ag nanoparticles (Ag [+]) were synthesized by UV-assisted reduction of silver nitrate using branched polyethyleneimine (BPEI) and 4-(2-hydroxyethyl)-1-piperazineethanesulfonic acid (HEPES) solutions. Both primary amino and amide groups on the surface of Ag [+] allowed strong binding affinity with anions, critical for sensitive SERS measurements. For substrates with immobilized Ag [+] (30 nanoparticles/μm2), SERS sensitivity increased in the order of SO42- physiological conditions due to steric hindrance from the branched architecture of adsorbed polymer chains. BPEI-coated surfaces were also effective for suppression of smaller positively charged proteins such as lysozyme and ribonuclease A at pH 7 and 0.15 M NaCl, and of negatively charged proteins such as BSA and fibrinogen at pH 7 and 0.75 M NaCl. Furthermore, using PEI-modified protein-repellent surfaces, selective binding of avidin to surface-bound Ag nanoparticles was achieved, thus providing a promising strategy for SERS-based bio-detection.

  18. A simple one pot purification of bacterial amylase from fermented broth based on affinity toward starch-functionalized magnetic nanoparticle.

    Science.gov (United States)

    Paul, Tanima; Chatterjee, Saptarshi; Bandyopadhyay, Arghya; Chattopadhyay, Dwiptirtha; Basu, Semanti; Sarkar, Keka

    2015-08-18

    Surface-functionalized adsorbent particles in combination with magnetic separation techniques have received considerable attention in recent years. Selective manipulation of such magnetic nanoparticles permits separation with high affinity in the presence of other suspended solids. Amylase is used extensively in the food and allied industries. Purification of amylase from bacterial sources is a matter of concern because most of the industrial need for amylase is met by microbial sources. Here we report a simple, cost-effective, one-pot purification technique for bacterial amylase directly from the fermented broth of Bacillus megaterium utilizing starch-coated superparamagnetic iron oxide nanoparticles (SPION). SPION were prepared by the co-precipitation method and then functionalized by starch coating. The synthesized nanoparticles were characterized by transmission electron microscopy (TEM), a superconducting quantum interference device (SQUID), zeta potential measurements, and ultraviolet-visible (UV-vis) and Fourier-transform infrared (FTIR) spectroscopy. The starch-coated nanoparticles efficiently purified amylase from the bacterial fermented broth with 93.22% recovery and 12.57-fold purification. Sodium dodecyl sulfate polyacrylamide gel electrophoresis (SDS-PAGE) revealed that the molecular mass of the purified amylase was 67 kDa, and native gel showed the retention of amylase activity even after purification. The optimum pH and temperature of the purified amylase were 7 and 50°C, respectively, and it was stable over a range of 20°C to 50°C. Hence, an improved one-pot bacterial amylase purification method was developed using starch-coated SPION. PMID:24840788
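
    The reported recovery and fold-purification figures follow from standard purification bookkeeping (recovery = ratio of total activities; fold purification = ratio of specific activities); a sketch with illustrative numbers chosen only to reproduce values of the same magnitude.

    def purification_metrics(total_activity_0, total_protein_0,
                             total_activity_1, total_protein_1):
        """Recovery (%) is the fraction of total enzyme activity retained;
        fold purification is the ratio of specific activities (activity per
        mg protein) after vs before the purification step."""
        recovery = 100.0 * total_activity_1 / total_activity_0
        fold = ((total_activity_1 / total_protein_1)
                / (total_activity_0 / total_protein_0))
        return recovery, fold

    # Illustrative inputs (units, e.g., U and mg, are hypothetical).
    print(purification_metrics(1000.0, 50.0, 932.2, 3.71))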

  19. Potentials and pitfalls using high affinity radioligands in PET and SPET determinations on regional drug induced D2 receptor occupancy--a simulation study based on experimental data.

    Science.gov (United States)

    Olsson, H; Farde, L

    2001-10-01

    The D2 dopamine receptor density ranges from 0.2 to 40 nM among human brain regions. For high-density regions, radioligands like [(11)C]raclopride provide accurate and reliable estimates of the receptor density. In research on neuropsychiatric disorders there is, however, a growing need for quantitative approaches that accurately measure the D2 dopamine receptor occupancy induced by drugs or endogenous dopamine in regions with low receptor density. The new high-affinity radioligands [(11)C]FLB 457 and [(123)I]epidepride have been shown to provide a signal for extrastriatal D2 dopamine receptor populations in the human brain in vivo. Initial observations indicate, however, that the time required to reach equilibrium is dependent on receptor density. Ratio analyses may thus not be readily used for comparisons among different brain regions. The aim of the present simulation study was to examine commonly used approaches for the calculation of drug-induced D2 dopamine receptor occupancy among regions with widely different receptor densities. The input functions and the rate constants of [(11)C]FLB 457 and the reference ligand [(11)C]raclopride were first used in a simulation estimating the effect of receptor density on equilibrium time. In a second step we examined how errors produced by inaccurate determination of the binding potential parameter propagate to calculations of drug-induced receptor occupancy. The simulations showed a marked effect of receptor density on equilibrium time for [(11)C]FLB 457, but not for [(11)C]raclopride. For [(11)C]FLB 457, a receptor density above about 7 nM caused the time of equilibrium to fall beyond the time of data acquisition (1 h). The use of pre-equilibrium data caused the peak-equilibrium and end-time ratio approaches, but not the simplified reference tissue model (SRTM) approach, to underestimate the binding potential and thus also the drug occupancy calculated for high-density regions. The study supports the use of ratio and SRTM analyses in
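
    The occupancy quantity at stake is computed from binding potentials measured with and without drug; the standard relation is sketched below (not the paper's simulation code), which makes clear how a biased BP propagates directly into the occupancy estimate.

    def receptor_occupancy(bp_baseline, bp_drug):
        """Drug-induced receptor occupancy (%) from binding potentials
        measured without and with the drug:
        occupancy = 1 - BP_drug / BP_baseline.
        A BP underestimated in high-density regions (pre-equilibrium data)
        biases the occupancy in the same direction."""
        return 100.0 * (1.0 - bp_drug / bp_baseline)

    print(receptor_occupancy(bp_baseline=2.0, bp_drug=0.6))  # 70% occupancy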

  20. Space resection model calculation based on Random Sample Consensus algorithm

    Science.gov (United States)

    Liu, Xinzhu; Kang, Zhizhong

    2016-03-01

    Resection has long been one of the most important topics in photogrammetry. It aims to recover the position and attitude of the camera at the shooting point. In some cases, however, the observations used in the calculation contain gross errors. This paper presents a robust algorithm that uses the RANSAC method with a DLT model, which effectively avoids the difficulty of determining initial values that arises when using the collinearity equations. The results also show that this strategy can exclude gross errors and leads to an accurate and efficient way of obtaining the elements of exterior orientation.
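
    A minimal sketch of the described strategy, RANSAC over minimal six-point DLT samples with a reprojection-error test; thresholds, iteration counts and helper names are illustrative assumptions.

    import numpy as np

    def estimate_dlt(X, x):
        """Direct linear transform: 3x4 projection P from >= 6 3D-2D pairs.
        X: (n, 3) object points, x: (n, 2) image points."""
        A = []
        for Xw, u in zip(np.hstack([X, np.ones((len(X), 1))]), x):
            A.append(np.concatenate([Xw, np.zeros(4), -u[0] * Xw]))
            A.append(np.concatenate([np.zeros(4), Xw, -u[1] * Xw]))
        _, _, Vt = np.linalg.svd(np.asarray(A))   # null vector of A
        return Vt[-1].reshape(3, 4)

    def reproj_error(P, X, x):
        Xh = np.hstack([X, np.ones((len(X), 1))])
        proj = (P @ Xh.T).T
        proj = proj[:, :2] / proj[:, 2:3]
        return np.linalg.norm(proj - x, axis=1)

    def ransac_resection(X, x, iters=500, thresh=2.0,
                         rng=np.random.default_rng(0)):
        """Keep the minimal-sample model with the largest inlier set, then
        refit the DLT on all inliers (excluding gross errors)."""
        best_inliers = np.zeros(len(X), dtype=bool)
        for _ in range(iters):
            idx = rng.choice(len(X), 6, replace=False)
            P = estimate_dlt(X[idx], x[idx])
            inliers = reproj_error(P, X, x) < thresh
            if inliers.sum() > best_inliers.sum():
                best_inliers = inliers
        return estimate_dlt(X[best_inliers], x[best_inliers]), best_inliers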

  1. Freeway travel speed calculation model based on ETC transaction data.

    Science.gov (United States)

    Weng, Jiancheng; Yuan, Rongliang; Wang, Ru; Wang, Chang

    2014-01-01

    Real-time traffic flow conditions on freeways are gradually becoming critical information for freeway users and managers. In fact, electronic toll collection (ETC) transaction data effectively record the operational information of vehicles on the freeway, which provides a new way to estimate freeway travel speed. First, the paper analyzed the structure of ETC transaction data and presented the data preprocessing procedure. Then, a dual-level travel speed calculation model was established for different sample sizes. In order to ensure a sufficient sample size, ETC data of different entry-exit toll plaza pairs which contain more than one road segment were used to calculate the travel speed of every road segment. A reduction coefficient α and a reliability weight θ for the sample vehicle speed were introduced in the model. Finally, the model was verified by specially designed field experiments conducted on several freeways in Beijing at different time periods. The experimental results demonstrated an average relative error of about 6.5%, which means that freeway travel speed can be estimated accurately by the proposed model. The proposed model helps to improve the level of freeway operation monitoring and freeway management, as well as providing useful information for freeway travelers. PMID:25580107
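
    The core of such a model is a weighted average of per-vehicle plaza-to-plaza speeds; a sketch under assumed field names follows, with the paper's reduction coefficient α and reliability weights θ left as free parameters.

    from dataclasses import dataclass

    @dataclass
    class Transaction:
        vehicle_id: str
        enter_time: float   # seconds, stamped at the entry toll plaza
        leave_time: float   # seconds, stamped at the exit toll plaza
        distance_km: float  # entry-exit plaza distance (may span segments)

    def travel_speed(transactions, alpha=0.95, theta=None):
        """Weighted mean speed (km/h) over a plaza pair. alpha is the
        reduction coefficient mapping plaza-to-plaza speed to main-line
        speed; theta gives per-sample reliability weights (uniform if None).
        Both values here are placeholders, not the calibrated ones."""
        speeds = [t.distance_km / ((t.leave_time - t.enter_time) / 3600.0)
                  for t in transactions]
        theta = theta or [1.0] * len(speeds)
        return alpha * sum(w * s for w, s in zip(theta, speeds)) / sum(theta)

    records = [Transaction("A", 0.0, 900.0, 25.0),
               Transaction("B", 60.0, 1020.0, 25.0)]
    print(round(travel_speed(records), 1))  # km/h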

  2. Affine and degenerate affine BMW algebras: Actions on tensor space

    CERN Document Server

    Daugherty, Zajj; Virk, Rahbar

    2012-01-01

    The affine and degenerate affine Birman-Murakami-Wenzl (BMW) algebras arise naturally in the context of Schur-Weyl duality for orthogonal and symplectic quantum groups and Lie algebras, respectively. Cyclotomic BMW algebras, affine and cyclotomic Hecke algebras, and their degenerate versions are quotients. In this paper we explain how the affine and degenerate affine BMW algebras are tantalizers (tensor power centralizer algebras) by defining actions of the affine braid group and the degenerate affine braid algebra on tensor space and showing that, in important cases, these actions induce actions of the affine and degenerate affine BMW algebras. We then exploit the connection to quantum groups and Lie algebras to determine universal parameters for the affine and degenerate affine BMW algebras. Finally, we show that the universal parameters are central elements--the higher Casimir elements for orthogonal and symplectic enveloping algebras and quantum groups.

  3. Integrated Power Flow and Short Circuit Calculation Method for Distribution Network with Inverter Based Distributed Generation

    OpenAIRE

    Shan Yang; Xiangqian Tong

    2016-01-01

    Power flow calculation and short circuit calculation are the basis of theoretical research for distribution network with inverter based distributed generation. The similarity of equivalent model for inverter based distributed generation during normal and fault conditions of distribution network and the differences between power flow and short circuit calculation are analyzed in this paper. Then an integrated power flow and short circuit calculation method for distribution network with inverte...

  4. Proton Affinity of Isomeric Dipeptides Containing Lysine and Non-Proteinogenic Lysine Homologues.

    Science.gov (United States)

    Batoon, Patrick; Ren, Jianhua

    2016-08-18

    Conformational effects on the proton affinity of oligopeptides have been studied using six alanine (A)-based acetylated dipeptides containing a basic probe that is placed closest to either the C- or the N-terminus. The basic probe includes lysine (Lys) and two nonproteinogenic Lys homologues, ornithine (Orn) and 2,3-diaminopropionic acid (Dap). The proton affinities of the peptides have been determined using the extended Cooks kinetic method in a triple quadrupole mass spectrometer. Computational studies have been carried out to search for the lowest-energy conformers and to calculate theoretical proton affinities as well as various molecular properties using density functional theory. The dipeptides containing a C-terminal probe, ALys, AOrn, and ADap, were determined to have a proton affinity 1-4 kcal/mol higher than the corresponding dipeptides containing an N-terminal probe, LysA, OrnA, and DapA. For both the C-probe and the N-probe peptides, the proton affinity decreases systematically as the side-chain of the probe residue is shortened. The difference in proton affinity between isomeric peptides is largely associated with the variation in conformation. The peptides with higher proton affinities adopt relatively compact conformations, such that the protonated peptides can be stabilized through more efficient internal solvation. PMID:27459294
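
    For orientation, the (extended) Cooks kinetic method relates the fragment-ion abundances from collision-induced dissociation of a proton-bound heterodimer to the proton-affinity difference; in its standard form, stated here from the general literature rather than copied from the paper:

    \ln\frac{[\mathrm{B\,H^+}]}{[\mathrm{ref\,H^+}]}
      \;\approx\; \frac{PA(\mathrm{B}) - PA(\mathrm{ref})}{R\,T_{\mathrm{eff}}}
      \;-\; \frac{\Delta(\Delta S)}{R}

    where [B H+] and [ref H+] are the abundances of the two protonated fragments, T_eff is the effective temperature of the activated dimer, and the entropy term Δ(ΔS) is what the "extended" variant extracts by repeating the measurement at several collision energies.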

  5. Activity based models for countrywide electric vehicle power demand calculation

    OpenAIRE

    Knapen, Luk; Kochan, Bruno; BELLEMANS, Tom; JANSSENS, Davy; Wets, Geert

    2011-01-01

    Smart grid design depends on the availability of realistic data. In the near future, energy demand by electric vehicles will be a substantial component of the overall demand and peaks of required power could become critical in some regions. Transportation research has been using micro-simulation based activity-based models for traffic forecasting. The resulting trip length distribution allows to estimate to what extent internal combustion engine vehicles can be substituted...

  6. Development of new peptide-based receptor of fluorescent probe with femtomolar affinity for Cu(+) and detection of Cu(+) in Golgi apparatus.

    Science.gov (United States)

    Jung, Kwan Ho; Oh, Eun-Taex; Park, Heon Joo; Lee, Keun-Hyeung

    2016-11-15

    Developing fluorescent probes for monitoring intracellular Cu(+) is important for human health and disease, whereas a few types of their receptors showing a limited range of binding affinities for Cu(+) have been reported. In the present study, we first report a novel peptide receptor of a fluorescent probe for the detection of Cu(+). Dansyl-labeled tripeptide probe (Dns-LLC) formed a 1:1 complex with Cu(+) and showed a turn-on fluorescent response to Cu(+) in aqueous buffered solutions. The dissociation constant of Dns-LLC for Cu(+) was determined to be 12 fM, showing that Dns-LLC had more potent binding affinity for Cu(+) than those of previously reported chemical probes for Cu(+). The binding mode study showed that the thiol group of the peptide receptor plays a critical role in potent binding with Cu(+) and the sulfonamide and amide groups of the probe might cooperate to form a complex with Cu(+). Dns-LLC detected Cu(+) selectively by a turn-on response among various biologically relevant metal ions, including Cu(2+) and Zn(2+). The selectivity of the peptide-based probe for Cu(+) was strongly dependent on the position of the cysteine residue in the peptide receptor part. The fluorescent peptide-based probe penetrated the living RKO cells and successfully detected Cu(+) in the Golgi apparatus in live cells by a turn-on response. Given the growing interest in imaging Cu(+) in live cells, a novel peptide receptor of Cu(+) will offer the potential for developing a variety of fluorescent probes for Cu(+) in the field of copper biochemistry. PMID:27208475
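
    For a 1:1 complex such as that formed by Dns-LLC, the reported 12 fM dissociation constant translates into fraction bound through the standard binding isotherm; a small sketch follows (free-copper values illustrative, ligand depletion ignored).

    def fraction_bound(cu_free_molar, kd_molar=12e-15):
        """1:1 binding isotherm: theta = [Cu+] / (Kd + [Cu+]), using the
        12 fM dissociation constant reported for Dns-LLC."""
        return cu_free_molar / (kd_molar + cu_free_molar)

    for c in (1e-15, 12e-15, 1e-12):
        print(f"[Cu+] = {c:.0e} M -> {100 * fraction_bound(c):.1f}% bound")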

  7. Statistical inference for discrete-time samples from affine stochastic delay differential equations

    DEFF Research Database (Denmark)

    Küchler, Uwe; Sørensen, Michael

    2013-01-01

    Statistical inference for discrete time observations of an affine stochastic delay differential equation is considered. The main focus is on maximum pseudo-likelihood estimators, which are easy to calculate in practice. A more general class of prediction-based estimating functions is investigated...

  8. Coupled-cluster based basis sets for valence correlation calculations

    Science.gov (United States)

    Claudino, Daniel; Gargano, Ricardo; Bartlett, Rodney J.

    2016-03-01

    Novel basis sets are generated that target the description of valence correlation in atoms H through Ar. The new contraction coefficients are obtained according to the Atomic Natural Orbital (ANO) procedure from CCSD(T) (coupled-cluster singles and doubles with perturbative triples correction) density matrices starting from the primitive functions of Dunning et al. [J. Chem. Phys. 90, 1007 (1989); ibid. 98, 1358 (1993); ibid. 100, 2975 (1993)] (correlation consistent polarized valence X-tuple zeta, cc-pVXZ). The exponents of the primitive Gaussian functions are subject to uniform scaling in order to ensure satisfaction of the virial theorem for the corresponding atoms. These new sets, named ANO-VT-XZ (Atomic Natural Orbital Virial Theorem X-tuple Zeta), have the same number of contracted functions as their cc-pVXZ counterparts in each subshell. The performance of these basis sets is assessed by the evaluation of the contraction errors in four distinct computations: correlation energies in atoms, probing the density in different regions of space via ⟨r^n⟩ expectation values (-3 ≤ n ≤ 3) in atoms, correlation energies in diatomic molecules, and the quality of fitting potential energy curves as measured by spectroscopic constants. All energy calculations with ANO-VT-QZ have contraction errors within "chemical accuracy" of 1 kcal/mol, which is not true for cc-pVQZ, suggesting some improvement compared to the correlation consistent series of Dunning and co-workers.

  9. Vertical emission profiles for Europe based on plume rise calculations

    NARCIS (Netherlands)

    Bieser, J.; Aulinger, A.; Matthias, V.; Quante, M.; Denier Van Der Gon, H.A.C.

    2011-01-01

    The vertical allocation of emissions has a major impact on results of Chemistry Transport Models. However, in Europe it is still common to use fixed vertical profiles based on rough estimates to determine the emission height of point sources. This publication introduces a set of new vertical profile

  10. Virtual-real spatial information visualization registration using affine representations

    Science.gov (United States)

    Wu, Xueling; Ren, Fu; Du, Qingyun

    2009-10-01

    Virtual-real registration in outdoor Augmented Reality (AR) aims to enhance the user's spatial cognition by overlaying virtual geographical objects on the real scene. After analyzing the fiducial-detection registration methods used in indoor AR, and in order to avoid the complex and tedious position tracking and camera calibration of traditional registration methods, this paper proposes and implements a virtual-real spatial information visualization registration method using affine representations. It builds on the observation of Koenderink and van Doorn, and of Ullman and Basri (1991), that given a set of four or more non-coplanar 3D points, the projection of every point in the set can be computed as a linear combination of the projections of just four of those points. The method sets up a global affine coordinate system relating world coordinates, camera coordinates, and virtual coordinates, extracts four feature points from the scene image, and calculates the global affine coordinates of the key points of the virtual objects. Then, from a linear combination of the homogeneous coordinates of the four feature points' projections, it calculates the projected pixel coordinates of the key points of the virtual objects. In addition, it proposes an approach to obtain relative pixel depth for hidden-surface removal. Finally, a case study verifies the feasibility and efficiency of the registration method. The method not only explores a new research direction for Geographical Information Science, but also provides location-based information and services for outdoor AR.
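
    The linear-combination property underlying the method is compact enough to state in code. The following Python sketch (our illustration, not the authors' implementation) computes the affine coordinates of a 3D key point with respect to four non-coplanar basis points and reprojects it from the tracked pixel positions of those four points, which is exact under an affine camera model; all coordinates are invented.

        import numpy as np

        def affine_coords(p, basis):
            # affine coordinates (a1, a2, a3) of 3D point p w.r.t. four
            # non-coplanar basis points b0..b3
            b0, b1, b2, b3 = basis
            M = np.column_stack([b1 - b0, b2 - b0, b3 - b0])
            return np.linalg.solve(M, p - b0)

        def reproject(a, basis_px):
            # the same linear combination applied to the basis points' pixels
            q0, q1, q2, q3 = basis_px
            return q0 + a[0] * (q1 - q0) + a[1] * (q2 - q0) + a[2] * (q3 - q0)

        # toy data: one virtual key point and four tracked feature points
        basis3d = [np.array(v, float) for v in [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]]
        key_point = np.array([0.5, 0.25, 0.75])
        a = affine_coords(key_point, basis3d)
        basis_px = [np.array(v, float) for v in [(100, 100), (180, 105), (95, 40), (120, 60)]]
        print(reproject(a, basis_px))   # pixel location of the virtual key point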

  11. Inhibitor Ranking Through QM based Chelation Calculations for Virtual Screening of HIV-1 RNase H inhibition

    DEFF Research Database (Denmark)

    Poongavanam, Vasanthanathan; Svendsen, Casper Steinmann; Kongsted, Jacob

    2014-01-01

    of the methods based on the use of a training set of molecules, QM based chelation calculations were used as a filter in virtual screening of compounds in the ZINC database. By this, we find that, compared to regular docking, QM based chelation calculations significantly reduce the large number of false...

  12. A New Optimization Method for Centrifugal Compressors Based on 1D Calculations and Analyses

    OpenAIRE

    Pei-Yuan Li; Chu-Wei Gu; Yin Song

    2015-01-01

    This paper presents an optimization design method for centrifugal compressors based on one-dimensional calculations and analyses. It consists of two parts: (1) centrifugal compressor geometry optimization based on one-dimensional calculations and (2) matching optimization of the vaned diffuser with an impeller based on the required throat area. A low pressure stage centrifugal compressor in a MW level gas turbine is optimized by this method. One-dimensional calculation results show that D3/D...

  13. A New Versatile Immobilization Tag Based on the Ultra High Affinity and Reversibility of the Calmodulin-Calmodulin Binding Peptide Interaction.

    Science.gov (United States)

    Mukherjee, Somnath; Ura, Marcin; Hoey, Robert J; Kossiakoff, Anthony A

    2015-08-14

    Reversible, high-affinity immobilization tags are critical tools for myriad biological applications. However, inherent issues are associated with a number of the current methods of immobilization. In particular, a critical element in phage display sorting is the functional immobilization of target proteins. To circumvent these problems, we have used a mutant (N5A) of the calmodulin binding peptide (CBP) as an immobilization tag in phage display sorting. The immobilization relies on the ultra-high affinity of calmodulin for the N5A mutant CBP (RWKKNFIAVSAANRFKKIS) in the presence of calcium (KD~2 pM), which can be reversed by EDTA, allowing controlled "capture and release" of specific binders. To evaluate the capabilities of this system, we chose eight targets, some of which were difficult to overexpress and purify with other tags and some of which had failed in sorting experiments. In all cases, specific binders were generated using a Fab phage display library with CBP-fused constructs. KD values of the Fabs were in the subnanomolar to low nanomolar (nM) range, and the Fabs were successfully used to selectively recognize antigens in cell-based experiments. Some of these targets were problematic even without any tag; thus, the fact that all led to successful selection endpoints means that borderline cases can be worked on with a high probability of a positive outcome. Taken together with examples of successful case-specific, high-level applications, like the generation of conformation-, epitope- and domain-specific Fabs, we feel that the CBP tag embodies all the attributes of covalent immobilization tags but does not suffer from some of their well-documented drawbacks. PMID:26159704

  14. CO2 Pipeline Cost Calculations, Based on Different Cost Models

    OpenAIRE

    Beáta Horánszky; Péter Forgács

    2013-01-01

    Carbon Capture and Storage (CCS) is considered to be a promising technology and an effective tool in the struggle against climate change. The method is based on the separation of air-polluting CO2 from flue gases and its subsequent storage in different types of geological formations. The outlet points and the formations used as CO2 storage sites are often very far from each other. According to certain recently announced, medium-term EU plans, a 20,000 km long pipeline system will be established f...

  15. Optimized Affinity Capture of Yeast Protein Complexes.

    Science.gov (United States)

    LaCava, John; Fernandez-Martinez, Javier; Hakhverdyan, Zhanna; Rout, Michael P

    2016-01-01

    Here, we describe an affinity isolation protocol. It uses cryomilled yeast cell powder for producing cell extracts and antibody-conjugated paramagnetic beads for affinity capture. Guidelines for determining the optimal extraction solvent composition are provided. Captured proteins are eluted in a denaturing solvent (sodium dodecyl sulfate polyacrylamide gel electrophoresis sample buffer) for gel-based proteomic analyses. Although the procedures can be modified to use other sources of cell extract and other forms of affinity media, to date we have consistently obtained the best results with the method presented. PMID:27371596

  16. Glass viscosity calculation based on a global statistical modeling approach

    International Nuclear Information System (INIS)

    A global statistical glass viscosity model was developed for predicting the complete viscosity curve, based on more than 2200 composition-property data of silicate glasses from the scientific literature, including soda-lime-silica container and float glasses, TV panel glasses, borosilicate fiber wool and E type glasses, low expansion borosilicate glasses, glasses for nuclear waste vitrification, lead crystal glasses, binary alkali silicates, and various further compositions from over half a century. It is shown that within a measurement series from a specific laboratory the reported viscosity values are often over-estimated at higher temperatures due to alkali and boron oxide evaporation during the measurement and glass preparation, including data by Lakatos et al. (1972) and the recently published High temperature glass melt property database for process modeling by Seward et al. (2005). Similarly, in the glass transition range many experimental data of borosilicate glasses are reported too high due to phase separation effects. The developed global model corrects those errors. The model standard error was 9-17°C, with R^2 = 0.985-0.989. The prediction 95% confidence interval for glass in mass production largely depends on the glass composition of interest, the composition uncertainty, and the viscosity level. New insights in the mixed-alkali effect are provided

  17. Glass viscosity calculation based on a global statistical modelling approach

    Energy Technology Data Exchange (ETDEWEB)

    Fluegel, Alex

    2007-02-01

    A global statistical glass viscosity model was developed for predicting the complete viscosity curve, based on more than 2200 composition-property data of silicate glasses from the scientific literature, including soda-lime-silica container and float glasses, TV panel glasses, borosilicate fiber wool and E type glasses, low expansion borosilicate glasses, glasses for nuclear waste vitrification, lead crystal glasses, binary alkali silicates, and various further compositions from over half a century. It is shown that within a measurement series from a specific laboratory the reported viscosity values are often over-estimated at higher temperatures due to alkali and boron oxide evaporation during the measurement and glass preparation, including data by Lakatos et al. (1972) and the recently published High temperature glass melt property database for process modeling by Seward et al. (2005). Similarly, in the glass transition range many experimental data of borosilicate glasses are reported too high due to phase separation effects. The developed global model corrects those errors. The model standard error was 9-17°C, with R^2 = 0.985-0.989. The prediction 95% confidence interval for glass in mass production largely depends on the glass composition of interest, the composition uncertainty, and the viscosity level. New insights in the mixed-alkali effect are provided.

  18. Backtracking Based Integer Factorisation, Primality Testing and Square Root Calculation

    Directory of Open Access Journals (Sweden)

    Mohammed Golam Kaosar

    2014-02-01

    Full Text Available Breaking a big integer into two factors has been a famous problem in mathematics and cryptography for years. Many crypto-systems use such a big number as their key, or as part of a key, on the assumption that it is so big that the fastest factorisation algorithms running on the fastest computers would take an impractically long period of time to factorise it. Hence, many efforts have been made over the decades to break such crypto-systems by finding two factors of an integer. In this paper, a new factorisation technique is proposed which is based on the concept of backtracking. Binary bit-by-bit operations are performed to find two factors of a given integer. The proposed solution can be applied to computing square roots, primality testing, finding prime factors of integer numbers, etc. If the proposed solution proves efficient enough, it may break the security of many crypto-systems. Implementation and performance comparison of the technique are left for future research.
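
    The abstract does not give the algorithm itself, but the bit-by-bit backtracking idea can be illustrated as follows: fix one bit of each candidate factor per level, starting from the least significant bit, and prune any branch whose partial product disagrees with the target modulo the bits fixed so far. The Python sketch below is our reconstruction under those assumptions (odd target, exponential worst case), not the paper's implementation:

        def factor_backtrack(n):
            # n must be odd; both factors are then odd, so bit 0 of each is 1
            bits = n.bit_length()

            def rec(a, b, k):
                if k == bits:
                    return (a, b) if a * b == n and 1 < a < n else None
                mask = 1 << (k + 1)
                for da in (0, 1 << k):
                    for db in (0, 1 << k):
                        na, nb = a | da, b | db
                        # prune: partial product must match n modulo 2^(k+1)
                        if (na * nb) % mask == n % mask:
                            found = rec(na, nb, k + 1)
                            if found:
                                return found
                return None

            return rec(1, 1, 1)

        print(factor_backtrack(91))   # -> (7, 13) or (13, 7); None for a prime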

  19. Validation of KENO-based criticality calculations at Rocky Flats

    International Nuclear Information System (INIS)

    In the absence of experimental data, it is necessary to rely on computer-based computational methods in evaluating the criticality condition of a nuclear system. The validity of the computer codes is established in a two-part procedure as outlined in ANSI/ANS 8.1. The first step, usually the responsibility of the code developer, involves verification that the algorithmic structure of the code is performing the intended mathematical operations correctly. The second step involves an assessment of the code's ability to realistically portray the governing physical processes in question. This is accomplished by determining the code's bias, or systematic error, through a comparison of computational results to accepted values obtained experimentally. In this paper, the authors discuss the validation process for KENO and the Hansen-Roach cross sections in use at EG and G Rocky Flats. The validation process at Rocky Flats consists of both global and local techniques. The global validation resulted in a maximum keff limit of 0.95 for the limiting-accident scenarios of a criticality evaluation

  20. On purely transmitting defects in affine Toda field theory

    CERN Document Server

    Corrigan, E

    2007-01-01

    Affine Toda field theories with a purely transmitting integrable defect are considered and the model based on a_2 is analysed in detail. After providing a complete characterization of the problem in a classical framework, a suitable quantum transmission matrix, able to describe the interaction between an integrable defect and solitons, is found. Two independent paths are taken to reach the result. One is an investigation of the triangle equations using the S-matrix for the imaginary coupling bulk affine Toda field theories proposed by Hollowood, and the other uses a functional integral approach together with a bootstrap procedure. Evidence to support the results is collected in various ways: for instance, through the calculation of the transmission factors for the lightest breathers. While previous discoveries within the sine-Gordon model motivated this study, there are several new phenomena displayed in the a_2 model including intriguing disparities between the classical and the quantum pictures. For example...

  1. Assessing high affinity binding to HLA-DQ2.5 by a novel peptide library based approach

    DEFF Research Database (Denmark)

    Jüse, Ulrike; Arntzen, Magnus; Højrup, Peter;

    2011-01-01

    Here we report on a novel peptide library based method for HLA class II binding motif identification. The approach is based on water soluble HLA class II molecules and soluble dedicated peptide libraries. A high number of different synthetic peptides are competing to interact with a limited amount...... of HLA molecules, giving a selective force in the binding. The peptide libraries can be designed so that the sequence length, the alignment of binding registers, the numbers and composition of random positions are controlled, and also modified amino acids can be included. Selected library peptides...... bound to HLA are then isolated by size exclusion chromatography and sequenced by tandem mass spectrometry online coupled to liquid chromatography. The MS/MS data are subsequently searched against a library defined database using a search engine such as Mascot, followed by manual inspection of the...

  2. Evolution of an interloop disulfide bond in high-affinity antibody mimics based on fibronectin type III domain and selected by yeast surface display: molecular convergence with single-domain camelid and shark antibodies.

    Science.gov (United States)

    Lipovsek, Dasa; Lippow, Shaun M; Hackel, Benjamin J; Gregson, Melissa W; Cheng, Paul; Kapila, Atul; Wittrup, K Dane

    2007-05-11

    The 10th human fibronectin type III domain ((10)Fn3) is one of several protein scaffolds used to design and select families of proteins that bind with high affinity and specificity to macromolecular targets. To date, the highest affinity (10)Fn3 variants have been selected by mRNA display of libraries generated by randomizing all three complementarity-determining region-like loops of the (10)Fn3 scaffold. The sub-nanomolar affinities of such antibody mimics have been attributed to the extremely large size of the library accessible by mRNA display (10(12) unique sequences). Here we describe the selection and affinity maturation of (10)Fn3-based antibody mimics with dissociation constants as low as 350 pM selected from significantly smaller libraries (10(7)-10(9) different sequences), which were constructed by randomizing only 14 (10)Fn3 residues. The finding that two adjacent loops in human (10)Fn3 provide a large enough variable surface area to select high-affinity antibody mimics is significant because a smaller deviation from wild-type (10)Fn3 sequence is associated with a higher stability of selected antibody mimics. Our results also demonstrate the utility of an affinity-maturation strategy that led to a 340-fold improvement in affinity by maximizing sampling of sequence space close to the original selected antibody mimic. A striking feature of the highest affinity antibody mimics selected against lysozyme is a pair of cysteines on adjacent loops, in positions 28 and 77, which are critical for the affinity of the (10)Fn3 variant for its target and are close enough to form a disulfide bond. The selection of this cysteine pair is structurally analogous to the natural evolution of disulfide bonds found in new antigen receptors of cartilaginous fish and in camelid heavy-chain variable domains. We propose that future library designs incorporating such an interloop disulfide will further facilitate the selection of high-affinity, highly stable antibody mimics from

  3. Spectral affinity in protein networks

    Directory of Open Access Journals (Sweden)

    Teng Shang-Hua

    2009-11-01

    Full Text Available Abstract Background Protein-protein interaction (PPI networks enable us to better understand the functional organization of the proteome. We can learn a lot about a particular protein by querying its neighborhood in a PPI network to find proteins with similar function. A spectral approach that considers random walks between nodes of interest is particularly useful in evaluating closeness in PPI networks. Spectral measures of closeness are more robust to noise in the data and are more precise than simpler methods based on edge density and shortest path length. Results We develop a novel affinity measure for pairs of proteins in PPI networks, which uses personalized PageRank, a random walk based method used in context-sensitive search on the Web. Our measure of closeness, which we call PageRank Affinity, is proportional to the number of times the smaller-degree protein is visited in a random walk that restarts at the larger-degree protein. PageRank considers paths of all lengths in a network, therefore PageRank Affinity is a precise measure that is robust to noise in the data. PageRank Affinity is also provably related to cluster co-membership, making it a meaningful measure. In our experiments on protein networks we find that our measure is better at predicting co-complex membership and finding functionally related proteins than other commonly used measures of closeness. Moreover, our experiments indicate that PageRank Affinity is very resilient to noise in the network. In addition, based on our method we build a tool that quickly finds nodes closest to a queried protein in any protein network, and easily scales to much larger biological networks. Conclusion We define a meaningful way to assess the closeness of two proteins in a PPI network, and show that our closeness measure is more biologically significant than other commonly used methods. We also develop a tool, accessible at http://xialab.bu.edu/resources/pnns, that allows the user to
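
    A minimal sketch of the idea, using networkx's personalized PageRank rather than the authors' tool (the graph, damping factor, and node choice are placeholders): the score is the stationary visit probability of the smaller-degree node in a random walk that restarts at the larger-degree node.

        import networkx as nx

        def pagerank_affinity(G, u, v, alpha=0.85):
            # restart at the larger-degree node; read off the smaller-degree node
            start, target = (u, v) if G.degree[u] >= G.degree[v] else (v, u)
            pr = nx.pagerank(G, alpha=alpha, personalization={start: 1.0})
            return pr[target]

        G = nx.karate_club_graph()
        print(pagerank_affinity(G, 0, 33))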

  4. Antibody affinity maturation

    DEFF Research Database (Denmark)

    Skjødt, Mette Louise

    Yeast surface display is an effective tool for antibody affinity maturation because yeast can be used as an all-in-one workhorse to assemble, display and screen diversified antibody libraries. By employing the natural ability of yeast Saccharomyces cerevisiae to efficiently recombine multiple DNA...

  5. A MEMS Dielectric Affinity Glucose Biosensor

    OpenAIRE

    Xian HUANG; Li, SiQi; Davis, Erin; Li, Dachao; Wang, Qian; Lin, Qiao

    2013-01-01

    Continuous glucose monitoring (CGM) sensors based on affinity detection are desirable for long-term and stable glucose management. However, most affinity sensors contain mechanical moving structures and complex design in sensor actuation and signal readout, limiting their reliability in subcutaneously implantable glucose detection. We have previously demonstrated a proof-of-concept dielectric glucose sensor that measured pre-mixed glucose-sensitive polymer solutions at various glucose concent...

  6. Critical evaluation of sequential extraction and sink-float methods used for the determination of Ga and Ge affinity in lignite

    Energy Technology Data Exchange (ETDEWEB)

    Zdenek Klika; Lenka Ambruzova; Ivana Sykorova; Jana Seidlerova; Ivan Kolomaznik [VSB-Technical University Ostrava, Ostrava (Czech Republic)

    2009-10-15

    The affinities of Ga and Ge in lignite were determined using sequential extraction (SE) and element affinity calculation (EAC) based on sink-float data. For this study, a bulk lignite sample was fractionated into two sets. The first set of samples (A) consisted of different grain-size fractions; the second set (B) was prepared by density fractionation. Sequential extractions (1) were performed on both sets of fractions, with very good agreement between the determined organic element affinities (OEA of Ga evaluated from the A data is 32%, from the B data 35%; OEA of Ge evaluated from the A data is 31% and from the B data 26%). The data for the B lignite fractions were evaluated using two element affinity calculations: (a) EAC (I) of Klika and Kolomaznik (2) and (b) a newly prepared subroutine, EAC (II), based on the quantitative contents of lignite macerals and minerals. Good agreement was obtained between these two methods as well (OEA of Ga calculated by EAC (I) is 83% and by EAC (II) 77%; OEA of Ge calculated by EAC (I) is 89% and by EAC (II) 97%). The significant differences between the organic element affinities of Ga and Ge evaluated by sequential extraction and by element affinity calculation based on sink-float data are discussed. 34 refs., 7 figs., 6 tabs.

  7. Structure-based model profiles affinity constant of drugs with hPEPT1 for rapid virtual screening of hPEPT1's substrate.

    Science.gov (United States)

    Sun, L; Meng, S

    2016-08-01

    The human proton-coupled peptide transporter (hPEPT1), which accepts a broad range of substrates, is an important route for improving the pharmacokinetic performance of drugs. Thus, it is essential to predict the affinity constant between a drug molecule and hPEPT1 for rapid virtual screening of hPEPT1 substrates during lead optimization, candidate selection, and hPEPT1 prodrug design. Here, a structure-based in silico model for 114 compounds was constructed based on eight structural parameters. This model was built by the multiple linear regression method and satisfied all the prerequisites of regression models. For the entire data set, the r(2) and adjusted r(2) values were 0.74 and 0.72, respectively. This model was then used to perform substrate/non-substrate classification. For 29 drugs from the DrugBank database, all were correctly classified as substrates of hPEPT1. The model was also used to perform substrate/non-substrate classification for 18 drugs and their prodrugs; this QSAR model could also distinguish between substrate and non-substrate. In conclusion, the QSAR model in this paper was validated by a large external data set, and all results indicated that the developed model is robust and stable and can be used for rapid virtual screening of hPEPT1 substrates in the early stage of drug discovery. PMID:27586363
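
    As a rough illustration of the modeling setup, the following sketch fits a multiple linear regression on a hypothetical 114 x 8 descriptor matrix and thresholds the predicted affinity for substrate/non-substrate classification; the descriptors, data, and cutoff are all invented, since the paper's are not reproduced here.

        import numpy as np
        from sklearn.linear_model import LinearRegression

        rng = np.random.default_rng(0)
        X = rng.normal(size=(114, 8))    # 114 compounds x 8 structural parameters
        y = X @ rng.normal(size=8) + rng.normal(scale=0.5, size=114)  # affinities

        model = LinearRegression().fit(X, y)
        print(f"r^2 = {model.score(X, y):.2f}")   # analogue of the reported r^2

        # substrate / non-substrate call from predicted affinity, assumed cutoff
        cutoff = np.median(y)
        is_substrate = model.predict(X) >= cutoff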

  8. Affine and degenerate affine BMW algebras: The center

    CERN Document Server

    Daugherty, Zajj; Virk, Rahbar

    2011-01-01

    The degenerate affine and affine BMW algebras arise naturally in the context of Schur-Weyl duality for orthogonal and symplectic Lie algebras and quantum groups, respectively. Cyclotomic BMW algebras, affine Hecke algebras, cyclotomic Hecke algebras, and their degenerate versions are quotients. In this paper the theory is unified by treating the orthogonal and symplectic cases simultaneously; we make an exact parallel between the degenerate affine and affine cases via a new algebra which takes the role of the affine braid group for the degenerate setting. A main result of this paper is an identification of the centers of the affine and degenerate affine BMW algebras in terms of rings of symmetric functions which satisfy a "cancellation property" or "wheel condition" (in the degenerate case, a reformulation of a result of Nazarov). Miraculously, these same rings also arise in Schubert calculus, as the cohomology and K-theory of isotropic Grassmanians and symplectic loop Grassmanians. We also establish new inte...

  9. Periodic cyclic homology of affine Hecke algebras

    CERN Document Server

    Solleveld, Maarten

    2009-01-01

    This is the author's PhD-thesis, which was written in 2006. The version posted here is identical to the printed one. Instead of an abstract, the short list of contents: Preface 5 1 Introduction 9 2 K-theory and cyclic type homology theories 13 3 Affine Hecke algebras 61 4 Reductive p-adic groups 103 5 Parameter deformations in affine Hecke algebras 129 6 Examples and calculations 169 A Crossed products 223 Bibliography 227 Index 237 Samenvatting 245 Curriculum vitae 253

  10. Investigation of conditional transport update in method of characteristics based coarse mesh finite difference transient calculation

    International Nuclear Information System (INIS)

    In an effort to achieve efficient yet accurate transport transient calculations for power reactors, a conditional transport update scheme for method of characteristics (MOC) based coarse mesh finite difference (CMFD) transient calculations is developed. In this scheme, the transport calculation serves as an online group constant generator for the 3-D CMFD transient calculation, and the solution of the 3-D transient problem is mainly obtained from the 3-D CMFD transient calculation. In order to reduce the computational burden of the intensive transport calculation, the transport update is performed conditionally, by monitoring changes in composition and core condition. This efficient transient transport method is applied to a 3x3 assembly rod ejection problem to examine the effectiveness and accuracy of the conditional transport calculation scheme. (author)
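
    The control logic of such a scheme can be sketched in a few lines. Below is our schematic reconstruction, not the authors' code: the transient is advanced with cheap CMFD steps, and a new MOC transport solve is triggered only when a monitored state vector (standing in for composition and core condition) drifts past a threshold; all solver callables are toy stand-ins.

        import numpy as np

        def run_transient(times, moc_update, cmfd_advance, state0, threshold=0.02):
            state = state0
            constants = moc_update(state)                  # initial transport solve
            ref = state.copy()
            for t in times:
                state = cmfd_advance(t, state, constants)  # cheap 3-D CMFD step
                drift = np.linalg.norm(state - ref) / np.linalg.norm(ref)
                if drift > threshold:                      # state moved enough
                    constants = moc_update(state)          # refresh group constants
                    ref = state.copy()
            return state

        # toy stand-ins so the sketch runs; a real code would call MOC/CMFD solvers
        moc = lambda s: s.mean()
        cmfd = lambda t, s, c: s + 0.001 * c
        print(run_transient(range(50), moc, cmfd, np.ones(10)))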

  11. Affinity driven social networks

    Science.gov (United States)

    Ruyú, B.; Kuperman, M. N.

    2007-04-01

    In this work we present a model for evolving networks, where the driving force is related to the social affinity between individuals of a population. In the model, a set of individuals initially arranged on a regular ordered network, and thus linked with their closest neighbors, are allowed to rearrange their connections according to a dynamics closely related to that of the stable marriage problem. We show that the behavior of some topological properties of the resulting networks follows a non-trivial pattern.

  12. Integration of Affinity Selection-Mass Spectrometry and Functional Cell-Based Assays to Rapidly Triage Druggable Target Space within the NF-κB Pathway.

    Science.gov (United States)

    Kutilek, Victoria D; Andrews, Christine L; Richards, Matthew P; Xu, Zangwei; Sun, Tianxiao; Chen, Yiping; Hashke, Andrew; Smotrov, Nadya; Fernandez, Rafael; Nickbarg, Elliott B; Chamberlin, Chad; Sauvagnat, Berengere; Curran, Patrick J; Boinay, Ryan; Saradjian, Peter; Allen, Samantha J; Byrne, Noel; Elsen, Nathaniel L; Ford, Rachael E; Hall, Dawn L; Kornienko, Maria; Rickert, Keith W; Sharma, Sujata; Shipman, Jennifer M; Lumb, Kevin J; Coleman, Kevin; Dandliker, Peter J; Kariv, Ilona; Beutel, Bruce

    2016-07-01

    The primary objective of early drug discovery is to associate druggable target space with a desired phenotype. The inability to efficiently associate these often leads to failure early in the drug discovery process. In this proof-of-concept study, the most tractable starting points for drug discovery within the NF-κB pathway model system were identified by integrating affinity selection-mass spectrometry (AS-MS) with functional cellular assays. The AS-MS platform Automated Ligand Identification System (ALIS) was used to rapidly screen 15 NF-κB proteins in parallel against large-compound libraries. ALIS identified 382 target-selective compounds binding to 14 of the 15 proteins. Without any chemical optimization, 22 of the 382 target-selective compounds exhibited a cellular phenotype consistent with the respective target associated in ALIS. Further studies on structurally related compounds distinguished two chemical series that exhibited a preliminary structure-activity relationship and confirmed target-driven cellular activity to NF-κB1/p105 and TRAF5, respectively. These two series represent new drug discovery opportunities for chemical optimization. The results described herein demonstrate the power of combining ALIS with cell functional assays in a high-throughput, target-based approach to determine the most tractable drug discovery opportunities within a pathway. PMID:26969322

  13. Evaluation of valence band top and electron affinity of SiO2 and Si-based semiconductors using X-ray photoelectron spectroscopy

    Science.gov (United States)

    Fujimura, Nobuyuki; Ohta, Akio; Makihara, Katsunori; Miyazaki, Seiichi

    2016-08-01

    An evaluation method for the energy level of the valence band (VB) top from the vacuum level (VL) for metals, dielectrics, and semiconductors from the results of X-ray photoelectron spectroscopy (XPS) is presented for the accurate determination of the energy band diagram for materials of interest. In this method, the VB top can be determined from the energy difference between the onset of VB signals and the cut-off energy for secondary photoelectrons, taking into account the X-ray excitation energy (hν). The energy level of the VB top for three kinds of Si-based materials (H-terminated Si, wet-cleaned 4H-SiC, and thermally grown SiO2) has been investigated by XPS under monochromatized Al Kα radiation (hν = 1486.6 eV). We have also demonstrated the determination of the electron affinity for the samples by this measurement technique in combination with the measured and reported energy bandgaps (Eg).
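
    Under the usual reading of this procedure (our assumption; the abstract gives no formulas), the VB top below the VL follows from the spectral width between the secondary-electron cutoff and the VB onset on a common kinetic-energy scale, and the electron affinity then follows by subtracting the bandgap:

        HV = 1486.6   # Al K-alpha excitation energy, eV

        def vb_top_below_vacuum(ke_cutoff, ke_vb_onset, hv=HV):
            # VB-top energy below the vacuum level (ionization energy), from the
            # secondary-electron cutoff and VB onset kinetic energies
            return hv - (ke_vb_onset - ke_cutoff)

        def electron_affinity(ke_cutoff, ke_vb_onset, e_gap, hv=HV):
            return vb_top_below_vacuum(ke_cutoff, ke_vb_onset, hv) - e_gap

        # illustrative, roughly Si-like numbers (eV) -- not measured values
        print(electron_affinity(5.00, 1486.45, 1.12))   # ~4.0 eV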

  14. Structures of native and affinity-enhanced WT1 epitopes bound to HLA-A*0201: Implications for WT1-based cancer therapeutics

    Energy Technology Data Exchange (ETDEWEB)

    Borbulevych, Oleg Y.; Do, Priscilla; Baker, Brian M. (Notre)

    2010-09-07

    Presentation of peptides by class I or class II major histocompatibility complex (MHC) molecules is required for the initiation and propagation of a T cell-mediated immune response. Peptides from the Wilms Tumor 1 transcription factor (WT1), upregulated in many hematopoietic and solid tumors, can be recognized by T cells and numerous efforts are underway to engineer WT1-based cancer vaccines. Here we determined the structures of the class I MHC molecule HLA-A*0201 bound to the native 126-134 epitope of the WT1 peptide and a recently described variant (R1Y) with improved MHC binding. The R1Y variant, a potential vaccine candidate, alters the positions of MHC charged side chains near the peptide N-terminus and significantly reduces the peptide/MHC electrostatic surface potential. These alterations indicate that the R1Y variant is an imperfect mimic of the native WT1 peptide, and suggest caution in its use as a therapeutic vaccine. Stability measurements revealed how the R1Y substitution enhances MHC binding affinity, and together with the structures suggest a strategy for engineering WT1 variants with improved MHC binding that retain the structural features of the native peptide/MHC complex.

  15. Structures of native and affinity-enhanced WT1 epitopes bound to HLA-A*0201: implications for WT1-based cancer therapeutics.

    Science.gov (United States)

    Borbulevych, Oleg Y; Do, Priscilla; Baker, Brian M

    2010-09-01

    Presentation of peptides by class I or class II major histocompatibility complex (MHC) molecules is required for the initiation and propagation of a T cell-mediated immune response. Peptides from the Wilms Tumor 1 transcription factor (WT1), upregulated in many hematopoietic and solid tumors, can be recognized by T cells and numerous efforts are underway to engineer WT1-based cancer vaccines. Here we determined the structures of the class I MHC molecule HLA-A*0201 bound to the native 126-134 epitope of the WT1 peptide and a recently described variant (R1Y) with improved MHC binding. The R1Y variant, a potential vaccine candidate, alters the positions of MHC charged side chains near the peptide N-terminus and significantly reduces the peptide/MHC electrostatic surface potential. These alterations indicate that the R1Y variant is an imperfect mimic of the native WT1 peptide, and suggest caution in its use as a therapeutic vaccine. Stability measurements revealed how the R1Y substitution enhances MHC binding affinity, and together with the structures suggest a strategy for engineering WT1 variants with improved MHC binding that retain the structural features of the native peptide/MHC complex. PMID:20619457

  16. Development of a Novel Optical Biosensor for Detection of Organophoshorus Pesticides Based on Methyl Parathion Hydrolase Immobilized by Metal-Chelate Affinity

    Directory of Open Access Journals (Sweden)

    Wensheng Lan

    2012-06-01

    Full Text Available We have developed a novel optical biosensor device using recombinant methyl parathion hydrolase (MPH) enzyme immobilized on agarose by metal-chelate affinity to detect organophosphorus (OP) compounds with a nitrophenyl group. The biosensor principle is based on optical measurement of the product of OP catalysis by MPH (p-nitrophenol). Briefly, MPH containing six sequential histidines (6× His tag) at its N-terminus was bound to nitrilotriacetic acid (NTA) agarose with Ni ions, resulting in flexible immobilization of the bio-reaction platform. The optical biosensing system consisted of two light-emitting diodes (LEDs) and one photodiode. The LED that emitted light at the wavelength of maximum absorption for p-nitrophenol served as the signal light, while the other LED, at a wavelength where the product shows no absorbance, served as the reference light. The optical sensing system detected absorbance that was linearly correlated with methyl parathion (MP) concentration, and the detection limit was estimated to be 4 μM. Sensor hysteresis was investigated, and the results showed that in the lower MP concentration range the difference between the opposing process curves was very small. With its easy immobilization of enzymes and simple structural design, the system has the potential to be developed into a practical portable detector for field applications.
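
    The read-out reduces to a Beer-Lambert-style linear calibration of absorbance against MP concentration. A sketch with invented numbers (the paper's calibration data are not reproduced here):

        import numpy as np

        # hypothetical calibration of the two-LED readout
        mp_uM = np.array([0.0, 10.0, 20.0, 40.0, 80.0])    # MP standards
        i_sig = np.array([1.00, 0.89, 0.79, 0.63, 0.40])   # signal-LED photodiode
        i_ref = np.ones_like(i_sig)                        # reference LED (no absorption)

        A = np.log10(i_ref / i_sig)                        # absorbance of p-nitrophenol
        slope, intercept = np.polyfit(mp_uM, A, 1)         # linear calibration

        unknown_A = 0.15
        print((unknown_A - intercept) / slope, "uM MP")    # concentration read-out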

  17. Organ dose calculation in CT based on scout image data and automatic image registration

    Energy Technology Data Exchange (ETDEWEB)

    Kortesniemi, Mika; Salli, Eero; Seuri, Raija [HUS Helsinki Medical Imaging Center, Univ. of Helsinki, Helsinki (Finland)], E-mail: mika.kortesniemi@hus.fi

    2012-10-15

    Background Computed tomography (CT) has become the main contributor to the cumulative radiation exposure in radiology. Information on the cumulative exposure history of the patient should be available for efficient management of radiation exposures and for radiological justification. Purpose To develop and evaluate automatic image registration for organ dose calculation in CT. Material and Methods Planning radiograph (scout) image data describing CT scan ranges from 15 thoracic CT examinations (9 men and 6 women) and 10 abdominal CT examinations (6 men and 4 women) were co-registered with the reference trunk CT scout image. A 2-D affine transformation and a normalized correlation metric were used for image registration. Longitudinal (z-axis) scan range coordinates on the reference scout image were converted into slice locations on the CT-Expo anthropomorphic male and female models, followed by organ and effective dose calculations. Results The average deviation of the z-location of the studied patient images from the corresponding location in the reference scout image was 6.2 mm. The ranges of organ and effective doses with constant exposure parameters were from 0 to 28.0 mGy and from 7.3 to 14.5 mSv, respectively. The mean deviation of the doses for fully irradiated organs (inside the scan range), partially irradiated organs, and non-irradiated organs (outside the scan range) was 1%, 5%, and 22%, respectively, due to image registration. Conclusion The automated image processing method to register individual chest and abdominal CT scout radiographs with the reference scout radiograph is feasible. It can be used to determine the individual scan range coordinates in the z-direction to calculate organ dose values. The presented method could be utilized in automatic organ dose calculation in CT for radiation exposure tracking of patients.

  18. Organ dose calculation in CT based on scout image data and automatic image registration

    International Nuclear Information System (INIS)

    Background Computed tomography (CT) has become the main contributor to the cumulative radiation exposure in radiology. Information on the cumulative exposure history of the patient should be available for efficient management of radiation exposures and for radiological justification. Purpose To develop and evaluate automatic image registration for organ dose calculation in CT. Material and Methods Planning radiograph (scout) image data describing CT scan ranges from 15 thoracic CT examinations (9 men and 6 women) and 10 abdominal CT examinations (6 men and 4 women) were co-registered with the reference trunk CT scout image. A 2-D affine transformation and a normalized correlation metric were used for image registration. Longitudinal (z-axis) scan range coordinates on the reference scout image were converted into slice locations on the CT-Expo anthropomorphic male and female models, followed by organ and effective dose calculations. Results The average deviation of the z-location of the studied patient images from the corresponding location in the reference scout image was 6.2 mm. The ranges of organ and effective doses with constant exposure parameters were from 0 to 28.0 mGy and from 7.3 to 14.5 mSv, respectively. The mean deviation of the doses for fully irradiated organs (inside the scan range), partially irradiated organs, and non-irradiated organs (outside the scan range) was 1%, 5%, and 22%, respectively, due to image registration. Conclusion The automated image processing method to register individual chest and abdominal CT scout radiographs with the reference scout radiograph is feasible. It can be used to determine the individual scan range coordinates in the z-direction to calculate organ dose values. The presented method could be utilized in automatic organ dose calculation in CT for radiation exposure tracking of patients
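
    A 1-D toy version of the registration step conveys the idea: find the longitudinal shift that maximizes the normalized correlation between intensity profiles of the patient scout and the reference scout. This is a simplified stand-in for the paper's 2-D affine registration, with invented profiles:

        import numpy as np

        def z_offset(patient, reference):
            # longitudinal (z) shift, in rows, that best aligns the patient
            # scout's row-mean intensity profile with the reference's
            p = (patient - patient.mean()) / patient.std()
            r = (reference - reference.mean()) / reference.std()
            corr = np.correlate(r, p, mode="full")   # normalized correlation
            return int(corr.argmax()) - (len(p) - 1)

        # toy profiles: the "patient" is the reference shifted by 25 rows
        ref = np.sin(np.linspace(0, 8 * np.pi, 400)) + np.linspace(0, 1, 400)
        pat = ref[25:300]
        print(z_offset(pat, ref))   # -> 25; rows convert to mm via pixel spacing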

  19. THEXSYST - a knowledge based system for the control and analysis of technical simulation calculations

    International Nuclear Information System (INIS)

    This system (THEXSYST) will be used for the control, analysis, and presentation of thermal-hydraulic simulation calculations of light water reactors. THEXSYST is a modular system consisting of an expert shell with a user interface, a data base, and a simulation program, and uses techniques available in RSYST. A knowledge base, created to control the simulation of pressurized water reactors, covers both the steady-state calculation and the transient calculation in the depressurization regime following a small-break loss-of-coolant accident. The methods developed are tested using a simulation with RELAP5/Mod2. It is shown that the application of knowledge-based techniques can be a helpful tool to support existing solutions, especially in graphical analysis. (orig./HP)

  20. Calculating Program for Decommissioning Work Productivity based on Decommissioning Activity Experience Data

    Energy Technology Data Exchange (ETDEWEB)

    Song, Chan-Ho; Park, Seung-Kook; Park, Hee-Seong; Moon, Jei-kwon [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2014-10-15

    KAERI is performing research to calculate a decommissioning work-unit productivity coefficient, used to estimate the time and cost of decommissioning work, based on decommissioning activity experience data for KRR-2. KAERI calculates decommissioning costs and manages decommissioning activity experience data through systems such as the decommissioning information management system (DECOMMIS), the Decommissioning Facility Characterization DB System (DEFACS), and the decommissioning work-unit productivity calculation system (DEWOCS). In particular, KAERI has based its decommissioning cost calculations on data organized as a coded work breakdown structure (WBS) built from the KRR-2 decommissioning activity experience data. The defined WBS codes are used by each system to calculate the decommissioning cost. In this paper, we describe a program that calculates the decommissioning cost using the decommissioning experience of KRR-2, UCP, and other countries, through the mapping of similar target facilities between an NPP and KRR-2. This paper is organized as follows. Chapter 2 discusses the decommissioning work productivity calculation method, and the mapping method for the decommissioning target facility is described within the calculating program for decommissioning work productivity. At KAERI, research on various decommissioning methodologies for domestic NPPs will be conducted in the near future. In particular, it is difficult to determine the cost of decommissioning an NPP facility because many variables exist, such as the material, size, and radiological condition of the target facility.

  1. Calculating Program for Decommissioning Work Productivity based on Decommissioning Activity Experience Data

    International Nuclear Information System (INIS)

    KAERI is performing research to calculate a decommissioning work-unit productivity coefficient, used to estimate the time and cost of decommissioning work, based on decommissioning activity experience data for KRR-2. KAERI calculates decommissioning costs and manages decommissioning activity experience data through systems such as the decommissioning information management system (DECOMMIS), the Decommissioning Facility Characterization DB System (DEFACS), and the decommissioning work-unit productivity calculation system (DEWOCS). In particular, KAERI has based its decommissioning cost calculations on data organized as a coded work breakdown structure (WBS) built from the KRR-2 decommissioning activity experience data. The defined WBS codes are used by each system to calculate the decommissioning cost. In this paper, we describe a program that calculates the decommissioning cost using the decommissioning experience of KRR-2, UCP, and other countries, through the mapping of similar target facilities between an NPP and KRR-2. This paper is organized as follows. Chapter 2 discusses the decommissioning work productivity calculation method, and the mapping method for the decommissioning target facility is described within the calculating program for decommissioning work productivity. At KAERI, research on various decommissioning methodologies for domestic NPPs will be conducted in the near future. In particular, it is difficult to determine the cost of decommissioning an NPP facility because many variables exist, such as the material, size, and radiological condition of the target facility
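
    The productivity coefficient itself is a simple ratio, and the cost estimate scales it by quantity and labor rate. A minimal sketch under those assumed definitions (not DEWOCS internals; all numbers and item names are hypothetical):

        # hypothetical KRR-2-style experience records, per WBS work item
        experience = {
            "cutting_pipe_m":   {"man_hours": 420.0, "quantity": 600.0},
            "decon_surface_m2": {"man_hours": 900.0, "quantity": 1500.0},
        }

        def productivity(item):
            # work-unit productivity: man-hours consumed per unit quantity
            e = experience[item]
            return e["man_hours"] / e["quantity"]

        def estimated_cost(item, quantity, rate_per_hour):
            # estimate for a mapped target facility: coefficient x quantity x rate
            return productivity(item) * quantity * rate_per_hour

        print(estimated_cost("cutting_pipe_m", 250.0, 45.0))  # notional cost units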

  2. Integrated Power Flow and Short Circuit Calculation Method for Distribution Network with Inverter Based Distributed Generation

    Directory of Open Access Journals (Sweden)

    Shan Yang

    2016-01-01

    Full Text Available Power flow calculation and short circuit calculation are the basis of theoretical research for distribution networks with inverter-based distributed generation. The similarity of the equivalent models of inverter-based distributed generation under normal and fault conditions of the distribution network, and the differences between power flow and short circuit calculations, are analyzed in this paper. An integrated power flow and short circuit calculation method for distribution networks with inverter-based distributed generation is then proposed. The proposed method models the inverter-based distributed generation as an Iθ bus, which makes it suitable for calculating the power flow of a distribution network with current-limited inverter-based distributed generation. The low voltage ride-through capability of inverter-based distributed generation can be taken into account as well. Finally, power flow and short circuit current calculations are performed on a 33-bus distribution network. The results from the proposed method are compared with those of the traditional method and a simulation method, verifying the effectiveness of the integrated method suggested in this paper.
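
    To make the Iθ-bus idea concrete, here is a toy backward/forward sweep on a 3-bus feeder in which the inverter-based DG is represented as a fixed current magnitude and angle injected at its bus. This is our illustration of the bus model, not the paper's algorithm; all network data are invented.

        import numpy as np

        # 3-bus feeder: slack (0) -> bus 1 -> bus 2, per-unit values, invented
        V = np.ones(3, dtype=complex)                    # bus voltages, slack fixed
        z = np.array([0.01 + 0.03j, 0.01 + 0.03j])       # branch impedances
        s_load = np.array([0, 0.3 + 0.1j, 0.2 + 0.05j])  # constant-power loads
        i_dg = 0.25 * np.exp(1j * np.deg2rad(-10))       # I-theta: fixed |I|, angle
        bus_dg = 2

        for _ in range(20):                              # backward/forward sweeps
            i_inj = np.conj(s_load / V)                  # load currents (backward)
            i_inj[bus_dg] -= i_dg                        # DG injects at its bus
            i_branch = np.cumsum(i_inj[::-1])[::-1][1:]  # branch currents (suffix sums)
            for k in range(2):                           # forward sweep from slack
                V[k + 1] = V[k] - z[k] * i_branch[k]

        print(np.abs(V), np.angle(V, deg=True))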

  3. On the Affine Isoperimetric Inequalities

    Indian Academy of Sciences (India)

    Wuyang Yu; Gangsong Leng

    2011-11-01

    We obtain an isoperimetric inequality which estimates the affine-invariant p-surface area measure on convex bodies. We also establish the reverse version of the L_p-Petty projection inequality and an affine isoperimetric inequality for Γ_{-p}K.

  4. Parameterization of an effective potential for protein-ligand binding from host-guest affinity data.

    Science.gov (United States)

    Wickstrom, Lauren; Deng, Nanjie; He, Peng; Mentes, Ahmet; Nguyen, Crystal; Gilson, Michael K; Kurtzman, Tom; Gallicchio, Emilio; Levy, Ronald M

    2016-01-01

    Force field accuracy is still one of the "stalemates" in biomolecular modeling. Model systems with high quality experimental data are valuable instruments for the validation and improvement of effective potentials. With respect to protein-ligand binding, organic host-guest complexes have long served as models for both experimental and computational studies because of the abundance of binding affinity data available for such systems. Binding affinity data collected for cyclodextrin (CD) inclusion complexes, a popular model for molecular recognition, is potentially a more reliable resource for tuning energy parameters than hydration free energy measurements. Convergence of binding free energy calculations on CD host-guest systems can also be obtained rapidly, thus offering the opportunity to assess the robustness of these parameters. In this work, we demonstrate how implicit solvent parameters can be developed using binding affinity experimental data and the binding energy distribution analysis method (BEDAM) and validated using the Grid Inhomogeneous Solvation Theory analysis. These new solvation parameters were used to study protein-ligand binding in two drug targets against the HIV-1 virus and improved the agreement between the calculated and the experimental binding affinities. This work illustrates how benchmark sets of high quality experimental binding affinity data and physics-based binding free energy models can be used to evaluate and optimize force fields for protein-ligand systems. Copyright © 2015 John Wiley & Sons, Ltd. PMID:26256816

  5. Development of windows based application for the calculation of atomic hyperfine spectrum of odd isotopes

    International Nuclear Information System (INIS)

    A Windows-based application has been developed for the calculation of the atomic hyperfine spectra of odd isotopes, keeping in view the needs of atomic spectroscopists. The application can also calculate the hyperfine spectrum of another odd isotope if the hyperfine structure constants of one isotope are known. Various features of the developed application are discussed. (author)

  6. Study on the Cost Calculation of Local Fixed Telecom Network Based on Unbundled Network Elements

    Institute of Scientific and Technical Information of China (English)

    XU Liang; LIANG Xiong-jian; HUANG Xiu-qing

    2005-01-01

    In this paper, according to the practical conditions of local fixed telecom networks and based on the realistic total element long-run incremental cost method, practical methods for dividing the network into elements and for calculating the costs of network elements and services are given, to provide a reference for cost calculation in the telecom industry.

  7. Adjoint affine fusion and tadpoles

    OpenAIRE

    Urichuk, Andrew; Walton, Mark A.

    2016-01-01

    We study affine fusion with the adjoint representation. For simple Lie algebras, elementary and universal formulas determine the decomposition of a tensor product of an integrable highest-weight representation with the adjoint representation. Using the (refined) affine depth rule, we prove that equally striking results apply to adjoint affine fusion. For diagonal fusion, a coefficient equals the number of nonzero Dynkin labels of the relevant affine highest weight, minus 1. A nice lattice-pol...
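
    The quoted diagonal rule is simple enough to state as code; the snippet below merely restates it (it is not a general fusion calculator, and the example weight is our own):

        def diagonal_adjoint_fusion_mult(affine_labels):
            # multiplicity of a weight in its own fusion with the adjoint, per
            # the rule quoted above: number of nonzero affine Dynkin labels, minus 1
            return sum(1 for a in affine_labels if a != 0) - 1

        # su(3) level 2, adjoint weight (1,1): affine labels [a0, a1, a2] = [0, 1, 1]
        print(diagonal_adjoint_fusion_mult([0, 1, 1]))   # -> 1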

  8. Off-line hyphenation of boronate affinity monolith-based extraction with matrix-assisted laser desorption/ionization time-of-flight mass spectrometry for efficient analysis of glycoproteins/glycopeptides.

    Science.gov (United States)

    Bie, Zijun; Chen, Yang; Li, Hengye; Wu, Ronghu; Liu, Zhen

    2014-06-27

    Boronate affinity materials have attracted increasing attention as sample enrichment platforms for glycoproteomic analysis in recent years. However, most of the boronate affinity materials that have been employed for proteomic analysis suffer from apparent disadvantages, such as a requirement for alkaline binding pH, weak affinity, and relatively poor selectivity. Benzoboroxoles are a unique class of boronic acids which show excellent binding properties for the recognition of cis-diol-containing compounds. Recently, a 3-carboxy-benzoboroxole-functionalized monolithic column was reported that exhibited the best selectivity and affinity as well as the lowest binding pH among all reported boronate affinity monolithic columns. In this study, an off-line hyphenation of extraction on this boronate affinity monolithic column with matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF MS) was developed, and the power of this hyphenated approach in the analysis of glycoproteins and glycopeptides in complex samples was investigated. The approach was first applied to the analysis of glycopeptides in the tryptic digest of horseradish peroxidase (HRP). In total, 22 glycopeptides were identified. To the best of our knowledge, this is the best performance among all boronic acid-functionalized materials. We further employed this approach in the analysis of intact proteins in human saliva, where 6 intact glycoproteins were successfully identified. By comparison, when the samples were analyzed without extraction, only a few glycopeptides were identified from the tryptic digest of HRP, while no glycoproteins were found in the saliva samples. PMID:24928239

  9. Affinity purification of aprotinin from bovine lung.

    Science.gov (United States)

    Xin, Yu; Liu, Lanhua; Chen, Beizhan; Zhang, Ling; Tong, Yanjun

    2015-05-01

    An affinity protocol for the purification of aprotinin from bovine lung was developed. To simulate the structure of sucrose octasulfate, a natural specific probe for aprotinin, the affinity ligand was composed of an acidic head and a hydrophobic stick, and was then linked to Sepharose. The sorbent was subjected to adsorption analysis with pure aprotinin. The purification process consisted of one step of affinity chromatography and a further ultrafiltration step. The purified aprotinin was then subjected to sodium dodecyl sulfate polyacrylamide gel electrophoresis, trypsin inhibitor activity, gel-filtration, and thin-layer chromatography analysis. As calculated, the theoretical maximum adsorption (Qmax) of the affinity sorbent was 25,476.0 ± 184.8 kallikrein inactivator units/g wet gel; the dissociation constant of the complex "immobilized ligand-aprotinin" (Kd) was 4.6 ± 0.1 kallikrein inactivator units/mL. After the affinity separation of bovine lung aprotinin, reducing sodium dodecyl sulfate polyacrylamide gel electrophoresis analysis and gel-filtration chromatography revealed that the protein was a single polypeptide, with purities of ∼97 and 100%, respectively; the purified peptide was also confirmed against an aprotinin standard by gel-filtration chromatography and thin-layer chromatography. Over the whole purification process, protein and bioactivity recoveries were 2.2 and 92.6%, respectively, and the specific activity was up to 15,907.1 ± 10.2 kallikrein inactivator units/mg. PMID:25677462
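
    The reported Qmax and Kd are the parameters of a saturation isotherm. Assuming a single-site Langmuir model (the abstract does not name the model), such constants can be fitted from adsorption data, e.g.:

        import numpy as np
        from scipy.optimize import curve_fit

        def langmuir(c, qmax, kd):
            # single-site isotherm: bound Q as a function of free ligand c
            return qmax * c / (kd + c)

        # invented data: free aprotinin (KIU/mL) vs bound (KIU/g wet gel)
        c = np.array([0.5, 1, 2, 5, 10, 20, 40], dtype=float)
        q = np.array([2450, 4460, 7580, 13000, 17100, 20300, 22400], dtype=float)

        (qmax, kd), _ = curve_fit(langmuir, c, q, p0=(25000.0, 5.0))
        print(f"Qmax ~ {qmax:.0f} KIU/g, Kd ~ {kd:.1f} KIU/mL")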

  10. Binding affinities of Schiff base Fe(II) complex with BSA and calf-thymus DNA: Spectroscopic investigations and molecular docking analysis.

    Science.gov (United States)

    Rudra, Suparna; Dasmandal, Somnath; Patra, Chiranjit; Kundu, Arjama; Mahapatra, Ambikesh

    2016-09-01

    The binding interactions of a synthesized Schiff base Fe(II) complex with the biological macromolecules bovine serum albumin (BSA) and calf thymus (ct) DNA have been investigated using different spectroscopic techniques coupled with viscosity measurements at physiological pH and 298K. Systematic changes in the emission intensity of BSA upon addition of the complex indicate significant interaction between them, which has been characterized by Stern-Volmer plots and thermodynamic binding parameters. On the basis of this quenching technique, one binding site with a binding constant of Kb=(7.6±0.21)×10(5) between the complex and the protein has been obtained at 298K. Time-resolved fluorescence studies have also been employed to understand the mechanism of quenching induced by the complex. The binding affinities of the complex for the fluorophores of BSA, namely tryptophan (Trp) and tyrosine (Tyr), have been judged by synchronous fluorescence studies. Secondary structural changes of BSA caused by the complex have been revealed by CD spectra. On the other hand, the hypochromicity of the absorption spectra of the complex upon addition of ct-DNA, and the gradual reduction in the emission intensity of ethidium bromide-bound ct-DNA in the presence of the complex, indicate a noticeable interaction between ct-DNA and the complex, with a binding constant of (4.2±0.11)×10(6)M(-1). Lifetime measurements have been used to determine the relative amplitude of binding of the complex to ct-DNA base pairs. The mode of binding of the complex to ct-DNA has been deciphered by viscosity measurements, and CD spectra have also been used to follow the changes in ct-DNA structure upon binding of the metal complex. Density functional theory (DFT) and molecular docking analysis have been employed to highlight the interaction and the binding location of the complex with the macromolecules. PMID:27214273
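
    The quenching analysis named here follows standard forms: the Stern-Volmer plot F0/F = 1 + Ksv[Q], and the double-log plot log((F0-F)/F) = log Kb + n log[Q] for the binding constant and number of sites. A sketch with invented intensities (not the paper's data):

        import numpy as np

        # invented quenching data: BSA emission F at quencher concentration q
        q = np.array([2, 4, 6, 8, 10]) * 1e-6                 # mol/L
        f0, f = 100.0, np.array([88.0, 78.5, 71.0, 64.8, 59.6])

        ksv = np.polyfit(q, f0 / f, 1)[0]                     # F0/F = 1 + Ksv q
        n, log_kb = np.polyfit(np.log10(q), np.log10((f0 - f) / f), 1)
        print(f"Ksv = {ksv:.2e} /M, Kb = {10**log_kb:.2e} /M, n = {n:.2f}")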

  11. Comparison of a Label-Free Quantitative Proteomic Method Based on Peptide Ion Current Area to the Isotope Coded Affinity Tag Method

    Directory of Open Access Journals (Sweden)

    Young Ah Goo

    2008-01-01

    Recently, several research groups have published methods for the determination of proteomic expression profiles by mass spectrometry without the use of exogenously added stable isotopes or stable isotope dilution theory. These so-called label-free methods have the advantage of allowing data on each sample to be acquired independently from all other samples, to which they can later be compared in silico for the purpose of measuring changes in protein expression between various biological states. We developed label-free software based on direct measurement of peptide ion current area (PICA) and compared it to two other methods, a simpler label-free method known as spectral counting and the isotope coded affinity tag (ICAT) method. Data analysis by these methods of a standard mixture containing proteins of known, but varying, concentrations showed that they performed similarly, with a mean squared error of 0.09. Additionally, complex bacterial protein mixtures spiked with known concentrations of standard proteins were analyzed using the PICA label-free method. These results indicated that the PICA method detected all levels of standard spiked proteins at the 90% confidence level in this complex biological sample. This finding confirms that label-free methods based on direct measurement of the area under a single ion current trace perform as well as the standard ICAT method. Given that label-free methods provide ease in experimental design well beyond pair-wise comparison, label-free methods such as our PICA method are well suited for proteomic expression profiling of large numbers of samples as is needed in clinical analysis.

  12. Aptamer Affinity Maturation by Resampling and Microarray Selection.

    Science.gov (United States)

    Kinghorn, Andrew B; Dirkzwager, Roderick M; Liang, Shaolin; Cheung, Yee-Wai; Fraser, Lewis A; Shiu, Simon Chi-Chin; Tang, Marco S L; Tanner, Julian A

    2016-07-19

    Aptamers have significant potential as affinity reagents, but better approaches are critically needed to discover higher-affinity nucleic acids to widen the scope of their diagnostic, therapeutic, and proteomic applications. Here, we report aptamer affinity maturation, a novel aptamer enhancement technique that combines bioinformatic resampling of aptamer sequence data and microarray selection to navigate the combinatorial-chemistry binding landscape. Aptamer affinity maturation is shown to improve aptamer affinity by an order of magnitude in a single round. The novel aptamers exhibited significant adaptation, the complexity of which precludes discovery by other microarray-based methods. Honing aptamer sequences using aptamer affinity maturation could help optimize a next generation of nucleic acid affinity reagents. PMID:27346322

  13. The effect of statistical uncertainty on inverse treatment planning based on Monte Carlo dose calculation

    Science.gov (United States)

    Jeraj, Robert; Keall, Paul

    2000-12-01

    The effect of the statistical uncertainty, or noise, in inverse treatment planning for intensity modulated radiotherapy (IMRT) based on Monte Carlo dose calculation was studied. Sets of Monte Carlo beamlets were calculated to give uncertainties at Dmax ranging from 0.2% to 4% for a lung tumour plan. The weights of these beamlets were optimized using a previously described procedure based on a simulated annealing optimization algorithm. Several different objective functions were used. It was determined that the use of Monte Carlo dose calculation in inverse treatment planning introduces two errors in the calculated plan. In addition to the statistical error due to the statistical uncertainty of the Monte Carlo calculation, a noise convergence error also appears. For the statistical error it was determined that apparently successfully optimized plans with a noisy dose calculation (3% 1σ at Dmax), which satisfied the required uniformity of the dose within the tumour, showed as much as 7% underdose when recalculated with a noise-free dose calculation. The statistical error is larger towards the tumour and is only weakly dependent on the choice of objective function. The noise convergence error appears because the optimum weights determined using a noisy calculation differ from the optimum weights determined for a noise-free calculation. Unlike the statistical error, the noise convergence error is generally larger outside the tumour, is case dependent and strongly depends on the required objectives.
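
    A minimal sketch (not the authors' code; geometry, objective and noise level are invented) of the effect described above: beamlet weights are optimized against a noisy dose matrix, then the plan is re-evaluated with the noise-free matrix, exposing the extra underdose that noisy optimization hides.

```python
import numpy as np

rng = np.random.default_rng(0)

n_vox, n_beam = 200, 20
D_true = rng.uniform(0.0, 1.0, (n_vox, n_beam))   # noise-free beamlet doses
target = np.ones(n_vox)                           # prescribed unit dose

def optimize(D, n_iter=5000, lr=1e-3):
    """Least-squares weight optimization by projected gradient descent."""
    w = np.full(D.shape[1], 1.0 / D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ w - target)
        w = np.maximum(w - lr * grad, 0.0)        # keep weights non-negative
    return w

for rel_noise in (0.0, 0.03):                     # noise-free vs 3% 1-sigma
    D_noisy = D_true * (1 + rel_noise * rng.standard_normal(D_true.shape))
    w = optimize(D_noisy)                         # optimize on noisy doses
    recalc = D_true @ w                           # noise-free recalculation
    print(f"noise {rel_noise:4.0%}: minimum target dose "
          f"{recalc.min():.3f} on noise-free recalculation")
```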

  14. Short Circuit Calculation in Networks with a High Share of Inverter Based Distributed Generation

    OpenAIRE

    Margossian, Harag; Deconinck, Geert; Sachau, Jürgen

    2014-01-01

    Conventional short circuit calculation programs do not consider the actual behavior of inverter-based distributed generation (DG). Several techniques to consider it have been suggested in the literature and are briefly described in this paper. A tool is developed combining these techniques. The approach uses standard short circuit calculation tools and accommodates inverter-based DG with different fault responses. The approach is evaluated and compared against other techniques usi...

  15. Data base structure and Management for Automatic Calculation of 210Pb Dating Methods Applying Different Models

    International Nuclear Information System (INIS)

    The introduction of macros in the calculation sheets allows the automatic application of various dating models using unsupported ²¹⁰Pb data from a data base. The calculation books that contain the models have been modified to permit the implementation of these macros. The Marine and Aquatic Radioecology group of CIEMAT (MARG) will be involved in new European projects, so new models have been developed. This report contains a detailed description of: a) the newly implemented macros, b) the design of a dating menu in the calculation sheet, and c) the organization and structure of the data base. (Author) 4 refs

  16. Can we beat the biotin-avidin pair?: cucurbit[7]uril-based ultrahigh affinity host-guest complexes and their applications.

    Science.gov (United States)

    Shetty, Dinesh; Khedkar, Jayshree K; Park, Kyeng Min; Kim, Kimoon

    2015-12-01

    The design of synthetic, monovalent host-guest molecular recognition pairs is still challenging, and it is of particular interest to inquire into the limits of the affinity that can be achieved with designed systems. In this regard, cucurbit[7]uril (CB[7]), an important member of the host family cucurbit[n]uril (CB[n], n = 5-8, 10, 14), has attracted much attention because of its ability to form ultra-stable complexes with multiple guests. The strong hydrophobic effect between the host cavity and guests, together with the ion-dipole and dipole-dipole interactions of guests with the CB portals, provides the cooperative and multiple noncovalent interactions essential for realizing such strong complexation. These highly selective, strong yet dynamic interactions can be exploited in many applications including affinity chromatography, biomolecule immobilization, protein isolation, biological catalysis, and sensor technologies. In this review, we summarize the progress in the development of high affinity guests for CB[7], the factors affecting the stability of the complexes, theoretical insights, and the utility of these high affinity pairs in different challenging applications. PMID:26434388

  17. A New Optimization Method for Centrifugal Compressors Based on 1D Calculations and Analyses

    Directory of Open Access Journals (Sweden)

    Pei-Yuan Li

    2015-05-01

    This paper presents an optimization design method for centrifugal compressors based on one-dimensional calculations and analyses. It consists of two parts: (1) centrifugal compressor geometry optimization based on one-dimensional calculations and (2) matching optimization of the vaned diffuser with the impeller based on the required throat area. A low-pressure-stage centrifugal compressor in an MW-level gas turbine is optimized by this method. One-dimensional calculation results show that D3/D2 is too large in the original design, resulting in the low efficiency of the entire stage. Based on the one-dimensional optimization results, the geometry of the diffuser has been redesigned. The outlet diameter of the vaneless diffuser has been reduced, and the original single-stage diffuser has been replaced by a tandem vaned diffuser. After optimization, the entire stage pressure ratio is increased by approximately 4%, and the efficiency is increased by approximately 2%.

  18. Implications to Postsecondary Faculty of Alternative Calculation Methods of Gender-Based Wage Differentials.

    Science.gov (United States)

    Hagedorn, Linda Serra

    1998-01-01

    A study explored two distinct methods of calculating a precise measure of gender-based wage differentials among college faculty. The first estimation considered wage differences using a formula based on human capital; the second included compensation for past discriminatory practices. Both measures were used to predict three specific aspects of…

  19. An affine framework for analytical mechanics

    OpenAIRE

    Urbanski, Pawel

    2003-01-01

    An affine Cartan calculus is developed. The concepts of special affine bundles and special affine duality are introduced. The canonical isomorphisms, fundamental for Lagrangian and Hamiltonian formulations of the dynamics in the affine setting are proved.

  20. Calculation of activities in some gallium-based systems with a miscibility gap

    Directory of Open Access Journals (Sweden)

    IWAO KATAYAMA

    2003-09-01

    The calculations of thermodynamic properties in some gallium-based systems with a miscibility gap, Ga–Tl, Ga–Hg and Ga–Pb, are presented in this paper. The determination of the gallium activities in the mentioned liquid alloys was based on their known phase diagrams, using the Zhang-Chou method for calculating activities from phase diagrams involving two coexisting liquid or solid phases. The activities of gallium in the Ga–Tl, Ga–Hg and Ga–Pb systems were calculated in the temperature ranges 973–1273 K, 573–873 K and 1000–1100 K, respectively. The activities of the other component in all the investigated systems were obtained by the Gibbs-Duhem equation. The results of the calculations are compared with literature data.
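
    As a reminder of the step the abstract describes, a minimal sketch of the Gibbs-Duhem integration for a binary A-B melt (one common textbook form; the notation is assumed, not taken from the paper):

```latex
% At constant T and p, x_A d(ln gamma_A) + x_B d(ln gamma_B) = 0, so the
% activity coefficient of B follows from that of A by integration from
% pure B (x_B = 1, gamma_B = 1) to the composition of interest:
\begin{equation}
  \ln \gamma_B(x_B) = -\int_{x_B'=1}^{x_B} \frac{x_A}{x_B'}\,
  \mathrm{d}\ln \gamma_A , \qquad a_B = \gamma_B\, x_B .
\end{equation}
```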

  1. 基于理性设计的β-甘露聚糖酶底物亲和力的定向改造%Directed modification of β-mannanase substrate affinity based on rational design

    Institute of Scientific and Technical Information of China (English)

    魏喜换; 王春娟; 赵梅; 李剑芳; 邬敏辰

    2014-01-01

    β-mannanase sequences, respectively, forming a series of mutant enzymes. Lastly, the binding free energies (ΔGbind) of the various β-mannanases with mannobiose were calculated using the molecular mechanics Poisson-Boltzmann surface area (MM-PBSA) method. The ΔGbind of AuMan5AY111F was -237.7 kJ/mol, lower than those of the other enzymes. Based on the rational design, an AuMan5AY111F-encoding gene, Auman5AY111F, was constructed by mutating the Tyr111-encoding codon (TAC) of Auman5A into the Phe111-encoding TTC by megaprimer PCR. Then, Auman5AY111F and Auman5A were expressed in Pichia pastoris GS115, and the kinetic parameters of the purified recombinant AuMan5AY111F and AuMan5A (reAuMan5AY111F and reAuMan5A) were determined. The results showed that the Km value of reAuMan5AY111F towards guar gum dropped to 2.5 mg/mL from the 3.9 mg/mL of reAuMan5A, indicating that the substrate affinity increased correspondingly, while the Vmax value remained almost unchanged after site-directed mutagenesis. The directed modification of AuMan5A based on rational design for enhanced substrate affinity was first predicted using various bioinformatics tools and then confirmed by site-directed mutagenesis. This work provides a novel strategy for the directed modification of the substrate affinities of β-mannanases and other enzymes. (From the Chinese abstract:) Taking the glycoside hydrolase family 5 β-mannanase (AuMan5A) from Aspergillus usamii YL-01-78 as the research object, rational design and site-directed mutagenesis of its substrate affinity were carried out to obtain a mutant enzyme, AuMan5AY111F, with a lower Michaelis constant Km. First, homology modeling and molecular docking were used to predict the spatial structure of the AuMan5A-mannobiose docking complex, and on this structure the PyMol software identified 38 amino acid sites within 8 Å of mannobiose. Next, a multiple sequence alignment was performed of β-mannanases from different sources whose primary sequences share more than 50% identity with AuMan5A; 21 were excluded

  2. About calculation of crystal lattice parameters of iron base solid solutions

    International Nuclear Information System (INIS)

    Lattice parameters of iron-base solid solutions (Fe-Be, Fe-Cr, Fe-Co, Fe-Mo, Fe-Nb, Fe-Ni, Fe-Ru, Fe-Ti, Fe-V) are calculated on the basis of a previously proposed model with the use of a correction taking into account the chemical interaction between element atoms. The calculation results allow the conclusion that the use of the correction reduces the discrepancy between experimental and calculated lattice parameters down to values comparable with the error of the experiment.

  3. Calculation Model for Current-voltage Relation of Silicon Quantum-dots-based Nano-memory

    Institute of Scientific and Technical Information of China (English)

    YANG Hong-guan; DAI Da-kang; YU Biao; SHANG Lin-lin; GUO You-hong

    2007-01-01

    Based on the capacitive coupling formalism, an analytic model for calculating the drain current of a quantum-dot floating-gate memory cell is proposed. Using this model, one can numerically calculate the drain currents of the linear, saturation and subthreshold regions of the device with and without charges stored on the floating dots. The read operation of an n-channel Si quantum-dot floating-gate nano-memory cell is discussed after calculating the drain currents versus the drain-to-source voltage and control-gate voltage in the high and low threshold states, respectively.

  4. Improvement of accuracy of resonance self-shielding calculation based on subgroup method

    International Nuclear Information System (INIS)

    Based on the neutron self-shielding calculation code SGMOC, which we developed as a combination of the subgroup method and the method of characteristics, we studied two techniques to improve the SGMOC calculation accuracy. The numerical results prove that both techniques are able to improve the resonance self-shielding calculation accuracy. The resonance interference treatment, which uses a new method to obtain the conditional probabilities, has a correction effect of about 20∼230 pcm. When the impact of resonance scattering is considered, the correction effect is about 100 pcm. When utilizing the two techniques simultaneously, the correction effect is about 30∼270 pcm. (authors)

  5. Simple atmospheric transmittance calculation based on a Fourier-transformed Voigt profile.

    Science.gov (United States)

    Kobayashi, Hirokazu

    2002-11-20

    A method of line-by-line transmission calculation for a homogeneous atmospheric layer that uses the Fourier-transformed Voigt profile is presented. The method is based on a pure Voigt function with no approximation and an interference term that takes into account the line-mixing effect. One can use the method to calculate transmittance, considering each line shape as it is affected by temperature and pressure, with a line database with an arbitrary wave-number range and resolution. To show that the method is feasible for practical model development, we compared the calculated transmittance with that obtained with a conventional model, and good consistency was observed. PMID:12463237
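
    A minimal sketch (parameters invented, not from the paper) of the standard route to a Voigt line shape through the complex Faddeeva function, followed by a Beer-Lambert transmittance for a homogeneous layer; the paper's Fourier-transform formulation and line-mixing interference term are not reproduced here.

```python
import numpy as np
from scipy.special import wofz  # complex Faddeeva function w(z)

def voigt(nu, nu0, sigma, gamma):
    """Voigt profile: Gaussian std-dev sigma, Lorentzian HWHM gamma."""
    z = ((nu - nu0) + 1j * gamma) / (sigma * np.sqrt(2.0))
    return np.real(wofz(z)) / (sigma * np.sqrt(2.0 * np.pi))

nu = np.linspace(999.0, 1001.0, 2001)       # wavenumber grid [cm^-1]
tau = 0.05 * voigt(nu, 1000.0, 0.01, 0.02)  # optical depth of a single line
transmittance = np.exp(-tau)                # Beer-Lambert, homogeneous layer
print(f"line-center transmittance: {transmittance.min():.3f}")
```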

  6. Calculation of response of Chinese hamster cells to ions based on track structure theory

    Institute of Scientific and Technical Information of China (English)

    Liu Xiao-Wei; Zhang Chun-Xiang

    1997-01-01

    Considering biological cells as single-target two-hit detectors, an analytic formula to calculate the response of cells to ions is developed based on track structure theory. In the calculation, the splitting of the deposited energy between the ion-kill and γ-kill modes is not used. The results of the calculation are in agreement with the experimental data for the response of Chinese hamster cells, whose response to γ rays can be described by the response function of a single-target two-hit detector.
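
    For reference, a sketch of the single-target two-hit response the abstract refers to (standard detector-model form; the symbols are assumptions, not taken from the paper):

```latex
% Survival S of a single-target two-hit detector after dose D is the
% probability of registering fewer than two hits (characteristic dose D_0);
% the measured response is its complement R(D).
\begin{equation}
  S(D) = \left(1 + \frac{D}{D_0}\right) e^{-D/D_0},
  \qquad R(D) = 1 - S(D).
\end{equation}
```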

  7. A GIS extension model to calculate urban heat island intensity based on urban geometry

    OpenAIRE

    Nakata, C. M.; Souza, Léa Cristina Lucas; Rodrigues, Daniel Souto

    2015-01-01

    This paper presents a simulation model, which was incorporated into a Geographic Information System (GIS), in order to calculate the maximum intensity of urban heat islands based on urban geometry data. The methodology of this study stands on a theoretical-numerical basis (Oke's model), followed by the study and selection of existing GIS tools, the design of the calculation model, the incorporation of the resulting algorithm into the GIS platform and the application of the tool, developed ...

  8. A transport based one-dimensional perturbation code for reactivity calculations in metal systems

    Energy Technology Data Exchange (ETDEWEB)

    Wenz, T.R.

    1995-02-01

    A one-dimensional reactivity calculation code is developed using first order perturbation theory. The reactivity equation is based on the multi-group transport equation using the discrete ordinates method for angular dependence. In addition to the first order perturbation approximations, the reactivity code uses only the isotropic scattering data, but cross section libraries with higher order scattering data can still be used with this code. The reactivity code obtains all the flux, cross section, and geometry data from the standard interface files created by ONEDANT, a discrete ordinates transport code. Comparisons between calculated and experimental reactivities were done with the central reactivity worth data for Lady Godiva, a bare uranium metal assembly. Good agreement is found for isotopes that do not violate the assumptions in the first order approximation. In general for cases where there are large discrepancies, the discretized cross section data is not accurately representing certain resonance regions that coincide with dominant flux groups in the Godiva assembly. Comparing reactivities calculated with first order perturbation theory and a straight Δk/k calculation shows agreement within 10%, indicating the perturbation of the calculated fluxes is small enough for first order perturbation theory to be applicable in the modeled system. Computation time comparisons between reactivities calculated with first order perturbation theory and straight Δk/k calculations indicate considerable time can be saved by performing a calculation with a perturbation code, particularly as the complexity of the modeled problems increases.
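
    For orientation, the first-order perturbation expression underlying such a code, in one common notation (an illustration; A is the loss operator, F the fission production operator, and phi and phi-dagger the forward and adjoint fluxes):

```latex
\begin{equation}
  \Delta\rho \approx
  \frac{\left\langle \phi^{\dagger},
        \left(\tfrac{1}{k}\,\delta F - \delta A\right)\phi \right\rangle}
       {\left\langle \phi^{\dagger}, F\,\phi \right\rangle}
\end{equation}
% Only unperturbed fluxes appear, which is why a single forward/adjoint
% solution supports many reactivity estimates.
```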

  9. A transport based one-dimensional perturbation code for reactivity calculations in metal systems

    International Nuclear Information System (INIS)

    A one-dimensional reactivity calculation code is developed using first order perturbation theory. The reactivity equation is based on the multi-group transport equation using the discrete ordinates method for angular dependence. In addition to the first order perturbation approximations, the reactivity code uses only the isotropic scattering data, but cross section libraries with higher order scattering data can still be used with this code. The reactivity code obtains all the flux, cross section, and geometry data from the standard interface files created by ONEDANT, a discrete ordinates transport code. Comparisons between calculated and experimental reactivities were done with the central reactivity worth data for Lady Godiva, a bare uranium metal assembly. Good agreement is found for isotopes that do not violate the assumptions in the first order approximation. In general for cases where there are large discrepancies, the discretized cross section data is not accurately representing certain resonance regions that coincide with dominant flux groups in the Godiva assembly. Comparing reactivities calculated with first order perturbation theory and a straight Δk/k calculation shows agreement within 10% indicating the perturbation of the calculated fluxes is small enough for first order perturbation theory to be applicable in the modeled system. Computation time comparisons between reactivities calculated with first order perturbation theory and straight Δk/k calculations indicate considerable time can be saved performing a calculation with a perturbation code particularly as the complexity of the modeled problems increase

  10. Nanothermochromics with VO2-based core-shell structures : Calculated luminous and solar optical properties

    OpenAIRE

    Li, Shuyi; Niklasson, Gunnar A.; Granqvist, Claes-Göran

    2011-01-01

    Composites including VO2-based thermochromic nanoparticles are able to combine a high luminous transmittance Tlum with a significant modulation of the solar energy transmittance ΔTsol at a "critical" temperature in the vicinity of room temperature. Thus nanothermochromics is of much interest for energy-efficient fenestration and offers advantages over thermochromic VO2-based thin films. This paper presents calculations based on effective medium theory applied to dilute suspensions of cor...

  11. Inverse treatment planning for radiation therapy based on fast Monte Carlo dose calculation

    International Nuclear Information System (INIS)

    An inverse treatment planning system based on fast Monte Carlo (MC) dose calculation is presented. It allows optimisation of intensity modulated dose distributions in 15 to 60 minutes on present-day personal computers. If a multi-processor machine is available, parallel simulation of particle histories is also possible, leading to further reductions in calculation time. The optimisation process is divided into two stages. The first stage results in fluence profiles based on pencil beam (PB) dose calculation. The second stage starts with MC verification and post-optimisation of the PB dose and fluence distributions. Because of the potential to accurately model beam modifiers, MC-based inverse planning systems are able to optimise compensator thicknesses and leaf trajectories instead of intensity profiles only. The corresponding techniques, whose implementation is the subject of future work, are also presented here. (orig.)

  12. Evaluation of RSG-GAS Core Management Based on Burnup Calculation

    International Nuclear Information System (INIS)

    Evaluation of RSG-GAS Core Management Based on Burnup Calculation. Presently, U3Si2-Al dispersion fuel is used in the RSG-GAS core, which has passed its 60th cycle. At the beginning of each cycle the 5/1 fuel reshuffling pattern is used. Since the 52nd core, operators have not used the core fuel management computer code provided by the vendor for this activity; instead, they perform the calculation manually with Excel software. To assess the accuracy of this calculation, core calculations were carried out using two 2-dimensional diffusion codes, Batan-2DIFF and SRAC. The beginning-of-cycle burnup fraction data were calculated from the 51st to the 60th core using Batan-EQUIL and SRAC COREBN. The analysis results showed a disparity in the reactivity values of the two calculation methods. The 60th core critical position resulting from the Batan-2DIFF calculation gives a reduction of positive reactivity of 1.84 % Δk/k, while the manual calculation gives an increase of positive reactivity of 2.19 % Δk/k. The minimum shutdown margins for the stuck rod condition from the manual and Batan-3DIFF calculations are -3.35 % Δk/k and -1.13 % Δk/k, respectively; both values meet the safety criterion, i.e. < -0.5 % Δk/k. The Excel program can be used for burnup calculation, but a dedicated core management code is needed to reach higher accuracy. (author)

  13. Case-Based Reasoning Topological Complexity Calculation of Design for Components

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    Directly calculating the topological and geometric complexity from the STEP (standard for the exchange of product model data, ISO 10303) file is a huge task. So, a case-based reasoning approach is presented, based on the similarity between a new component and an old one, to calculate the topological and geometric complexity of new components. In order to index and retrieve components in a historical component database, a new form of component representation is brought forth, and an algorithm is given to extract a topological graph from its STEP file. A mathematical model describing how to compare similarity is discussed; a toy similarity measure is sketched after this abstract. Finally, an example is given to show the result.
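
    A toy sketch (the paper's actual similarity model is not reproduced; the representation and measure are assumptions) of comparing two components by the overlap of their topological graphs, encoded as sets of labelled face-adjacency edges extracted from STEP files:

```python
def jaccard_similarity(edges_a: set, edges_b: set) -> float:
    """Jaccard index of two edge sets; 1.0 means identical topology."""
    if not edges_a and not edges_b:
        return 1.0
    return len(edges_a & edges_b) / len(edges_a | edges_b)

# Face-adjacency edges of an old (stored) and a new component.
old = {("face1", "face2"), ("face2", "face3"), ("face3", "face1")}
new = {("face1", "face2"), ("face2", "face3"), ("face3", "face4")}
print(f"similarity = {jaccard_similarity(old, new):.2f}")  # 0.50
```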

  14. Fast neutron fluence calculation benchmark analysis based on 3D MC-SN bidirectional coupling method

    International Nuclear Information System (INIS)

    The Monte Carlo (MC)-discrete ordinates (SN) bidirectional coupling method is an efficient approach to the shielding calculation of large, complex nuclear facilities. A test calculation applied the MC-SN bidirectional coupling method to the shielding calculation of a large PWR facility. Based on the characteristics of the NUREG/CR-6115 PWR benchmark model issued by the NRC, a 3D Monte Carlo code was employed to accurately simulate the structure from the core to the thermal shield and the dedicated model of the calculation parts located in the pressure vessel, while TORT was used for the calculation from the thermal shield to the second downcomer region. The transformation between the particle probability distribution of MC and the angular flux density of SN was realized by an interface program to achieve the coupled calculation. The calculated results were compared with the MCNP and DORT solutions of the benchmark report, and satisfactory agreement was obtained. The preliminary feasibility of using the method to solve the shielding problem of a large complex nuclear device was demonstrated. (authors)

  15. Calculation method of radiation shielding in the nuclear medicine facility. Evaluation based on the reasonable calculation method

    International Nuclear Information System (INIS)

    According to the acceptance of ICRP Publication 60 (1990), the dose equivalent limit for the border of the controlled area will be defined as 1.3 mSv/3 months in the Regulation for the Enforcement of the Medical Service Law, which is scheduled to be revised. The calculation methods of radiation shielding to be considered are as follows: the first method calculates the dose equivalent for each nuclide using the 3-month maximum estimated use dose; the second calculates the dose equivalent using the 3-month maximum estimated use dose after conversion of all nuclide doses into that of 131I; the third calculates the dose equivalent using the 1-day maximum estimated use dose after conversion of all nuclide doses into that of 131I. We investigated which of these methods can meet the new regulation value (1.3 mSv/3 months). In a modeled facility, we calculated the dose by the first method to confirm whether reasonable and safe control can be performed. The total dose equivalent for the border of the controlled area (B) was 883 μSv/3 months by the first method, about 1/4 of that of the third method. Only the result of the first method was found to be within the new dose equivalent limit of 1.3 mSv/3 months. The results of both the second and the third methods were within the existing dose equivalent limit. The method of calculating the shielding for each nuclide using the 3-month maximum estimated use dose has been accepted in the Law Concerning Prevention from Radiation Hazards due to Radioisotopes, etc. As this method is practically in accordance with the current use of radioisotopes in nuclear medicine facilities, its ability to cope with the new dose equivalent limit was indicated. (author)

  16. Affinity-based enrichment strategies to assay methyl-CpG binding activity and DNA methylation in early Xenopus embryos

    OpenAIRE

    Bogdanović Ozren; Veenstra Gert Jan C

    2011-01-01

    Background: DNA methylation is a widespread epigenetic modification in vertebrate genomes. Genomic sites of DNA methylation can be bound by methyl-CpG-binding domain proteins (MBDs) and specific zinc finger proteins, which can recruit co-repressor complexes to silence transcription on targeted loci. The binding to methylated DNA may be regulated by post-translational MBD modifications. Findings: A methylated DNA affinity precipitation method was implemented to assay binding of proteins...

  17. Development and Characterization of Protective Haemophilus parasuis Subunit Vaccines Based on Native Proteins with Affinity to Porcine Transferrin and Comparison with Other Subunit and Commercial Vaccines ▿

    OpenAIRE

    Frandoloso, Rafael; Martínez, Sonia; Rodríguez-Ferri, Elías F.; García-Iglesias, María José; Pérez-Martínez, Claudia; Martínez-Fernández, Beatriz; Gutiérrez-Martín, César B.

    2010-01-01

    Haemophilus parasuis is the agent responsible for causing Glässer's disease, which is characterized by fibrinous polyserositis, polyarthritis, and meningitis in pigs. In this study, we have characterized native outer membrane proteins with affinity to porcine transferrin (NPAPT) from H. parasuis serovar 5, Nagasaki strain. This pool of proteins was used as antigen to develop two vaccine formulations: one was adjuvanted with a mineral oil (Montanide IMS 2215 VG PR), while the other was poten...

  18. Accurate Assessment of RSET for Building Fire Based on Engineering Calculation and Numerical Simulation

    Directory of Open Access Journals (Sweden)

    Yan Zhenzhen

    2016-01-01

    In order to obtain the Required Safe Egress Time (RSET) accurately, the traditional engineering calculation method of evacuation time has been optimized in this paper. Several principles and practical situations were used to optimize the method, such as the detection principle of the fire detection system, the reaction characteristics of occupants in urgent situations, evacuation queuing theory, building structure, and congestion at exits. Taking a three-storey KTV as an example, two methods are used to illustrate the reliability and scientific reasonability of the calculation result. The result is deduced by comparing the error (less than 2%, within an allowable range) between two results: one calculated by the modified engineering calculation method, and the other given by the Steering model of the Pathfinder evacuation simulation software. The optimized RSET has good feasibility and accuracy.

  19. The effects of calculator-based laboratories on standardized test scores

    Science.gov (United States)

    Stevens, Charlotte Bethany Rains

    Nationwide, the goal of providing a productive science and math education to our youth in today's educational institutions is centering itself around the technology being utilized in these classrooms. In this age of digital technology, educational software and calculator-based laboratories (CBL) have become significant devices in the teaching of science and math for many states across the United States. The Texas Instruments graphing calculator and the Vernier LabPro interface are among the calculator-based laboratories becoming increasingly popular among middle and high school science and math teachers in many school districts across this country. In Tennessee, however, it is reported that this type of technology is not regularly utilized at the student level in most high school science classrooms, especially in the area of Physical Science (Vernier, 2006). This research explored the effect of calculator-based laboratory instruction on standardized test scores. The purpose of this study was to determine the effect of traditional teaching methods versus graphing calculator teaching methods on the state-mandated End-of-Course (EOC) Physical Science exam based on ability, gender, and ethnicity. The sample included 187 total tenth and eleventh grade physical science students, 101 of whom belonged to a control group and 87 of whom belonged to the experimental group. Physical Science End-of-Course scores obtained from the Tennessee Department of Education during the spring of 2005 and the spring of 2006 were used to examine the hypotheses. The findings of this research study suggested that the type of teaching method, traditional or calculator-based, did not have an effect on standardized test scores. However, the students' ability level, as demonstrated on the End-of-Course test, had a significant effect on End-of-Course test scores. This study focused on a limited population of high school physical science students in the middle Tennessee

  20. Nonlinear Predictor Feedback for Input-Affine Systems with Distributed Input Delays

    OpenAIRE

    Ponomarev, Anton

    2016-01-01

    Prediction-based transformation is applied to control-affine systems with distributed input delays. Transformed system state is calculated as a prediction of the system's future response to the past input with future input set to zero. Stabilization of the new system leads to Lyapunov-Krasovskii proven stabilization of the original one. Conditions on the original system are: smooth linearly bounded open-loop vector field and smooth uniformly bounded input vectors. About the transformed system...

  1. Applying Activity Based Costing (ABC Method to Calculate Cost Price in Hospital and Remedy Services

    Directory of Open Access Journals (Sweden)

    A Dabiri

    2012-04-01

    Background: Activity Based Costing (ABC) is one of the newer methods, having begun appearing as a costing methodology in the 1990s. It calculates cost price by determining the usage of resources. In this study, the ABC method was used to calculate the cost price of remedial services in hospitals. Methods: To apply the ABC method, Shahid Faghihi Hospital was selected. First, hospital units were divided into three main departments: administrative, diagnostic, and hospitalization. Second, activity centers were defined by the activity analysis method. Third, the costs of administrative activity centers were allocated to the diagnostic and operational departments based on cost drivers, as sketched after this abstract. Finally, with regard to the usage of cost objectives from the services of activity centers, the cost price of medical services was calculated. Results: The cost price from the ABC method differs significantly from the tariff method. In addition, the high amount of indirect costs in the hospital indicates that the capacities of resources are not used properly. Conclusion: The cost price of remedial services is not properly calculated with the tariff method when compared with the ABC method. ABC calculates cost price by applying suitable mechanisms, whereas the tariff method is based on fixed prices. In addition, ABC provides useful information about the amount and composition of the cost price of services.
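
    A toy sketch of the driver-based allocation step described above (all figures, centers and drivers are invented for illustration):

```python
# Administrative-center cost is spread over operational centers in
# proportion to each center's consumption of the cost driver, then folded
# into a per-service overhead.
admin_cost = 120_000.0                                         # one admin center
drivers = {"radiology": 300, "laboratory": 500, "ward": 1200}  # staff-hours

total_driver = sum(drivers.values())
allocated = {c: admin_cost * u / total_driver for c, u in drivers.items()}

service_volume = {"radiology": 4_000, "laboratory": 20_000, "ward": 6_000}
unit_overhead = {c: allocated[c] / service_volume[c] for c in drivers}
for center, cost in unit_overhead.items():
    print(f"{center}: overhead {cost:.2f} per service")
```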

  2. Code accuracy evaluation of ISP 35 calculations based on NUPEC M-7-1 test

    International Nuclear Information System (INIS)

    Quantitative evaluation of code uncertainties is a necessary step in the code assessment process, above all if best-estimate codes are utilised for licensing purposes. Aiming at quantifying the code accuracy, an integral methodology based on the Fast Fourier Transform (FFT) has been developed at the University of Pisa (DCMN) and has already been applied to several calculations related to primary system test analyses. This paper deals with the first application of the FFT-based methodology to containment code calculations, based on a hydrogen mixing and distribution test performed in the NUPEC (Nuclear Power Engineering Corporation) facility. It refers to pre-test and post-test calculations submitted for International Standard Problem (ISP) No. 35, a blind exercise simulating the effects of steam injection and spray behaviour on gas distribution and mixing. The results of the application of this methodology to nineteen selected variables calculated by ten participants are summarized here, and a comparison (where possible) of the accuracy evaluated for the pre-test and post-test calculations of the same user is also presented. (author)

  3. Applicability of the cross section adjustment method based on random sampling technique for burnup calculation

    International Nuclear Information System (INIS)

    The applicability of the cross section adjustment method based on the random sampling (RS) technique to burnup calculations is investigated. The cross section adjustment method is a technique for reducing prediction uncertainties in reactor core analysis and has been widely applied to fast reactors. As a practical method, the cross section adjustment method based on the RS technique has been newly developed for application to light water reactors (LWRs). In this method, covariances among cross sections and neutronics parameters are statistically estimated by the RS technique, and cross sections are adjusted without calculating the sensitivity coefficients of neutronics parameters, which are necessary in the conventional cross section adjustment method. Since sensitivity coefficients are not used, the RS-based method is expected to be practically applicable to LWR core analysis, in which considerable computational cost is required to estimate sensitivity coefficients. Through a simple pin-cell burnup calculation, the applicability of the present method to burnup calculations is investigated. The calculation results indicate that the present method can adequately adjust cross sections, including burnup characteristics. (author)

  4. Quantum-mechanical calculation of H on Ni(001) using a model potential based on first-principles calculations

    DEFF Research Database (Denmark)

    Mattsson, T.R.; Wahnström, G.; Bengtsson, L.; Hammer, B.

    1997-01-01

    First-principles density-functional calculations of hydrogen adsorption on the Ni (001) surface have been performed in order to get a better understanding of adsorption and diffusion of hydrogen on metal surfaces. We find good agreement with experiments for the adsorption energy, binding distance, and barrier height for diffusion at room temperature.

  5. Optimization Method for Indoor Thermal Comfort Based on Interactive Numerical Calculation

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    In order to implement the optimal design of indoor thermal comfort based on the numerical modeling method, the numerical calculation platform is combined seamlessly with the data-processing platform, and an interactive numerical calculation platform which includes the functions of numerical simulation and optimization is established. The artificial neural network (ANN) and the greedy strategy are introduced into the hill-climbing-pattern heuristic search process, so the optimizing search direction can be predicted using small samples; when searching along that direction with the greedy strategy, the optimal values can be quickly approached. Therefore, excessive external calls to the numerical modeling process can be avoided, and the optimization time is decreased markedly. The experimental results indicate that satisfactory output parameters of air conditioning can be quickly given out based on the interactive numerical calculation platform and the improved search method, and the optimization of indoor thermal comfort can be completed.
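
    A minimal sketch of such surrogate-assisted greedy hill-climbing (the objective, variables and all parameters are invented; the paper's ANN is replaced here by a crude least-squares direction model fitted to small samples):

```python
import numpy as np

rng = np.random.default_rng(0)

def expensive_simulation(x):
    """Stand-in for the numerical-simulation call (assumed comfort score)."""
    return -((x[0] - 24.0) ** 2 + (x[1] - 0.15) ** 2)

def predicted_direction(samples, values):
    """Surrogate step: estimate an ascent direction from a small sample."""
    X = np.asarray(samples) - np.mean(samples, axis=0)
    y = np.asarray(values) - np.mean(values)
    g, *_ = np.linalg.lstsq(X, y, rcond=None)    # local linear model
    return g / (np.linalg.norm(g) + 1e-12)

x = np.array([20.0, 0.5])         # initial supply temperature / air velocity
step = 0.5
for _ in range(60):
    samples = [x + 0.1 * rng.standard_normal(2) for _ in range(8)]
    values = [expensive_simulation(s) for s in samples]
    d = predicted_direction(samples, values)
    if expensive_simulation(x + step * d) > expensive_simulation(x):
        x = x + step * d          # greedy: accept the improving step
    else:
        step *= 0.5               # shrink the step near the optimum
print(f"optimum ≈ {x}")
```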

  6. Ab initio Calculations of Electronic Fingerprints of DNA bases on Graphene

    Science.gov (United States)

    Ahmed, Towfiq; Rehr, John J.; Kilina, Svetlana; Das, Tanmoy; Haraldsen, Jason T.; Balatsky, Alexander V.

    2012-02-01

    We have carried out first principles DFT calculations of the electronic local density of states (LDOS) of DNA nucleotide bases (A, C, G, T) adsorbed on graphene using LDA with ultrasoft pseudopotentials. We have also calculated the longitudinal transmission currents T(E) through graphene nano-pores as an individual DNA base passes through it, using a non-equilibrium Green's function (NEGF) formalism. We observe several dominant base-dependent features in the LDOS and T(E) in an energy range within a few eV of the Fermi level. These features can serve as electronic fingerprints for the identification of individual bases from dI/dV measurements in scanning tunneling spectroscopy (STS) and nano-pore experiments. Thus these electronic signatures can provide an alternative approach to DNA sequencing.

  7. Effects of van der Waals interaction for first-principles calculations on iron-based superconductors

    International Nuclear Information System (INIS)

    Highlights: • van der Waals density functional calculations for iron-based superconductors. • Optimized structures are evaluated. • The van der Waals density functional reproduces the lattice constants of FeSe. • Except for FeSe, van der Waals interaction hardly affects the crystal structures. -- Abstract: We investigate the effects of van der Waals (vdW) interaction on various iron-based superconductors by first-principles calculations based on the van der Waals density functional (vdW-DF), which takes account of the non-local and long-range interaction. vdW-DF reproduces well the lattice constants of FeSe, while the crystal structures of the other iron-based superconductors are not so sensitive to the vdW interaction. These results suggest that the effects of vdW interaction on layered superconductors are often essential, although they depend on the character of the interlayer couplings.

  8. Rotor windings temperature monitoring based on calculation for large brushless excitation generator

    International Nuclear Information System (INIS)

    This article introduces rotor winding temperature monitoring based on calculation for a large brushless-excitation generator. The method needs no extra equipment and can be implemented easily, so it will enhance the operational safety and reliability of the generator and benefit its operation. (authors)

  9. Ab initio calculations for defects in silicon-based amorphous semiconductors

    OpenAIRE

    Ishii, Nobuhiko; Shimizu, Tatsuo

    1992-01-01

    We have calculated the ESR hyperfine parameters of threefold-coordinated Si atoms and twofold-coordinated P and N atoms in Si-based amorphous semiconductors using density functional theory with a local-spin-density approximation. The calculated results have been compared with the observed ESR results.

  10. 19 CFR 351.405 - Calculation of normal value based on constructed value.

    Science.gov (United States)

    2010-04-01

    ... constructed value as the basis for normal value where neither the home market nor a third country market is... (19 CFR 351.405, Customs Duties, International Trade Administration, Department of Commerce; revised as of 2010-04-01.)

  11. Two-parameter quantum affine algebra U_{r,s}(ŝl_n), Drinfeld realization and quantum affine Lyndon basis

    International Nuclear Information System (INIS)

    We further find the defining structure of a two-parameter quantum affine algebra U_{r,s}(ŝl_n) (n > 2) in the sense of Benkart-Witherspoon [BW1], following the work of [BGH1], [HS] and [BH], which turns out to be a Drinfeld double. Of more importance for the 'affine' cases, we work out the compatible two-parameter version of the Drinfeld realization as a quantum affinization of U_{r,s}(sl_n) and establish the Drinfeld isomorphism theorem in the two-parameter setting by developing a new and remarkable combinatorial approach, the quantum 'affine' Lyndon basis, with an explicit valid algorithm based on the Drinfeld realization. (author)

  12. Slope excavation quality assessment and excavated volume calculation in hydraulic projects based on laser scanning technology

    Directory of Open Access Journals (Sweden)

    Chao Hu

    2015-04-01

    Slope excavation is one of the most crucial steps in the construction of a hydraulic project. Excavation project quality assessment and excavated volume calculation are critical in construction management. The positioning of excavation projects using traditional instruments is inefficient and error-prone. To improve the efficiency and precision of calculation and assessment, three-dimensional laser scanning technology was used for slope excavation quality assessment. An efficient data acquisition, processing, and management workflow was presented in this study. Based on the quality control indices, including the average gradient, slope toe elevation, and overbreak and underbreak, cross-sectional and holistic quality assessment methods were proposed to assess the slope excavation quality with laser-scanned data. An algorithm was also presented to calculate the excavated volume with laser-scanned data. A field application and a laboratory experiment were carried out to verify the feasibility of these methods for excavation quality assessment and excavated volume calculation. The results show that the quality assessment indices can be obtained rapidly and accurately with design parameters and scanned data, and the results of holistic quality assessment are consistent with those of cross-sectional quality assessment. In addition, the time consumed in excavation project quality assessment with the laser scanning technology can be reduced by 70%–90% compared with the traditional method. The excavated volume calculated with the scanned data differs only slightly from measured data, demonstrating the applicability of the excavated volume calculation method presented in this study.
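
    A simplified sketch of one common way to compute such a volume (grid spacing and both surfaces are invented; the paper's own algorithm is not reproduced): rasterize the pre- and post-excavation point clouds onto a common grid and integrate the depth difference over the cell area.

```python
import numpy as np

cell = 0.5                                      # grid spacing [m]
x, y = np.meshgrid(np.arange(0, 20, cell), np.arange(0, 10, cell))

z_before = 5.0 - 0.1 * x                        # original surface [m]
z_after = z_before - (2.0 - 0.2 * y)            # scanned post-excavation surface

depth = np.clip(z_before - z_after, 0.0, None)  # cut depth per cell (no fill)
volume = depth.sum() * cell ** 2                # sum(depth) * cell area
print(f"excavated volume ≈ {volume:.1f} m³")
```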

  13. Quantum-mechanical calculation of H on Ni(001) using a model potential based on first-principles calculations

    OpenAIRE

    Mattsson, T. R.; Wahnström, G.; Bengtsson, L.; Hammer, Bjørk

    1997-01-01

    First-principles density-functional calculations of hydrogen adsorption on the Ni (001) surface have been performed in order to get a better understanding of adsorption and diffusion of hydrogen on metal surfaces. We find good agreement with experiments for the adsorption energy, binding distance, and barrier height for diffusion at room temperature. A model potential is fitted to the first-principles data points using the simulated annealing technique and the hydrogen band structure is deriv...

  14. CCSD(T)/CBS fragment-based calculations of lattice energy of molecular crystals.

    Science.gov (United States)

    Červinka, Ctirad; Fulem, Michal; Růžička, Květoslav

    2016-02-14

    A comparative study of lattice energy calculations for a data set of 25 molecular crystals is performed using an additive scheme based on the individual energies of up to four-body interactions, calculated using coupled clusters with iterative treatment of single and double excitations and perturbative triples correction (CCSD(T)) with an estimated complete basis set (CBS) description. The CCSD(T)/CBS values of the lattice energies are used to estimate sublimation enthalpies, which are compared with critically assessed and thermodynamically consistent experimental values. The average absolute percentage deviation of the calculated sublimation enthalpies from experimental values amounts to 13% (corresponding to 4.8 kJ mol⁻¹ on an absolute scale) with an unbiased distribution of positive and negative deviations. As pair interaction energies present the dominant contribution to the lattice energy and CCSD(T)/CBS calculations still remain computationally costly, benchmark calculations of pair interaction energies defined by crystal parameters involving 17 levels of theory, including recently developed methods with local and explicit treatment of electronic correlation, such as LCC and LCC-F12, are also presented. Locally and explicitly correlated methods are found to be computationally effective and reliable, enabling the application of fragment-based methods to larger systems. PMID:26874495
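
    The additive scheme referred to above has, in one common convention (written per molecule of a crystal with one molecule per cell; an illustration, not the paper's exact notation), the form of a truncated many-body expansion:

```latex
% Lattice energy per reference molecule i, truncated at four-body terms:
\begin{equation}
  E_{\mathrm{latt}} \approx
  \frac{1}{2}\sum_{j \neq i} \Delta E^{(2)}_{ij}
  + \frac{1}{3}\sum_{j<k} \Delta E^{(3)}_{ijk}
  + \frac{1}{4}\sum_{j<k<l} \Delta E^{(4)}_{ijkl}
\end{equation}
% The sublimation enthalpy is then estimated roughly as
% Delta_sub H(T) = -E_latt - 2RT (rigid-molecule approximation).
```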

  15. CCSD(T)/CBS fragment-based calculations of lattice energy of molecular crystals

    Science.gov (United States)

    Červinka, Ctirad; Fulem, Michal; Růžička, Květoslav

    2016-02-01

    A comparative study of lattice energy calculations for a data set of 25 molecular crystals is performed using an additive scheme based on the individual energies of up to four-body interactions, calculated using coupled clusters with iterative treatment of single and double excitations and perturbative triples correction (CCSD(T)) with an estimated complete basis set (CBS) description. The CCSD(T)/CBS values of the lattice energies are used to estimate sublimation enthalpies, which are compared with critically assessed and thermodynamically consistent experimental values. The average absolute percentage deviation of the calculated sublimation enthalpies from experimental values amounts to 13% (corresponding to 4.8 kJ mol⁻¹ on an absolute scale) with an unbiased distribution of positive and negative deviations. As pair interaction energies present the dominant contribution to the lattice energy and CCSD(T)/CBS calculations still remain computationally costly, benchmark calculations of pair interaction energies defined by crystal parameters involving 17 levels of theory, including recently developed methods with local and explicit treatment of electronic correlation, such as LCC and LCC-F12, are also presented. Locally and explicitly correlated methods are found to be computationally effective and reliable, enabling the application of fragment-based methods to larger systems.

  16. Magnetic entropy change calculated from first principles based statistical sampling technique: Ni2 MnGa

    Science.gov (United States)

    Odbadrakh, Khorgolkhuu; Nicholson, Don; Eisenbach, Markus; Brown, Gregory; Rusanu, Aurelian; Materials Theory Group Team

    2014-03-01

    The magnetic entropy change in magnetocaloric-effect materials is one of the key parameters in choosing materials appropriate for magnetic cooling, and it offers insight into the coupling between the material's thermodynamic and magnetic degrees of freedom. We present a computational workflow to calculate the change of magnetic entropy due to a magnetic field using DFT-based statistical sampling of the energy landscape of Ni2MnGa. The statistical density of magnetic states is calculated with Wang-Landau sampling, and energies are calculated with the Locally Self-consistent Multiple Scattering technique. The high computational cost of calculating the energy of each state from first principles is tempered by exploiting a model Hamiltonian fitted to the DFT-based sampling. The workflow is described and justified. The magnetic adiabatic temperature change calculated from the statistical density of states agrees with the experimentally obtained value in the absence of structural transformation. The study also reveals that the magnetic subsystem alone cannot explain the large MCE observed in Ni2MnGa alloys. This work was performed at the ORNL, which is managed by UT-Battelle for the U.S. DOE. It was sponsored by the Division of Material Sciences and Engineering, OBES. This research used resources of the OLCF at ORNL, which is supported by the Office of Science of the U.S. DOE under Contract DE-AC05-00OR22725.
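
    A compact Wang-Landau sketch on a toy periodic Ising chain (not the LSMS/DFT workflow of the abstract; model, size and schedule are invented) showing how the flat-histogram walk estimates the density of states g(E), from which entropies and thermodynamic averages follow:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 16
spins = rng.choice([-1, 1], size=N)

def energy(s):
    return -int(np.sum(s * np.roll(s, 1)))        # periodic Ising chain

levels = [int(E) for E in range(-N, N + 1, 4)]    # reachable energies
idx = {E: i for i, E in enumerate(levels)}
log_g = np.zeros(len(levels))                     # running ln g(E) estimate
hist = np.zeros(len(levels))
lnf = 1.0                                         # modification factor

E = energy(spins)
while lnf > 1e-5:
    for _ in range(20000):
        k = int(rng.integers(N))
        dE = 2 * spins[k] * (spins[k - 1] + spins[(k + 1) % N])
        # flat-histogram acceptance: min(1, g(E)/g(E'))
        if np.log(rng.random()) < log_g[idx[E]] - log_g[idx[E + dE]]:
            spins[k] *= -1
            E += dE
        log_g[idx[E]] += lnf
        hist[idx[E]] += 1
    if hist.min() > 0.8 * hist.mean():            # histogram "flat enough"
        hist[:] = 0
        lnf /= 2                                  # tighten toward true g(E)
print(np.round(log_g - log_g.min(), 2))           # relative ln g(E)
```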

  17. Striving for Empathy: Affinities, Alliances and Peer Sexuality Educators

    Science.gov (United States)

    Fields, Jessica; Copp, Martha

    2015-01-01

    Peer sexuality educators' accounts of their work reveal two approaches to empathy with their students: affinity and alliance. "Affinity-based empathy" rests on the idea that the more commonalities sexuality educators and students share (or perceive they share), the more they will be able to empathise with one another, while…

  18. Window-based MU calculator for independent dosimetry check in routine radiation oncology practice

    International Nuclear Information System (INIS)

    Full text: It is estimated that over one hundred thousand deaths are associated with medical errors each year in the USA alone. Most of these errors are preventable. Calculation errors in medical physics are no exception; they are mostly preventable through sound quality assurance programmes. Preventable radiation dosimetry errors in Panama have resulted in numerous deaths. Confirmation of the Monitor Units (MU)/treatment time on a radiation-producing machine by a second check forms the backbone of a dosimetry QA programme in any radiation oncology setup. The existing MU computer programs are either incorporated in treatment planning systems or marketed as stand-alone programs to double-check the calculations. Such programs, though robust in nature, are not affordable for most developing countries because of their cost. A trend has been evolving to use window-based MU calculators for photon and electron dosimetry. A simple window-based monitor unit program has been designed and developed using Visual C++ software for independent MU checks. The program reads TMR data from a data file. The data file is organized for each scanned field size and depth in a two-dimensional matrix. Field sizes and depths between the existing data are interpolated by the program. The pull-down menus allow the user to select tray, compensator and wedges, if used. Field size, depth and other information are typed in for computation. It has been tested against our existing dosimetry calculations and found to be within 1% of hand calculation for different field sizes and depth interpolations. The computed results may be printed out as hard copy for the record. The calculator is easily programmable for a particular radiation machine by tailoring the TMR/PDD data tables and other parameters. The existing programming platform may be modified for a contour-based planning system in future. The existing module provides a second check to improve QA by verifying the computed MU independently. (author)
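
    A simplified sketch of such an independent MU check (all beam data, factor names and the toy TMR table are invented; a clinical tool would read machine-commissioned tables), assuming the common form MU = dose / (output per MU × Sc,p × TMR(d, A) × wedge × tray):

```python
import numpy as np

# Invented, sparse TMR table: depth [cm] -> {equivalent square [cm]: TMR}
TMR_TABLE = {
    5.0:  {5: 0.898, 10: 0.921, 15: 0.934},
    10.0: {5: 0.759, 10: 0.792, 15: 0.810},
}

def tmr(depth_cm, field_cm):
    """Bilinear interpolation in the TMR table (field first, then depth)."""
    depths = sorted(TMR_TABLE)
    fields = sorted(TMR_TABLE[depths[0]])
    by_depth = [np.interp(field_cm, fields, [TMR_TABLE[d][f] for f in fields])
                for d in depths]
    return float(np.interp(depth_cm, depths, by_depth))

def monitor_units(dose_cGy, depth_cm, field_cm,
                  output_cGy_per_MU=1.0, scp=1.0, wedge=1.0, tray=1.0):
    return dose_cGy / (output_cGy_per_MU * scp *
                       tmr(depth_cm, field_cm) * wedge * tray)

print(f"{monitor_units(200.0, depth_cm=7.5, field_cm=10.0):.1f} MU")
```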

  19. A method for three-dimensional multizone reactor calculations based on Nordheim-Scalettar approach

    International Nuclear Information System (INIS)

    A two-group diffusion procedure is proposed in this report as a contribution to the three-dimensional heterogeneous formalism for multizone reactor criticality calculations. The method is based on the Nordheim-Scalettar approach as well as on the procedure for three-dimensional analysis proposed by Chermak. The formalism presented here may be used to calculate the effect of absorber rods partly inserted in any zone of the multizone reactor system. The absorber rods may be divided into segments having various properties. (author)

  20. A sol-gel-integrated protein array system for affinity analysis of aptamer-target protein interaction.

    Science.gov (United States)

    Ahn, Ji-Young; Kim, Eunkyung; Kang, Jeehye; Kim, Soyoun

    2011-06-01

    A sol-gel microarray system was developed for protein interaction assays with high activity. Compared to two-dimensional microarray surfaces, sol-gel can offer a more dynamic and broader range for proteins. In the present study, this sol-gel-integrated protein array was used for binding affinity analysis of aptamers. Six RNA aptamers and their target protein, yeast TBP (TATA-binding protein), were used to evaluate the method. A TBP-containing sol-gel mixture was spotted using a dispensing workstation under high-humidity conditions, and each Cy3-labeled aptamer was incubated. The dissociation constants (Kd) were calculated by plotting the fluorescence intensity of the bound aptamers as a function of TBP concentration. The Kd value of the control aptamer was found to be 8 nM, which agrees well with the value obtained using the conventional method, the electrophoretic mobility shift assay. The sol-gel-based binding affinity measurements fit well with conventional binding affinity measurements, suggesting their possible use as an alternative to the conventional method. In addition, aptamer affinity measurement by the sol-gel-integrated protein chip makes it possible to develop a simple high-throughput affinity method for screening high-affinity aptamers. PMID:21749295
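
    A minimal sketch of the Kd extraction described above (the data points are invented): spot fluorescence versus protein concentration is fitted to a one-site binding isotherm F = Fmax·[P]/(Kd + [P]) by nonlinear least squares.

```python
import numpy as np
from scipy.optimize import curve_fit

def one_site(p_nM, f_max, kd_nM):
    """One-site saturation binding isotherm."""
    return f_max * p_nM / (kd_nM + p_nM)

p = np.array([0.5, 1, 2, 4, 8, 16, 32, 64])                     # [TBP], nM
f = np.array([0.06, 0.11, 0.20, 0.33, 0.50, 0.66, 0.80, 0.89])  # intensity, a.u.

(f_max, kd), cov = curve_fit(one_site, p, f, p0=(1.0, 8.0))
print(f"Kd ≈ {kd:.1f} nM (± {np.sqrt(cov[1, 1]):.1f} nM)")
```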

  1. Theoretical and Experimental Determination of the Proton Affinity of (CF3CH2)2O

    Science.gov (United States)

    Zehe, Michael J.; Ball, David W.

    1998-01-01

    We report the experimental determination of the proton affinity of the molecule (CF3CH2)2O using chemical ionization mass spectrometry, and we compare it to the theoretical value obtained for protonation at the oxygen atom using the calculational methodology MP2/6-31G**//MP2/3-21G. The proton affinity for this molecule as measured by bracketing experiments was between 724 kJ/mole and 741 kJ/mole. Ab initio (MP2/6-31G**//MP2/3-21G) calculations yield a value of about 729 kJ/mole, in agreement with the chemical ionization experiments. The results of these and related calculations suggest that the MP2/6-31G**//MP2/3-21G methodology is acceptable for estimating the proton affinities of partially- and fully-fluorinated methyl and ethyl ethers. We submit that conclusions about the chemistry of fluoroether polymer lubricants based on their basicity can also be drawn reliably from such calculations.
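
    For context, the quantity being computed, in its standard form (an illustration; the symbols are not taken from the paper): the proton affinity is the negative reaction enthalpy of B + H⁺ → BH⁺, and since the electronic energy of H⁺ is zero, only B and BH⁺ need ab initio energies.

```latex
\begin{equation}
  \mathrm{PA}(B) = -\Delta H_{298}
  \approx E(B) - E(BH^{+})
  + \Delta E_{\mathrm{ZPE}} + \Delta E_{\mathrm{therm}}
  + \tfrac{5}{2}RT
\end{equation}
% E(.) are electronic energies (here MP2/6-31G**//MP2/3-21G), Delta-E_ZPE
% and Delta-E_therm the zero-point and 0 -> 298 K thermal corrections, and
% (5/2)RT accounts for the enthalpy of the free proton.
```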

  2. Classification of neocortical interneurons using affinity propagation

    Directory of Open Access Journals (Sweden)

    Roberto eSantana

    2013-12-01

    Full Text Available In spite of over a century of research on cortical circuits, it is still unknown how many classes of cortical neurons exist. Neuronal classification has been a difficult problem because it is unclear what a neuronal cell class actually is and what the best characteristics to define one are. Recently, unsupervised classifications using cluster analysis based on morphological, physiological or molecular characteristics, when applied to selected datasets, have provided quantitative and unbiased identification of distinct neuronal subtypes. However, better and more robust classification methods are needed for increasingly complex and larger datasets. We explored the use of affinity propagation, a recently developed unsupervised classification algorithm imported from machine learning, which gives a representative example or exemplar for each cluster. As a case study, we applied affinity propagation to a test dataset of 337 interneurons belonging to four subtypes, previously identified based on morphological and physiological characteristics. We found that affinity propagation correctly classified most of the neurons in a blind, non-supervised manner. In fact, using a combined anatomical/physiological dataset, our algorithm differentiated parvalbumin from somatostatin interneurons in 49 out of 50 cases. Affinity propagation could therefore be used in future studies to validly classify neurons, as a first step to help reverse engineer neural circuits.
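
Affinity propagation is available off the shelf in scikit-learn; a minimal sketch on synthetic stand-in feature vectors (not the interneuron dataset) shows how exemplars are obtained:

```python
import numpy as np
from sklearn.cluster import AffinityPropagation
from sklearn.datasets import make_blobs

# Synthetic stand-in for morphological/physiological feature vectors.
X, _ = make_blobs(n_samples=337, centers=4, n_features=10, random_state=0)

ap = AffinityPropagation(random_state=0).fit(X)
print("clusters found:", len(ap.cluster_centers_indices_))
print("exemplar rows:", ap.cluster_centers_indices_[:4])
```

The `preference` argument plays the role of the penalty term that controls how many clusters emerge; by default it is set from the median input similarity.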

  3. Thermodynamic calculations in the development of high-temperature Co–Re-based alloys

    Energy Technology Data Exchange (ETDEWEB)

    Gorr, Bronislava, E-mail: gorr@ifwt.mb.uni-siegen.de [University of Siegen, Institut für Werkstofftechnik, Siegen (Germany); Christ, Hans-Jürgen [University of Siegen, Institut für Werkstofftechnik, Siegen (Germany); Mukherji, Debashis; Rösler, Joachim [TU Braunschweig, Institut für Werkstoffe, Braunschweig (Germany)

    2014-01-05

    Highlights: • Phase diagram as a starting point for alloy development. • Design of pre-oxidation treatments by means of thermodynamic assessment. • Contribution of thermodynamic calculations to the general understanding of materials chemistry. -- Abstract: Experimental Co–Re-based alloys are being developed for high-temperature applications at service temperatures beyond 1100 °C. One of the main tasks of this research is to find the optimal chemical composition. Thermodynamic calculations are very helpful for composition selection and optimization. In this study, thermodynamic calculations were used to identify potential alloying elements and to determine suitable concentration ranges to improve properties, such as strength and oxidation resistance, that are essential for high-temperature structural materials. The calculated ternary phase diagram of the Co–Re–Cr system was used to design the reference model alloy. Corrosion products formed under different atmospheric conditions were reliably predicted for a number of model Co–Re-based alloys. Pre-oxidation treatment, a common method used to improve the oxidation resistance of alloys in aggressive atmospheres, was successfully designed based on thermodynamic considerations.

  4. Modeling and Ab initio Calculations of Thermal Transport in Si-Based Clathrates and Solar Perovskites

    Science.gov (United States)

    He, Yuping

    2015-03-01

    We present calculations of the thermal transport coefficients of Si-based clathrates and solar perovskites, as obtained from ab initio calculations and models, with all input parameters derived from first principles. We elucidated the physical mechanisms responsible for the measured low thermal conductivity in Si-based clathrates and predicted their electronic properties and mobilities, which were later confirmed experimentally. We also predicted that by appropriately tuning the carrier concentration, the thermoelectric figure of merit of Sn- and Pb-based perovskites may reach values ranging between 1 and 2, which could possibly be further increased by optimizing the lattice thermal conductivity through engineering perovskite superlattices. Work done in collaboration with Prof. G. Galli, and supported by DOE/BES Grant No. DE-FG0206ER46262.

  5. Calculation of FEPE of Scintillation Detector Using an Empirical Formula Based on Experimental Measurements

    International Nuclear Information System (INIS)

    The full energy peak efficiency (FEPE) curves of a 3 × 3 in NaI(Tl) detector at seven different axial distances from it were measured in a wide energy range from 59.53 up to 1408 keV using calibration point sources. The analysis distinguished the effects of the source energy and the source-to-detector distance. This work provides an empirical formula to calculate the FEPE for different detectors using the effective solid angle derived from experimental measurements. Comparison between the calculated and measured efficiency values for source-to-detector distances of 20, 25, 30, 35, 40, 45 and 50 cm showed that the calculated values are in agreement with the experimental ones.
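
The record's empirical formula itself relies on effective solid angles, which are not reproduced here; the sketch below shows only the common log-polynomial form used to interpolate an FEPE curve between calibration energies, with hypothetical efficiency values:

```python
import numpy as np

# Hypothetical measured FEPE points for a 3 x 3 in NaI(Tl) at one distance.
energy_kev = np.array([59.53, 121.78, 344.28, 661.66, 1173.2, 1408.0])
fepe = np.array([1.9e-3, 1.6e-3, 8.0e-4, 4.6e-4, 2.9e-4, 2.5e-4])

# Fit ln(efficiency) as a polynomial in ln(energy), a common empirical form.
coeffs = np.polyfit(np.log(energy_kev), np.log(fepe), deg=3)

def fepe_at(e_kev):
    """Evaluate the fitted efficiency curve at an arbitrary energy."""
    return np.exp(np.polyval(coeffs, np.log(e_kev)))

print(f"interpolated FEPE at 800 keV: {fepe_at(800.0):.2e}")
```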

  6. Calculation Scheme Based on a Weighted Primitive: Application to Image Processing Transforms

    Directory of Open Access Journals (Sweden)

    Gregorio de Miguel Casado

    2007-01-01

    Full Text Available This paper presents a method to improve the calculation of functions which demand an especially great amount of computing resources. The method is based on the choice of a weighted primitive which enables the calculation of function values under the scope of a recursive operation. At the design level, the method proves suitable for developing a processor which achieves a satisfying trade-off between time delay, area costs, and stability. The method is particularly suitable for the mathematical transforms used in signal processing applications. A generic calculation scheme is developed for the discrete Fourier transform (DFT) and then applied to other integral transforms such as the discrete Hartley transform (DHT), the discrete cosine transform (DCT), and the discrete sine transform (DST). Some comparisons with other well-known proposals are also provided.

  7. FragIt: A Tool to Prepare Input Files for Fragment Based Quantum Chemical Calculations

    CERN Document Server

    Steinmann, Casper; Hansen, Anne S; Jensen, Jan H

    2012-01-01

    Near linear scaling fragment based quantum chemical calculations are becoming increasingly popular for treating large systems with high accuracy and are an active field of research. However, it remains difficult to set up these calculations without expert knowledge. To facilitate the use of such methods, software tools need to be available to support setup and lower the barrier of entry for usage by non-experts. We present a fragmentation methodology and accompanying tools called FragIt to help set up these calculations. It uses the SMARTS language to find chemically appropriate substructures in molecular structures and is used to prepare input files for the fragment molecular orbital method in the GAMESS program package. We present fragmentation patterns for proteins and polysaccharides, specifically D-galactopyranose for use in cyclodextrins.
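
FragIt's key idea is locating chemically sensible cut points with SMARTS patterns. The sketch below illustrates SMARTS substructure matching with RDKit, purely to demonstrate the mechanism; FragIt itself ships its own patterns and tooling, and the molecule and pattern here are illustrative assumptions:

```python
from rdkit import Chem

# Locate peptide (amide) bonds with a SMARTS pattern -- the kind of
# chemically appropriate cut point a fragmentation tool identifies.
dipeptide = Chem.MolFromSmiles("CC(N)C(=O)NC(C)C(=O)O")  # Ala-Ala
amide = Chem.MolFromSmarts("[CX3](=O)[NX3]")

for c_idx, o_idx, n_idx in dipeptide.GetSubstructMatches(amide):
    print(f"cut candidate: C{c_idx}-N{n_idx} amide bond")
```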

  8. Relativistic mean field interaction with density dependent meson-nucleon vertices based on microscopical calculations

    CERN Document Server

    Roca-Maza, X; Centelles, M; Ring, P; Schuck, P

    2011-01-01

    Although ab-initio calculations of relativistic Brueckner theory lead to large scalar isovector fields in nuclear matter, at present, successful versions of covariant density functional theory neglect the interactions in this channel. A new high precision density functional DD-ME$\\delta$ is presented which includes four mesons $\\sigma$, $\\omega$, $\\delta$, and $\\rho$ with density dependent meson-nucleon couplings. It is based to a large extent on microscopic ab-initio calculations in nuclear matter. Only four of its parameters are determined by adjusting to binding energies and charge radii of finite nuclei. The other parameters, in particular the density dependence of the meson-nucleon vertices, are adjusted to non-relativistic and relativistic Brueckner calculations of symmetric and asymmetric nuclear matter. The isovector effective mass $m_{p}^{\\ast}-m_{n}^{\\ast}$ derived from relativistic Brueckner theory is used to determine the coupling strength of the $\\delta$-meson and its density dependence.

  9. Efficient algorithms for semiclassical instanton calculations based on discretized path integrals

    International Nuclear Information System (INIS)

    The path integral instanton method is a promising way to calculate the tunneling splitting of energies for degenerate two-state systems. In order to calculate the tunneling splitting, we need to take the zero temperature limit, or the limit of infinite imaginary time duration. In the method developed by Richardson and Althorpe [J. Chem. Phys. 134, 054109 (2011)], the limit is simply replaced by a sufficiently long imaginary time. In the present study, we have developed a new formula for the tunneling splitting based on discretized path integrals that takes the limit analytically. We have applied the new formula to model systems and found that this approach can significantly reduce the computational cost and improve the numerical accuracy. We then combined the method with electronic structure calculations to obtain the accurate interatomic potential on the fly. We present an application of our ab initio instanton method to the ammonia umbrella flip motion.

  10. Review of dynamical models for external dose calculations based on Monte Carlo simulations in urbanised areas

    International Nuclear Information System (INIS)

    After an accidental release of radionuclides to the inhabited environment, the external gamma irradiation from deposited radioactivity contributes significantly to the radiation exposure of the population for extended periods. For evaluating this exposure pathway, three main model components are needed: (i) the air kerma value per photon emitted per unit source area, based on Monte Carlo (MC) simulations; (ii) a description of the distribution and dynamics of radionuclides on the diverse urban surfaces; and (iii) a relevant urban model combining all these elements to calculate the resulting doses according to the actual scenario. This paper provides an overview of the different approaches to calculating photon transport in urban areas and of several published dose calculation codes. Two types of Monte Carlo simulation are presented, using the global and the local approaches to photon transport. Moreover, two different philosophies of dose calculation, the 'location factor method' and a combination of relative contamination of surfaces with air kerma values, are described. The main features of six codes (ECOSYS, EDEM2M, EXPURT, PARATI, TEMAS, URGENT) are highlighted, together with a short model-to-model feature intercomparison.

  11. A Self-Adaptive Steered Molecular Dynamics Method Based on Minimization of Stretching Force Reveals the Binding Affinity of Protein–Ligand Complexes

    Directory of Open Access Journals (Sweden)

    Junfeng Gu

    2015-10-01

    Full Text Available Binding affinity prediction of protein–ligand complexes has attracted widespread interest. In this study, a self-adaptive steered molecular dynamics (SMD) method is proposed to reveal the binding affinity of protein–ligand complexes. The SMD method is executed by adjusting the pulling direction to find an optimum trajectory of ligand dissociation, which is realized by minimizing the stretching force automatically. The method is then used to simulate the dissociation of 19 common protein–ligand complexes, drawn from two homology families, whose binding free energy values have been obtained through experimental techniques. Results show that the proposed SMD method follows a different dissociation pathway with a lower rupture force and energy barrier when compared with the conventional SMD method, and further analysis indicates that the rupture forces of the complexes in the same protein family correlate well with their binding free energies, which reveals the possibility of using the proposed SMD method to identify the active ligand.

  12. Heavy Ion SEU Cross Section Calculation Based on Proton Experimental Data, and Vice Versa

    CERN Document Server

    Wrobel, F; Pouget, V; Dilillo, L; Ecoffet, R; Lorfèvre, E; Bezerra, F; Brugger, M; Saigné, F

    2014-01-01

    The aim of this work is to provide a method to calculate single event upset (SEU) cross sections by using experimental data. Valuable tools such as PROFIT and SIMPA already focus on the calculation of the proton cross section by using heavy-ion cross-section experiments. However, there is no available tool that calculates heavy-ion cross sections based on measured proton cross sections with no knowledge of the technology. We based our approach on the diffusion-collection model, with the aim of analyzing the characteristics of the transient currents that trigger SEUs. We show that experimental cross sections can be used to characterize the pulses that trigger an SEU. Experimental results nevertheless allow defining an empirical rule to identify the transient currents that are responsible for an SEU. Then, the SEU cross section can be calculated for any kind of particle and any energy with no need to know the Spice model of the cell. We applied our method to some technologies (250 nm, 90 nm and 65 nm bulk SRAMs) and we sho...

  13. LWR decay heat calculations using a GRS improved ENDF/B-6 based ORIGEN data library

    International Nuclear Information System (INIS)

    The well-known ORNL ORIGEN code is widely used around the world for inventory, activity and decay heat tasks, either stand-alone or embedded in activation, shielding or burn-up systems. More than 1000 isotopes with more than six coupled neutron capture and radioactive decay channels are handled simultaneously by the code. The characteristics of the calculated inventories, e.g., masses, activities, neutron and photon source terms, or the decay heat during short or long decay time steps, are obtained by summing over all isotopes characterized in the ORIGEN libraries. An extended nuclear GRS-ORIGENX data library has now been developed for practical application. The library was checked for activation tasks of structural material isotopes and for actinide and fission product burn-up calculations, compared with experiments and standard methods. The paper addresses the LWR decay heat calculation features of the new library and shows the differences between the dynamic and time-integrated results of ENDF/B-6 based and older ENDF/B-5 based libraries for decay heat tasks, compared to fission burst experiments, ANS curves and some other published data. A multi-group time-exponential evaluation is given for the fission burst power of 235U, 238U, 239Pu and 241Pu, to be used in quick LWR reactor accident decay heat calculation tools. (authors)
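
A multi-group time-exponential representation of fission-burst decay power has the form f(t) = Σᵢ αᵢ exp(−λᵢ t), which integrates analytically over any decay interval. The sketch below uses four made-up groups; the αᵢ and λᵢ are placeholders, not the GRS-evaluated coefficients:

```python
import numpy as np

# Hypothetical group parameters, truncated to four groups for illustration:
alpha = np.array([0.30, 0.10, 0.03, 0.01])     # MeV/s per fission (assumed)
lam = np.array([1.0, 1.0e-1, 1.0e-2, 1.0e-3])  # decay constants, 1/s (assumed)

def burst_power(t):
    """Decay power after a fission burst: f(t) = sum_i alpha_i exp(-lam_i t)."""
    return np.sum(alpha * np.exp(-lam * t))

def decay_heat(t1, t2):
    """Integrated decay heat between t1 and t2, done analytically per group."""
    return np.sum(alpha / lam * (np.exp(-lam * t1) - np.exp(-lam * t2)))

print(burst_power(10.0), decay_heat(1.0, 100.0))
```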

  14. Calculation of the Instream Ecological Flow of the Wei River Based on Hydrological Variation

    Directory of Open Access Journals (Sweden)

    Shengzhi Huang

    2014-01-01

    Full Text Available It is of great significance for watershed management departments to reasonably allocate water resources and ensure the sustainable development of river ecosystems; the key issue is to accurately calculate the instream ecological flow. In order to do so, flow variation is taken into account in this study. The heuristic segmentation algorithm, which is suitable for detecting mutation points in flow series, is employed to identify the change points. Based on the law of tolerance and ecological adaptation theory, the maximum instream ecological flow is calculated as the highest-frequency monthly flow under the fitted GEV distribution, which is well suited to the healthy development of river ecosystems. Furthermore, in order to guarantee the sustainable development of river ecosystems under adverse circumstances, the minimum instream ecological flow is calculated by a modified Tennant method, in which the average flow is replaced by the highest-frequency flow. Since the modified Tennant method better reflects the flow regime, it has physical significance, and the calculation results are more reasonable.
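
A sketch of the central step, assuming hypothetical monthly flows: fit a GEV distribution, locate its mode (the highest-frequency flow), and apply a Tennant-style percentage band to it. The 10% band is an assumed illustration, not the paper's calibrated value:

```python
import numpy as np
from scipy import stats, optimize

# Hypothetical monthly flow series (m^3/s) for one calendar month.
flows = np.array([42., 55., 38., 61., 47., 52., 44., 58., 49., 40.,
                  66., 51., 45., 57., 48., 62., 43., 54., 50., 46.])

shape, loc, scale = stats.genextreme.fit(flows)

# Highest-frequency flow = mode of the fitted GEV, found numerically.
res = optimize.minimize_scalar(
    lambda q: -stats.genextreme.pdf(q, shape, loc=loc, scale=scale),
    bounds=(flows.min(), flows.max()), method="bounded")
modal_flow = res.x

# Modified Tennant: a percentage band applied to the modal flow instead of
# the mean flow (10% assumed here purely for illustration).
min_eco_flow = 0.10 * modal_flow
print(f"modal flow {modal_flow:.1f}, minimum ecological flow "
      f"{min_eco_flow:.1f} m^3/s")
```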

  15. Geological affinity of reflecting boundaries in the intermediate structural stage of the Chu Sarysuyskiy depression based on results of vertical seismic profiling

    Energy Technology Data Exchange (ETDEWEB)

    Davydov, N.G.; Kiselevskiy, Yu.N.

    1983-01-01

    A computer (EVM) and the ASOI-VSP-SK program complex are used to analyze data from seismic exploration and acoustic logging, with interval-by-interval calculation of the velocity every four meters. Vertical seismic profiling (VSP) results are used to identify all the upper layers as reference layers. The basic reference level, the third, which corresponds to the floor of the carbonate middle to upper Visean series, is not sustained due to the thinly layered state of the terrigenous section. Based on data from vertical seismic profiling, the reflected wave method (MOV) and the common depth point method (MOGT), the reference 3-a and 6-a levels are identified. Deep reflections of the seventh, 7-a and Rf levels, approximately confined to the roof and floor of the lower Paleozoic deposits and the upper part of the upper reef series, are noted in the series of the Caledonian cap of the Prebaykal massifs based on vertical seismic profiling. Collector levels are noted on the basis of the frequency of the wave spectra and the absorption coefficient in the Testas structure and in other low-amplitude structures. The insufficient depth capability of the common depth point method and the poor seismic exploration coverage of the lower Paleozoic and upper Proterozoic sections of the Chu Sarysuyskiy depression are noted.

  16. A CAD based automatic modeling method for primitive solid based Monte Carlo calculation geometry

    International Nuclear Information System (INIS)

    The Multi-Physics Coupling Analysis Modeling Program (MCAM), developed by the FDS Team, China, is an advanced modeling tool aiming to solve the modeling challenges of multi-physics coupling simulation. An automatic modeling method for SuperMC, the Super Monte Carlo Calculation Program for Nuclear and Radiation Process, was recently developed and integrated into MCAM 5.2. This method converts bidirectionally between a CAD model and a SuperMC input file. When converting from a CAD model to a SuperMC model, the CAD model is decomposed into a set of convex solids, and the corresponding SuperMC convex basic solids are generated and output. When converting from a SuperMC model to a CAD model, the basic primitive solids are created and the related operations are performed according to the SuperMC model. The method was benchmarked with the ITER benchmark model. The results showed that the method is correct and effective. (author)

  17. Scaling analysis of affinity propagation.

    Science.gov (United States)

    Furtlehner, Cyril; Sebag, Michèle; Zhang, Xiangliang

    2010-06-01

    We analyze and exploit some scaling properties of the affinity propagation (AP) clustering algorithm proposed by Frey and Dueck [Science 315, 972 (2007)]. Following a divide-and-conquer strategy we set up an exact renormalization-based approach to address the question of clustering consistency, in particular, how many clusters are present in a given data set. We first observe that the divide-and-conquer strategy, used on a large data set, hierarchically reduces the complexity O(N^2) to O(N^((h+2)/(h+1))), for a data set of size N and a depth h of the hierarchical strategy. For a data set embedded in a d-dimensional space, we show that this is obtained without notably damaging the precision except in dimension d=2. In fact, for d larger than 2 the relative loss in precision scales as N^((2-d)/((h+1)d)). Finally, under some conditions we observe that there is a value s* of the penalty coefficient, a free parameter used to fix the number of clusters, which separates a fragmentation phase (for s < s*) from a coalescent phase (for s > s*) of the underlying hidden cluster structure. At this precise point a self-similarity property holds, which can be exploited by the hierarchical strategy to actually locate its position, as a result of an exact decimation procedure. From this observation, a strategy based on AP can be defined to find out how many clusters are present in a given data set. PMID:20866473

  18. Modelling lateral beam quality variations in pencil kernel based photon dose calculations

    Science.gov (United States)

    Nyholm, T.; Olofsson, J.; Ahnesjö, A.; Karlsson, M.

    2006-08-01

    Standard treatment machines for external radiotherapy are designed to yield flat dose distributions at a representative treatment depth. The common method to reach this goal is to use a flattening filter to decrease the fluence in the centre of the beam. A side effect of this filtering is that the average energy of the beam is generally lower at a distance from the central axis, a phenomenon commonly referred to as off-axis softening. The off-axis softening results in a relative change in beam quality that is almost independent of machine brand and model. Central axis dose calculations using pencil beam kernels show no drastic loss in accuracy when the off-axis beam quality variations are neglected. However, for dose calculated at off-axis positions the effect should be considered, otherwise errors of several per cent can be introduced. This work proposes a method to explicitly include the effect of off-axis softening in pencil kernel based photon dose calculations for arbitrary positions in a radiation field. Variations of pencil kernel values are modelled through a generic relation between half value layer (HVL) thickness and off-axis position for standard treatment machines. The pencil kernel integration for dose calculation is performed through sampling of energy fluence and beam quality in sectors of concentric circles around the calculation point. The method is fully based on generic data and therefore does not require any specific measurements for characterization of the off-axis softening effect, provided that the machine performance is in agreement with the assumed HVL variations. The model is verified versus profile measurements at different depths and through a model self-consistency check, using the dose calculation model to estimate HVL values at off-axis positions. A comparison between calculated and measured profiles at different depths showed a maximum relative error of 4% without explicit modelling of off-axis softening. The maximum relative error
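
A toy sketch of the sector-sampling idea, with every functional form assumed (the paper instead derives a generic HVL-versus-off-axis relation from standard machine data): beam quality is sampled per sector around the calculation point and folded into a kernel response:

```python
import numpy as np

# Everything below is an assumed stand-in: a linear off-axis softening of
# the half value layer and a toy depth-attenuation "kernel".
def hvl_cm(r_cm):
    return 15.0 - 0.05 * r_cm          # assumed: beam softens off axis

def kernel_value(depth_cm, hvl):
    mu = np.log(2.0) / hvl             # effective attenuation from the HVL
    return np.exp(-mu * depth_cm)      # toy depth attenuation, not a kernel

def dose_factor(depth_cm, r_cm, n_sectors=8, sector_reach_cm=2.0):
    """Sample beam quality in sectors of concentric circles around the
    calculation point and average the kernel response."""
    angles = np.linspace(0.0, 2.0 * np.pi, n_sectors, endpoint=False)
    radii = np.abs(r_cm + sector_reach_cm * np.cos(angles))
    return np.mean([kernel_value(depth_cm, hvl_cm(r)) for r in radii])

print(dose_factor(depth_cm=10.0, r_cm=15.0))
```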

  19. Dose calculation from a D-D-reaction-based BSA for boron neutron capture synovectomy

    International Nuclear Information System (INIS)

    Monte Carlo simulations were carried out to calculate dose in a knee phantom from a D-D-reaction-based Beam Shaping Assembly (BSA) for Boron Neutron Capture Synovectomy (BNCS). The BSA consists of a D(d,n)-reaction-based neutron source enclosed inside a polyethylene moderator and graphite reflector. The polyethylene moderator and graphite reflector sizes were optimized to deliver the highest ratio of thermal to fast neutron yield at the knee phantom. Then neutron dose was calculated at various depths in a knee phantom loaded with boron and therapeutic ratios of synovium dose/skin dose and synovium dose/bone dose were determined. Normalized to same boron loading in synovium, the values of the therapeutic ratios obtained in the present study are 12-30 times higher than the published values.

  20. Dose calculation from a D-D-reaction-based BSA for boron neutron capture synovectomy

    Energy Technology Data Exchange (ETDEWEB)

    Abdalla, Khalid [Department of Physics, Hail University, Hail (Saudi Arabia)], E-mail: khalidafnan@uoh.edu.sa; Naqvi, A.A. [Department of Physics, King Fahd University of Petroleum and Minerals and Center for Applied Physical Sciences, Box No. 1815, Dhahran 31261 (Saudi Arabia)], E-mail: aanaqvi@kfupm.edu.sa; Maalej, N.; Elshahat, B. [Department of Physics, King Fahd University of Petroleum and Minerals and Center for Applied Physical Sciences, Box No. 1815, Dhahran 31261 (Saudi Arabia)

    2010-04-15

    Monte Carlo simulations were carried out to calculate dose in a knee phantom from a D-D-reaction-based Beam Shaping Assembly (BSA) for Boron Neutron Capture Synovectomy (BNCS). The BSA consists of a D(d,n)-reaction-based neutron source enclosed inside a polyethylene moderator and graphite reflector. The polyethylene moderator and graphite reflector sizes were optimized to deliver the highest ratio of thermal to fast neutron yield at the knee phantom. Then neutron dose was calculated at various depths in a knee phantom loaded with boron and therapeutic ratios of synovium dose/skin dose and synovium dose/bone dose were determined. Normalized to same boron loading in synovium, the values of the therapeutic ratios obtained in the present study are 12-30 times higher than the published values.

  1. A general end point free energy calculation method based on microscopic configurational space coarse-graining

    CERN Document Server

    Tian, Pu

    2015-01-01

    Free energy is arguably the most important thermodynamic property of physical systems. Despite the fact that free energy is a state function, presently available rigorous methodologies, such as those based on thermodynamic integration (TI) or non-equilibrium work (NEW) analysis, involve energetic calculations on path(s) connecting the starting and end macrostates. Meanwhile, presently widely utilized approximate end-point free energy methods lack rigorous treatment of conformational variation within end macrostates, and are consequently not sufficiently reliable. Here we present an alternative and rigorous end-point free energy calculation formulation based on microscopic configurational space coarse-graining, where the configurational space of a high-dimensional system is divided into a large number of sufficiently fine and uniform elements, termed conformers. It was found that the change of free energy is essentially determined by the change in the number of conformers, with an error term that accounts...
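
Under the record's premise, an end-point estimate reduces to counting occupied conformers in each macrostate, since ΔF ≈ −kT ln(Ω_B/Ω_A) when intra-conformer weights are comparable. A minimal sketch with uniform binning of configurational samples (bin width, temperature and the equal-weight assumption are all illustrative):

```python
import numpy as np

KB_KCAL = 0.0019872  # Boltzmann constant, kcal/(mol*K)

def occupied_conformers(samples, bin_width):
    """Coarse-grain configurational samples into uniform bins ('conformers')
    and count how many distinct bins are visited."""
    bins = np.floor(samples / bin_width).astype(np.int64)
    return len({tuple(row) for row in bins})

def free_energy_change(samples_a, samples_b, bin_width, temp_k=300.0):
    """dF ~ -kT ln(Omega_B / Omega_A), assuming equal intra-bin weights."""
    n_a = occupied_conformers(samples_a, bin_width)
    n_b = occupied_conformers(samples_b, bin_width)
    return -KB_KCAL * temp_k * np.log(n_b / n_a)

rng = np.random.default_rng(0)
state_a = rng.normal(0.0, 1.0, size=(20000, 3))   # broader basin
state_b = rng.normal(0.0, 0.5, size=(20000, 3))   # narrower basin
print(free_energy_change(state_a, state_b, bin_width=0.2))
```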

  2. CT-based dose calculations and in vivo dosimetry for lung cancer treatment

    International Nuclear Information System (INIS)

    Reliable CT-based dose calculations and dosimetric quality control are essential for the introduction of new conformal techniques for the treatment of lung cancer. The first aim of this study was therefore to check the accuracy of dose calculations based on CT-densities, using a simple inhomogeneity correction model, for lung cancer patients irradiated with an AP-PA treatment technique. Second, the use of diodes for absolute exit dose measurements and an Electronic Portal Imaging Device (EPID) for relative transmission dose verification was investigated for 22 and 12 patients, respectively. The measured dose values were compared with calculations performed using our 3-dimensional treatment planning system, using CT-densities or assuming the patient to be water-equivalent. Using water-equivalent calculations, the actual exit dose value under lung was, on average, underestimated by 30%, with an overall spread of 10% (1 SD). Using inhomogeneity corrections, the exit dose was, on average, overestimated by 4%, with an overall spread of 6% (1 SD). Only 2% of the average deviation was due to the inhomogeneity correction model. An uncertainty in exit dose calculation of 2.5% (1 SD) could be explained by organ motion, resulting from the ventilatory or cardiac cycle. The most important reason for the large overall spread was, however, the uncertainty involved in performing point measurements: about 4% (1 SD). This difference resulted from the systematic and random deviation in patient set-up and therefore in diode position with respect to patient anatomy. Transmission and exit dose values agreed with an average difference of 1.1%. Transmission dose profiles also showed good agreement with calculated exit dose profiles. Our study shows that, for this treatment technique, the dose in the thorax region is quite accurately predicted using CT-based dose calculations, even if a simple inhomogeneity correction model is used. Point detectors such as diodes are not suitable for exit

  3. A CT-based analytical dose calculation method for HDR 192Ir brachytherapy

    International Nuclear Information System (INIS)

    % for both calculation methods. Conclusions: A correction-based dose calculation method has been validated for HDR 192Ir brachytherapy. Its high calculation efficiency makes it feasible for use in treatment planning. Because tissue inhomogeneity effects are small and primary dose predominates in the near-source region, TG-43 is adequate for target dose estimation provided shielding and contrast solution are not used.

  4. A Cultural Study of a Science Classroom and Graphing Calculator-based Technology

    OpenAIRE

    Casey, Dennis Alan

    2001-01-01

    Social, political, and technological events of the past two decades have had considerable bearing on science education. While sociological studies of scientists at work have seriously questioned traditional histories of science, national and state educational systemic reform initiatives have been enacted, stressing standards and accountability. Recently, powerful instructional technologies have become part of the landscape of the classroom. One example, graphing calculator-based technology...

  5. Off-site dose calculation computer code based on ICRP-60(II) - liquid radioactive effluents -

    International Nuclear Information System (INIS)

    The development of a computer code for calculating off-site doses (K-DOSE60) was based on ICRP-60 and the dose calculation equations of Reg. Guide 1.109. In this paper, the methodology to compute doses from liquid effluents is described. To examine the reliability of the K-DOSE60 code, the results obtained from K-DOSE60 for liquid effluents were compared with analytic solutions; the results are in agreement.

  6. Stiffness of Diphenylalanine-Based Molecular Solids from First Principles Calculations

    Science.gov (United States)

    Azuri, Ido; Hod, Oded; Gazit, Ehud; Kronik, Leeor

    2013-03-01

    Diphenylalanine-based peptide nanotubes were found to be unexpectedly stiff, with a Young modulus of 19 GPa. Here, we calculate the Young modulus from first principles, using density functional theory with dispersive corrections. This allows us to show that at least half of the stiffness of the material comes from dispersive interactions and to identify the nature of the interactions that contribute most to the stiffness. This presents a general strategy for the analysis of bioinspired functional materials.

  7. Evaluating the accuracy of sampling-based approaches to the calculation of posterior moments

    OpenAIRE

    Geweke, John F.

    1991-01-01

    Data augmentation and Gibbs sampling are two closely related, sampling-based approaches to the calculation of posterior moments. The fact that each produces a sample whose constituents are neither independent nor identically distributed complicates the assessment of convergence and numerical accuracy of the approximations to the expected value of functions of interest under the posterior. In this paper methods for spectral analysis are used to evaluate numerical accuracy formally and construc...
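
The numerical standard error of a posterior-mean estimate from correlated draws is sqrt(S(0)/n), with S(0) the spectral density of the chain at frequency zero. The sketch below approximates S(0) with batch means rather than the paper's spectral estimators, on a synthetic AR(1) chain standing in for Gibbs output:

```python
import numpy as np

def numerical_standard_error(draws, n_batches=20):
    """Batch-means approximation to sqrt(S(0)/n), the numerical standard
    error of a posterior-mean estimate from correlated draws."""
    n = len(draws) - len(draws) % n_batches
    batches = draws[:n].reshape(n_batches, -1).mean(axis=1)
    # variance of batch means times batch size estimates S(0)
    s0 = batches.var(ddof=1) * (n // n_batches)
    return np.sqrt(s0 / n)

rng = np.random.default_rng(1)
x = np.empty(50_000)
x[0] = 0.0
for t in range(1, len(x)):          # AR(1) chain as correlated stand-in
    x[t] = 0.9 * x[t - 1] + rng.normal()
print(f"posterior mean {x.mean():.3f} +/- {numerical_standard_error(x):.3f}")
```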

  8. An AIS-based approach to calculate atmospheric emissions from the UK fishing fleet

    Science.gov (United States)

    Coello, Jonathan; Williams, Ian; Hudson, Dominic A.; Kemp, Simon

    2015-08-01

    The fishing industry is heavily reliant on the use of fossil fuel and emits large quantities of greenhouse gases and other atmospheric pollutants. Methods used to calculate fishing vessel emissions inventories have traditionally utilised estimates of fuel efficiency per unit of catch. These methods have weaknesses because they do not easily allow temporal and geographical allocation of emissions. A large proportion of fishing and other small commercial vessels are also omitted from global shipping emissions inventories such as the International Maritime Organisation's Greenhouse Gas Studies. This paper demonstrates an activity-based methodology for the production of temporally- and spatially-resolved emissions inventories using data produced by Automatic Identification Systems (AIS). The methodology addresses the issue of how to use AIS data for fleets where not all vessels use AIS technology and how to assign engine load when vessels are towing trawling or dredging gear. The results of this are compared to a fuel-based methodology using publicly available European Commission fisheries data on fuel efficiency and annual catch. The results show relatively good agreement between the two methodologies, with an estimate of 295.7 kilotons of fuel used and 914.4 kilotons of carbon dioxide emitted between May 2012 and May 2013 using the activity-based methodology. Different methods of calculating speed using AIS data are also compared. The results indicate that using the speed data contained directly in the AIS data is preferable to calculating speed from the distance and time interval between consecutive AIS data points.
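
The activity-based core is simple: between consecutive AIS positions, assign an engine load from the reported speed (or a towing assumption), convert power × time to fuel via a specific fuel consumption, and fuel to CO2 by carbon balance. All vessel parameters and the towing heuristic below are assumed placeholders, not the paper's values:

```python
from dataclasses import dataclass

@dataclass
class AisPoint:
    t_hours: float   # timestamp, hours
    speed_kn: float  # speed over ground from the AIS message

# Assumed vessel parameters (illustrative, not UK-fleet values):
MAIN_ENGINE_KW = 400.0
DESIGN_SPEED_KN = 10.0
SFOC_G_PER_KWH = 220.0   # specific fuel oil consumption
CO2_PER_FUEL = 3.206     # kg CO2 per kg fuel (carbon balance)
TRAWLING_LOAD = 0.75     # assumed engine load when towing gear

def engine_load(speed_kn, towing):
    if towing:
        return TRAWLING_LOAD  # towing gear dominates propulsion demand
    return min(1.0, (speed_kn / DESIGN_SPEED_KN) ** 3)  # propeller law

def co2_kg(track, towing_below_kn=4.0):
    total = 0.0
    for a, b in zip(track, track[1:]):
        dt = b.t_hours - a.t_hours
        towing = a.speed_kn < towing_below_kn  # crude towing heuristic
        power = MAIN_ENGINE_KW * engine_load(a.speed_kn, towing)
        fuel_kg = power * SFOC_G_PER_KWH * dt / 1000.0
        total += fuel_kg * CO2_PER_FUEL
    return total

track = [AisPoint(0.0, 9.0), AisPoint(1.0, 3.5), AisPoint(3.0, 3.2),
         AisPoint(4.0, 8.5)]
print(f"{co2_kg(track):.1f} kg CO2 over the track")
```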

  9. Methodology of Ni-base Superalloy Development for VHTR using Design of Experiments and Thermodynamic Calculation

    International Nuclear Information System (INIS)

    This work concerns a methodology of Ni-base superalloy development for a very high temperature gas-cooled reactor (VHTR) using design of experiments (DOE) and thermodynamic calculations. A total of 32 sets of Ni-base superalloys with various chemical compositions were formulated based on a fractional factorial design of DOE, and the thermodynamic stability of topologically close-packed (TCP) phases of those alloys was calculated using the THERMO-CALC software. From the statistical evaluation of the effect of the chemical composition on the formation of TCP phases up to a temperature of 950 °C, which should be suppressed for a prolonged service life when the alloy is used in the structural components of a VHTR, 16 sets were selected for further calculation of mechanical properties. Considering the yield and ultimate tensile strengths of the selected alloys estimated using the JMATPRO software, an optimized chemical composition of the alloys for VHTR application, especially the intermediate heat exchanger, was proposed for a succeeding experimental study.

  10. Infinite transitivity on affine varieties

    OpenAIRE

    Arzhantsev, Ivan; Flenner, Hubert; Kaliman, Shulim; Kutzschebauch, Frank; ZAIDENBERG, MIKHAIL

    2012-01-01

    In this note we survey recent results on automorphisms of affine algebraic varieties, infinitely transitive group actions and flexibility. We present related constructions and examples, and discuss geometric applications and open problems.

  11. Affine toric SL(2)-embeddings

    International Nuclear Information System (INIS)

    In the theory of affine SL(2)-embeddings, which was constructed in 1973 by Popov, a locally transitive action of the group SL(2) on a normal affine three-dimensional variety X is determined by a pair (p/q, r), where 0 < p/q ≤ 1 … X ≅ V//T̂. In the substantiation of this result a key role is played by Cox's construction in toric geometry. Bibliography: 12 titles.

  12. GPU-based ultra fast dose calculation using differential convolution/superposition algorithm

    International Nuclear Information System (INIS)

    Background: Dose calculation plays a key role in treatment planning for radiotherapy; its performance and accuracy are crucial to the quality of treatment plans. The differential convolution/superposition algorithm is considered an accurate algorithm for photon dose calculation; however, improvement of its computational efficiency is still desirable for purposes such as real-time treatment planning. Purpose: The goal of this work is to boost the performance of the differential convolution/superposition algorithm by devising a graphics processing unit (GPU) implementation so as to make the method practical for daily usage. Methods: In this work, we implemented a GPU-based version of the differential convolution/superposition algorithm, in which the most time-consuming parts run on the GPU. In order to fully utilize the GPU computing power, the algorithm is modified to match the GPU hardware architecture. Results: Compared with the algorithm running completely on the CPU, the GPU-based algorithm achieves a speedup of 30-60 times on a Tesla C1060, with higher values corresponding to larger field sizes. Finally, we use the γ index to analyze the accuracy of the calculation results; whether for single or multiple fields, homogeneous or inhomogeneous phantoms, the GPU implementation has the same accuracy as the CPU implementation. Conclusions: The GPU is a useful solution for satisfying the increasing demands on computation speed and accuracy of dose calculation. The GPU-based differential convolution/superposition algorithm can be a feasible and cost-efficient way to meet the demands for computation speed and accuracy posed by advanced radiation therapy technologies. (authors)

  13. MRI-Based Computed Tomography Metal Artifact Correction Method for Improving Proton Range Calculation Accuracy

    Energy Technology Data Exchange (ETDEWEB)

    Park, Peter C. [Department of Radiation Physics, University of Texas MD Anderson Cancer Center, Houston, Texas (United States); Schreibmann, Eduard; Roper, Justin; Elder, Eric; Crocker, Ian [Department of Radiation Oncology, Winship Cancer Institute of Emory University, Atlanta, Georgia (United States); Fox, Tim [Varian Medical Systems, Palo Alto, California (United States); Zhu, X. Ronald [Department of Radiation Physics, University of Texas MD Anderson Cancer Center, Houston, Texas (United States); Dong, Lei [Scripps Proton Therapy Center, San Diego, California (United States); Dhabaan, Anees, E-mail: anees.dhabaan@emory.edu [Department of Radiation Oncology, Winship Cancer Institute of Emory University, Atlanta, Georgia (United States)

    2015-03-15

    Purpose: Computed tomography (CT) artifacts can severely degrade dose calculation accuracy in proton therapy. Prompted by the recently increased popularity of magnetic resonance imaging (MRI) in the radiation therapy clinic, we developed an MRI-based CT artifact correction method for improving the accuracy of proton range calculations. Methods and Materials: The proposed method replaces corrupted CT data by mapping CT Hounsfield units (HU number) from a nearby artifact-free slice, using a coregistered MRI. MRI and CT volumetric images were registered with use of 3-dimensional (3D) deformable image registration (DIR). The registration was fine-tuned on a slice-by-slice basis by using 2D DIR. Based on the intensity of paired MRI pixel values and HU from an artifact-free slice, we performed a comprehensive analysis to predict the correct HU for the corrupted region. For a proof-of-concept validation, metal artifacts were simulated on a reference data set. Proton range was calculated using reference, artifactual, and corrected images to quantify the reduction in proton range error. The correction method was applied to 4 unique clinical cases. Results: The correction method resulted in substantial artifact reduction, both quantitatively and qualitatively. On respective simulated brain and head and neck CT images, the mean error was reduced from 495 and 370 HU to 108 and 92 HU after correction. Correspondingly, the absolute mean proton range errors of 2.4 cm and 1.7 cm were reduced to less than 2 mm in both cases. Conclusions: Our MRI-based CT artifact correction method can improve CT image quality and proton range calculation accuracy for patients with severe CT artifacts.

  14. A design of a DICOM-RT-based tool box for nonrigid 4D dose calculation.

    Science.gov (United States)

    Wong, Victy Y W; Baker, Colin R; Leung, T W; Tung, Stewart Y

    2016-01-01

    This study introduces the design of a DICOM-RT-based tool box to facilitate 4D dose calculation based on deformable voxel-dose registration. The computational structure and the calculation algorithm of the tool box are explicitly discussed. The tool box was written in MATLAB in conjunction with CERR. It consists of five main functions which allow a) importation of a DICOM-RT-based 3D dose plan, b) deformable image registration, c) tracking of voxel doses along the breathing cycle, d) presentation of the temporal dose distribution at different time phases, and e) derivation of the 4D dose. The efficacy of the tool box for clinical application was verified with nine clinical cases on a retrospective basis. The logic and robustness of the tool box were tested with 27 applications, all of which ran successfully with no computational errors encountered. In the study, the accumulated dose coverage was assessed as a function of the planning CT taken at the end-inhale, end-exhale, and mean tumor positions. The results indicated that the majority of the cases (67%) achieved maximum target coverage when the planning CT was taken at the temporal mean tumor position, and 56% at the end-exhale position. The results, comparable to the literature, imply that the studied tool box is reliable for 4D dose calculation. The authors suggest that, with proper application, 4D dose calculation using deformable registration can provide better dose evaluation for treatment of a moving target. PMID:27074476
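
The accumulation step at the heart of such a tool box can be sketched as follows: each phase dose is pulled back onto the reference grid through a displacement field and summed with phase weights. This nearest-voxel Python sketch (the tool box itself is MATLAB/CERR-based) is a deliberate simplification of deformable voxel-dose registration:

```python
import numpy as np

def accumulate_4d_dose(phase_doses, dvfs, phase_weights):
    """Accumulate phase doses on the reference grid.

    phase_doses   : list of 3D dose arrays, one per breathing phase
    dvfs          : list of integer-voxel displacement fields of shape
                    dose.shape + (3,), mapping reference voxels to phase
                    voxels (a nearest-voxel simplification of DIR)
    phase_weights : fraction of the breathing cycle spent in each phase
    """
    ref_shape = phase_doses[0].shape
    total = np.zeros(ref_shape)
    idx = np.indices(ref_shape)                  # reference voxel indices
    for dose, dvf, w in zip(phase_doses, dvfs, phase_weights):
        src = tuple(np.clip(idx[k] + dvf[..., k], 0, ref_shape[k] - 1)
                    for k in range(3))
        total += w * dose[src]                   # pull dose from the phase
    return total

# Two-phase toy example with zero deformation:
d = np.ones((4, 4, 4))
zero_dvf = np.zeros((4, 4, 4, 3), dtype=int)
print(accumulate_4d_dose([d, 2 * d], [zero_dvf, zero_dvf], [0.5, 0.5]).max())
```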

  15. MRI-Based Computed Tomography Metal Artifact Correction Method for Improving Proton Range Calculation Accuracy

    International Nuclear Information System (INIS)

    Purpose: Computed tomography (CT) artifacts can severely degrade dose calculation accuracy in proton therapy. Prompted by the recently increased popularity of magnetic resonance imaging (MRI) in the radiation therapy clinic, we developed an MRI-based CT artifact correction method for improving the accuracy of proton range calculations. Methods and Materials: The proposed method replaces corrupted CT data by mapping CT Hounsfield units (HU number) from a nearby artifact-free slice, using a coregistered MRI. MRI and CT volumetric images were registered with use of 3-dimensional (3D) deformable image registration (DIR). The registration was fine-tuned on a slice-by-slice basis by using 2D DIR. Based on the intensity of paired MRI pixel values and HU from an artifact-free slice, we performed a comprehensive analysis to predict the correct HU for the corrupted region. For a proof-of-concept validation, metal artifacts were simulated on a reference data set. Proton range was calculated using reference, artifactual, and corrected images to quantify the reduction in proton range error. The correction method was applied to 4 unique clinical cases. Results: The correction method resulted in substantial artifact reduction, both quantitatively and qualitatively. On respective simulated brain and head and neck CT images, the mean error was reduced from 495 and 370 HU to 108 and 92 HU after correction. Correspondingly, the absolute mean proton range errors of 2.4 cm and 1.7 cm were reduced to less than 2 mm in both cases. Conclusions: Our MRI-based CT artifact correction method can improve CT image quality and proton range calculation accuracy for patients with severe CT artifacts

  16. Thermal conductivity calculation of bio-aggregates based materials using finite and discrete element methods

    Science.gov (United States)

    Pennec, Fabienne; Alzina, Arnaud; Tessier-Doyen, Nicolas; Naitali, Benoit; Smith, David S.

    2012-11-01

    This work concerns the calculation of the thermal conductivity of insulating building materials made from plant particles. To determine the type of raw materials, the particle sizes or the volume fractions of plant and binder, a tool dedicated to calculating the thermal conductivity of heterogeneous materials has been developed, using the discrete element method to generate the volume element and the finite element method to calculate the homogenized properties. A 3D optical scanner has been used to capture plant particle shapes and convert them into clusters of discrete elements. These aggregates are initially randomly distributed without any overlap, then fall into a container under gravity and collide with neighbouring particles according to a velocity Verlet algorithm. Once the RVE is built, the geometry is exported to the open-source Salome-Meca platform to be meshed. The calculation of the effective thermal conductivity of the heterogeneous volume is then performed using a homogenization technique based on an energy method. To validate the numerical tool, thermal conductivity measurements have been performed on sunflower pith aggregates and on packed beds of the same particles. The experimental values compare satisfactorily with a batch of numerical simulations.

  17. A GIS-based method for flooded area calculation and damage evaluation

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    Using geographic information systems to study flooded areas and evaluate damage has been a hotspot in environmental disaster research for years. In this paper, a model for flooded area calculation and damage evaluation is presented. Flooding is divided into two types: 'source flood' and 'non-source flood'. The source-flood area calculation is based on a seed spread algorithm. The flood damage evaluation is calculated by overlaying the flooded area range with thematic maps and relating the result to other social and economic data. To raise the operational efficiency of the model, a skipping approach is used to speed up the seed spread algorithm, and all thematic maps are converted to raster format before overlay analysis. The accuracy of flooded area calculation and damage evaluation depends mainly upon the resolution and precision of the digital elevation model (DEM) data, the accuracy of registering all raster layers, and the quality of the economic information. This model has been successfully used in the Zhejiang Province Comprehensive Water Management Information System developed by the authors. The applications show that this model is especially useful for most counties of China and other developing countries.
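
A seed-spread calculation is essentially a flood fill over the DEM constrained by hydraulic connectivity to the source: cells below the water level but cut off from the seed stay dry, unlike a simple threshold mask. A minimal sketch (without the paper's skipping speed-up):

```python
from collections import deque

def source_flood(dem, water_level, seed):
    """Seed-spread ('source flood') fill: starting from the water source,
    spread to 4-connected neighbours whose ground elevation is below the
    water level."""
    rows, cols = len(dem), len(dem[0])
    flooded = [[False] * cols for _ in range(rows)]
    queue = deque([seed])
    flooded[seed[0]][seed[1]] = True
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols and not flooded[nr][nc]
                    and dem[nr][nc] < water_level):
                flooded[nr][nc] = True
                queue.append((nr, nc))
    return flooded

dem = [[2.0, 2.0, 5.0, 1.0],
       [1.5, 1.8, 5.0, 1.2],
       [1.0, 1.6, 5.0, 1.1]]
wet = source_flood(dem, water_level=3.0, seed=(2, 0))
print(sum(map(sum, wet)), "cells flooded")  # the cut-off basin stays dry
```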

  18. Development of facile property calculation model for adsorption chillers based on equilibrium adsorption cycle

    Science.gov (United States)

    Yano, Masato; Hirose, Kenji; Yoshikawa, Minoru; Thermal management technology Team

    A facile property calculation model for adsorption chillers was developed based on equilibrium adsorption cycles. Adsorption chillers are promising systems for using heat energy efficiently because they can generate cooling energy from relatively low temperature heat. The properties of adsorption chillers are determined by the heat source temperatures, the adsorption/desorption properties of the adsorbent, and kinetics such as the heat transfer rate and the adsorption/desorption rate. In our model, the dependence of adsorption chiller properties on heat source temperatures was represented using approximated equilibrium adsorption cycles instead of solving the conventional time-dependent differential equations for temperature changes. In addition to the equilibrium cycle calculations, we calculated time constants for temperature changes as functions of heat source temperatures, which represent the differences between equilibrium cycles and real cycles stemming from kinetic adsorption processes. We found that the present approximated equilibrium model could calculate the properties of adsorption chillers (driving energy, cooling energy, COP, etc.) under various driving conditions quickly and accurately, within average errors of 6% compared to experimental data.

  19. Thermal conductivity calculation of bio-aggregates based materials using finite and discrete element methods

    International Nuclear Information System (INIS)

    This work concerns the calculation of the thermal conductivity of insulating building materials made from plant particles. To determine the type of raw materials, the particle sizes or the volume fractions of plant and binder, a tool dedicated to calculating the thermal conductivity of heterogeneous materials has been developed, using the discrete element method to generate the volume element and the finite element method to calculate the homogenized properties. A 3D optical scanner has been used to capture plant particle shapes and convert them into clusters of discrete elements. These aggregates are initially randomly distributed without any overlap, then fall into a container under gravity and collide with neighbouring particles according to a velocity Verlet algorithm. Once the RVE is built, the geometry is exported to the open-source Salome-Meca platform to be meshed. The calculation of the effective thermal conductivity of the heterogeneous volume is then performed using a homogenization technique based on an energy method. To validate the numerical tool, thermal conductivity measurements have been performed on sunflower pith aggregates and on packed beds of the same particles. The experimental values compare satisfactorily with a batch of numerical simulations.

  20. Calculator: A Hardware Design, Math and Software Programming Project Base Learning

    Directory of Open Access Journals (Sweden)

    F. Criado

    2015-03-01

    Full Text Available This paper presents the implementation by students of a complex calculator in hardware. The project meets hardware design goals and also highly motivates students to use competences learned in other subjects. The learning process associated with system design is hard enough, because the students have to deal with parallel execution, signal delay, synchronization and more. To strengthen their knowledge of hardware design, a project-based learning (PBL) methodology is therefore proposed. Moreover, it is also used to reinforce cross subjects such as mathematics and software programming. This methodology creates a course dynamic that is closer to a professional environment, where students work with software and mathematics to solve hardware design problems. The students design the functionality of the calculator from scratch. They make the decisions about which mathematical operations it is able to solve, the operand format, and how a complex equation is entered into the calculator. This increases the students' intrinsic motivation. In addition, since these choices may have consequences for the reliability of the calculator, students are encouraged to program in software the decisions about how to implement the selected mathematical algorithm. Although mathematics and hardware design are two tough subjects for students, the perception they have at the end of the course is quite positive.

  1. Affine group formulation of the Standard Model coupled to gravity

    International Nuclear Information System (INIS)

    In this work we apply the affine group formalism for four dimensional gravity of Lorentzian signature, which is based on Klauder’s affine algebraic program, to the formulation of the Hamiltonian constraint of the interaction of matter and all forces, including gravity with non-vanishing cosmological constant Λ, as an affine Lie algebra. We use the hermitian action of fermions coupled to gravitation and Yang–Mills theory to find the density weight one fermionic super-Hamiltonian constraint. This term, combined with the Yang–Mills and Higgs energy densities, are composed with York’s integrated time functional. The result, when combined with the imaginary part of the Chern–Simons functional Q, forms the affine commutation relation with the volume element V(x). Affine algebraic quantization of gravitation and matter on equal footing implies a fundamental uncertainty relation which is predicated upon a non-vanishing cosmological constant. -- Highlights: •Wheeler–DeWitt equation (WDW) quantized as affine algebra, realizing Klauder’s program. •WDW formulated for interaction of matter and all forces, including gravity, as affine algebra. •WDW features Hermitian generators in spite of fermionic content: Standard Model addressed. •Constructed a family of physical states for the full, coupled theory via affine coherent states. •Fundamental uncertainty relation, predicated on non-vanishing cosmological constant

  2. GPU-based Monte Carlo radiotherapy dose calculation using phase-space sources

    International Nuclear Information System (INIS)

    A novel phase-space source implementation has been designed for graphics processing unit (GPU)-based Monte Carlo dose calculation engines. Short of full simulation of the linac head, using a phase-space source is the most accurate method to model a clinical radiation beam in dose calculations. However, in GPU-based Monte Carlo dose calculations where the computation efficiency is very high, the time required to read and process a large phase-space file becomes comparable to the particle transport time. Moreover, due to the parallelized nature of GPU hardware, it is essential to simultaneously transport particles of the same type and similar energies but separated spatially to yield a high efficiency. We present three methods for phase-space implementation that have been integrated into the most recent version of the GPU-based Monte Carlo radiotherapy dose calculation package gDPM v3.0. The first method is to sequentially read particles from a patient-dependent phase-space and sort them on-the-fly based on particle type and energy. The second method supplements this with a simple secondary collimator model and fluence map implementation so that patient-independent phase-space sources can be used. Finally, as the third method (called the phase-space-let, or PSL, method) we introduce a novel source implementation utilizing pre-processed patient-independent phase-spaces that are sorted by particle type, energy and position. Position bins located outside a rectangular region of interest enclosing the treatment field are ignored, substantially decreasing simulation time with little effect on the final dose distribution. The three methods were validated in absolute dose against BEAMnrc/DOSXYZnrc and compared using gamma-index tests (2%/2 mm above the 10% isodose). It was found that the PSL method has the optimal balance between accuracy and efficiency and thus is used as the default method in gDPM v3.0. Using the PSL method, open fields of 4 × 4, 10 × 10 and 30 × 30 cm
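
The PSL idea can be sketched as a pre-sorting pass: bin particles by type, energy and position, discard bins outside the region of interest, and order the file so each bin is contiguous for batched GPU transport. The bin widths and ROI below are arbitrary placeholders, not gDPM's actual format:

```python
import numpy as np

def build_psls(ptype, energy, x, y, e_bin=0.5, pos_bin=1.0, roi=20.0):
    """Group phase-space particles into PSLs keyed by (type, energy bin,
    x bin, y bin), discarding positions outside a square region of
    interest. A simplified sketch of the sorting step described above."""
    keep = (np.abs(x) <= roi) & (np.abs(y) <= roi)
    keys = np.stack([ptype[keep],
                     (energy[keep] / e_bin).astype(int),
                     (x[keep] / pos_bin).astype(int),
                     (y[keep] / pos_bin).astype(int)], axis=1)
    order = np.lexsort(keys.T[::-1])  # type primary, then energy, x, y
    return keys[order], order         # contiguous PSLs for batched transport

rng = np.random.default_rng(2)
n = 100_000
ptype = rng.integers(0, 2, n)         # 0 = photon, 1 = electron (assumed)
energy = rng.exponential(2.0, n)      # MeV
x, y = rng.normal(0, 15, n), rng.normal(0, 15, n)
keys, order = build_psls(ptype, energy, x, y)
print(f"{len(keys)} of {n} particles kept, first key: {keys[0]}")
```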

  3. GPU-based Monte Carlo radiotherapy dose calculation using phase-space sources.

    Science.gov (United States)

    Townson, Reid W; Jia, Xun; Tian, Zhen; Graves, Yan Jiang; Zavgorodni, Sergei; Jiang, Steve B

    2013-06-21

    A novel phase-space source implementation has been designed for graphics processing unit (GPU)-based Monte Carlo dose calculation engines. Short of full simulation of the linac head, using a phase-space source is the most accurate method to model a clinical radiation beam in dose calculations. However, in GPU-based Monte Carlo dose calculations where the computation efficiency is very high, the time required to read and process a large phase-space file becomes comparable to the particle transport time. Moreover, due to the parallelized nature of GPU hardware, it is essential to simultaneously transport particles of the same type and similar energies but separated spatially to yield a high efficiency. We present three methods for phase-space implementation that have been integrated into the most recent version of the GPU-based Monte Carlo radiotherapy dose calculation package gDPM v3.0. The first method is to sequentially read particles from a patient-dependent phase-space and sort them on-the-fly based on particle type and energy. The second method supplements this with a simple secondary collimator model and fluence map implementation so that patient-independent phase-space sources can be used. Finally, as the third method (called the phase-space-let, or PSL, method) we introduce a novel source implementation utilizing pre-processed patient-independent phase-spaces that are sorted by particle type, energy and position. Position bins located outside a rectangular region of interest enclosing the treatment field are ignored, substantially decreasing simulation time with little effect on the final dose distribution. The three methods were validated in absolute dose against BEAMnrc/DOSXYZnrc and compared using gamma-index tests (2%/2 mm above the 10% isodose). It was found that the PSL method has the optimal balance between accuracy and efficiency and thus is used as the default method in gDPM v3.0. Using the PSL method, open fields of 4 × 4, 10 × 10 and 30 × 30 cm

  4. Synthesis and Image Matching On Structural Patterns Using Affine Transformation

    Directory of Open Access Journals (Sweden)

    S.Vandana

    2012-06-01

    Full Text Available This paper explains a Fourier-based affine estimator applied to the task of image synthesis. An affine transformation is an important class of linear 2-D geometric transformations, mapping input variables into new ones by applying a linear combination of translation, rotation, scaling and/or shearing operations. Conventional retrieval systems are very effective when the knowledge information and the query information share a uniform orientation, but fail in recognition when effects such as scaling and rotation are present. Because the presented technique, termed the affine estimator, is based on texture analysis, it can match images even under non-uniform orientation.
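
    To make the transformation class concrete, here is a minimal sketch (not the paper's Fourier estimator; all parameter values are invented) of composing scaling, shear, rotation and translation into a single 2-D affine map in homogeneous coordinates.

```python
import numpy as np

# Illustrative 2-D affine map built from scaling, shear, rotation and
# translation factors; the composed matrix acts on homogeneous coordinates.

def affine_matrix(sx, sy, shear, theta, tx, ty):
    scale_shear = np.array([[sx, shear, 0.0], [0.0, sy, 0.0], [0.0, 0.0, 1.0]])
    c, s = np.cos(theta), np.sin(theta)
    rot = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    trans = np.array([[1.0, 0.0, tx], [0.0, 1.0, ty], [0.0, 0.0, 1.0]])
    return trans @ rot @ scale_shear

pts = np.array([[0, 0, 1], [1, 0, 1], [0, 1, 1]]).T  # homogeneous columns
A = affine_matrix(sx=1.2, sy=0.9, shear=0.1, theta=np.pi / 6, tx=3.0, ty=-2.0)
print((A @ pts)[:2].T)  # transformed (x, y) coordinates
```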

  5. Staircase models from affine Toda field theory

    International Nuclear Information System (INIS)

    The authors propose a class of purely elastic scattering theories generalizing the staircase model of Al. B. Zamolodchikov, based on the affine Toda field theories for simply-laced Lie algebras g = A,D,E at suitable complex values of their coupling constants. Considering their Thermodynamic Bethe Ansatz equations, they give analytic arguments in support of a conjectured renormalization group flow visiting the neighborhood of each Wg minimal model in turn

  6. Vertex based missing mass calculator for 3-prong hadronically decaying tau leptons in the ATLAS detector

    CERN Document Server

    Maddocks, Harvey

    In this thesis my personal contributions to the ATLAS experiment are presented; these consist of studies and analyses relating to tau leptons. The first main section contains work on the identification of hadronically decaying tau leptons, my specific contribution being the electron veto. This work involved improving the choice of variables used to discriminate against electrons that had been incorrectly identified as tau leptons. These variables were optimised to be robust against the increasing pile-up present in this data period, and the resulting efficiencies are independent of it. The second main section contains an analysis of Z → τ τ decays; my specific contribution was the calculation of the detector acceptance factors and systematics. The third and final section contains an analysis of the performance of a new vertex-based missing mass calculator for 3-prong hadronically decaying tau leptons. It was found that in its current state it performs just as well as the existing methods. However it...

  7. Hypothesis testing and power calculations for taxonomic-based human microbiome data.

    Directory of Open Access Journals (Sweden)

    Patricio S La Rosa

    Full Text Available This paper presents new biostatistical methods for the analysis of microbiome data based on a fully parametric approach using all the data. The Dirichlet-multinomial distribution allows the analyst to calculate power and sample sizes for experimental design, perform tests of hypotheses (e.g., compare microbiomes across groups), and estimate parameters describing microbiome properties. The use of a fully parametric model for these data has the benefit over alternative non-parametric approaches, such as bootstrapping and permutation testing, that it retains more of the information contained in the data. This paper details the statistical approaches for several tests of hypothesis and power/sample size calculations, and applies them for illustration to taxonomic abundance distribution and rank abundance distribution data using HMP Jumpstart data on 24 subjects for saliva, subgingival, and supragingival samples. Software for running these analyses is available.
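
    A minimal sketch of the modelling idea, with invented concentration parameters: taxa counts per subject are drawn from a Dirichlet-multinomial, which is the basis for simulation-style power and sample-size calculations of the kind the paper formalizes.

```python
import numpy as np

# Dirichlet-multinomial sampling sketch: per-subject taxa proportions come
# from a Dirichlet, and read counts from a multinomial given those
# proportions. Parameter values are placeholders, not the paper's.

rng = np.random.default_rng(0)

def sample_dm(alpha, n_reads, n_subjects):
    """Draw taxa count vectors for n_subjects from a Dirichlet-multinomial."""
    probs = rng.dirichlet(alpha, size=n_subjects)
    return np.array([rng.multinomial(n_reads, p) for p in probs])

alpha_a = np.array([20.0, 10.0, 5.0, 1.0])   # group A concentrations (assumed)
alpha_b = np.array([15.0, 15.0, 5.0, 1.0])   # group B differs in two taxa
a = sample_dm(alpha_a, n_reads=1000, n_subjects=24)
b = sample_dm(alpha_b, n_reads=1000, n_subjects=24)
print(a.mean(axis=0) / 1000, b.mean(axis=0) / 1000)  # mean taxa proportions
```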

  8. Calculation and analysis of heat source of PWR assemblies based on Monte Carlo method

    International Nuclear Information System (INIS)

    When fission occurs in the nuclear fuel of a reactor core, numerous neutrons and γ rays are released; their energy deposition in the fuel components gives rise to effects such as thermal stress and radiation damage that influence the safe operation of the reactor. The three-dimensional Monte Carlo transport calculation program MCNP, together with a continuous cross-section database based on the ENDF/B series, was used to calculate the heat rate of the heat source in reference assemblies of a PWR loaded in an 18-month short refueling cycle mode, and to obtain precise values for the control rod, the thimble plug and the new burnable poison rod containing Gd, so as to provide a basis for reactor design and safety verification. (authors)

  9. GPU-based fast Monte Carlo simulation for radiotherapy dose calculation

    CERN Document Server

    Jia, Xun; Graves, Yan Jiang; Folkerts, Michael; Jiang, Steve B

    2011-01-01

    Monte Carlo (MC) simulation is commonly considered to be the most accurate dose calculation method in radiotherapy. However, its efficiency still requires improvement for many routine clinical applications. In this paper, we present our recent progress towards the development of a GPU-based MC dose calculation package, gDPM v2.0. It utilizes the parallel computation ability of a GPU to achieve high efficiency, while maintaining the same particle transport physics as in the original DPM code and hence the same level of simulation accuracy. In GPU computing, divergence of execution paths between threads can considerably reduce the efficiency. Since photons and electrons undergo different physics and hence attain different execution paths, we use a simulation scheme where photon transport and electron transport are separated to partially relieve the thread divergence issue. A high-performance random number generator and hardware linear interpolation are also utilized. We have also developed various components to hand...

  10. High accuracy modeling for advanced nuclear reactor core designs using Monte Carlo based coupled calculations

    Science.gov (United States)

    Espel, Federico Puente

    The main objective of this PhD research is to develop a high accuracy modeling tool using a Monte Carlo based coupled system. The presented research comprises the development of models to include the thermal-hydraulic feedback to the Monte Carlo method and speed-up mechanisms to accelerate the Monte Carlo criticality calculation. Presently, deterministic codes based on the diffusion approximation of the Boltzmann transport equation, coupled with channel-based (or sub-channel based) thermal-hydraulic codes, carry out the three-dimensional (3-D) reactor core calculations of the Light Water Reactors (LWRs). These deterministic codes utilize nuclear homogenized data (normally over large spatial zones, consisting of fuel assembly or parts of fuel assembly, and in the best case, over small spatial zones, consisting of pin cell), which is functionalized in terms of thermal-hydraulic feedback parameters (in the form of off-line pre-generated cross-section libraries). High accuracy modeling is required for advanced nuclear reactor core designs that present increased geometry complexity and material heterogeneity. Such high-fidelity methods take advantage of the recent progress in computation technology and coupled neutron transport solutions with thermal-hydraulic feedback models on pin or even on sub-pin level (in terms of spatial scale). The continuous energy Monte Carlo method is well suited for solving such core environments with the detailed representation of the complicated 3-D problem. The major advantages of the Monte Carlo method over the deterministic methods are the continuous energy treatment and the exact 3-D geometry modeling. However, the Monte Carlo method involves vast computational time. The interest in Monte Carlo methods has increased thanks to the improvements of the capabilities of high performance computers. Coupled Monte-Carlo calculations can serve as reference solutions for verifying high-fidelity coupled deterministic neutron transport methods

  11. Effects of CT based Voxel Phantoms on Dose Distribution Calculated with Monte Carlo Method

    Institute of Scientific and Technical Information of China (English)

    Chen Chaobin; Huang Qunying; Wu Yican

    2005-01-01

    A few CT-based voxel phantoms were produced to investigate the sensitivity of Monte Carlo simulations of X-ray beam and electron beam to the proportions of elements and the mass densities of the materials used to express the patient's anatomical structure. The human body can be well outlined by air, lung, adipose, muscle, soft bone and hard bone to calculate the dose distribution with Monte Carlo method. The effects of the calibration curves established by using various CT scanners are not clinically significant based on our investigation. The deviation from the values of cumulative dose volume histogram derived from CT-based voxel phantoms is less than 1% for the given target.
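
    The (HU; material, density) assignment such studies depend on can be sketched as below; the thresholds and the linear density ramp are illustrative guesses, not the authors' calibration curves.

```python
import numpy as np

# Sketch of a conventional HU -> (material, density) segmentation into the
# six media named in the abstract. Edges and the density ramp are assumed.

HU_EDGES = [-950, -700, -100, 100, 400]          # assumed material boundaries
MATERIALS = ["air", "lung", "adipose", "muscle", "soft bone", "hard bone"]

def segment(hu):
    idx = np.digitize(hu, HU_EDGES)
    density = 1.0 + hu / 1000.0                  # crude linear HU -> g/cm^3 ramp
    return [MATERIALS[i] for i in idx], np.clip(density, 0.001, None)

mats, rho = segment(np.array([-1000, -800, -50, 30, 250, 900]))
print(list(zip(mats, rho.round(3))))
```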

  12. Effects of CT based Voxel Phantoms on Dose Distribution Calculated with Monte Carlo Method

    Science.gov (United States)

    Chen, Chaobin; Huang, Qunying; Wu, Yican

    2005-04-01

    A few CT-based voxel phantoms were produced to investigate the sensitivity of Monte Carlo simulations of x-ray beam and electron beam to the proportions of elements and the mass densities of the materials used to express the patient's anatomical structure. The human body can be well outlined by air, lung, adipose, muscle, soft bone and hard bone to calculate the dose distribution with Monte Carlo method. The effects of the calibration curves established by using various CT scanners are not clinically significant based on our investigation. The deviation from the values of cumulative dose volume histogram derived from CT-based voxel phantoms is less than 1% for the given target.

  13. Validating activity prescription schemes in radionuclide therapy based on TCP and NTCP indexes calculation

    International Nuclear Information System (INIS)

    Full text: In this work a formulation for the evaluation and acceptance of activity prescription schemes (single or multiple administrations) in radionuclide therapy, based on the calculation of Tumor Control Probability (TCP) and Normal Tissue Complication Probability (NTCP), is presented. The Poisson model was used for the TCP calculation, and the Lyman-Kutcher-Burman (LKB) model for NTCP. All calculations for the biological evaluation of the activity prescription schemes are made from the absorbed dose, in mGy/MBq of injected activity, calculated from the gammagraphic images. The input data for the calculations are the activity (MBq) per administration, the number of administrations proposed, and the time interval between administrations (equally spaced). The TCP (Poisson model) calculation was made by determining the Biological Equivalent Dose (BED) using a formulation of the linear-quadratic (LQ) model in which cell repair and proliferation during irradiation at low dose rate (LDR) were considered [2]. Similarly, the NTCP (LKB model) calculation was also done from the BED determination, but the values calculated for LDR were converted to the 2 Gy-equivalent dose at high dose rate, in order to use the tabulated tolerance values [8] and because this quantity is more understandable for physicians. Kidneys, bone marrow and whole body were considered as critical organs; proliferation was considered only for bone marrow during the BED calculations. The reported LDR BED model was extended to a multi-exponential dose-rate function with any number of terms. A formulation for multiple, equally time-spaced administrations, in which cumulative dose effects are included, is also tested. The dose distribution was considered homogeneous in the tumor volume, keeping in mind that dose distribution parameters such as the equivalent uniform dose (EUD) could be used to describe irradiation effects for the non-homogeneous dose distributions found in real clinical applications of radionuclide therapy
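
    A hedged sketch of two ingredients named above, with placeholder radiobiological parameters: a Dale-type LQ BED for a mono-exponentially decaying low dose rate (as in a permanent implant), fed into a Poisson TCP.

```python
import numpy as np

def bed_permanent_implant(R0, lam, mu, alpha_beta):
    """Dale-type BED for a permanent implant delivering total dose D = R0/lam:
    BED = D * (1 + (lam/(mu+lam)) * D/(alpha/beta))."""
    D = R0 / lam
    return D * (1.0 + (lam / (mu + lam)) * D / alpha_beta)

def tcp_poisson(n_clonogens, alpha, bed):
    """Poisson TCP with LQ cell kill expressed through the BED."""
    return np.exp(-n_clonogens * np.exp(-alpha * bed))

lam = np.log(2) / (59.4 * 24.0)   # I-125 decay constant, 1/h (assumed nuclide)
mu = np.log(2) / 1.5              # sublethal damage repair rate, 1/h (assumed)
bed = bed_permanent_implant(R0=0.07, lam=lam, mu=mu, alpha_beta=10.0)
print(f"BED = {bed:.1f} Gy, TCP = {tcp_poisson(1e7, 0.15, bed):.3f}")
```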

  14. Fast CPU-based Monte Carlo simulation for radiotherapy dose calculation

    Science.gov (United States)

    Ziegenhein, Peter; Pirner, Sven; Kamerling, Cornelis Ph; Oelfke, Uwe

    2015-08-01

    Monte-Carlo (MC) simulations are considered to be the most accurate method for calculating dose distributions in radiotherapy. Their clinical application, however, is still limited by the long runtimes that conventional implementations of MC algorithms require to deliver sufficiently accurate results on high-resolution imaging data. In order to overcome this obstacle we developed the software package PhiMC, which is capable of computing precise dose distributions in a sub-minute time frame by leveraging the potential of modern many- and multi-core CPU-based computers. PhiMC is based on the well-verified dose planning method (DPM). We could demonstrate that PhiMC delivers dose distributions which are in excellent agreement with DPM. The multi-core implementation of PhiMC scales well between different computer architectures and achieves a speed-up of up to 37× compared to the original DPM code executed on a modern system. Furthermore, we could show that our CPU-based implementation on a modern workstation is between 1.25× and 1.95× faster than a well-known GPU implementation of the same simulation method on an NVIDIA Tesla C2050. Since CPUs can work on several hundred GB of RAM, the typical GPU memory limitation does not apply to our implementation, and high-resolution clinical plans can be calculated.

  15. GPU-based Monte Carlo radiotherapy dose calculation using phase-space sources

    CERN Document Server

    Townson, Reid; Tian, Zhen; Graves, Yan Jiang; Zavgorodni, Sergei; Jiang, Steve B

    2013-01-01

    A novel phase-space source implementation has been designed for GPU-based Monte Carlo dose calculation engines. Due to the parallelized nature of GPU hardware, it is essential to simultaneously transport particles of the same type and similar energies but separated spatially to yield a high efficiency. We present three methods for phase-space implementation that have been integrated into the most recent version of the GPU-based Monte Carlo radiotherapy dose calculation package gDPM v3.0. The first method is to sequentially read particles from a patient-dependent phase-space and sort them on-the-fly based on particle type and energy. The second method supplements this with a simple secondary collimator model and fluence map implementation so that patient-independent phase-space sources can be used. Finally, as the third method (called the phase-space-let, or PSL, method) we introduce a novel strategy to pre-process patient-independent phase-spaces and bin particles by type, energy and position. Position bins l...

  16. Rapid calculation of RMSDs using a quaternion-based characteristic polynomial.

    Science.gov (United States)

    Theobald, Douglas L

    2005-07-01

    A common measure of conformational similarity in structural bioinformatics is the minimum root mean square deviation (RMSD) between the coordinates of two macromolecules. In many applications, the rotations relating the structures are not needed. Several common algorithms for calculating RMSDs require the computationally costly procedures of determining either the eigen decomposition or matrix inversion of a 3x3 or 4x4 matrix. Using a quaternion-based method, here a simple algorithm is developed that rapidly and stably determines RMSDs by circumventing the decomposition and inversion problems. PMID:15973002
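
    For orientation, here is a sketch of the quaternion formulation; for clarity it takes the largest eigenvalue of the 4 × 4 key matrix with numpy's eigensolver, whereas Theobald's contribution is to obtain that eigenvalue from the characteristic polynomial by Newton iteration, avoiding the full eigendecomposition.

```python
import numpy as np

# Quaternion-based minimum RMSD: the largest eigenvalue of the 4x4 key
# matrix built from the 3x3 correlation matrix gives the optimal overlap.

def quaternion_rmsd(A, B):
    """Minimum RMSD between Nx3 coordinate sets A and B."""
    A = A - A.mean(axis=0)
    B = B - B.mean(axis=0)
    GA, GB = (A * A).sum(), (B * B).sum()      # inner products of the two sets
    R = A.T @ B                                 # 3x3 correlation matrix
    K = np.array([
        [R[0,0]+R[1,1]+R[2,2], R[1,2]-R[2,1],  R[2,0]-R[0,2],        R[0,1]-R[1,0]],
        [R[1,2]-R[2,1], R[0,0]-R[1,1]-R[2,2],  R[0,1]+R[1,0],        R[2,0]+R[0,2]],
        [R[2,0]-R[0,2], R[0,1]+R[1,0],        -R[0,0]+R[1,1]-R[2,2], R[1,2]+R[2,1]],
        [R[0,1]-R[1,0], R[2,0]+R[0,2],         R[1,2]+R[2,1],       -R[0,0]-R[1,1]+R[2,2]],
    ])
    lam_max = np.linalg.eigvalsh(K)[-1]         # largest eigenvalue
    msd = max((GA + GB - 2.0 * lam_max) / len(A), 0.0)
    return np.sqrt(msd)

rng = np.random.default_rng(0)
A = rng.normal(size=(50, 3))
theta = 0.3
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0], [0.0, 0.0, 1.0]])
print(quaternion_rmsd(A, A @ Rz.T))             # ~0 for a pure rotation
```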

  17. Interest of thermochemical data bases linked to complex equilibria calculation codes for practical applications

    International Nuclear Information System (INIS)

    Since 1974, Thermodata has been working on developing an Integrated Information System in Inorganic Chemistry. A major effort was carried out on the thermochemical data assessment of both pure substances and multicomponent solution phases. The available data bases are connected to powerful calculation codes (GEMINI = Gibbs Energy Minimizer), which allow the thermodynamic equilibrium state in multicomponent systems to be determined. The high interest of such an approach is illustrated by recent applications in fields as various as semiconductors, chemical vapor deposition, hard alloys and nuclear safety. (author). 26 refs., 6 figs

  18. The Calculation Model for Operation Cost of Coal Resources Development Based on ANN

    Institute of Scientific and Technical Information of China (English)

    刘海滨

    2004-01-01

    On the basis of an analysis and selection of the factors influencing the operation cost of coal resources development, the fuzzy set method and an artificial neural network (ANN) were adopted to set up a classification analysis model of coal resources. The collected samples were classified using this model. Meanwhile, a pattern recognition model for classifying the coal resources was built according to the factors influencing operation cost. Based on the results achieved above, and in the light of the theory of information diffusion, a calculation model for the operation cost of coal resources development is presented and applied in practice, showing that these models are reasonable.

  19. CRX: a transport theory code for cell and assembly calculations based on characteristic method

    International Nuclear Information System (INIS)

    A transport theory code CRX based on characteristic method with a general geometric tracking routine for rectangular and hexagonal geometrical problems is developed and tested for heterogeneous cell and assembly calculations. Since the characteristic method treats explicitly (analytically) the streaming portion of the transport equation, CRX treats strong absorbers well and has no practical limitations placed on the geometry of the problem. To test the code, it was applied to three benchmark problems which consist of complex meshes and compared with other codes. (author)

  20. Calculations of internal and external radiation exposure based on voxel models. Final report

    International Nuclear Information System (INIS)

    Dose estimations of internal and external radiation exposure were so far based on mathematical phantoms with rather simple geometrical descriptions of the human body and the organs. Recently the mathematical phantoms have been replaced by more realistic voxel models that allow a more realistic dose estimation for professionally exposed personnel, individuals and patients. The project aims to calculate organ doses for exposure to environmental radiation, organ doses for patients during computed tomography, and to develop a voxel model of a pregnant woman (24th week of pregnancy) for the estimation of radiation doses to the unborn child.

  1. Excel pour l'ingénieur bases, graphiques, calculs, macros, VBA

    CERN Document Server

    Bellan, Philippe

    2010-01-01

    Excel, used by every owner of a personal computer to perform elementary manipulations of tables and figures, is in fact a much more powerful tool with often unsuspected potential. To all those, whether science students, engineering students or practising engineers, who believed numerical computation to be possible only through heavy and costly software, this book will show that a great many of the engineer's everyday mathematical problems can be solved numerically using the computing tools and graphical capabilities of Excel. To this end, after introducing the basic notions, the book describes the functions available in Excel, then a few simple numerical methods for computing integrals, solving differential equations, obtaining the solutions of linear and nonlinear systems, and treating optimization problems... The numerical methods presented, which are very simple, can...

  2. GPU-based ultra fast dose calculation using a finite pencil beam model

    CERN Document Server

    Gu, Xuejun; Men, Chunhua; Pan, Hubert; Majumdar, Amitava; Jiang, Steve B

    2009-01-01

    Online adaptive radiation therapy (ART) is an attractive concept that promises the ability to deliver an optimal treatment in response to the inter-fraction variability in patient anatomy. However, it has yet to be realized due to technical limitations. Fast dose deposit coefficient calculation is a critical component of the online planning process that is required for plan optimization of intensity modulated radiation therapy (IMRT). Computer graphics processing units (GPUs) are well-suited to provide the requisite fast performance for the data-parallel nature of dose calculation. In this work, we develop a dose calculation engine based on a finite-size pencil beam (FSPB) algorithm and a GPU parallel computing framework. The developed framework can accommodate any FSPB model. We test our implementation on a case of a water phantom and a case of a prostate cancer patient with varying beamlet and voxel sizes. All testing scenarios achieved speedups ranging from 200 to 400 times when using an NVIDIA Tesla C1060 card...
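
    The data-parallel structure that makes FSPB algorithms GPU-friendly is easy to see in a toy sketch: dose is a weighted superposition of per-beamlet kernels (the Gaussian kernel and geometry below are invented for illustration, not the paper's FSPB model).

```python
import numpy as np

# Toy finite-size pencil beam superposition: dose at each voxel is the
# weighted sum of per-beamlet kernel values, an embarrassingly parallel sum.

def fspb_dose(voxel_xy, beamlet_xy, weights, sigma=0.4):
    """Dose = sum_b w_b * K(|voxel - beamlet_b|) with a toy Gaussian kernel."""
    d2 = ((voxel_xy[:, None, :] - beamlet_xy[None, :, :]) ** 2).sum(-1)
    kernel = np.exp(-d2 / (2.0 * sigma ** 2))
    return kernel @ weights

voxels = np.stack(np.meshgrid(np.linspace(-2, 2, 5), np.linspace(-2, 2, 5)),
                  -1).reshape(-1, 2)
beamlets = np.array([[0.0, 0.0], [0.5, 0.0], [0.0, 0.5]])
print(fspb_dose(voxels, beamlets, weights=np.array([1.0, 0.8, 0.6])).round(3))
```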

  3. a Novel Sub-Pixel Matching Algorithm Based on Phase Correlation Using Peak Calculation

    Science.gov (United States)

    Xie, Junfeng; Mo, Fan; Yang, Chao; Li, Pin; Tian, Shiqiang

    2016-06-01

    The matching accuracy of homonymy points in stereo images is a key issue in photogrammetry, since it influences the geometrical accuracy of the image products. This paper presents a novel sub-pixel matching method, phase correlation using peak calculation, to improve the matching accuracy. The theoretical peak centre, which corresponds to the sub-pixel deviation, is acquired by Peak Calculation (PC) from an inherent geometrical relationship in the inverse normalized cross-power spectrum. Mismatched points are rejected by two strategies: a window constraint, designed from the matching window and a geometric constraint, and a correlation-coefficient test, which is effective for removing mismatched points in satellite images. After these steps, a large number of high-precision homonymy points remain. Finally, three experiments are carried out to verify the accuracy and efficiency of the presented method. The results show that the presented method outperforms traditional phase-correlation matching based on surface fitting in both accuracy and efficiency, and its accuracy can reach 0.1 pixel with higher calculation efficiency.
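
    A sketch of the integer-pixel phase-correlation core is shown below; the paper's contribution, recovering the sub-pixel peak centre from the geometry of the inverse normalized cross-power spectrum, would refine the shift this returns.

```python
import numpy as np

# Integer-pixel phase correlation: the inverse of the normalized cross-power
# spectrum peaks at the translation between the two images.

def phase_correlation_shift(img_a, img_b, eps=1e-12):
    """Estimate the translation between two same-size images."""
    Fa, Fb = np.fft.fft2(img_a), np.fft.fft2(img_b)
    cross = Fa * np.conj(Fb)
    corr = np.fft.ifft2(cross / (np.abs(cross) + eps)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Map wrapped indices to signed shifts.
    if dy > img_a.shape[0] // 2:
        dy -= img_a.shape[0]
    if dx > img_a.shape[1] // 2:
        dx -= img_a.shape[1]
    return dy, dx

rng = np.random.default_rng(1)
a = rng.random((64, 64))
b = np.roll(a, shift=(3, -5), axis=(0, 1))
print(phase_correlation_shift(a, b))   # expected (3, -5)
```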

  4. A CNS calculation line based on a Monte-Carlo method

    International Nuclear Information System (INIS)

    The neutronic design of the moderator cell of a Cold Neutron Source (CNS) involves many different considerations regarding geometry, location, and materials. The decisions taken in this sense affect not only the neutron flux in the source neighbourhood, which can be evaluated by a standard deterministic method, but also the neutron flux values in experimental positions far away from the neutron source. At long distances from the CNS, very time consuming 3D deterministic methods or Monte Carlo transport methods are necessary in order to get accurate figures of standard and typical magnitudes such as average neutron flux, neutron current, angular flux, and luminosity. The Monte Carlo method is a unique and powerful tool to calculate the transport of neutrons and photons. Its use in a bootstrap scheme appears to be an appropriate solution for this type of system. The use of MCNP as the main neutronic design tool leads to a fast and reliable method to perform calculations in a relatively short time with low statistical errors, if the proper scheme is applied. The design goal is to evaluate the performance of the CNS, its beam tubes and neutron guides, at specific experimental locations in the reactor hall and in the neutron or experimental hall. In this work, the calculation methodology used to design a CNS and its associated Neutron Beam Transport Systems (NBTS), based on the use of the MCNP code, is presented. (author)

  5. Adaptation of GEANT4 to Monte Carlo dose calculations based on CT data

    International Nuclear Information System (INIS)

    The GEANT4 Monte Carlo code provides many powerful functions for conducting particle transport simulations with great reliability and flexibility. However, as a general purpose Monte Carlo code, not all the functions were specifically designed and fully optimized for applications in radiation therapy. One of the primary issues is the computational efficiency, which is especially critical when patient CT data have to be imported into the simulation model. In this paper we summarize the relevant aspects of the GEANT4 tracking and geometry algorithms and introduce our work on using the code to conduct dose calculations based on CT data. The emphasis is focused on modifications of the GEANT4 source code to meet the requirements for fast dose calculations. The major features include a quick voxel search algorithm, fast volume optimization, and the dynamic assignment of material density. These features are ready to be used for tracking the primary types of particles employed in radiation therapy such as photons, electrons, and heavy charged particles. Re-calculation of a proton therapy treatment plan generated by a commercial treatment planning program for a paranasal sinus case is presented as an example

  6. Physiologically based pharmacokinetic modeling of inhaled radon to calculate absorbed doses in mice, rats, and humans

    International Nuclear Information System (INIS)

    This is the first report to provide radiation doses arising from inhalation of radon itself in mice and rats. To quantify the absorbed doses to organs and tissues in mice, rats, and humans, we computed the behavior of inhaled radon in their bodies on the basis of a physiologically based pharmacokinetic (PBPK) model. It was assumed that radon dissolved in blood entering the gas exchange compartment is transported to each tissue by the blood circulation and instantaneously distributed according to a tissue/blood partition coefficient. The calculated concentrations of radon in the adipose tissue and red bone marrow following its inhalation were much higher than those in the other tissues, because of the higher partition coefficients. Comparison with previous experimental data for rats and a previous model calculation for humans showed the present calculation to be valid. Absorbed dose rates to organs and tissues were estimated to be within the range of 0.04-1.4 nGy (Bq m-3)-1 day-1 for all the species. Although the dose rates are not very high, attention may need to be paid to the dose to the red bone marrow from the perspective of radiation protection. For more accurate dose assessment, it is necessary to update the tissue/blood partition coefficients of radon, which strongly govern the result of the PBPK modeling. (author)
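
    The PBPK structure can be sketched with a toy two-tissue model; the flows, volumes and partition coefficients below are invented placeholders, not the paper's parameter set.

```python
import numpy as np

# Toy two-tissue PBPK sketch: each tissue's radon concentration relaxes
# toward P_i * (arterial blood concentration) with rate Q_i / (V_i * P_i).

def simulate(c_art, Q, V, P, t_end=10000.0, dt=0.1):
    """Euler integration of dC_i/dt = (Q_i / V_i) * (c_art - C_i / P_i)."""
    C = np.zeros_like(Q)
    for _ in range(int(t_end / dt)):
        C += dt * (Q / V) * (c_art - C / P)
    return C

Q = np.array([0.05, 0.20])   # blood flows, L/min: adipose, red marrow (assumed)
V = np.array([10.0, 1.5])    # tissue volumes, L (assumed)
P = np.array([11.0, 8.0])    # tissue/blood partition coefficients (assumed)
print(simulate(c_art=100.0, Q=Q, V=V, P=P))   # approaches P * c_art at steady state
```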

  7. Calculation of temperature distribution in adiabatic shear band based on gradient-dependent plasticity

    Institute of Scientific and Technical Information of China (English)

    王学滨

    2004-01-01

    A method for calculating the temperature distribution in an adiabatic shear band is proposed in terms of gradient-dependent plasticity, where the characteristic length describes the interactions and interplay among microstructures. First, the increment of the plastic shear strain distribution in the adiabatic shear band is obtained based on gradient-dependent plasticity. Then, the plastic work distribution is derived from the current flow shear stress and the obtained increment of the plastic shear strain distribution. In the light of the well-known assumption that 90% of the plastic work is converted into heat, raising the temperature in the adiabatic shear band, the increment of the temperature distribution is presented. Next, the average temperature increment in the shear band is calculated to compute the change in flow shear stress due to the thermal softening effect. After the actual flow shear stress accounting for thermal softening is obtained from the Johnson-Cook constitutive relation, the increment of the plastic shear strain distribution, the plastic work and the temperature in the next time step are recalculated, until the total time is consumed. Summing the temperature increments yields the total temperature distribution. The calculated maximum temperature in an adiabatic shear band in titanium agrees with experimental observations. Moreover, the temperature profiles for different flow shear stresses are qualitatively consistent with experimental and numerical results. The effects of some related parameters on the temperature distribution are also predicted.
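
    A pseudo-sketch of this incremental scheme (material constants are placeholders, not the paper's titanium data): each strain increment converts 90% of the plastic work to heat, and the flow stress is then softened by a Johnson-Cook-type thermal factor before the next increment.

```python
TAYLOR_QUINNEY = 0.9           # fraction of plastic work converted to heat

def flow_stress(tau0, T, T_ref, T_melt, m):
    """Johnson-Cook-type thermal softening of a reference flow stress."""
    theta = max((T - T_ref) / (T_melt - T_ref), 0.0)
    return tau0 * (1.0 - theta ** m)

tau0, T = 500e6, 300.0         # reference flow stress (Pa), temperature (K)
rho, c_p = 4500.0, 520.0       # density (kg/m^3), specific heat (J/kg/K)
for _ in range(100):           # 100 strain increments of d(gamma_p) = 0.01
    tau = flow_stress(tau0, T, T_ref=300.0, T_melt=1940.0, m=1.0)
    T += TAYLOR_QUINNEY * tau * 0.01 / (rho * c_p)   # adiabatic heating
print(f"T = {T:.0f} K, softened flow stress = "
      f"{flow_stress(tau0, T, 300.0, 1940.0, 1.0) / 1e6:.0f} MPa")
```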

  8. Analysis of Distortional Effects of Taxation on Financial and Investment Decision Based on the Methodology of Effective Tax Rates Calculation

    OpenAIRE

    Jaroslava Holečková

    2012-01-01

    The objective of this paper is to examine the use of effective tax rates on different types of capital assets and sources of financing, and to assess, on the basis of tax wedge calculations, the degree to which taxation affects the incentive to undertake investment. The methodology used to calculate effective tax rates on investments is based on the approach developed by King and Fullerton (1984), which has become the most widely accepted method for calculating effect...
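
    The central quantity can be sketched in a few lines; the rates below are invented, but the definitions follow the King-Fullerton convention of a tax wedge p - s between the pre-tax return on a marginal investment and the post-tax return to the saver.

```python
# King-Fullerton-style effective marginal tax rate from a tax wedge.
# All rate values are illustrative assumptions.

def effective_tax_rate(p, s):
    """EMTR = (p - s) / p, with wedge p - s."""
    return (p - s) / p

p = 0.10   # pre-tax real return required on a marginal investment (assumed)
s = 0.06   # post-tax real return received by the saver (assumed)
print(f"wedge = {p - s:.3f}, EMTR = {effective_tax_rate(p, s):.1%}")
```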

  9. Chasing polys: Interdisciplinary affinity and its connection to physics identity

    Science.gov (United States)

    Scott, Tyler D.

    This research is based on two motivations that merge by means of the frameworks of interdisciplinary affinity and physics identity. First, a goal of education is to develop interdisciplinary abilities in students' thinking and work. But an often ignored factor is students' interests and beliefs about being interdisciplinary. Thus, this work develops and uses a framework called interdisciplinary affinity. It encompasses students' interests in making connections across disciplines and their beliefs about their abilities to make those connections. The second motivation of this research is to better understand how to engage more students with physics. Physics identity describes how a student sees themselves in relation to physics. By understanding how physics identity is developed, researchers and educators can identify factors that increase interest and engagement in physics classrooms. Therefore, physics identity was used in conjunction with interdisciplinary affinity. Using a mixed methods approach, this research used quantitative data to identify the relationships interdisciplinary affinity has with physics identity and the physics classroom. These connections were explored in more detail using a case study of three students in a high school physics class. Results showed significant and positive relationships between interdisciplinary affinity and physics identity, including the individual interest and recognition components of identity. It also identified characteristics of physics classrooms that had a significant, positive relationship with interdisciplinary affinity. The qualitative case study highlighted the importance of student interest to the relationship between interdisciplinary affinity and physics identity. It also identified interest and mastery orientation as key to understanding the link between interdisciplinary affinity and the physics classroom. These results are a positive sign that by understanding interdisciplinary affinity and physics identity

  10. Dual-energy CT-based material extraction for tissue segmentation in Monte Carlo dose calculations

    Science.gov (United States)

    Bazalova, Magdalena; Carrier, Jean-François; Beaulieu, Luc; Verhaegen, Frank

    2008-05-01

    Monte Carlo (MC) dose calculations are performed on patient geometries derived from computed tomography (CT) images. For most available MC codes, the Hounsfield units (HU) in each voxel of a CT image have to be converted into mass density (ρ) and material type. This is typically done with a (HU; ρ) calibration curve which may lead to mis-assignment of media. In this work, an improved material segmentation using dual-energy CT-based material extraction is presented. For this purpose, the differences in extracted effective atomic numbers Z and the relative electron densities ρe of each voxel are used. Dual-energy CT material extraction based on parametrization of the linear attenuation coefficient for 17 tissue-equivalent inserts inside a solid water phantom was done. Scans of the phantom were acquired at 100 kVp and 140 kVp from which Z and ρe values of each insert were derived. The mean errors on Z and ρe extraction were 2.8% and 1.8%, respectively. Phantom dose calculations were performed for 250 kVp and 18 MV photon beams and an 18 MeV electron beam in the EGSnrc/DOSXYZnrc code. Two material assignments were used: the conventional (HU; ρ) and the novel (HU; ρ, Z) dual-energy CT tissue segmentation. The dose calculation errors using the conventional tissue segmentation were as high as 17% in a mis-assigned soft bone tissue-equivalent material for the 250 kVp photon beam. Similarly, the errors for the 18 MeV electron beam and the 18 MV photon beam were up to 6% and 3% in some mis-assigned media. The assignment of all tissue-equivalent inserts was accurate using the novel dual-energy CT material assignment. As a result, the dose calculation errors were below 1% in all beam arrangements. Comparable improvement in dose calculation accuracy is expected for human tissues. The dual-energy tissue segmentation offers a significantly higher accuracy compared to the conventional single-energy segmentation.
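
    A hedged sketch of the extraction step, with invented calibration coefficients standing in for the paper's parametrization of the linear attenuation coefficient: if mu(E) = rho_e (a_E + b_E Z^n), the ratio of the two scans depends on Z alone, so Z is solved first and rho_e follows by substitution.

```python
from scipy.optimize import brentq

# Dual-energy material extraction sketch. The (a_E, b_E) pairs and the
# exponent 3.62 are assumed stand-ins for a calibrated parametrization.

COEF = {100: (0.18, 2.0e-4), 140: (0.16, 0.9e-4)}   # assumed (a_E, b_E)

def f(E, Z):
    a, b = COEF[E]
    return a + b * Z ** 3.62

def extract(mu100, mu140):
    """Solve mu100/mu140 = f(100, Z)/f(140, Z) for Z, then get rho_e."""
    g = lambda Z: mu100 / mu140 - f(100, Z) / f(140, Z)
    Z = brentq(g, 1.0, 40.0)          # effective atomic number
    rho_e = mu100 / f(100, Z)         # relative electron density
    return Z, rho_e

print(extract(mu100=0.25, mu140=0.21))
```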

  11. A MEMS Dielectric Affinity Glucose Biosensor.

    Science.gov (United States)

    Huang, Xian; Li, Siqi; Davis, Erin; Li, Dachao; Wang, Qian; Lin, Qiao

    2013-06-20

    Continuous glucose monitoring (CGM) sensors based on affinity detection are desirable for long-term and stable glucose management. However, most affinity sensors contain mechanical moving structures and complex designs for sensor actuation and signal readout, limiting their reliability in subcutaneously implantable glucose detection. We have previously demonstrated a proof-of-concept dielectric glucose sensor that measured pre-mixed glucose-sensitive polymer solutions at various glucose concentrations. That sensor features simplicity in its design, and possesses high specificity and accuracy in glucose detection. However, lacking a glucose diffusion passage, it is unable to perform real-time in-vivo monitoring. As a major improvement to this device, we present in this paper a fully implantable MEMS dielectric affinity glucose biosensor that contains a perforated electrode embedded in a suspended diaphragm. This capacitive-based sensor contains no moving parts, and enables glucose diffusion and real-time monitoring. The experimental results indicate that this sensor can detect glucose solutions at physiological concentrations and possesses good reversibility and reliability. The sensor has a time constant for glucose concentration changes of approximately 3 min, which is comparable to commercial systems. The sensor has potential applications in fully implantable CGM systems that require excellent long-term stability and reliability. PMID:24511215

  12. GPU-based fast Monte Carlo dose calculation for proton therapy

    Science.gov (United States)

    Jia, Xun; Schümann, Jan; Paganetti, Harald; Jiang, Steve B.

    2012-12-01

    Accurate radiation dose calculation is essential for successful proton radiotherapy. Monte Carlo (MC) simulation is considered to be the most accurate method. However, the long computation time limits it from routine clinical applications. Recently, graphics processing units (GPUs) have been widely used to accelerate computationally intensive tasks in radiotherapy. We have developed a fast MC dose calculation package, gPMC, for proton dose calculation on a GPU. In gPMC, proton transport is modeled by the class II condensed history simulation scheme with a continuous slowing down approximation. Ionization, elastic and inelastic proton nucleus interactions are considered. Energy straggling and multiple scattering are modeled. Secondary electrons are not transported and their energies are locally deposited. After an inelastic nuclear interaction event, a variety of products are generated using an empirical model. Among them, charged nuclear fragments are terminated with energy locally deposited. Secondary protons are stored in a stack and transported after finishing transport of the primary protons, while secondary neutral particles are neglected. gPMC is implemented on the GPU under the CUDA platform. We have validated gPMC using the TOPAS/Geant4 MC code as the gold standard. For various cases including homogeneous and inhomogeneous phantoms as well as a patient case, good agreements between gPMC and TOPAS/Geant4 are observed. The gamma passing rate for the 2%/2 mm criterion is over 98.7% in the region with dose greater than 10% maximum dose in all cases, excluding low-density air regions. With gPMC it takes only 6-22 s to simulate 10 million source protons to achieve ˜1% relative statistical uncertainty, depending on the phantoms and energy. This is an extremely high efficiency compared to the computational time of tens of CPU hours for TOPAS/Geant4. Our fast GPU-based code can thus facilitate the routine use of MC dose calculation in proton therapy.

  13. A brief comparison between grid based real space algorithms and spectrum algorithms for electronic structure calculations

    International Nuclear Information System (INIS)

    Quantum mechanical ab initio calculation constitutes the biggest portion of the computer time in materials science and chemical science simulations. For a computer center like NERSC, to better serve these communities, it is very useful to have a prediction of the future trends of ab initio calculations in these areas. Such a prediction can help decide what future computer architecture will be most useful for these communities, and what should be emphasized in future supercomputer procurement. As the size of the computers and of the simulated physical systems increases, there is a renewed interest in using real-space grid methods in electronic structure calculations. This is fueled by two factors. First, it is generally assumed that the real-space grid method is more suitable for parallel computation because of its limited communication requirement, compared with spectrum methods where a global FFT is required. Second, as the size N of the calculated system increases together with the computer power, O(N)-scaling approaches become more favorable than the traditional direct O(N³)-scaling methods. These O(N) methods are usually based on orbitals localized in real space, which can be described more naturally by a real-space basis. In this report, the author compares the real-space methods with the traditional plane wave (PW) spectrum methods, discussing their technical pros and cons and possible future trends. For the real-space methods, the author focuses on the regular-grid finite difference (FD) method and the finite element (FE) method. These are the methods used mostly in materials science simulation. As for chemical science, the predominant methods are still Gaussian basis methods, and sometimes the atomic orbital basis method. These two basis sets are localized in real space, and there is no indication that their roles in quantum chemical simulation will change anytime soon. The author focuses on density functional theory (DFT), which is the most

  14. Evaluation of on-board kV cone beam CT (CBCT)-based dose calculation

    Science.gov (United States)

    Yang, Yong; Schreibmann, Eduard; Li, Tianfang; Wang, Chuang; Xing, Lei

    2007-02-01

    On-board CBCT images are used to generate patient geometric models to assist patient setup. The image data can also, potentially, be used for dose reconstruction in combination with the fluence maps from treatment plan. Here we evaluate the achievable accuracy in using a kV CBCT for dose calculation. Relative electron density as a function of HU was obtained for both planning CT (pCT) and CBCT using a Catphan-600 calibration phantom. The CBCT calibration stability was monitored weekly for 8 consecutive weeks. A clinical treatment planning system was employed for pCT- and CBCT-based dose calculations and subsequent comparisons. Phantom and patient studies were carried out. In the former study, both Catphan-600 and pelvic phantoms were employed to evaluate the dosimetric performance of the full-fan and half-fan scanning modes. To evaluate the dosimetric influence of motion artefacts commonly seen in CBCT images, the Catphan-600 phantom was scanned with and without cyclic motion using the pCT and CBCT scanners. The doses computed based on the four sets of CT images (pCT and CBCT with/without motion) were compared quantitatively. The patient studies included a lung case and three prostate cases. The lung case was employed to further assess the adverse effect of intra-scan organ motion. Unlike the phantom study, the pCT of a patient is generally acquired at the time of simulation and the anatomy may be different from that of CBCT acquired at the time of treatment delivery because of organ deformation. To tackle the problem, we introduced a set of modified CBCT images (mCBCT) for each patient, which possesses the geometric information of the CBCT but the electronic density distribution mapped from the pCT with the help of a BSpline deformable image registration software. In the patient study, the dose computed with the mCBCT was used as a surrogate of the 'ground truth'. We found that the CBCT electron density calibration curve differs moderately from that of pCT. No

  15. A robust force field based method for calculating conformational energies of charged drug-like molecules

    DEFF Research Database (Denmark)

    Pøhlsgaard, Jacob; Harpsøe, Kasper; Jørgensen, Flemming Steen;

    2012-01-01

    The binding affinity of a drug-like molecule depends, among other things, on the availability of the bioactive conformation. If the bioactive conformation has a significantly higher energy than the global minimum energy conformation, the molecule is unlikely to bind to its target. Determination of ...... zwitterionic compounds generated by conformational analysis with modified electrostatics are good approximations of the conformational distributions predicted by experimental data and by simulated annealing performed in explicit solvent.

  16. SDT: a virus classification tool based on pairwise sequence alignment and identity calculation.

    Directory of Open Access Journals (Sweden)

    Brejnev Muhizi Muhire

    Full Text Available The perpetually increasing rate at which viral full-genome sequences are being determined is creating a pressing demand for computational tools that will aid the objective classification of these genome sequences. Taxonomic classification approaches that are based on pairwise genetic identity measures are potentially highly automatable and are progressively gaining favour with the International Committee on Taxonomy of Viruses (ICTV). There are, however, various issues with the calculation of such measures that could potentially undermine the accuracy and consistency with which they can be applied to virus classification. Firstly, pairwise sequence identities computed based on multiple sequence alignments rather than on multiple independent pairwise alignments can lead to the deflation of identity scores with increasing dataset sizes. Also, when gap characters need to be introduced during sequence alignments to account for insertions and deletions, methodological variations in the way that these characters are introduced and handled during pairwise genetic identity calculations can cause high degrees of inconsistency in the way that different methods classify the same sets of sequences. Here we present the Sequence Demarcation Tool (SDT), a free user-friendly computer program that aims to provide a robust and highly reproducible means of objectively using pairwise genetic identity calculations to classify any set of nucleotide or amino acid sequences. SDT can produce publication-quality pairwise identity plots and colour-coded distance matrices to further aid the classification of sequences according to ICTV-approved taxonomic demarcation criteria. Besides a graphical interface version of the program for Windows computers, command-line versions of the program are available for a variety of different operating systems (including a parallel version for cluster computing platforms).
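
    The identity measure at the heart of such tools can be sketched in a few lines; note the explicit gap-handling choice (gap-vs-gap columns skipped, other gapped columns counted in the denominator), since exactly these choices drive the inconsistencies the abstract describes.

```python
# Pairwise identity for one already-aligned sequence pair:
# identity = matching columns / compared columns.

def pairwise_identity(aln_a, aln_b):
    assert len(aln_a) == len(aln_b)
    matches = compared = 0
    for x, y in zip(aln_a.upper(), aln_b.upper()):
        if x == "-" and y == "-":
            continue                       # gap-vs-gap columns are skipped
        compared += 1
        matches += (x == y)
    return matches / compared

print(pairwise_identity("ATGC-TTA", "ATGAATT-"))   # 5 matches / 8 compared
```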

  17. Assessment of Calculation Procedures for Piles in Clay Based on Static Loading Tests

    DEFF Research Database (Denmark)

    Augustesen, Anders; Andersen, Lars

    2008-01-01

    Numerous methods are available for the prediction of the axial capacity of piles in clay. In this paper, two well-known models are considered, namely the current API-RP2A (1987 to present) and the recently developed ICP method. The latter is developed by Jardine and his co-workers at Imperial College in London. The calculation procedures are assessed based on an established database of static loading tests. To make a consistent evaluation of the design methods, corrections related to undrained shear strength and time between pile driving and testing have been employed. The study indicates that the interpretation of the field tests is of paramount importance, both with regard to the soil profile and the loading conditions. Based on analyses of 253 static pile loading tests distributed on 111 sites, API-RP2A provides the better description of the data. However, it should be emphasised that...

  18. Implementation of a Web-Based Spatial Carbon Calculator for Latin America and the Caribbean

    Science.gov (United States)

    Degagne, R. S.; Bachelet, D. M.; Grossman, D.; Lundin, M.; Ward, B. C.

    2013-12-01

    A multi-disciplinary team from the Conservation Biology Institute is creating a web-based tool for the InterAmerican Development Bank (IDB) to assess the impact of potential development projects on carbon stocks in Latin America and the Caribbean. Funded by the German Society for International Cooperation (GIZ), this interactive carbon calculator is an integrated component of the IDB Decision Support toolkit which is currently utilized by the IDB's Environmental Safeguards Group. It is deployed on the Data Basin (www.databasin.org) platform and provides a risk screening function to indicate the potential carbon impact of various types of projects, based on a user-delineated development footprint. The tool framework employs the best available geospatial carbon data to quantify above-ground carbon stocks and highlights potential below-ground and soil carbon hotspots in the proposed project area. Results are displayed in the web mapping interface, as well as summarized in PDF documents generated by the tool.

  19. Free Probability based Capacity Calculation of Multiantenna Gaussian Fading Channels with Cochannel Interference

    CERN Document Server

    Chatzinotas, Symeon

    2010-01-01

    During the last decade, it has been well understood that communication over multiple antennas can linearly increase the multiplexing capacity gain and provide large spectral efficiency improvements. However, the majority of studies in this area were carried out ignoring cochannel interference. Only a small number of investigations have considered cochannel interference, but even therein simple channel models were employed, assuming identically distributed fading coefficients. In this paper, a generic model for a multi-antenna channel is presented incorporating four impairments, namely additive white Gaussian noise, flat fading, path loss and cochannel interference. Both point-to-point and multiple-access MIMO channels are considered, including the case of cooperating Base Station clusters. The asymptotic capacity limit of this channel is calculated based on an asymptotic free probability approach which exploits the additive and multiplicative free convolution in the R- and S-transform domain respectively, as ...

  20. Computing Moment-Based Probability Tables for Self-Shielding Calculations in Lattice Codes

    International Nuclear Information System (INIS)

    As part of the self-shielding model used in the APOLLO2 lattice code, probability tables are required to compute self-shielded cross sections for coarse energy groups (typically with 99 or 172 groups). This paper describes the replacement of the multiband tables (typically with 51 subgroups) with moment-based tables in release 2.5 of APOLLO2. An improved Ribon method is proposed to compute moment-based probability tables, allowing important savings in CPU resources while maintaining the accuracy of the self-shielding algorithm. Finally, a validation is presented where the absorption rates obtained with each of these techniques are compared with exact values obtained using a fine-group elastic slowing-down calculation in the resolved energy domain. Other results, relative to Rowland's benchmark and to three assembly production cases, are also presented

  1. Development of a Carbon Emission Calculations System for Optimizing Building Plan Based on the LCA Framework

    Directory of Open Access Journals (Sweden)

    Feifei Fu

    2014-01-01

    Full Text Available Life cycle thinking has become widely applied in the assessment of building environmental performance, and various tools have been developed to support the application of the life cycle assessment (LCA) method. This paper focuses on the carbon emission during the building construction stage. A partial LCA framework is established to assess the carbon emission in this phase. Furthermore, five typical LCA tool programs have been compared and analyzed to demonstrate the current application of LCA tools and their limitations in the building construction stage. Based on the analysis of existing tools and the sustainability demands in building, a new computer calculation system has been developed to calculate the carbon emission and optimize sustainability during the construction stage. The system structure and detailed functions are described in this paper. Finally, a case study is analyzed to demonstrate the designed LCA framework and system functions. The case is based on a typical building in the UK, comparing plans with a masonry wall and with a timber frame. The final results disclose that a timber frame wall has lower embodied carbon emission than a similar masonry structure; a 16% reduction was found in this study.

  2. Numerical Calculations of WR-40 Boiler Based on its Zero-Dimensional Model

    Directory of Open Access Journals (Sweden)

    Hernik Bartłomiej

    2014-06-01

    Full Text Available Generally, the temperature of flue gases at the furnace outlet is not measured, so a special computation procedure is needed to determine it. This paper presents a method for coordinating the numerical model of a pulverised-fuel boiler furnace chamber with the measuring data in a situation when CFD calculations are made for the furnace only. The paper recommends the use of the classical zero-dimensional balance model of a boiler, based on measuring data. The average temperature of flue gases at the furnace outlet tk" obtained using this model may be considered highly reliable, and the numerical model has to reproduce the same value of tk". This paper presents calculations for the WR-40 boiler. The CFD model was matched to the zero-dimensional tk" value by means of a selection of the furnace wall emissivity. As a result of the CFD modelling, the flue gas temperature and the concentrations of CO, CO2, O2 and NOx were obtained at the furnace chamber outlet. The results of the numerical modelling of boiler combustion, based on volumetric reactions and using the Finite-Rate/Eddy-Dissipation model, are presented.

  3. Three Dimensional Gait Analysis Using Wearable Acceleration and Gyro Sensors Based on Quaternion Calculations

    Directory of Open Access Journals (Sweden)

    Hiroaki Miyagawa

    2013-07-01

    Full Text Available This paper proposes a method for three dimensional gait analysis using wearable sensors and quaternion calculations. Seven sensor units, each consisting of a tri-axial acceleration sensor and a tri-axial gyro sensor, were fixed to the lower limbs. The acceleration and angular velocity data of each sensor unit were measured during level walking. The initial orientations of the sensor units were estimated using acceleration data measured during an upright standing position, and the angular displacements were then estimated using the angular velocity data measured during gait. Here, an algorithm based on quaternion calculation was implemented for the orientation estimation of the sensor units. The orientations of the sensor units were converted to the orientations of the body segments by a rotation matrix obtained from a calibration trial. The body segment orientations were then used to construct a three dimensional wire-frame animation of the volunteers during gait. Gait analysis was conducted on five volunteers, and the results were compared with those from a camera-based motion analysis system. Comparisons were made for the joint trajectories in the horizontal and sagittal planes. The average RMSE and correlation coefficient (CC) were 10.14 deg and 0.98, 7.88 deg and 0.97, and 9.75 deg and 0.78 for the hip, knee and ankle flexion angles, respectively.

  4. Three dimensional gait analysis using wearable acceleration and gyro sensors based on quaternion calculations.

    Science.gov (United States)

    Tadano, Shigeru; Takeda, Ryo; Miyagawa, Hiroaki

    2013-01-01

    This paper proposes a method for three dimensional gait analysis using wearable sensors and quaternion calculations. Seven sensor units, each consisting of a tri-axial acceleration sensor and a tri-axial gyro sensor, were fixed to the lower limbs. The acceleration and angular velocity data of each sensor unit were measured during level walking. The initial orientations of the sensor units were estimated using acceleration data measured during an upright standing position, and the angular displacements were then estimated using the angular velocity data measured during gait. Here, an algorithm based on quaternion calculation was implemented for the orientation estimation of the sensor units. The orientations of the sensor units were converted to the orientations of the body segments by a rotation matrix obtained from a calibration trial. The body segment orientations were then used to construct a three dimensional wire-frame animation of the volunteers during gait. Gait analysis was conducted on five volunteers, and the results were compared with those from a camera-based motion analysis system. Comparisons were made for the joint trajectories in the horizontal and sagittal planes. The average RMSE and correlation coefficient (CC) were 10.14 deg and 0.98, 7.88 deg and 0.97, and 9.75 deg and 0.78 for the hip, knee and ankle flexion angles, respectively. PMID:23877128
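
    The core quaternion update in such gait pipelines can be sketched as below (axis conventions and data are invented): the orientation q is propagated from the measured angular velocity via dq/dt = 0.5 * q * (0, omega), with re-normalisation after each step.

```python
import numpy as np

# Quaternion propagation from gyro data: one Euler step of
# dq/dt = 0.5 * q * (0, omega), followed by re-normalisation.

def quat_mul(q, r):
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def integrate_gyro(q, omega, dt):
    """One step of orientation propagation; omega in rad/s (body frame)."""
    dq = 0.5 * quat_mul(q, np.concatenate(([0.0], omega)))
    q = q + dq * dt
    return q / np.linalg.norm(q)      # keep q a unit quaternion

q = np.array([1.0, 0.0, 0.0, 0.0])    # initial orientation from standing
for _ in range(100):                   # 1 s of a constant 0.5 rad/s yaw
    q = integrate_gyro(q, np.array([0.0, 0.0, 0.5]), dt=0.01)
print(q)                               # ~ rotation of 0.5 rad about z
```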

  5. Affinity- and topology-dependent bound on current fluctuations

    CERN Document Server

    Pietzonka, Patrick; Seifert, Udo

    2016-01-01

    We provide a proof of a recently conjectured universal bound on current fluctuations in Markovian processes. This bound establishes a link between the fluctuations of an individual observable current, the cycle affinities driving the system into a non-equilibrium steady state, and the topology of the network. The proof is based on a decomposition of the network into independent cycles with both positive affinity and positive stationary cycle current. This formalism allows for a refinement of the bound for systems in equilibrium or with locally vanishing affinities.
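
    For orientation, the weakest, affinity-independent member of this family of bounds is the thermodynamic uncertainty relation, quoted below; the paper's result tightens it using the cycle affinities and the network topology.

```latex
% Affinity-independent limit of the bound (the thermodynamic uncertainty
% relation), for a time-integrated current J_t and total entropy
% production \Sigma_t up to time t:
\[
  \frac{\operatorname{Var}(J_t)}{\langle J_t \rangle^{2}}
  \;\ge\; \frac{2 k_{\mathrm{B}}}{\Sigma_t}.
\]
```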

  6. GPU Based Fast Free-Wake Calculations For Multiple Horizontal Axis Wind Turbine Rotors

    International Nuclear Information System (INIS)

    Unsteady free-wake solutions of wind turbine flow fields involve computationally intensive interaction calculations, which generally limit the total amount of simulation time or the number of turbines that can be simulated by the method. This problem, however, can be addressed easily using high-level of parallelization. Especially when exploited with a GPU, a Graphics Processing Unit, this property can provide a significant computational speed-up, rendering the most intensive engineering problems realizable in hours of computation time. This paper presents the results of the simulation of the flow field for the NREL Phase VI turbine using a GPU-based in-house free-wake panel method code. Computational parallelism involved in the free-wake methodology is exploited using a GPU, allowing thousands of similar operations to be performed simultaneously. The results are compared to experimental data as well as to those obtained by running a corresponding CPU-based code. Results show that the GPU based code is capable of producing wake and load predictions similar to the CPU-based code and in a substantially reduced amount of time. This capability could allow free-wake based analysis to be used in the possible design and optimization studies of wind farms as well as prediction of multiple turbine flow fields and the investigation of the effects of using different vortex core models, core expansion and stretching models on the turbine rotor interaction problems in multiple turbine wake flow fields

  7. A GPU-based Monte Carlo dose calculation code for photon transport in a voxel phantom

    Energy Technology Data Exchange (ETDEWEB)

    Bellezzo, M.; Do Nascimento, E.; Yoriyaz, H., E-mail: mbellezzo@gmail.br [Instituto de Pesquisas Energeticas e Nucleares / CNEN, Av. Lineu Prestes 2242, Cidade Universitaria, 05508-000 Sao Paulo (Brazil)

    2014-08-15

    As the most accurate method to estimate absorbed dose in radiotherapy, the Monte Carlo method has been widely used in radiotherapy treatment planning. Nevertheless, its efficiency can be improved for clinical routine applications. In this paper, we present the CUBMC code, a GPU-based MC photon transport algorithm for dose calculation under the Compute Unified Device Architecture platform. The simulation of physical events is based on the algorithm used in Penelope, and the cross section table used is the one generated by the Material routine, also present in the Penelope code. Photons are transported in voxel-based geometries with different compositions. To demonstrate the capabilities of the algorithm developed in the present work, four 128 x 128 x 128 voxel phantoms have been considered: one composed of a homogeneous water-based medium, the second of bone, the third of lung, and the fourth of a heterogeneous bone-and-vacuum geometry. Simulations were done considering a 6 MeV monoenergetic photon point source. Two distinct approaches were used for transport simulation. The first forces the photon to stop at every voxel boundary; the second is the Woodcock method, in which a stop at a boundary is considered only when the material changes along the photon's travel line. Dose calculations using these methods are compared for validation with the Penelope and MCNP5 codes. Speed-up factors are compared using an NVidia GTX 560-Ti GPU card against a 2.27 GHz Intel Xeon CPU processor. (Author)
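
    A minimal sketch of the Woodcock (delta-tracking) alternative described above, in which photons fly through voxel boundaries and only real collisions are sampled against a majorant cross section (this illustrates the general technique, not the CUBMC implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

def woodcock_to_next_collision(pos, direction, sigma, voxel_of, sigma_max):
    """Fly one photon to its next *real* interaction site via delta-tracking.
    sigma: per-voxel total cross section (1/cm); sigma_max: global majorant;
    voxel_of(pos) -> voxel index. Geometry-escape checks are omitted."""
    while True:
        s = -np.log(rng.random()) / sigma_max    # free path from the majorant
        pos = pos + s * direction                # crosses voxel boundaries freely
        if rng.random() < sigma[voxel_of(pos)] / sigma_max:
            return pos                           # real collision: sample physics here
        # otherwise a virtual collision: keep flying, direction unchanged
```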

  9. Development of 3-D detailed FBR core calculation method based on method of characteristics

    International Nuclear Information System (INIS)

    A new detailed 3-D transport calculation method taking into account the heterogeneity of fuel assemblies has been developed in hexagonal-z geometry by combining the method of characteristics (MOC) with the nodal transport method. From the nodal transport calculation, which uses assembly-homogenized cross sections, the axial leakage is calculated and then used in the MOC calculation, which treats the heterogeneity of the fuel assemblies. A series of homogeneous MOC calculations using assembly-homogenized cross sections is carried out to obtain effective cross sections that preserve the assembly reaction rates. These effective cross sections are then used again in the 3-dimensional nodal transport calculation. Numerical calculations have been performed to verify 3-dimensional radial calculations of FBR (fast breeder reactor) assemblies and partial core calculations. The results are compared with reference Monte Carlo calculations, and good agreement has been achieved. It is shown that the present method has an advantage in calculating reaction rates in a small region. (authors)

  10. An analytic linear accelerator source model for GPU-based Monte Carlo dose calculations

    Science.gov (United States)

    Tian, Zhen; Li, Yongbao; Folkerts, Michael; Shi, Feng; Jiang, Steve B.; Jia, Xun

    2015-10-01

    Recently, there has been considerable research interest in developing fast Monte Carlo (MC) dose calculation methods on graphics processing unit (GPU) platforms. A good linear accelerator (linac) source model is critical for both accuracy and efficiency. In principle, an analytical source model is preferable to a phase-space-file-based model for GPU-based MC dose engines, in that data loading and CPU-GPU data transfer can be avoided. In this paper, we present an analytical, field-independent source model specifically developed for GPU-based MC dose calculations, together with a GPU-friendly sampling scheme. A key concept, the phase-space ring (PSR), is proposed. Each PSR contains a group of particles that are of the same type, are close in energy, and reside in a narrow ring on the phase-space plane located just above the upper jaws. The model parameterizes the probability densities of particle location, direction and energy for each primary-photon PSR, scattered-photon PSR and electron PSR. Models of one 2D Gaussian distribution or multiple Gaussian components were employed to represent the particle direction distributions of these PSRs. A method was developed to analyze a reference phase-space file and derive the corresponding model parameters. To use the model efficiently in MC dose calculations on a GPU, we proposed a GPU-friendly sampling strategy which ensures that the particles sampled and transported simultaneously are of the same type and close in energy, to alleviate GPU thread divergence. To test the accuracy of the model, dose distributions of a set of open fields in a water phantom were calculated using our source model and compared to those calculated using the reference phase-space files. For the high-dose-gradient regions, the average distance-to-agreement (DTA) was within 1 mm and the maximum DTA within 2 mm. For relatively low-dose-gradient regions, the root-mean-square (RMS) dose difference was within 1.1% and the maximum
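
    A sketch of the GPU-friendly sampling idea described here: particles are drawn one phase-space ring at a time, so that simultaneously transported particles share the same type and a narrow energy band. The parameter names and the simple Gaussian direction model are illustrative assumptions, not the published parameterization:

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_ring(psr, n):
    """Draw n source particles from one phase-space ring (PSR). Launching
    one homogeneous batch per ring keeps GPU threads free of type and
    energy divergence."""
    r = rng.uniform(psr["r_min"], psr["r_max"], n)         # radius within ring
    phi = rng.uniform(0.0, 2.0 * np.pi, n)                 # rotational symmetry
    energy = rng.normal(psr["e_mean"], psr["e_sigma"], n)  # narrow energy band
    theta = np.abs(rng.normal(0.0, psr["dir_sigma"], n))   # Gaussian direction spread
    return r * np.cos(phi), r * np.sin(phi), energy, theta

# Example ring: primary photons in a 0.5 cm wide ring (hypothetical values).
ring = dict(r_min=2.0, r_max=2.5, e_mean=1.9, e_sigma=0.4, dir_sigma=0.02)
x, y, e, th = sample_ring(ring, 10_000)
```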

  11. Integrable defects in affine Toda field theory and infinite-dimensional representations of quantum groups

    Energy Technology Data Exchange (ETDEWEB)

    Corrigan, E., E-mail: edward.corrigan@durham.ac.uk [Department of Mathematical Sciences, University of Durham, Durham DH1 3LE (United Kingdom); Zambon, C., E-mail: cristina.zambon@durham.ac.uk [Department of Mathematical Sciences, University of Durham, Durham DH1 3LE (United Kingdom)

    2011-07-21

    Transmission matrices for two types of integrable defect are calculated explicitly, first by solving directly the nonlinear transmission Yang-Baxter equations, and second by solving a linear intertwining relation between a finite-dimensional representation of the relevant Borel subalgebra of the quantum group underpinning the integrable quantum field theory and a particular infinite-dimensional representation expressed in terms of sets of generalised 'quantum' annihilation and creation operators. The principal examples analysed are based on the a_2^(2) and a_n^(1) affine Toda models, but examples of similar infinite-dimensional representations for quantum Borel algebras for all other affine Toda theories are also provided.

  12. Accurate Ionization Potentials and Electron Affinities of Acceptor Molecules IV: Electron-Propagator Methods.

    Science.gov (United States)

    Dolgounitcheva, O; Díaz-Tinoco, Manuel; Zakrzewski, V G; Richard, Ryan M; Marom, Noa; Sherrill, C David; Ortiz, J V

    2016-02-01

    Comparison of ab initio electron-propagator predictions of vertical ionization potentials and electron affinities of organic acceptor molecules with benchmark calculations based on the basis-set-extrapolated coupled cluster singles, doubles, and perturbative triples method has enabled identification of self-energy approximations with mean unsigned errors between 0.1 and 0.2 eV. Among the self-energy approximations that neglect off-diagonal elements in the canonical Hartree-Fock orbital basis, the P3 method for electron affinities and the P3+ method for ionization potentials provide the best combination of accuracy and computational efficiency. For approximations that consider the full self-energy matrix, the NR2 methods offer the best performance. The P3+ and NR2 methods successfully identify the correct symmetry label of the lowest cationic state in two cases, naphthalenedione and benzoquinone, where some other methods fail. PMID:26730459

  14. Monte Carlo-based treatment planning system calculation engine for microbeam radiation therapy

    Energy Technology Data Exchange (ETDEWEB)

    Martinez-Rovira, I.; Sempau, J.; Prezado, Y. [Institut de Tecniques Energetiques, Universitat Politecnica de Catalunya, Diagonal 647, Barcelona E-08028 (Spain) and ID17 Biomedical Beamline, European Synchrotron Radiation Facility (ESRF), 6 rue Jules Horowitz B.P. 220, F-38043 Grenoble Cedex (France); Institut de Tecniques Energetiques, Universitat Politecnica de Catalunya, Diagonal 647, Barcelona E-08028 (Spain); Laboratoire Imagerie et modelisation en neurobiologie et cancerologie, UMR8165, Centre National de la Recherche Scientifique (CNRS), Universites Paris 7 et Paris 11, Bat 440., 15 rue Georges Clemenceau, F-91406 Orsay Cedex (France)

    2012-05-15

    Purpose: Microbeam radiation therapy (MRT) is a synchrotron radiotherapy technique that explores the limits of the dose-volume effect. Preclinical studies have shown that MRT irradiations (arrays of 25-75-μm-wide microbeams spaced by 200-400 μm) are able to eradicate highly aggressive animal tumor models while healthy tissue is preserved. These promising results have provided the basis for the forthcoming clinical trials at the ID17 Biomedical Beamline of the European Synchrotron Radiation Facility (ESRF). The first step includes irradiation of pets (cats and dogs) as a milestone before treatment of human patients. Within this context, accurate dose calculations are required. The distinct features of both beam generation and irradiation geometry in MRT with respect to conventional techniques require the development of a specific MRT treatment planning system (TPS). In particular, a Monte Carlo (MC)-based calculation engine for the MRT TPS has been developed in this work. Experimental verification in heterogeneous phantoms and optimization of the computation time have also been performed. Methods: The penelope/penEasy MC code was used to compute dose distributions from a realistic beam source model. Experimental verification was carried out by means of radiochromic films placed within heterogeneous slab-phantoms. Once validation was completed, dose computations in a virtual model of a patient, reconstructed from computed tomography (CT) images, were performed. To this end, decoupling of the CT image voxel grid (a few cubic millimeter volume) to the dose bin grid, which has micrometer dimensions in the transversal direction of the microbeams, was performed. Optimization of the simulation parameters, the use of variance-reduction (VR) techniques, and other methods, such as the parallelization of the simulations, were applied in order to speed up the dose computation. Results: Good agreement between MC simulations and experimental results was achieved, even at

  15. Neutron spectra calculation and doses in a subcritical nuclear reactor based on thorium

    International Nuclear Information System (INIS)

    This paper describes a heterogeneous subcritical nuclear reactor with molten salts based on thorium, with a graphite moderator and a 252Cf source, whose dose levels at the periphery allow its use in teaching and research activities. The design was done by the Monte Carlo method with the code MCNP5, where the geometry, dimensions and fuel were varied in order to obtain the best design. The result is a cubic reactor of 110 cm side with graphite moderator and reflector. The central part contains 9 ducts placed along the Y axis. The central duct contains the 252Cf source; of the other 8 ducts, two are irradiation ducts and the remaining six contain a molten salt (7LiF - BeF2 - ThF4 - UF4) as fuel. For the design, the keff, neutron spectra and ambient dose equivalent were calculated. The calculation was first performed for virgin fuel (case 1); then a percentage of 233U was used and the percentage of Th was decreased (case 2), with the purpose of comparing two different fuels working inside the reactor. A keff of 0.13 was obtained for case 1 and of 0.28 for case 2, maintaining subcriticality in both cases. The highest dose level occurs in case 2 along the Y axis, with a value of 3.31E-3 pSv/Q (±1.6%), reported per source particle. With this, the exposure time of personnel working at the reactor can be calculated. (Author)

  16. Star sub-pixel centroid calculation based on multi-step minimum energy difference method

    Science.gov (United States)

    Wang, Duo; Han, YanLi; Sun, Tengfei

    2013-09-01

    The star centroid plays a vital role in celestial navigation. Star images acquired during daytime have a low SNR because of the strong sky background, and the star targets are nearly submerged in the background, which makes centroid localization difficult. Traditional methods such as the moment method and the weighted centroid method are simple but have large errors, especially at low SNR; the Gaussian method has high positioning accuracy but is computationally complex. Based on an analysis of the energy distribution in star images, a localization method for star target centroids based on a multi-step minimum energy difference is proposed. The method uses linear superposition to narrow the centroid area and, within that narrowed area, interpolates pixels for sub-pixel segmentation. It then exploits the symmetry of the stellar energy distribution: assuming the current pixel is the star centroid position, it computes the difference between the sums of the energy in symmetric directions (here the transverse and longitudinal directions) at an equal step length around the current pixel (the step length can be chosen according to conditions; this paper uses 9), and takes the centroid position in each direction where the minimum difference appears. Validation on simulated star images and comparison with several traditional methods show that the positioning accuracy of the method reaches 0.001 pixel and that it calculates centroids well under low-SNR conditions. At the same time, the method was applied to a star map acquired at a fixed observation site during daytime in the near-infrared band; comparison of the results with the known positions of the star shows that the multi-step minimum energy difference method achieves a better
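
    A compact sketch of the symmetric energy-difference test the abstract describes: each candidate sub-pixel position on an interpolated patch is scored by how unbalanced the summed energy is on its two sides, transversely and longitudinally, and the least-unbalanced candidate wins. The window size and interpolation factor are illustrative:

```python
import numpy as np
from scipy.ndimage import zoom

def centroid_min_energy_diff(patch, upsample=5):
    """Sub-pixel centroid of a small star patch: interpolate to a finer grid,
    then pick the position with the smallest left/right and up/down energy
    imbalance (the 'minimum energy difference' criterion)."""
    img = zoom(patch, upsample, order=1)           # sub-pixel grid
    h, w = img.shape
    best, best_score = (0, 0), np.inf
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            dx = abs(img[:, :j].sum() - img[:, j + 1:].sum())   # transverse
            dy = abs(img[:i, :].sum() - img[i + 1:, :].sum())   # longitudinal
            if dx + dy < best_score:
                best, best_score = (i, j), dx + dy
    return best[0] / upsample, best[1] / upsample  # back in original pixels
```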

  17. Forecasting of Real Thunderstorms based on Electric Parameters Calculations in Numerical Weather Prediction Models

    Science.gov (United States)

    Dementyeva, Svetlana; Ilin, Nikolay; Shatalina, Maria; Mareev, Evgeny

    2016-04-01

    Now-casting and long-term forecasting of lightning occurrence are urgent problems from several points of view. Several approaches exist for predicting lightning activity from indirect, non-electrical parameters based on the relationship between lightning flashes and vertical fluxes of solid-phase hydrometeors, but a more explicit forecast of lightning occurrence must consider electric processes. In addition, a key factor for now-casting of lightning activity is timeliness. We have proposed an algorithm that makes thunderstorm prediction automatic (due to the automatic start of the electric parameter calculation) and quick (due to the use of simplified methods). Our forecasting is based on the Weather Research and Forecasting (WRF) model, which does not include electrification processes but was supplemented with two modules. The first is an algorithm that allows us to select thunderstorm events indirectly; it is based on such characteristics of thunderclouds and thunderstorms as radar reflectivity, duration and area, and provides information about the approximate beginning and duration of the thunderstorm. The second module is a method for electric parameter calculations that we have proposed previously. It assumes that the non-inductive mechanism of charge generation and separation plays a key role in thundercloud electrification, and that the charge densities of solid-phase hydrometeors are proportional to their mass in an elementary air volume. According to the models by Saunders and Takahashi, particles change the sign of their charge when passing from the upper to the lower part of the thundercloud and vice versa. Electric neutrality in the vertical air column is assumed in the course of vertical charge separation due to collisions between falling graupel and ice crystals carried upward. The electric potential (and consequently the electric field) can be found

  18. Benchmarking of the 3-D CAD-based Discrete Ordinates code “ATTILA” for dose rate calculations against experiments and Monte Carlo calculations

    International Nuclear Information System (INIS)

    Assessment of the shutdown dose rate (SDDR) inside and around the diagnostics ports of ITER is performed at PPPL/UCLA using the 3-D, FEM, discrete ordinates code ATTILA, along with its updated FORNAX transmutation/decay gamma library. Other ITER partners assess SDDR using codes based on the Monte Carlo (MC) approach (e.g. the MCNP code) for the transport calculation and the radioactivity inventory code FISPACT or other equivalent decay data libraries for the dose rate assessment. To reveal the range of discrepancies in the results obtained by various analysts, an extensive experimental and calculational benchmarking effort has been undertaken to validate the capability of ATTILA for dose rate assessment. On the experimental validation front, the comparison was performed using the measured data from two SDDR experiments performed at the FNG facility, Italy. Comparison was made to the experimental data and to MC results obtained by other analysts. On the calculation validation front, ATTILA's predictions were compared to other results at key locations inside a calculation benchmark whose configuration duplicates an upper diagnostics port plug (UPP) in ITER. Both the serial and parallel versions of ATTILA-7.1.0 are used in the PPPL/UCLA analysis, performed with the FENDL-2.1/FORNAX databases. In the first FNG experiment, it was shown that ATTILA's dose rates are largely overestimated (by ∼30–60%) with the ANSI/ANS-6.1.1 flux-to-dose factors, whereas the ICRP-74 factors give better agreement (10–20%) with the experimental data and with the MC results at all cooling times. In the second experiment, there is an underestimation in the SDDR calculated by both MCNP and ATTILA based on ANSI/ANS-6.1.1 for cooling times up to ∼4 days after irradiation; thereafter, an overestimation is observed (∼5–10% with MCNP and ∼10–15% with ATTILA). As for the calculation benchmark, the agreement is much better based on the ICRP-74 1996 data. The divergence among all dose rate results at ∼11 days cooling time is no

  19. Characterization of tsunamigenic earthquake in Java region based on seismic wave calculation

    International Nuclear Information System (INIS)

    This study characterizes the source mechanism of tsunamigenic earthquakes based on seismic wave calculation. The source parameters used are the ratio (Θ) between the radiated seismic energy (E) and the seismic moment (Mo), the moment magnitude (MW), the rupture duration (To) and the focal mechanism. These determine the types of tsunamigenic earthquake and tsunami earthquake. We calculate these quantities by teleseismic signal processing of the initial P-wave phase, with a bandpass filter from 0.001 Hz to 5 Hz. Data come from 84 broadband seismometers at distances of 30° to 90°. The 2 June 1994 Banyuwangi earthquake with MW=7.8 and the 17 July 2006 Pangandaran earthquake with MW=7.7 meet the criteria for a tsunami earthquake, with ratio Θ=−6.1, long rupture duration To>100 s and high tsunami H>7 m. The 2 September 2009 Tasikmalaya earthquake with MW=7.2, Θ=−5.1 and To=27 s is characterized as a small tsunamigenic earthquake.

  20. A 3D pencil-beam-based superposition algorithm for photon dose calculation in heterogeneous media

    Science.gov (United States)

    Tillikainen, L.; Helminen, H.; Torsti, T.; Siljamäki, S.; Alakuijala, J.; Pyyry, J.; Ulmer, W.

    2008-07-01

    In this work, a novel three-dimensional superposition algorithm for photon dose calculation is presented. The dose calculation is performed as a superposition of pencil beams, which are modified based on tissue electron densities. The pencil beams have been derived from Monte Carlo simulations, and are separated into lateral and depth-directed components. The lateral component is modeled using exponential functions, which allows accurate modeling of lateral scatter in heterogeneous tissues. The depth-directed component represents the total energy deposited on each plane, which is spread out using the lateral scatter functions. Finally, convolution in the depth direction is applied to account for tissue interface effects. The method can be used with the previously introduced multiple-source model for clinical settings. The method was compared against Monte Carlo simulations in several phantoms including lung- and bone-type heterogeneities. Comparisons were made for several field sizes for 6 and 18 MV energies. The deviations were generally within (2%, 2 mm) of the field central axis dmax. Significantly larger deviations (up to 8%) were found only for the smallest field in the lung slab phantom for 18 MV. The presented method was found to be accurate in a wide range of conditions making it suitable for clinical planning purposes.
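
    A schematic of the superposition step described above: the energy assigned to one depth plane is spread laterally with an exponential scatter kernel. The single-exponential kernel and the uniform grid here are simplifications of the published model, in which the kernel parameters would vary with the local electron density:

```python
import numpy as np
from scipy.signal import fftconvolve

def superpose_plane(fluence, plane_energy, a=0.3, grid=0.25):
    """Spread the energy deposited on one depth plane with an exponential
    lateral-scatter kernel k(r) ~ exp(-a * r); `a` (1/cm) is a constant
    here but would be tissue-dependent in the real algorithm."""
    x = np.arange(-40, 41) * grid          # kernel support (cm)
    X, Y = np.meshgrid(x, x)
    kernel = np.exp(-a * np.hypot(X, Y))
    kernel /= kernel.sum()                 # conserve the plane's total energy
    return plane_energy * fftconvolve(fluence, kernel, mode="same")
```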

  1. Seismic ray-tracing calculation based on parabolic travel-time interpolation

    Institute of Scientific and Technical Information of China (English)

    周竹生; 张赛民; 陈灵君

    2004-01-01

    A new seismic ray-tracing method based on parabolic travel-time interpolation (PTI) is put forward; it is more accurate than the linear travel-time interpolation (LTI) method. Both the PTI and the LTI methods are used to compute seismic travel times and ray paths in a 2-D grid cell model. First, some basic concepts are introduced. The calculations of travel time and ray path are carried out only at cell boundaries, so the ray path is always straight within a cell of uniform velocity. Both PTI and LTI proceed in two steps: step 1 computes travel times and step 2 traces the ray path. The derivation of the LTI formulas is then described; because of the presence of refracted waves in the shot cell, a formula specific to the shot cell is also derived. Finally, the PTI method is presented. The calculation of the PTI method is more complex than that of the LTI method, but the error is limited. The results of a numerical model show that the PTI method traces ray paths more accurately and efficiently than the LTI method does.
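
    The essential difference between PTI and LTI can be shown in a few lines: travel time along a cell boundary is interpolated with a parabola through three nodes rather than a line through two. The node positions and times below are hypothetical:

```python
import numpy as np

def parabolic_traveltime(x_nodes, t_nodes, x):
    """Interpolate travel time at position x on a cell boundary from three
    boundary nodes with a parabola t(x) = a*x^2 + b*x + c (PTI); LTI would
    use only two nodes and a straight line."""
    a, b, c = np.polyfit(x_nodes, t_nodes, 2)
    return a * x**2 + b * x + c

# The travel time from the boundary to an interior point P then follows by
# minimizing parabolic_traveltime(x) + dist(x, P) / v over x on the edge.
x_nodes = np.array([0.0, 0.5, 1.0])        # node positions on one edge (km)
t_nodes = np.array([1.20, 1.13, 1.15])     # hypothetical first arrivals (s)
print(parabolic_traveltime(x_nodes, t_nodes, 0.6))
```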

  2. Metadyn View: Fast web-based viewer of free energy surfaces calculated by metadynamics

    Science.gov (United States)

    Hošek, Petr; Spiwok, Vojtěch

    2016-01-01

    Metadynamics is a highly successful enhanced sampling technique for the simulation of molecular processes and the prediction of their free energy surfaces. An in-depth analysis of the data obtained by this method is as important as the simulation itself. Although there are several tools to compute free energy surfaces from metadynamics data, they usually lack user friendliness and a built-in visualization component. Here we introduce Metadyn View, a fast and user-friendly viewer of bias potential/free energy surfaces calculated by metadynamics in the Plumed package. It is based on modern web technologies including HTML5, JavaScript and Cascading Style Sheets (CSS). It can be used by visiting the web site and uploading a HILLS file. It calculates the bias potential/free energy surface on the client side, so it can run online or offline without the need to install additional web engines. Moreover, it includes tools for the measurement of free energies and free energy differences and for data/image export.

  3. Critical comparison of electrode models in density functional theory based quantum transport calculations

    Science.gov (United States)

    Jacob, D.; Palacios, J. J.

    2011-01-01

    We study the performance of two different electrode models in quantum transport calculations based on density functional theory: parametrized Bethe lattices and quasi-one-dimensional wires or nanowires. A detailed account of the implementation in both cases is given. From the systematic study of nanocontacts made of representative metallic elements, we conclude that the parametrized electrode models represent an excellent compromise between computational cost and electronic structure definition as long as the aim is to compare with experiments in which the precise atomic structure of the electrodes is not relevant or not defined with precision. The results obtained using parametrized Bethe lattices are essentially similar to those obtained with quasi-one-dimensional electrodes of large enough cross-section, adding a natural smearing to the transmission curves that mimics the true nature of polycrystalline electrodes. The latter are more demanding from the computational point of view, but have the advantage of extending the range of applicability of transport calculations to situations where the electrodes have a well-defined atomic structure, as is the case for carbon nanotubes, graphene nanoribbons, or semiconducting nanowires. All the analysis is done with the help of codes developed by the authors, which can be found in the quantum transport toolbox ALACANT and are publicly available.

  4. Extension of the COSYMA-ECONOMICS module - cost calculations based on different economic sectors

    International Nuclear Information System (INIS)

    The COSYMA program system for evaluating the off-site consequences of accidental releases of radioactive material to the atmosphere includes an ECONOMICS module for assessing economic consequences. The aim of this module is to convert the various consequences of an accident (radiation-induced health effects and impacts resulting from countermeasures) into the common framework of economic costs; this allows different effects to be expressed in the same terms and thus makes them comparable. With respect to the countermeasure 'movement of people', the dominant cost categories are 'loss-of-income costs' and 'costs of lost capital services'. In the original version of the ECONOMICS module these costs are calculated on the basis of the total number of people moved. In order to also take into account regional or local economic peculiarities of a nuclear site, the ECONOMICS module has been extended: calculation of the above-mentioned cost categories is now based on the number of employees in different economic sectors in the affected area. This extension of the COSYMA ECONOMICS module is described in more detail. (orig.)

  5. Adjoint-based deviational Monte Carlo methods for phonon transport calculations

    Science.gov (United States)

    Péraud, Jean-Philippe M.; Hadjiconstantinou, Nicolas G.

    2015-06-01

    In the field of linear transport, adjoint formulations exploit linearity to derive powerful reciprocity relations between a variety of quantities of interest. In this paper, we develop an adjoint formulation of the linearized Boltzmann transport equation for phonon transport. We use this formulation for accelerating deviational Monte Carlo simulations of complex, multiscale problems. Benefits include significant computational savings via direct variance reduction, or by enabling formulations which allow more efficient use of computational resources, such as formulations which provide high resolution in a particular phase-space dimension (e.g., spectral). We show that the proposed adjoint-based methods are particularly well suited to problems involving a wide range of length scales (e.g., nanometers to hundreds of microns) and lead to computational methods that can calculate quantities of interest with a cost that is independent of the system characteristic length scale, thus removing the traditional stiffness of kinetic descriptions. Applications to problems of current interest, such as simulation of transient thermoreflectance experiments or spectrally resolved calculation of the effective thermal conductivity of nanostructured materials, are presented and discussed in detail.

  6. A modified W–W interatomic potential based on ab initio calculations

    International Nuclear Information System (INIS)

    In this paper we have developed a Finnis–Sinclair-type interatomic potential for W–W interactions that is based on ab initio calculations. The modified potential is able to reproduce the correct formation energies of self-interstitial atom (SIA) defects in tungsten, offering a significant improvement over the Ackland–Thetford tungsten potential. Using the modified potential, the thermal expansion is calculated in a temperature range from 0 to 3500 K. The results are in reasonable agreement with the experimental data, thus overcoming the spurious negative thermal expansion given by the Derlet–Nguyen–Manh–Dudarev tungsten potential. The W–W potential presented here is also applied to study in detail the diffusion of SIAs in tungsten. We reveal that the initial SIA initiates a sequence of tungsten atom displacements and replacements in the 〈1 1 1〉 direction. An Arrhenius fit to the diffusion data at temperatures below 550 K indicates a migration energy of 0.022 eV, which is in reasonable agreement with the experimental data. (paper)
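
    The migration energy quoted above comes from an Arrhenius fit of the diffusion data; as a worked illustration of that step (with made-up diffusion coefficients, not the paper's data), the fit is a straight line in ln D versus 1/T:

```python
import numpy as np

K_B = 8.617333e-5                      # Boltzmann constant, eV/K

# Hypothetical SIA diffusion coefficients at a few temperatures (cm^2/s).
T = np.array([100.0, 200.0, 300.0, 400.0, 500.0])
D = np.array([2.1e-5, 6.3e-5, 1.1e-4, 1.5e-4, 1.8e-4])

# ln D = ln D0 - E_m / (k_B T): a straight line in 1/T.
slope, intercept = np.polyfit(1.0 / T, np.log(D), 1)
E_m = -slope * K_B                     # migration energy, eV
D0 = np.exp(intercept)                 # pre-exponential factor, cm^2/s
print(f"E_m = {E_m:.3f} eV, D0 = {D0:.2e} cm^2/s")
```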

  7. Oxidation Phase Diagram of Small Aluminum Clusters Based on First-Principles Calculations

    Science.gov (United States)

    Wang, Ligen; Kuklja, Maija

    2009-06-01

    It is important to understand the properties of individual nanometals before we can exploit their efficiency as energetic materials or as enhancement additives to other energetic formulations. In this paper, we construct the (p, T) phase diagram for the O/Al13 system based on first-principles atomistic thermodynamics. The temperature and pressure are taken into account via the oxygen chemical potential. The optimized Al13 cluster has an icosahedral shape. We find that O adsorption on the Al13 surface is site-specific; in particular, O adsorption at the bridge sites is most stable, whereas adsorption at the hollow sites is slightly unfavorable. For the various oxygen adsorption layers, we determine the adsorption configurations/patterns by performing Monte Carlo calculations. We assume that the metal cluster becomes completely oxidized and calculate the formation enthalpies of the various oxidized metal clusters. The obtained phase diagram shows that the intact Al13 cluster is stable in the low O chemical potential range and the fully oxidized metal cluster is stable in the high O chemical potential range; the O adsorption phases are never thermodynamically stable. This study provides important insights into the basic behavior of small aluminum clusters in the presence of oxygen, and may support reliable predictions of the behavior of Al-high explosive composites.
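
    In first-principles atomistic thermodynamics of this kind, temperature and pressure enter only through the oxygen chemical potential; a sketch of the standard bookkeeping (the O2 total energy and the tabulated temperature term below are placeholders, not the paper's values):

```python
import numpy as np

K_B = 8.617333e-5   # Boltzmann constant, eV/K

def mu_O(T, p, p0=1.0, dmu_T=-0.27, E_O2=-9.86):
    """Oxygen chemical potential per atom:
    mu_O = 0.5 * (E_O2 + dmu(T) + k_B * T * ln(p/p0)).
    dmu_T is taken from thermochemical tables at temperature T; both it
    and the O2 total energy E_O2 are placeholder numbers here."""
    return 0.5 * (E_O2 + dmu_T + K_B * T * np.log(p / p0))

def formation_energy(E_oxidized, E_cluster, n_O, T, p):
    # Negative values mean the oxidized cluster is the stable phase.
    return E_oxidized - E_cluster - n_O * mu_O(T, p)
```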

  8. Ni-based Superalloy Development for VHTR - Methodology Using Design of Experiments and Thermodynamic Calculation

    International Nuclear Information System (INIS)

    In this work, to develop novel structural materials for the IHX of a VHTR, a more systematic methodology using design of experiments (DOE) and thermodynamic calculations was proposed. For 32 designs of Ni-Cr-Co-Mo alloys with minor elements W and Ta, the mass fraction of TCP phases and the mechanical properties were calculated, and the chemical composition was finally optimized for further experimental studies by applying the proposed methodology. Highly efficient generation of electricity and massive production of hydrogen are possible using a very high temperature gas-cooled reactor (VHTR), one of the generation IV nuclear power plants. The structural material for the intermediate heat exchanger (IHX), among numerous components, should endure temperatures of up to 950 °C during long-term operation. Impurities inevitably introduced into the helium coolant facilitate material degradation by corrosion at high temperature. This work concerns a methodology for developing Ni-Cr-Co-Mo based superalloys for the VHTR using design of experiments (DOE) and thermodynamic calculations.

  9. Adjoint-based sensitivity and uncertainty analysis of lattice physics calculations with CASMO-4

    International Nuclear Information System (INIS)

    The topic of this paper is the development of sensitivity and uncertainty analysis capability to the reactor physics code CASMO-4 in the UAM (Uncertainty Analysis in Best-Estimate Modelling for Design, Operation and Safety Analysis of LWRs) benchmark. The developed calculation system enables the uncertainty analysis of homogenized multi-group cross-sections, diffusion coefficients and pin powers with respect to nuclear data. The uncertainty analysis methodology is deterministic, meaning that the sensitivity profiles of the responses are computed first, after which uncertainty is propagated by combining the sensitivity profiles with the covariance matrices of the uncertain nuclear data. The sensitivity analysis is based on perturbation theory which enables computing the sensitivity profiles efficiently by solving one generalized adjoint system for each response. The mathematical background of this work is reviewed and the main conclusions related to the implementation are summarized. Special emphasis is placed on the sensitivity analysis of two-group homogenized diffusion coefficients which require some modifications to the standard equations of generalized perturbation theory. Numerical results are presented and analyzed for a PWR fuel assembly with control rods out and inserted. The computational efficiency of the calculations is discussed. (author)
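
    The propagation step described here is the standard sandwich rule: the variance of a response is obtained by folding its sensitivity profile with the covariance matrix of the nuclear data. A minimal sketch (names illustrative):

```python
import numpy as np

def response_uncertainty(S, C):
    """Sandwich rule: var(R) = S^T C S, where S is the sensitivity profile
    of response R with respect to the nuclear data and C their covariance
    matrix. Returns the relative standard deviation if S and C are given
    in relative terms."""
    var = float(S @ C @ S)
    return np.sqrt(var)
```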

  10. Isotope shift in the electron affinity of beryllium

    International Nuclear Information System (INIS)

    The study of the isotope shift in the electron affinity is interesting for probing correlation effects. Experiments that allow this property to be measured are rare, being difficult to realize, while accurate calculations remain a challenge for atomic theory. The present work focuses on the theoretical estimation of the isotope shift in the electron affinity of Be (2s2p 3Po), using correlated electronic wavefunctions obtained from multiconfiguration Hartree-Fock and configuration interaction variational calculations. The reliability of the correlation models is assessed from a comparison between the observed and theoretical electron affinities, and between theoretical isotope shift values for the 2s2p 3Po → 2s2 1S transition of neutral beryllium. The sign and the magnitude of the difference between the mass polarization term expectation values obtained for the neutral atom and the negative ion are such that the resulting isotope shift in the electron affinity is 'anomalous', corresponding to a smaller electron affinity for the heavier isotope

  11. The Geology and Geochemistry of Base Metal Sulfide Mineralization in the Foster River Area, Northern Saskatchewan: A SEDEX Deposit With Broken Hill-Type Affinities

    Science.gov (United States)

    Steadman, J. A.; Spry, P. G.

    2009-05-01

    in a restricted basin. The low Pb/(Pb+Zn) ratio of sulfide mineralization and the lack of bimodal volcanics suggest that the mineralization in the Foster River area is a SEDEX deposit with BHT affinities.

  12. Calculation of RBE for normal tissue complications based on charged particle track structure

    International Nuclear Information System (INIS)

    A new approach for the calculation of RBE for normal tissue complications after charged particle and neutron irradiation is discussed. It is based on the extension of a model originally developed for application to cell survival. It can be shown that, according to the model, RBE values are determined largely by the α/β-ratio of the photon dose response curve, but are expected to be nearly independent of the absolute values of α and β. Thus, the model can be applied to normal tissue complications as well, where α/β-ratios can be determined by means of fractionation experiments. Agreement between model predictions and experimental results obtained in animal experiments confirms the applicability of the model even in the case of complex biological endpoints. (orig.)

  13. Protein kinase inhibitor-induced endothelial cell cytotoxicity and its prediction based on calculated molecular descriptors.

    Science.gov (United States)

    Herczenik, Eszter; Varga, Zoltán; Eros, Dániel; Makó, Veronika; Oroszlán, Melinda; Rugonfalvi-Kiss, Szabolcs; Romics, László; Füst, George; Kéri, György; Orfi, László; Cervenak, László

    2009-01-01

    Protein kinase inhibitors (PKIs), as potent signal transduction therapeutic compounds, represent a very rapidly expanding group of anticancer drugs. These agents may be toxic to endothelial cells; however, very few experimental data exist on the cytotoxicity of PKIs. The aim of this study was to set up an appropriate test system for endothelial cells and to assess the structure-related cytotoxic effects of a selected library of PKIs. The inhibitor library contains several lead molecules with different basic structures and a set of modified derivatives of the lead compounds. The toxicity of the PKIs did not correlate directly with the structural features of the molecules. However, we successfully built a model based on 15 calculated molecular descriptors that is capable of predicting cytotoxicity with acceptable reliability. Our results show that the cytotoxic effects of PKIs should be taken into account in drug development in order to overcome endothelial cell-related side effects. PMID:19519173

  14. Simulation calculation of 232U productions in thorium-uranium transform process based on thermal reactor

    International Nuclear Information System (INIS)

    The decay products of 232U produced in the thorium-uranium fuel cycle emit high-energy γ-rays, which greatly affects the fuel cycle. In this paper, the production of 232U in thermal reactors using thorium fuel is analyzed with ORIGEN2, SCALE5 and a code based on the Bateman method. Under normal conditions, 232U is mainly produced by the 232Th (n, 2n) reaction chain, and more 230Th is transformed into 232U when the neutron spectrum is softer. Burnup calculations for a CANDU reactor and a PWR assembly indicate that the 232U content in uranium increases with burnup, and that the 230Th content in fresh thorium correlates linearly with 232U/Utotal and 232U/233U at discharge burnup. (authors)
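
    The Bateman method mentioned above has a closed-form solution for a linear decay/transmutation chain with distinct removal constants; a small sketch of that solution (illustrative, not the code used in the paper):

```python
import numpy as np

def bateman_last(n1_0, lambdas, t):
    """Number density of the last member of a linear chain
    1 -> 2 -> ... -> n at time t, starting from N1(0) = n1_0, for
    *distinct* removal constants `lambdas` (decay plus transmutation, 1/s):
    N_n(t) = N1(0) * prod(lam_i, i<n) * sum_j exp(-lam_j t) / prod_{k!=j}(lam_k - lam_j)."""
    lam = np.asarray(lambdas, dtype=float)
    prefactor = n1_0 * np.prod(lam[:-1])
    total = 0.0
    for j in range(len(lam)):
        denom = np.prod(np.delete(lam, j) - lam[j])
        total += np.exp(-lam[j] * t) / denom
    return prefactor * total
```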

  15. Ionic liquid based lithium battery electrolytes: charge carriers and interactions derived by density functional theory calculations.

    Science.gov (United States)

    Angenendt, Knut; Johansson, Patrik

    2011-06-23

    The solvation of lithium salts in ionic liquids (ILs) leads to the creation of a lithium ion carrying species quite different from those found in traditional nonaqueous lithium battery electrolytes. The most striking differences are that these species are composed only of ions and in general negatively charged. In many IL-based electrolytes, the dominant species are triplets, and the charge, stability, and size of the triplets have a large impact on the total ion conductivity, the lithium ion mobility, and also the lithium ion delivery at the electrode. As an inherent advantage, the triplets can be altered by selecting lithium salts and ionic liquids with different anions. Thus, within certain limits, the lithium ion carrying species can even be tailored toward distinct important properties for battery application. Here, we show by DFT calculations that the resulting charge carrying species from combinations of ionic liquids and lithium salts and also some resulting electrolyte properties can be predicted. PMID:21591707

  16. Online program ‘vipcal’ for calculating lytic viral production and lysogenic cells based on a viral reduction approach

    OpenAIRE

    Luef, Birgit; Luef, Franz; Peduzzi, Peter

    2009-01-01

    Assessing viral production (VP) requires robust methodological settings combined with precise mathematical calculations. This contribution improves and standardizes mathematical calculations of VP and the assessment of the proportion of lysogenic cells in a sample. We present an online tool ‘Viral Production Calculator’ (vipcal, http://www.univie.ac.at/nuhag-php/vipcal) that calculates lytic production and the percentage of lysogenic cells based on data obtained from a viral reduction approac...

  17. An ANN-based load model for fast transient stability calculations

    Energy Technology Data Exchange (ETDEWEB)

    Qian, Ai; Shrestha, G.B. [School of EEE, Nanyang Technological University (Singapore)

    2006-01-15

    Load models play an important role in the simulation and calculation of power system performance. This paper presents a new load model based on a particular form of artificial neural network that we call the adaptive back-propagation (ABP) network. ABP can overcome some of the shortcomings of common back-propagation (BP), and ABP load models offer many advantages over traditional load models, as they are non-structural and can be derived quickly. The application of the method to load modeling is illustrated using actual field test data. The load models so obtained are shown to replicate the test measurements more closely than traditional load models. A further extension of the method for identifying the parameters of traditional load models is proposed, based on a linear back-propagation (LBP) network. The proposed LBP load model is incorporated in a transient stability program to show that the computational time is significantly reduced. (author)

  18. Calculated thermal performance of solar collectors based on measured weather data from 2001-2010

    DEFF Research Database (Denmark)

    Dragsted, Janne; Furbo, Simon; Andersen, Elsa;

    2015-01-01

    This paper presents an investigation of the differences in the modeled thermal performance of solar collectors when meteorological reference years are used as input versus when multi-year measured weather data are used. The investigation has shown that using the Danish reference year based on the period 1975-1990 results in deviations of up to 39% compared with the thermal performance calculated with the multi-year measured weather data. For the newer local reference years based on the period 2001-2010 the maximum deviation becomes 25%. The investigation further showed an increase in the utilization of solar radiation with an increase in global radiation. This means that besides the thermal performance increasing with increasing solar radiation, the utilization of the solar radiation also becomes better.

  19. A study of potential numerical pitfalls in GPU-based Monte Carlo dose calculation

    Science.gov (United States)

    Magnoux, Vincent; Ozell, Benoît; Bonenfant, Éric; Després, Philippe

    2015-07-01

    The purpose of this study was to evaluate the impact of numerical errors caused by the floating point representation of real numbers in a GPU-based Monte Carlo code used for dose calculation in radiation oncology, and to identify situations where this type of error arises. The program used as a benchmark was bGPUMCD. Three tests were performed on the code, which was divided into three functional components: energy accumulation, particle tracking and physical interactions. First, the impact of single-precision calculations was assessed for each functional component. Second, a GPU-specific compilation option that reduces execution time as well as precision was examined. Third, a specific function used for tracking and potentially more sensitive to precision errors was tested by comparing it to a very high-precision implementation. Numerical errors were found in two components of the program. Because of the energy accumulation process, a few voxels surrounding a radiation source end up with a lower computed dose than they should. The tracking system contained a series of operations that abnormally amplify rounding errors in some situations. This resulted in some rare instances (less than 0.1%) of computed distances that are exceedingly far from what they should have been. Most errors detected had no significant effects on the result of a simulation due to its random nature, either because they cancel each other out or because they only affect a small fraction of particles. The results of this work can be extended to other types of GPU-based programs and be used as guidelines to avoid numerical errors on the GPU computing platform.
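
    The energy-accumulation effect described here is easy to reproduce: once a single-precision accumulator grows large, small dose deposits fall below its resolution and are silently rounded away, which compensated (Kahan) summation avoids. A self-contained illustration, not taken from bGPUMCD:

```python
import numpy as np

# A float32 accumulator at 2^24 can no longer resolve unit deposits:
acc = np.float32(2**24)
for _ in range(1000):
    acc = np.float32(acc + np.float32(1.0))
print(acc)            # 16777216.0 -- every deposit was rounded away

# Kahan (compensated) summation carries the rounding error forward:
acc = np.float32(2**24)
comp = np.float32(0.0)
for _ in range(1000):
    y = np.float32(np.float32(1.0) - comp)
    t = np.float32(acc + y)
    comp = np.float32((t - acc) - y)
    acc = t
print(acc)            # 16778216.0 -- the deposits are recovered
```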

  20. A cultural study of a science classroom and graphing calculator-based technology

    Science.gov (United States)

    Casey, Dennis Alan

    Social, political, and technological events of the past two decades have had considerable bearing on science education. While sociological studies of scientists at work have seriously questioned traditional histories of science, national and state educational systemic reform initiatives have been enacted, stressing standards and accountability. Recently, powerful instructional technologies have become part of the landscape of the classroom. One example, graphing calculator-based technology, has found its way from commercial and domestic applications into the pedagogy of science and math education. The purpose of this study was to investigate the culture of an "alternative" science classroom and how it functions with graphing calculator-based technology. Using ethnographic methods, a case study of one secondary, team-taught, Environmental/Physical Science (EPS) classroom was conducted. Nearly half of the 23 students were identified as students with special education needs. Over a four-month period, field data was gathered from written observations, videotaped interactions, audio taped interviews, and document analyses to determine how technology was used and what meaning it had for the participants. Analysis indicated that the technology helped to keep students from getting frustrated with handling data and graphs. In a relatively short period of time, students were able to gather data, produce graphs, and to use inscriptions in meaningful classroom discussions. In addition, teachers used the technology as a means to involve and motivate students to want to learn science. By employing pedagogical skills and by utilizing a technology that might not otherwise be readily available to these students, an environment of appreciation, trust, and respect was fostered. Further, the use of technology by these teachers served to expand students' social capital---the benefits that come from an individual's social contacts, social skills, and social resources.

  1. GTV-based prescription in SBRT for lung lesions using advanced dose calculation algorithms

    International Nuclear Information System (INIS)

    The aim of the current study was to investigate how dose is prescribed to lung lesions during SBRT when advanced dose calculation algorithms that take into account electron transport (type B algorithms) are used. As type A algorithms do not take into account secondary electron transport, they overestimate the dose to lung lesions. Type B algorithms are more accurate, but no consensus has yet been reached regarding dose prescription. The positive clinical results obtained using type A algorithms should be used as a starting point. In the current work a dose-calculation experiment was performed comparing different prescription methods. Three cases with three different sizes of peripheral lung lesions were planned using three different treatment platforms. For each case, 60 Gy to the PTV was prescribed using a type A algorithm and the dose distribution was recalculated using a type B algorithm in order to evaluate the impact of secondary electron transport. Secondly, for each case a type B algorithm was used to prescribe 48 Gy to the PTV, and the resulting doses to the GTV were analyzed. Finally, prescriptions based on specific GTV dose volumes were evaluated. When using a type A algorithm to prescribe the same dose to the PTV, the differences in median GTV doses among platforms and cases were always less than 10% of the prescription dose. Prescription to the PTV based on type B algorithms leads to a greater variability of the median GTV dose among cases and among platforms (24% and 28%, respectively). However, when 54 Gy was prescribed as the median GTV dose using a type B algorithm, the variability observed was minimal. Normalizing the prescription dose to the median GTV dose for lung lesions avoids variability among different cases and treatment platforms in SBRT when type B algorithms are used to calculate the dose. The combination of using a type A algorithm to optimize a homogeneous dose in the PTV and using a type B algorithm to prescribe the

  2. Mobile application-based Seoul National University Prostate Cancer Risk Calculator: development, validation, and comparative analysis with two Western risk calculators in Korean men.

    Directory of Open Access Journals (Sweden)

    Chang Wook Jeong

    OBJECTIVES: We developed a mobile application-based Seoul National University Prostate Cancer Risk Calculator (SNUPC-RC) that predicts the probability of prostate cancer (PC) at the initial prostate biopsy in a Korean cohort. Additionally, the application was validated and subjected to head-to-head comparisons with internet-based Western risk calculators in a validation cohort. Here, we describe its development and validation. PATIENTS AND METHODS: As a retrospective study, consecutive men who underwent initial prostate biopsy with more than 12 cores at a tertiary center were included. In the development stage, 3,482 cases from May 2003 through November 2010 were analyzed. Clinical variables were evaluated, and the final prediction model was developed using a logistic regression model. In the validation stage, 1,112 cases from December 2010 through June 2012 were used. SNUPC-RC was compared with the European Randomized Study of Screening for PC Risk Calculator (ERSPC-RC) and the Prostate Cancer Prevention Trial Risk Calculator (PCPT-RC). The predictive accuracy was assessed using the area under the receiver operating characteristic curve (AUC). The clinical value was evaluated using decision curve analysis. RESULTS: PC was diagnosed in 1,240 (35.6%) and 417 (37.5%) men in the development and validation cohorts, respectively. Age, prostate-specific antigen level, prostate size, and abnormality on digital rectal examination or transrectal ultrasonography were significant factors for PC and were included in the final model. The predictive accuracy in the development cohort was 0.786. In the validation cohort, the AUC was significantly higher for the SNUPC-RC (0.811) than for the ERSPC-RC (0.768, p<0.001) and the PCPT-RC (0.704, p<0.001). Decision curve analysis also showed higher net benefits with SNUPC-RC than with the other calculators. CONCLUSIONS: SNUPC-RC has a higher predictive accuracy and clinical benefit than Western risk calculators. Furthermore, it is easy
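
    Under the hood, risk calculators of this kind are fitted logistic-regression models that map a patient's predictors to a probability. A schematic with invented coefficients (the actual SNUPC-RC weights are not given in this abstract):

```python
import numpy as np

def pc_risk(age, psa, volume, abnormal_dre_or_trus, beta=None):
    """Probability of a positive initial biopsy from a logistic model.
    All coefficient values below are illustrative placeholders."""
    if beta is None:
        beta = {"const": -3.0, "age": 0.03, "log_psa": 0.9,
                "log_vol": -1.1, "abnormal": 0.8}
    z = (beta["const"] + beta["age"] * age
         + beta["log_psa"] * np.log(psa)
         + beta["log_vol"] * np.log(volume)
         + beta["abnormal"] * abnormal_dre_or_trus)
    return 1.0 / (1.0 + np.exp(-z))

print(f"risk = {pc_risk(age=65, psa=6.5, volume=40, abnormal_dre_or_trus=1):.2f}")
```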

  3. Pressure Vessel Investigations of the Former Greifswald NPP: Fluence Calculations and Niobium Based Fluence Measurements

    International Nuclear Information System (INIS)

    Pressure vessel integrity assessment after long-term service irradiation is commonly based on surveillance program results. Nevertheless, only the investigation of RPV material from decommissioned NPPs enables evaluation of the real toughness response. Such a chance is now given through the investigation of material from the former Greifswald NPP (VVER-440/230), to evaluate the material state of a standard RPV design and to assess the quality of prediction rules and assessment tools. The operation of the four Greifswald units ended in 1991 after 12--15 years of operation. In autumn 2005 the first trepans (diameter 120 mm) were taken from unit 1 of this NPP. Some details of the trepanning procedure are given. The paper mainly deals with retrospective dosimetry based on niobium, which is a trace element of the RPV material. The reaction 93Nb(n,n')93mNb, with an energy dependence highly correlated to radiation damage and a half-life of the reaction product of 16.13 years, is well suited for retrospective fast neutron dosimetry. Fluence calculations using the code TRAMO were based on pin-wise, time-dependent neutron sources and an updated nuclear data base (ENDF/B-VI release 8). The neutron spectra were determined at the trepan positions. The different loading schemes of unit 1 (standard, and with 4 or 6 dummy assemblies) were taken into account. The calculated specific 93mNb activities for February 2006 at the sample positions were 16.3 Bq/μg Nb for sample 1 (0.1 cm from the inner wall) and 4.0 Bq/μg Nb for sample 2 (11.5 cm from the inner wall). Unfortunately, a second neutron reaction besides 93Nb(n,n') leading to 93mNb activity is the reaction 92Mo(n,γ)93Mo; 93Mo decays by electron capture to 93mNb with a half-life of 4000 years and a branching ratio br = 0.88. As (n,γ) reactions are produced mainly by low-energy neutrons, which are less important for material damage, the 93mNb activity generated through the Mo
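
    The link between a measured specific 93mNb activity and the fast fluence rests on a standard activation balance; a sketch of that relation (the flux and effective cross section are placeholders, and the 92Mo(n,γ) interference discussed above is deliberately ignored):

```python
import numpy as np

HALF_LIFE_S = 16.13 * 365.25 * 24 * 3600        # 93mNb half-life in seconds
LAMBDA = np.log(2.0) / HALF_LIFE_S

def specific_activity(phi, sigma_eff, t_irr, t_cool, atoms_per_ug=6.48e15):
    """Specific 93mNb activity (Bq per microgram of Nb) after irradiation at
    a fast flux phi (n/cm^2/s) with effective cross section sigma_eff (cm^2),
    for irradiation time t_irr and cooling time t_cool (both in s).
    Niobium is monoisotopic 93Nb, hence the fixed atoms-per-microgram value;
    the 92Mo(n,gamma)93Mo contribution is neglected in this sketch."""
    a_sat = atoms_per_ug * sigma_eff * phi       # saturation activity
    return a_sat * (1.0 - np.exp(-LAMBDA * t_irr)) * np.exp(-LAMBDA * t_cool)
```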

  4. First-principles calculation method for electron transport based on the grid Lippmann-Schwinger equation.

    Science.gov (United States)

    Egami, Yoshiyuki; Iwase, Shigeru; Tsukamoto, Shigeru; Ono, Tomoya; Hirose, Kikuji

    2015-09-01

    We develop a first-principles electron-transport simulator based on the Lippmann-Schwinger (LS) equation within the framework of the real-space finite-difference scheme. In our fully real-space-based LS (grid LS) method, the ratio expression technique for the scattering wave functions and the Green's function elements of the reference system is employed to avoid numerical collapse. Furthermore, we present analytical expressions and/or prominent calculation procedures for the retarded Green's function, which are utilized in the grid LS approach. In order to demonstrate the performance of the grid LS method, we simulate the electron-transport properties of the semiconductor-oxide interfaces sandwiched between semi-infinite jellium electrodes. The results confirm that the leakage current through the (001)Si-SiO2 model becomes much larger when the dangling-bond state is induced by a defect in the oxygen layer, while that through the (001)Ge-GeO2 model is insensitive to the dangling-bond state. PMID:26465580

  5. Design of Pd-Based Bimetallic Catalysts for ORR: A DFT Calculation Study

    Directory of Open Access Journals (Sweden)

    Lihui Ou

    2015-01-01

    Developing Pd-lean catalysts for the oxygen reduction reaction (ORR) is key for the large-scale application of proton exchange membrane fuel cells (PEMFCs). In the present paper, we propose a multiple-descriptor strategy for designing efficient and durable Pd-based alloy ORR catalysts. We demonstrate that an ideal Pd-based bimetallic alloy catalyst for the ORR should simultaneously possess a negative alloy formation energy, a negative surface segregation energy of Pd, and a lower oxygen binding ability than pure Pt. By performing detailed DFT calculations on the thermodynamics, surface chemistry and electronic properties of Pd-M alloys, Pd-V, Pd-Fe, Pd-Zn, Pd-Nb, and Pd-Ta are identified theoretically as having a stable Pd-segregated surface and improved ORR activity. Factors affecting these properties are analyzed. The alloy formation energy of Pd with a transition metal M is determined mainly by their electronic interaction, which may be the origin of the negative alloy formation energy of the Pd-M alloys. The surface segregation energy of Pd is determined primarily by the surface energy and the atomic radius of M: metals M with a smaller atomic radius and a higher surface energy tend to favor the surface segregation of Pd in the corresponding Pd-M alloys.

  6. Affine density in wavelet analysis

    CERN Document Server

    Kutyniok, Gitta

    2007-01-01

    In wavelet analysis, irregular wavelet frames have recently come to the forefront of current research due to questions concerning the robustness and stability of wavelet algorithms. A major difficulty in the study of these systems is the highly sensitive interplay between geometric properties of a sequence of time-scale indices and frame properties of the associated wavelet systems. This volume provides the first thorough and comprehensive treatment of irregular wavelet frames by introducing and employing a new notion of affine density as a highly effective tool for examining the geometry of sequences of time-scale indices. Many of the results are new and published for the first time. Topics include: qualitative and quantitative density conditions for existence of irregular wavelet frames, non-existence of irregular co-affine frames, the Nyquist phenomenon for wavelet systems, and approximation properties of irregular wavelet frames.

  7. Monte Carlo calculation based on hydrogen composition of the tissue for MV photon radiotherapy.

    Science.gov (United States)

    Demol, Benjamin; Viard, Romain; Reynaert, Nick

    2015-01-01

    The purpose of this study was to demonstrate that Monte Carlo treatment planning systems require tissue characterization (density and composition) as a function of CT number. A discrete set of tissue classes with a specific composition is introduced. In the current work we demonstrate that, for megavoltage photon radiotherapy, only the hydrogen content of the different tissues is of interest. This conclusion might have an impact on MRI-based dose calculations and on MVCT calibration using tissue substitutes. A stoichiometric calibration was performed, grouping tissues with similar atomic composition into 15 dosimetrically equivalent subsets. To demonstrate the importance of hydrogen, a new scheme was derived, with correct hydrogen content, complemented by oxygen (all elements differing from hydrogen are replaced by oxygen). Mass attenuation coefficients and mass stopping powers for this scheme were calculated and compared to the original scheme. Twenty-five CyberKnife treatment plans were recalculated by an in-house developed Monte Carlo system using tissue density and hydrogen content derived from the CT images. The results were compared to Monte Carlo simulations using the original stoichiometric calibration. Between 300 keV and 3 MeV, the relative difference of mass attenuation coefficients is under 1% within all subsets. Between 10 keV and 20 MeV, the relative difference of mass stopping powers goes up to 5% in hard bone and remains below 2% for all other tissue subsets. Dose-volume histograms (DVHs) of the treatment plans present no visual difference between the two schemes. Relative differences of dose indexes D98, D95, D50, D05, D02, and Dmean were analyzed and a distribution centered around zero and of standard deviation below 2% (3 σ) was established. On the other hand, once the hydrogen content is slightly modified, important dose differences are obtained. Monte Carlo dose planning in the field of megavoltage photon radiotherapy is fully achievable using
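
    A toy check of the hydrogen-plus-oxygen substitution described above can be done with the elemental mixture rule; the 1 MeV mass attenuation coefficients below are approximate textbook values quoted for illustration only, not reference data.

```python
# Keep the tissue's hydrogen mass fraction, replace everything else by
# oxygen, and compare mass attenuation via the mixture rule
#   (mu/rho)_mix = sum_i w_i (mu/rho)_i.
# The 1 MeV coefficients (cm^2/g) are approximate, for illustration.

MU_RHO_1MEV = {"H": 0.1263, "C": 0.0636, "N": 0.0636, "O": 0.0637,
               "P": 0.0617, "Ca": 0.0634}

def mu_rho(weight_fractions):
    return sum(w * MU_RHO_1MEV[el] for el, w in weight_fractions.items())

soft_tissue = {"H": 0.105, "C": 0.256, "N": 0.027, "O": 0.602,
               "P": 0.002, "Ca": 0.008}            # rough ICRU-like mix
h_o_scheme = {"H": soft_tissue["H"], "O": 1.0 - soft_tissue["H"]}

a, b = mu_rho(soft_tissue), mu_rho(h_o_scheme)
print(f"original {a:.5f}, H+O scheme {b:.5f} cm^2/g "
      f"({100 * (b - a) / a:+.2f}% difference)")   # sub-0.1% at 1 MeV
```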

  8. [Calculation on ecological security baseline based on the ecosystem services value and the food security].

    Science.gov (United States)

    He, Ling; Jia, Qi-jian; Li, Chao; Xu, Hao

    2016-01-01

    The rapid development of the coastal economy in Hebei Province caused a rapid transition of the coastal land use structure, which has threatened land ecological security. Therefore, calculating the ecosystem service value of land use and exploring the ecological security baseline can provide a basis for regional ecological protection and rehabilitation. Taking Huanghua, a city in the southeast of Hebei Province, as an example, this study explored the joint point, joint path and joint method between ecological security and food security, and then calculated the ecological security baseline of Huanghua City based on the ecosystem service value and the food security standard. The results showed that the ecosystem service values per unit area, from maximum to minimum, were in this order: wetland, water, garden, cultivated land, meadow, other land, salt pans, saline and alkaline land, constructive land. The contribution rates of the ecological function values, from high to low, were in this order: nutrient recycling, water conservation, entertainment and culture, material production, biodiversity maintenance, gas regulation, climate regulation and environmental purification. The security baseline of grain production was 0.21 kg · m⁻², the security baseline of grain output value was 0.41 yuan · m⁻², the baseline of ecosystem service value was 21.58 yuan · m⁻², and the total ecosystem service value in the research area was 4.244 billion yuan. In 2081 the ecological security will reach the bottom line, and the ecological system, in which humans are the subject, will be on the verge of collapse. According to the ecological security status, Huanghua can be divided into 4 zones, i.e., an ecological core protection zone, an ecological buffer zone, an ecological restoration zone and a human activity core zone. PMID:27228612
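
    The quoted baselines can be cross-checked directly; the short computation below derives the implied assessment area and the implied grain price from the figures in the abstract.

```python
# Consistency check of the quoted baselines: implied assessment area from
# the total ecosystem service value, and implied grain price from the two
# food-security baselines.

esv_baseline = 21.58          # yuan per m^2
esv_total = 4.244e9           # yuan
grain_mass = 0.21             # kg per m^2
grain_value = 0.41            # yuan per m^2

print(f"implied area : {esv_total / esv_baseline / 1e6:.0f} km^2")   # ~197 km^2
print(f"implied price: {grain_value / grain_mass:.2f} yuan/kg")      # ~1.95
```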

  9. Variations on calculating left-ventricular volume with the radionuclide count-based method

    International Nuclear Information System (INIS)

    Various methods for the calculation of left-ventricular volume by the count-based method, utilizing red-blood-cell labeling with 99mTc and a parallel-hole collimator, are evaluated. Attenuation correction, linked to an additional left posterior oblique view, is utilized for all 26 patients. The authors examine (1) two methods of calculating depth, (2) the use of a pair of attenuation coefficients, (3) the optimization of attenuation coefficients, and (4) the employment of an automated program for expansion of the region of interest. The standard error of the estimate (SEE) from the correlation of the radionuclide volumes with the contrast-angiography volumes, and the root-mean-square difference between the two volume sets at the minimum SEE, are computed. It is found that optimizing a single linear attenuation coefficient assumed for attenuation correction best reduces the value of the SEE. The average of the optimum value from the end-diastolic data and that from the end-systolic data is 0.11 cm-1. This value agrees with the mean-minus-one-standard-deviation value determined independently from computed tomography scans (0.13-0.02 cm-1). It is also found that expansion of the region of interest beyond the second-derivative edge with an automated program, in order to correctly include more counts, does not lower the SEE as hoped. This result is in contrast to the results of others with different data and a manual method. Possible causes for the difference are given.

  10. GIS supported calculations of (137)Cs deposition in Sweden based on precipitation data.

    Science.gov (United States)

    Almgren, Sara; Nilsson, Elisabeth; Erlandsson, Bengt; Isaksson, Mats

    2006-09-15

    It is of interest to know the spatial variation and the amount of (137)Cs, e.g. in case of an accident with a radioactive discharge. In this study, the spatial distribution of the quarterly (137)Cs deposition over Sweden due to nuclear weapons fallout (NWF) during the period 1962-1966 was determined by relating the measured deposition density at a reference site to the amount of precipitation. Measured quarterly values of (137)Cs deposition density per unit precipitation at three reference sites, and quarterly precipitation at 62 weather stations distributed over Sweden, were used in the calculations. The reference sites were assumed to represent areas with different quarterly mean precipitation. The extent of these areas was determined from the distribution of the mean measured precipitation between 1961 and 1990 and varied according to seasonal variations in the mean precipitation pattern. Deposition maps were created by interpolation within a geographical information system (GIS). Both integrated (total) and cumulative (decay-corrected) deposition densities were calculated. The lowest levels of NWF (137)Cs deposition density were noted in the north-eastern and eastern parts of Sweden, and the highest levels in the western parts. Furthermore, the deposition density of (137)Cs resulting from the Chernobyl accident was determined for an area in western Sweden based on precipitation data. The highest levels of Chernobyl (137)Cs in western Sweden were found in the western parts of the area along the coast, and the lowest in the east. The sum of the deposition densities from NWF and Chernobyl in western Sweden was then compared to the total activity measured in soil samples at 27 locations. The predicted values of this study show good agreement with the measured values and with other studies. PMID:16647743
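
    A minimal sketch of the decay-corrected (cumulative) deposition described above: quarterly deposition is precipitation times the measured deposition density per unit precipitation, decay-corrected to a reference date. All input numbers below are invented for illustration.

```python
import math

HALF_LIFE_Y = 30.05                        # 137Cs half-life (years)
LAMBDA = math.log(2) / HALF_LIFE_Y

def cumulative_deposition(quarters, ref_year):
    """quarters: iterable of (year, precip_mm, dep_per_mm in Bq/m^2/mm)."""
    return sum(precip * dpm * math.exp(-LAMBDA * (ref_year - year))
               for year, precip, dpm in quarters)

# invented quarterly values: (mid-quarter year, precipitation, Bq/m^2 per mm)
quarters = [(1962.50, 180.0, 1.2), (1963.00, 150.0, 2.0), (1963.50, 210.0, 1.5)]
print(f"{cumulative_deposition(quarters, 2006.0):.0f} Bq/m^2 in 2006")
```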

  11. Protein isolation using affinity chromatography

    OpenAIRE

    Besselink, T.

    2012-01-01

    Many product or even waste streams in the food industry contain components that may have potential for e.g. functional foods. These streams are typically large in volume and the components of interest are only present at low concentrations. A robust and highly selective separation process should be developed for efficient isolation of the components. Affinity chromatography is such a selective method. Ligands immobilized to a stationary phase (e.g., a resin or membrane) are used to bind the c...

  12. Inhomogeneous self-affine carpets

    OpenAIRE

    Fraser, Jonathan M.

    2013-01-01

    We investigate the dimension theory of inhomogeneous self-affine carpets. Through the work of Olsen, Snigireva and Fraser, the dimension theory of inhomogeneous self-similar sets is now relatively well-understood, however, almost no progress has been made concerning more general non-conformal inhomogeneous attractors. If a dimension is countably stable, then the results are immediate and so we focus on the upper and lower box dimensions and compute these explicitly for large classes of inhomo...

  13. An accurate calculation method of the power harmonic parameters based on the delay time theorem of Fourier transform

    Institute of Scientific and Technical Information of China (English)

    TANG Yi; FANG Yong-li; YANG Luo; SUN Yu-xin; YU Zheng-hua

    2012-01-01

    A new, accurate calculation method for electric power harmonic parameters is presented. Based on the delay time theorem of the Fourier transform, the fundamental frequency of the electric power signal is calculated first; then, using interpolation in the frequency domain of the windows, the parameters (amplitude and phase) of each harmonic component are calculated accurately. The effect of the delay time and of the windows on the harmonic calculation accuracy is analysed. Digital simulations and physical measurement tests show that the proposed method is effective and has advantages over other methods based on multipoint interpolation, especially in computation time; it is therefore well suited to implementation on a single-chip DSP microprocessor.
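
    The frequency step rests on the Fourier delay theorem: delaying a signal by D samples rotates the phase of each spectral component by 2πfD/fs. A generic sketch of a frequency estimate built on this fact follows; it is not the authors' exact algorithm, and the nominal frequency f_nom is assumed known in order to resolve the phase wrap.

```python
import numpy as np

fs, n, d = 3200.0, 512, 64            # sampling rate (Hz), window length, delay
f_nom, f_true = 50.0, 50.3            # nominal and actual fundamental
t = np.arange(n + d) / fs
x = np.sin(2 * np.pi * f_true * t + 0.4)

k = round(f_nom * n / fs)             # DFT bin nearest the fundamental
X1 = np.fft.rfft(x[:n])[k]            # window starting at sample 0
X2 = np.fft.rfft(x[d:d + n])[k]       # same window delayed by d samples
dphi = np.angle(X2 * np.conj(X1))     # wrapped phase advance over the delay
cycles = round(f_nom * d / fs)        # whole cycles elapsed (resolves wrap)
f_est = (dphi / (2 * np.pi) + cycles) * fs / d
print(f"estimated fundamental: {f_est:.4f} Hz")   # ~50.3
```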

  14. Nurse Staffing Calculation in the Emergency Department - Performance-Oriented Calculation Based on the Manchester Triage System at the University Hospital Bonn

    Science.gov (United States)

    Gräff, Ingo; Goldschmidt, Bernd; Glien, Procula; Klockner, Sophia; Erdfelder, Felix; Schiefer, Jennifer Lynn; Grigutsch, Daniel

    2016-01-01

    Background To date, there are no valid statistics regarding the number of full-time staff necessary for nursing care in emergency departments in Europe. Material and Methods Staff requirement calculations were performed using state-of-the-art procedures which take both fluctuating patient volume and individual staff shortfall rates into consideration. In a longitudinal observational study, the average nursing staff engagement time per patient was assessed for 503 patients. For this purpose, a full-time staffing calculation was estimated based on the five priority levels of the Manchester Triage System (MTS), taking into account specific workload fluctuations (50th-95th percentiles). Results Patients classified in the MTS category red (n = 35) required the most engagement time, with an average of 97.93 min per patient. On weighted average, orange MTS category patients (n = 118) required nursing staff for 85.07 min, and patients in the yellow MTS category (n = 181) for 40.95 min, while the two MTS categories with the least acute patients, green (n = 129) and blue (n = 40), required 23.18 min and 14.99 min engagement time per patient, respectively. Individual staff shortfall due to sick days and vacation time was 20.87% of the total working hours. Extrapolating this to 21,899 emergency patients (2010), 67-123 emergency patients (50th-95th percentile) per month can be seen by one nurse. The calculated full-time staffing requirement, depending on the percentile, was 14.8 to 27.1. Conclusion Performance-oriented staff planning offers an objective instrument for calculating the full-time nursing staff required in emergency departments. PMID:27138492
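
    The staffing arithmetic can be reproduced approximately from the figures quoted in the abstract; the net annual working hours per full-time nurse used below is an assumption, not a number from the study.

```python
# Reconstruction of the staffing arithmetic quoted above. The net annual
# working hours per full-time nurse (fte_hours) is an assumption.

minutes = {"red": 97.93, "orange": 85.07, "yellow": 40.95,
           "green": 23.18, "blue": 14.99}          # engagement min/patient
counts = {"red": 35, "orange": 118, "yellow": 181, "green": 129, "blue": 40}

per_patient = sum(minutes[c] * counts[c] for c in minutes) / sum(counts.values())
workload_h = 21899 * per_patient / 60              # 2010 patient volume
workload_h /= 1 - 0.2087                           # add 20.87% staff shortfall
fte_hours = 1540                                   # assumed net hours/FTE/year
print(f"{per_patient:.1f} min/patient -> {workload_h / fte_hours:.1f} FTE")
# ~48.6 min/patient -> ~14.6 FTE, near the published lower bound of 14.8
```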

  15. Impact of Heterogeneity-Based Dose Calculation Using a Deterministic Grid-Based Boltzmann Equation Solver for Intracavitary Brachytherapy

    International Nuclear Information System (INIS)

    Purpose: To investigate the dosimetric impact of the heterogeneity dose calculation algorithm Acuros (Transpire Inc., Gig Harbor, WA), a grid-based Boltzmann equation solver (GBBS), for brachytherapy in a cohort of cervical cancer patients. Methods and Materials: The impact of heterogeneities was retrospectively assessed in treatment plans for 26 patients who had previously received 192Ir intracavitary brachytherapy for cervical cancer with computed tomography (CT)/magnetic resonance-compatible tandems and unshielded colpostats. The GBBS models sources, patient boundaries, applicators, and tissue heterogeneities. Multiple GBBS calculations were performed with and without the solid model applicator, with and without overriding the patient contour to 1 g/cm3 muscle, and with and without overriding contrast materials to muscle or 2.25 g/cm3 bone. The impacts of source and boundary modeling, the applicator, tissue heterogeneities, and the sensitivity of CT-to-material mapping of contrast were derived from the multiple calculations. American Association of Physicists in Medicine Task Group 43 (TG-43) guidelines and the GBBS were compared for the following clinical dosimetric parameters: Manchester points A and B, International Commission on Radiation Units and Measurements (ICRU) report 38 rectal and bladder points, three and nine o'clock, and D2cm3 to the bladder, rectum, and sigmoid. Results: Points A and B, D2cm3 bladder, ICRU bladder, and three and nine o'clock were within 5% of TG-43 for all GBBS calculations. The source and boundary modeling and the applicator account for most of the differences between the GBBS and TG-43 guidelines. The D2cm3 rectum (n = 3), D2cm3 sigmoid (n = 1), and ICRU rectum (n = 6) had differences of >5% from TG-43 for the worst-case incorrect mapping of contrast to bone. Clinical dosimetric parameters were within 5% of TG-43 when rectal and balloon contrast were mapped to bone and radiopaque packing was not overridden. Conclusions: The GBBS has minimal impact on clinical

  16. Design calculations of a D(d,n) reaction based PGNAA set up

    International Nuclear Information System (INIS)

    An accelerator-based prompt gamma-ray neutron activation analysis (PGNAA) setup has been developed at the 350 keV Accelerator Laboratory of King Fahd University of Petroleum and Minerals (KFUPM) for the analysis of cement samples. The setup mainly consists of a cylindrical cement sample holder enclosed in a cylindrical high-density polyethylene moderator and placed between a γ-ray detector and a 2.8 MeV neutron source. The 2.8 MeV neutrons are moderated to thermal energies and captured by the atoms of the cement sample. The prompt γ-rays emitted by the cement sample nuclei are detected by a NaI detector to determine the concentration of each element of interest. The design of the setup was obtained through Monte Carlo simulations of a 2.8 MeV neutron based PGNAA setup for elemental analysis of cement samples. The simulations were conducted to optimize the size of the moderator, the sample and the detector shielding so as to obtain the maximum prompt γ-ray yield at the detector. In order to verify the results of the design calculations, the thermal neutron intensity and the prompt γ-ray yield were measured using the PGNAA facility. In both studies a pulsed 200 keV deuteron beam with 5 ns width and 30 kHz repetition rate was used to produce 2.8 MeV neutrons via the D(d,n) reaction. The thermal neutron intensity was measured as a function of moderator thickness using nuclear track detectors (NTD). Good agreement was found between the thermal neutron intensity measurements and the Monte Carlo calculations. Finally, a prompt γ-ray spectrum was acquired from a Portland cement sample using a 25.4 cm x 25.4 cm (diameter x thickness) NaI detector. Well-resolved prompt γ-ray peaks from thermal neutron capture in calcium, silicon, and iron were detected, indicating satisfactory performance of the PGNAA facility. (Author)

  17. Determination of trace glucose and forecast of human diseases by affinity adsorption solid substrate room temperature phosphorimetry based on Triticum vulgaris lectin labeled with 4.0-generation dendrimers

    Science.gov (United States)

    Li, Zhiming; Zhu, Guohui; Liu, Jiaming; Lu, Qiaomei; Yang, Minlan; Wu, Hong; Shi, Xiumei; Chen, Xinhua

    2007-08-01

    A new phosphorescence labeling reagent, Triton-100X-4.0G-D (4.0G-D refers to 4.0-generation dendrimers), was found. A quantitative, specific affinity adsorption (AA) reaction between Triton-100X-4.0G-D-WGA (WGA: wheat germ agglutinin, the Triticum vulgaris lectin) and glucose (G) was carried out on the surface of a nitrocellulose membrane (NCM), and the ΔIp of the product of the AA reaction was linearly correlated with the content of G. On this basis, a new method for the determination of trace G was established using WGA labeled with Triton-100X-4.0G-D affinity adsorption solid substrate room temperature phosphorimetry (Triton-100X-4.0G-D-WGA-AA-SS-RTP). This research showed that AA-SS-RTP, in either the direct or the sandwich format, combines the high sensitivity of SS-RTP with the specificity of the AA reaction. Detection limits (LD) were 0.24 fg spot⁻¹ for the direct method and 0.18 fg spot⁻¹ for the sandwich method, indicating that both are highly sensitive. The method has been applied to the determination of the content of G in human serum, and the results coincided with those obtained by the glucose oxidase method. It can also be applied to accurately forecast certain human diseases, such as primary hepatic carcinoma, cirrhosis, acute and chronic hepatitis, metastatic hepatocellular carcinoma, etc. The mechanism of the determination of G by AA-SS-RTP is also discussed.

  18. Affinity-Based Screening of Tetravalent Peptides Identifies Subtype-Selective Neutralizers of Shiga Toxin 2d, a Highly Virulent Subtype, by Targeting a Unique Amino Acid Involved in Its Receptor Recognition.

    Science.gov (United States)

    Mitsui, Takaaki; Watanabe-Takahashi, Miho; Shimizu, Eiko; Zhang, Baihao; Funamoto, Satoru; Yamasaki, Shinji; Nishikawa, Kiyotaka

    2016-09-01

    Shiga toxin (Stx), a major virulence factor of enterohemorrhagic Escherichia coli (EHEC), can be classified into two subgroups, Stx1 and Stx2, each consisting of various closely related subtypes. Stx2 subtypes Stx2a and Stx2d are highly virulent and linked with serious human disorders, such as acute encephalopathy and hemolytic-uremic syndrome. Through affinity-based screening of a tetravalent peptide library, we previously developed peptide neutralizers of Stx2a in which the structure was optimized to bind to the B-subunit pentamer. In this study, we identified Stx2d-selective neutralizers by targeting Asn16 of the B subunit, an amino acid unique to Stx2d that plays an essential role in receptor binding. We synthesized a series of tetravalent peptides on a cellulose membrane in which the core structure was exactly the same as that of peptides in the tetravalent library. A total of nine candidate motifs were selected to synthesize tetravalent forms of the peptides by screening two series of the tetravalent peptides. Five of the tetravalent peptides effectively inhibited the cytotoxicity of Stx2a and Stx2d, and notably, two of the peptides selectively inhibited Stx2d. These two tetravalent peptides bound to the Stx2d B subunit with high affinity dependent on Asn16. The mechanism of binding to the Stx2d B subunit differed from that of binding to Stx2a in that the peptides covered a relatively wide region of the receptor-binding surface. Thus, this highly optimized screening technique enables the development of subtype-selective neutralizers, which may lead to more sophisticated treatments of infections by Stx-producing EHEC. PMID:27382021

  19. Comparison of CT number calibration techniques for CBCT-based dose calculation

    Energy Technology Data Exchange (ETDEWEB)

    Dunlop, Alex [The Royal Marsden NHS Foundation Trust, Joint Department of Physics, Institute of Cancer Research, London (United Kingdom); The Royal Marsden Hospital, Sutton, Surrey, Downs Road (United Kingdom); McQuaid, Dualta; Nill, Simeon; Hansen, Vibeke N.; Oelfke, Uwe [The Royal Marsden NHS Foundation Trust, Joint Department of Physics, Institute of Cancer Research, London (United Kingdom); Murray, Julia; Bhide, Shreerang; Harrington, Kevin [The Royal Marsden Hospital, Sutton, Surrey, Downs Road (United Kingdom); The Institute of Cancer Research, London (United Kingdom); Poludniowski, Gavin [Karolinska University Hospital, Department of Medical Physics, Stockholm (Sweden); Nutting, Christopher [The Institute of Cancer Research, London (United Kingdom); Newbold, Kate [The Royal Marsden Hospital, Sutton, Surrey, Downs Road (United Kingdom)

    2015-12-15

    The aim of this work was to compare and validate various computed tomography number (CTN) calibration techniques with respect to cone beam CT (CBCT) dose calculation accuracy. CBCT dose calculation accuracy was assessed for pelvic, lung, and head and neck (H and N) treatment sites for two approaches: (1) physics-based scatter correction methods (CBCTr); (2) density override approaches, including assigning water density to the entire CBCT (W), assignment of either water or bone density (WB), and assignment of either water or lung density (WL). Methods for CBCT density assignment within a commercially available treatment planning system (RSauto), where CBCT voxels are binned into six density levels, were assessed and validated. Dose-difference maps and dose-volume statistics were used to compare the CBCT dose distributions with the ground truth of a planning CT acquired the same day as the CBCT. For pelvic cases, all CTN calibration methods resulted in average dose-volume deviations below 1.5%. RSauto produced larger than average errors for pelvic treatments in patients with large amounts of adipose tissue. For H and N cases, all CTN calibration methods resulted in average dose-volume differences below 1.0%, with CBCTr (0.5%) and RSauto (0.6%) performing best. For lung cases, the WL and RSauto methods generated dose distributions most similar to the ground truth. The RSauto density override approach is an attractive option for CTN adjustments for a variety of anatomical sites. RSauto methods were validated, resulting in dose calculations that were consistent with those calculated on diagnostic-quality CT images, for CBCT images acquired of the lung, for patients receiving pelvic RT in cases without excess adipose tissue, and for H and N cases.

  20. Model-based dose calculations for COMS eye plaque brachytherapy using an anatomically realistic eye phantom

    Energy Technology Data Exchange (ETDEWEB)

    Lesperance, Marielle; Inglis-Whalen, M.; Thomson, R. M., E-mail: rthomson@physics.carleton.ca [Carleton Laboratory for Radiotherapy Physics, Department of Physics, Carleton University, Ottawa K1S 5B6 (Canada)

    2014-02-15

    Purpose: To investigate the effects of the composition and geometry of ocular media and tissues surrounding the eye on dose distributions for COMS eye plaque brachytherapy with 125I, 103Pd, or 131Cs seeds, and to investigate doses to ocular structures. Methods: An anatomically and compositionally realistic voxelized eye model with a medial tumor is developed based on a literature review. Mass energy absorption and attenuation coefficients for ocular media are calculated. Radiation transport and dose deposition are simulated using the EGSnrc Monte Carlo user-code BrachyDose for a fully loaded COMS eye plaque within a water phantom and our full eye model for the three radionuclides. A TG-43 simulation with the same seed configuration in a water phantom, neglecting the plaque and interseed effects, is also performed. The impact on dose distributions of varying tumor position, as well as tumor and surrounding tissue media, is investigated. Each simulation and radionuclide is compared using isodose contours, dose volume histograms for the lens and tumor, maximum, minimum, and average doses to structures of interest, and doses to voxels of interest within the eye. Results: Mass energy absorption and attenuation coefficients of the ocular media differ from those of water by as much as 12% within the 20–30 keV photon energy range. For all radionuclides studied, average doses to the tumor and lens regions in the full eye model differ from those for the plaque in water by 8%–10% and 13%–14%, respectively; the average doses to the tumor and lens regions differ between the full eye model and the TG-43 simulation by 2%–17% and 29%–34%, respectively. Replacing the surrounding tissues in the eye model with water increases the maximum and average doses to the lens by 2% and 3%, respectively. Substituting the tumor medium in the eye model for water, soft tissue, or an alternate melanoma composition affects tumor dose compared to the default eye model simulation by up to 16%

  2. Manifolds with integrable affine shape operator

    Directory of Open Access Journals (Sweden)

    Daniel A. Joaquín

    2005-05-01

    This work establishes conditions for the existence of vector fields whose covariant derivative with respect to the affine normal connection is the affine shape operator S in hypersurfaces. Some results are obtained from this property; in particular, for certain affine decomposable hypersurfaces we explicitly obtain the actual vector fields.

  3. The future of new calculation concepts in dosimetry based on the Monte Carlo Methods

    Energy Technology Data Exchange (ETDEWEB)

    Makovicka, L.; Vasseur, A.; Sauget, M.; Martin, E.; Gschwind, R.; Henriet, J. [Universite de Franche-Comte, Equipe IRMA/ENISYS/FEMTO-ST, UMR6174 CNRS, 25 - Montbeliard (France); Vasseur, A.; Sauget, M.; Martin, E.; Gschwind, R.; Henriet, J.; Salomon, M. [Universite de Franche-Comte, Equipe AND/LIFC, 90 - Belfort (France)

    2009-01-15

    Monte Carlo codes, precise but slow, are very important tools in the vast majority of specialities connected to radiation physics, radiation protection and dosimetry. A discussion of some other computing solutions is carried out: solutions based not only on the enhancement of computer power, or on the 'biasing' used for the relative acceleration of these codes (in the case of photons), but also on more efficient methods (A.N.N., artificial neural networks; C.B.R., case-based reasoning; or other computer science techniques) that have long been used successfully in other scientific and industrial applications, and not only in radiation protection or medical dosimetry. (authors)

  4. Ruthenia-based electrochemical supercapacitors: insights from first-principles calculations.

    Science.gov (United States)

    Ozoliņš, Vidvuds; Zhou, Fei; Asta, Mark

    2013-05-21

    Electrochemical supercapacitors (ECs) have important applications in areas where the need for fast charging rates and high energy density intersect, including hybrid and electric vehicles, consumer electronics, solar-cell-based devices, and other technologies. In contrast to carbon-based supercapacitors, where energy is stored in the electrochemical double layer at the electrode/electrolyte interface, ECs involve reversible faradaic ion intercalation into the electrode material. However, this intercalation does not lead to a phase change. As a result, ECs can be charged and discharged for thousands of cycles without loss of capacity. ECs based on hydrous ruthenia, RuO2·xH2O, exhibit some of the highest specific capacitances attained in real devices. Although RuO2 is too expensive for widespread practical use, chemists have long used it as a model material for investigating the fundamental mechanisms of electrochemical supercapacitance and heterogeneous catalysis. In this Account, we discuss progress in first-principles density-functional theory (DFT) based studies of the electronic structure, thermodynamics, and kinetics of hydrous and anhydrous RuO2. We find that DFT correctly reproduces the metallic character of the RuO2 band structure. In addition, electron-proton double insertion into bulk RuO2 leads to the formation of a polar covalent O-H bond with a fractional increase of the Ru charge in delocalized d-band states by only 0.3 electrons. This is in slight conflict with the common assumption of a Ru valence change from Ru(4+) to Ru(3+). Using the prototype electrostatic ground state (PEGS) search method, we predict a crystalline RuOOH compound with a formation energy of only 0.15 eV per proton. The calculated voltage for the onset of bulk proton insertion in the dilute limit is only 0.1 V with respect to the reversible hydrogen electrode (RHE), in reasonable agreement with the 0.4 V threshold for a large diffusion-limited contribution measured experimentally

  5. A Quick and Affine Invariance Matching Method for Oblique Images

    Directory of Open Access Journals (Sweden)

    XIAO Xiongwu

    2015-04-01

    This paper proposes a quick, affine-invariant matching method for oblique images. It calculates an initial affine matrix by making full use of the two estimated camera axis orientation parameters of an oblique image, recovers the oblique image to a rectified image by applying the inverse affine transform, and then applies the SIFT method to the rectified images. We used the nearest neighbor distance ratio (NNDR) and normalized cross correlation (NCC) measure constraints, plus a consistency check, to obtain the coarse matches, then used the RANSAC method to calculate the fundamental matrix and the homography matrix. We kept the matches that were interior points when calculating the homography matrix and computed the average value of their principal direction differences. During the matching process, we obtained the initial matching features with the nearest neighbor (NN) matching strategy, then used epipolar constraints, homography constraints, NCC measure constraints, and a consistency check of the initial matches' principal direction differences against the computed average of the interior matches' principal direction differences to eliminate false matches. Experiments conducted on three pairs of typical oblique images demonstrate that our method takes about the same time as SIFT to match a pair of oblique images, with plenty of corresponding points distributed evenly and an extremely low mismatching rate.
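
    A minimal OpenCV sketch of such a pipeline follows (rectify with an affine matrix built from assumed tilt/roll angles, SIFT on the rectified images, NNDR test, then RANSAC). The affine construction and the file names are illustrative stand-ins, not the paper's exact formulation.

```python
import cv2
import numpy as np

def rectify(img, tilt_deg, roll_deg):
    """Undo roll and tilt foreshortening with a 2x3 affine warp (simplified)."""
    r = np.deg2rad(roll_deg)
    c, s = np.cos(r), np.sin(r)
    sy = np.cos(np.deg2rad(tilt_deg))          # foreshortening along view
    A = np.array([[c, -s * sy, 0.0], [s, c * sy, 0.0]])
    h, w = img.shape[:2]
    return cv2.warpAffine(img, A, (w, h))

img1 = rectify(cv2.imread("oblique_left.png", 0), 35, 10)    # hypothetical files
img2 = rectify(cv2.imread("oblique_right.png", 0), 35, -5)

sift = cv2.SIFT_create()
k1, d1 = sift.detectAndCompute(img1, None)
k2, d2 = sift.detectAndCompute(img2, None)

# NNDR test (0.75 is the conventional Lowe threshold)
good = [m for m, n in cv2.BFMatcher().knnMatch(d1, d2, k=2)
        if m.distance < 0.75 * n.distance]
p1 = np.float32([k1[m.queryIdx].pt for m in good])
p2 = np.float32([k2[m.trainIdx].pt for m in good])

F, mask = cv2.findFundamentalMat(p1, p2, cv2.FM_RANSAC, 3.0, 0.99)
print(len(good), "NNDR matches,", int(mask.sum()), "RANSAC inliers")
```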

  6. Online Verification of Control Parameter Calculations in Communication Based Train Control System

    CERN Document Server

    Bu, Lei; Wang, Linzhang; Li, Xuandong

    2011-01-01

    The Communication Based Train Control (CBTC) system is the state-of-the-art train control system. In a CBTC system, to guarantee the safety of train operation, trains communicate with each other intensively and adjust their control modes autonomously by computing critical control parameters, e.g. the velocity range, according to the information they receive. As the correctness of the generated control parameters is critical to the safety of the system, a method to verify these parameters is strongly desired in the area of train control systems. In this paper, we present our ideas on how to model and verify the control parameter calculations in a CBTC system efficiently. - As the behavior of the system is highly nondeterministic, it is difficult to build and verify the complete behavior-space model of the system online in advance. Thus, we propose to model the system according to the ongoing behavior model induced by the control parameters. - As the parameters are generated online and updated very quickly, the verification...

  7. Photon fluence-to-effective dose conversion coefficients calculated from a Saudi population-based phantom

    Science.gov (United States)

    Ma, A. K.; Altaher, K.; Hussein, M. A.; Amer, M.; Farid, K. Y.; Alghamdi, A. A.

    2014-02-01

    In this work we present a new set of photon fluence-to-effective dose conversion coefficients using the Saudi population-based voxel phantom developed recently by our group. The phantom corresponds to an average Saudi male, 173 cm tall and weighing 77 kg. There are over 125 million voxels in the phantom, each of which is 1.37×1.37×1.00 mm3. Of the 27 organs and tissues of radiological interest specified in the recommendations of ICRP Publication 103, all but the oral mucosa, extrathoracic tissue and the lymph nodes were identified in the current version of the phantom. The bone surface (endosteum), at about 10 μm thick, is too thin to be identifiable; the dose to the endosteum was therefore approximated by the dose to the bones. Irradiation geometries included anterior-posterior (AP), left lateral (LLAT) and rotational (ROT). The simulations were carried out with the MCNPX code version 2.5.0. The fluence in free air and the energy depositions in each organ were calculated for monoenergetic photon beams from 10 keV to 10 MeV to obtain the conversion coefficients. The radiation and tissue weighting factors were taken from ICRP Publications 60 and 103. The results from this study are also compared with the conversion coefficients in ICRP Publication 116.
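
    For orientation, a toy version of the bookkeeping that turns Monte Carlo organ doses into a conversion coefficient (effective dose per unit fluence): E = Σ_T w_T·H_T, with the radiation weighting factor w_R = 1 for photons. The organ doses and the fluence below are invented, and only a subset of the ICRP 103 tissue weights is used.

```python
# Toy assembly of a fluence-to-effective dose conversion coefficient.
# Organ doses and fluence are invented; W_T is a subset of ICRP 103.

W_T = {"lung": 0.12, "stomach": 0.12, "colon": 0.12,
       "gonads": 0.08, "bladder": 0.04, "remainder": 0.12}

organ_dose_pGy = {"lung": 4.1, "stomach": 4.5, "colon": 4.3,
                  "gonads": 4.8, "bladder": 4.4, "remainder": 4.2}

fluence_cm2 = 1.0                                  # photons/cm^2 (normalised)
E = sum(W_T[t] * organ_dose_pGy[t] for t in W_T)   # w_R = 1 for photons
print(f"e = {E / fluence_cm2:.2f} pSv cm^2")
```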

  8. Improvement of Power Flow Calculation with Optimization Factor Based on Current Injection Method

    Directory of Open Access Journals (Sweden)

    Lei Wang

    2014-01-01

    This paper presents an improvement to power flow calculation based on the current injection method, introducing an optimization factor. In the proposed method, the PQ buses are represented by current mismatches while the PV buses are represented by power mismatches, which differs from the representations in conventional current injection power flow equations. By using the combined power and current injection mismatches, the number of equations required can be decreased to only one per PV bus. The optimization factor is used to improve the iteration process and to ensure the effectiveness of the proposed method when the system is ill-conditioned. To verify its effectiveness, the IEEE test systems were solved by the conventional current injection method and by the improved method, and the results were compared. The comparisons show that the optimization factor improves the convergence behavior effectively; in particular, at high loading levels and high R/X ratios the iteration count is one to two lower than with the conventional current injection method, and under severe overloading conditions the proposed method requires about four times fewer iterations.
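
    The role of the optimization factor can be illustrated on a deliberately simple case: a Newton-style update x ← x + λ·dx where the step length λ is picked by a coarse line search on the mismatch norm instead of being fixed at 1. The two-bus system below is invented and the paper's current-injection formulation is not reproduced; only the damping mechanism is shown.

```python
import numpy as np

Y = 2.0 - 8.0j             # line admittance (p.u.), slack bus fixed at 1+0j
S_LOAD = 0.9 + 0.4j        # heavy PQ load, to make damping matter

def mismatch(x):
    """Active/reactive power mismatch at the single PQ bus."""
    v2 = x[0] * np.exp(1j * x[1])
    s_inj = v2 * np.conj(Y * (1.0 - v2))       # power flowing into bus 2
    f = s_inj - S_LOAD
    return np.array([f.real, f.imag])

def jacobian(x, eps=1e-7):                     # numerical Jacobian suffices here
    J = np.empty((2, 2))
    for i in range(2):
        d = np.zeros(2)
        d[i] = eps
        J[:, i] = (mismatch(x + d) - mismatch(x - d)) / (2 * eps)
    return J

x = np.array([1.0, 0.0])                       # flat start: |V2| = 1, angle 0
for it in range(20):
    f = mismatch(x)
    if np.linalg.norm(f) < 1e-10:
        break
    dx = np.linalg.solve(jacobian(x), -f)
    # "optimization factor": coarse line search instead of the full step
    lam = min((1.0, 0.7, 0.4, 0.1),
              key=lambda l: np.linalg.norm(mismatch(x + l * dx)))
    x = x + lam * dx
print(f"{it} iterations: |V2| = {x[0]:.4f}, angle = {x[1]:.4f} rad")
```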

  9. Fission yield calculation using toy model based on Monte Carlo simulation

    Energy Technology Data Exchange (ETDEWEB)

    Jubaidah, E-mail: jubaidah@student.itb.ac.id [Nuclear Physics and Biophysics Division, Department of Physics, Bandung Institute of Technology. Jl. Ganesa No. 10 Bandung – West Java, Indonesia 40132 (Indonesia); Physics Department, Faculty of Mathematics and Natural Science – State University of Medan. Jl. Willem Iskandar Pasar V Medan Estate – North Sumatera, Indonesia 20221 (Indonesia); Kurniadi, Rizal, E-mail: rijalk@fi.itb.ac.id [Nuclear Physics and Biophysics Division, Department of Physics, Bandung Institute of Technology. Jl. Ganesa No. 10 Bandung – West Java, Indonesia 40132 (Indonesia)

    2015-09-30

    The toy model is a new approximation for predicting fission yield distributions. The toy model treats the nucleus as an elastic toy consisting of marbles, where the number of marbles represents the number of nucleons, A. This toy nucleus is able to imitate the properties of a real nucleus. In this research, the toy nucleons are only influenced by a central force, and energy entanglement is neglected. A heavy toy nucleus induced by a toy nucleon splits into two fragments; these two fission fragments constitute the fission yield. The fission process in the toy model is illustrated by two Gaussian curves intersecting each other. Five Gaussian parameters are used in this research: the scission point of the two curves (Rc), the means of the left and right curves (μL, μR), and the deviations of the left and right curves (σL, σR). The fission yield distribution is analyzed based on Monte Carlo simulation. The results show that variation in σ or μ can significantly shift the average frequency of asymmetric fission yields and also varies the range of the fission yield distribution probability, whereas variation in the iteration coefficient only changes the frequency of fission yields. Monte Carlo simulation of the fission yield calculation using the toy model successfully indicates the same tendency as experimental results, where the average light fission yield is in the range of 90
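
    A minimal numerical sketch of the double-Gaussian picture follows; all parameter values are invented, and the scission-point parameter Rc is left implicit in this simplified sampling.

```python
import numpy as np

rng = np.random.default_rng(0)
A_TOTAL = 236                        # e.g. 235U + n
MU_L, MU_R = 96.0, 140.0             # light/heavy peak positions (invented)
SIG_L, SIG_R = 5.5, 5.5              # curve widths (invented)

def sample_fragments(n):
    """Draw fragment masses from the two Gaussians and pair complements."""
    from_left = rng.random(n) < 0.5
    a = np.where(from_left,
                 rng.normal(MU_L, SIG_L, n),
                 rng.normal(MU_R, SIG_R, n))
    return np.concatenate([a, A_TOTAL - a])   # mass conservation per event

masses = np.round(sample_fragments(200_000)).astype(int)
hist = np.bincount(masses, minlength=A_TOTAL)
peak = int(hist.argmax())
print(f"peak yield at A = {peak}; complementary peak at A = {A_TOTAL - peak}")
```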

  10. Prediction of transport properties of new functional lanthanum-strontium cuprates based materials: molecular dynamics calculations

    International Nuclear Information System (INIS)

    The molecular dynamics method is used to predict the properties of new functional materials based on the lanthanum-strontium cuprates La2-xSrxCuO4-δ. The most interesting phases have been synthesized, and their electrophysical and thermomechanical properties have been investigated to verify the calculated data. High oxygen diffusion constants are shown to occur in La2-xSrxCuO4-δ solid solutions with a high degree of Sr→La substitution (up to x=1). The values of the lattice parameters, thermal expansion coefficients and oxygen diffusion constants agree with experimental data. The anion-transport anisotropy observed for all studied compositions reflects peculiarities of the crystal structure of these complex oxides. The applied molecular dynamics method makes it possible to reveal, at the microscopic level, the contributions of the separate kinds of oxygen ions (equatorial and apical) to ionic transport, and to show that oxygen diffusion proceeds by an ordinary jump mechanism, mainly within the (CuO2) layers.

  11. Joint kinematic calculation based on clinical direct kinematic versus inverse kinematic gait models.

    Science.gov (United States)

    Kainz, H; Modenese, L; Lloyd, D G; Maine, S; Walsh, H P J; Carty, C P

    2016-06-14

    Most clinical gait laboratories use the conventional gait analysis model. This model uses a computational method called Direct Kinematics (DK) to calculate joint kinematics. In contrast, musculoskeletal modelling approaches use Inverse Kinematics (IK) to obtain joint angles. IK allows additional analysis (e.g. muscle-tendon length estimates), which may provide valuable information for clinical decision-making in people with movement disorders. The twofold aims of the current study were: (1) to compare joint kinematics obtained by a clinical DK model (Vicon Plug-in-Gait) with those produced by a widely used IK model (available with the OpenSim distribution), and (2) to evaluate the difference in joint kinematics that can be solely attributed to the different computational methods (DK versus IK), anatomical models and marker sets by using MRI based models. Eight children with cerebral palsy were recruited and presented for gait and MRI data collection sessions. Differences in joint kinematics up to 13° were found between the Plug-in-Gait and the gait 2392 OpenSim model. The majority of these differences (94.4%) were attributed to differences in the anatomical models, which included different anatomical segment frames and joint constraints. Different computational methods (DK versus IK) were responsible for only 2.7% of the differences. We recommend using the same anatomical model for kinematic and musculoskeletal analysis to ensure consistency between the obtained joint angles and musculoskeletal estimates. PMID:27139005
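
    The difference between the two computational methods can be illustrated with a toy planar model: IK fits the joint angles of a rigid chain to all markers in one global least-squares problem, whereas a DK-style estimate derives each angle locally from its own marker pair, so marker noise propagates directly. The segment lengths and marker set below are invented for illustration.

```python
import numpy as np
from scipy.optimize import least_squares

L1, L2 = 0.4, 0.4                     # thigh/shank lengths (m), invented

def markers(q):
    """Hip, knee and ankle marker positions of a planar two-link chain."""
    hip = np.array([0.0, 0.0])
    knee = hip + L1 * np.array([np.sin(q[0]), -np.cos(q[0])])
    ankle = knee + L2 * np.array([np.sin(q[0] + q[1]), -np.cos(q[0] + q[1])])
    return np.concatenate([hip, knee, ankle])

q_true = np.array([0.3, 0.7])         # hip and knee flexion (rad)
noisy = markers(q_true) + 0.005 * np.random.default_rng(1).standard_normal(6)

# IK: rigid model, both joint angles fitted to all markers simultaneously
q_ik = least_squares(lambda q: markers(q) - noisy, x0=np.zeros(2)).x

# DK-like estimate: each segment orientation from its own marker pair
hip_m, knee_m, ankle_m = noisy[:2], noisy[2:4], noisy[4:]
seg1, seg2 = knee_m - hip_m, ankle_m - knee_m
th1 = np.arctan2(seg1[0], -seg1[1])
q_dk = np.array([th1, np.arctan2(seg2[0], -seg2[1]) - th1])

print("true:", q_true, "IK:", np.round(q_ik, 3), "DK:", np.round(q_dk, 3))
```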

  12. SAR Imagery Simulation of Ship Based on Electromagnetic Calculations and Sea Clutter Modelling for Classification Applications

    International Nuclear Information System (INIS)

    Ship detection and classification with spaceborne SAR has many potential applications in maritime surveillance, fishery activity management, ship traffic monitoring, and military security. While ship detection techniques with SAR imagery are well established, ship classification is still an open issue. One of the main reasons is the difficulty of acquiring the required quantities of real data of vessels under different observation and environmental conditions with precise ground truth. Therefore, simulation of SAR images with high scenario flexibility and reasonable computation costs is essential for the development of ship classification algorithms. However, the simulation of SAR imagery of a ship over the sea surface is challenging, and though great efforts have been devoted to tackling this difficult problem, it is far from being conquered. This paper proposes a novel scheme for SAR imagery simulation of a ship over the sea surface. The simulation is implemented based on the high-frequency electromagnetic calculation methods PO, MEC, PTD and GO. SAR imagery of sea clutter is modelled by the representative K-distribution clutter model. The simulated SAR imagery of the ship can then be produced by inserting the simulated SAR imagery chips of the ship into the SAR imagery of sea clutter. The proposed scheme has been validated with canonical and complex ship targets over a typical sea scene

  14. Calculation of Stress Intensity Factors Based on Force-Displacement Curve using Element Free Galerkin Method

    Science.gov (United States)

    Parvanova, Sonia

    2012-03-01

    An idea for calculating stress intensity factors based on the standard appearance of the force-displacement curve is developed in this paper. The presented procedure predicts the shape of the curve around the point under consideration, from which the stress intensity factors are obtained indirectly. The numerical implementation of the new approach uses the element-free Galerkin method, a variant of the meshless methods that requires only nodal data for domain discretization, without a finite element mesh. A MATLAB code for two-dimensional elasticity problems has been developed, along with intrinsic basis enrichment for precise modelling of the singular stress field around the crack tip. A numerical example of a rectangular plate with different lengths of a symmetric edge crack is presented. The stress intensity factors obtained by the present numerical approach are compared with analytical solutions; the errors in the stress intensity factors for the opening fracture mode (mode I) are less than 1%, although the model mesh is relatively coarse.

  15. Research on Structural Safety of the Stratospheric Airship Based on Multi-Physics Coupling Calculation

    Science.gov (United States)

    Ma, Z.; Hou, Z.; Zang, X.

    2015-09-01

    As a large-scale flexible inflatable structure with a huge inner lifting gas volume of several hundred thousand cubic meters, the stratospheric airship's structural performance depends strongly on the thermal behavior of the inner gas. During floating flight, the day-night variation of the combined thermal condition leads to fluctuations of the flow field inside the airship, which remarkably affect the pressure acting on the skin and the structural safety of the airship. Based on the multi-physics coupling mechanism described above, a numerical procedure for the structural safety analysis of stratospheric airships is developed, integrating a thermal model, a CFD model, a finite element code and a structural strength criterion. Using these computational models, the distributions of the deformations and stresses of the skin are calculated over the day-night cycle. The effects of load conditions and structural configurations on the structural safety of stratospheric airships in the floating condition are evaluated. The numerical results can serve as a reference for the structural design of stratospheric airships.

  16. An iterative calculation to derive extinction-to-backscatter ratio based on lidar measurements

    International Nuclear Information System (INIS)

    The aerosol optical thickness (AOT) is an important parameter for understanding the radiative impact of aerosols. AOT based on lidar measurements is often limited by the instrument's finite detection range. In this paper, we report a method of fitting and iterative calculation to derive the extinction profile of background aerosols from 0 to 30 km at 532 nm, which is virtually the AOT of the entire atmosphere. The mean extinction derived from this method at ground level tallies with visibility measurements, and it is also consistent with sun-photometer data, within experimental error. These data were further used to study dust cases. For most of the cases, transmission losses were determined to estimate the extinction as well as the lidar ratio. The analysis shows a mean lidar ratio of 47±15 sr for background aerosols; for dust layers, a mean lidar ratio of 44±19 sr and an optical thickness of 0.53±0.49 were determined at 532 nm
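
    A synthetic illustration of the transmission-loss estimate mentioned above: the drop of the molecular-normalised attenuated backscatter across a layer gives its optical depth, tau = -0.5·ln(X_above/X_below), and the lidar ratio follows as S = tau / ∫ beta dz. The profile below is synthetic, and molecular extinction is ignored for simplicity.

```python
import numpy as np

z = np.linspace(0.0, 10.0, 1001)                 # altitude grid (km)
dz = z[1] - z[0]
beta = np.where((z > 3) & (z < 5), 2e-3, 0.0)    # layer backscatter (km^-1 sr^-1)
S_TRUE = 45.0                                    # lidar ratio used to synthesise
alpha = S_TRUE * beta                            # layer extinction only
x = beta + 1e-5                                  # plus a flat molecular floor
signal = x * np.exp(-2 * np.cumsum(alpha) * dz)  # attenuated backscatter

below = signal[z < 2.5][-1] / x[z < 2.5][-1]     # normalised below the layer
above = signal[z > 5.5][0] / x[z > 5.5][0]       # normalised above the layer
tau = -0.5 * np.log(above / below)               # layer optical depth
s_est = tau / (beta.sum() * dz)                  # S = tau / integral(beta dz)
print(f"tau = {tau:.3f}, estimated lidar ratio = {s_est:.1f} sr")   # ~45 sr
```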

  17. Coenzyme-like ligands for affinity isolation of cholesterol oxidase.

    Science.gov (United States)

    Xin, Yu; Lu, Liushen; Wang, Qing; Zhang, Ling; Tong, Yanjun; Wang, Wu

    2016-05-15

    Two coenzyme-like chemical ligands were designed and synthesized for the affinity isolation of cholesterol oxidase (COD). To mimic the structure of the natural coenzyme of COD, flavin adenine dinucleotide (FAD), 5-aminouracil, cyanuric chloride and 1,4-butanediamine were assembled on Sepharose beads and then modified. The COD gene from Brevibacterium sp. (DQ345780) was expressed in Escherichia coli BL21 (DE3), and the sorbents were subjected to adsorption analysis with the pure enzyme. Subsequently, the captured enzyme was analyzed by SDS-PAGE and activity assays. The theoretical maximum adsorption (Qmax) of the two affinity sorbents (RL-1 and RL-2) was calculated as ∼83.5 and 46.3 mg/g wet gel, and the desorption constants Kd of the two sorbents were ∼6.02×10⁻⁴ and 1.19×10⁻⁴ μM, respectively. The proteins from the cell lysate were applied to affinity isolation; after one step of affinity binding on the two sorbents, the protein recoveries of RL-1 and RL-2 were 9.2% and 9.7%, and the bioactivity recoveries were 92.7% and 91.3%, respectively. SDS-PAGE analysis revealed that the purity of the COD isolated with either affinity sorbent was approximately 95%. PMID:26856529
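
    Reading the quoted Qmax/Kd pairs as Langmuir-type constants (an interpretation, not stated explicitly in the abstract), a minimal sketch of how they predict equilibrium binding, q = Qmax·c/(Kd + c), for sorbent RL-1; the free-enzyme concentrations are arbitrary.

```python
# Langmuir-type prediction of equilibrium binding for sorbent RL-1,
# q = Qmax * c / (Kd + c); the free concentrations below are arbitrary.

QMAX = 83.5        # mg COD per g wet gel (RL-1, from the abstract)
KD = 6.02e-4       # desorption constant in uM (RL-1, from the abstract)

for c in (1e-4, 6.02e-4, 1e-2):                  # free concentration, uM
    q = QMAX * c / (KD + c)
    print(f"c = {c:.2e} uM -> q = {q:5.1f} mg/g ({100 * q / QMAX:.0f}% of Qmax)")
# at c = Kd the sorbent is half saturated (q = Qmax/2), a handy sanity check
```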

  18. Neutronic calculations of AFPR-100 reactor based on Spherical Cermet Fuel particles

    International Nuclear Information System (INIS)

    Highlights: • The AFPR-100 reactor is considered a small nuclear reactor without on-site refueling, originally based on the TRISO micro-fuel element. • The AFPR-100 reactor was re-designed using the new Spherical Cermet fuel element. • The adoption of the Cermet fuel instead of the TRISO fuel reduces the core lifetime by 3.1 equivalent full power years. • We discuss the new micro-fuel element candidate for small and medium sized reactors. - Abstract: The Atoms For Peace Reactor (AFPR-100), a 100 MW(e) design without the need for on-site refueling, was originally based on UO2 TRISO fuel coated particles embedded in a carbon matrix directly cooled by light water. AFPR-100 is considered a small nuclear reactor without open-vessel refueling, proposed by Pacific Northwest National Laboratory (PNNL). On account of significant irradiation swelling in the silicon carbide fission product barrier coating layer of the TRISO fuel element, a Spherical Cermet fuel element has been proposed. The new fuel concept, also developed by PNNL, consists of replacing the pyro-carbon and ceramic coatings, which are incompatible with low-temperature operation, with zirconium; the latter was chosen to avoid any potential Wigner energy issues in the TRISO fuel element. The purpose of this study is to assess whether the goal of the AFPR-100 concept can be met using the Cermet fuel, i.e., whether the core fuel lifetime can be extended for a reasonably long period without on-site refueling. We investigated the neutronic parameters of the reactor core with the calculation code SRAC95. The results suggest that a core fuel lifetime beyond 12 equivalent full power years (EFPYs) is possible; hence, the adoption of the Cermet fuel concept shows a core lifetime decrease of about 3.1 EFPY

  19. A voxel-based mouse for internal dose calculations using Monte Carlo simulations (MCNP)

    International Nuclear Information System (INIS)

    Murine models are useful for targeted radiotherapy pre-clinical experiments. These models can help to assess the potential interest of new radiopharmaceuticals. In this study, we developed a voxel-based mouse for dosimetric estimates. A female nude mouse (30 g) was frozen and cut into slices. High-resolution digital photographs were taken directly on the frozen block after each section. Images were segmented manually. Monoenergetic photon or electron sources were simulated using the MCNP4c2 Monte Carlo code for each source organ, in order to give tables of S-factors (in Gy Bq⁻¹ s⁻¹) for all target organs. Results obtained from monoenergetic particles were then used to generate S-factors for several radionuclides of potential interest in targeted radiotherapy. Thirteen source and 25 target regions were considered in this study. For each source region, 16 photon and 16 electron energies were simulated. Absorbed fractions, specific absorbed fractions and S-factors were calculated for 16 radionuclides of interest for targeted radiotherapy. The results obtained generally agree well with data published previously. For electron energies ranging from 0.1 to 2.5 MeV, the self-absorbed fraction varies from 0.98 to 0.376 for the liver, and from 0.89 to 0.04 for the thyroid. Electrons cannot be considered as 'non-penetrating' radiation for energies above 0.5 MeV for mouse organs. This observation can be generalized to radionuclides: for example, the beta self-absorbed fraction for the thyroid was 0.616 for I-131; absorbed fractions for Y-90 for left kidney-to-left kidney and for left kidney-to-spleen were 0.486 and 0.058, respectively. Our voxel-based mouse allowed us to generate a dosimetric database for use in preclinical targeted radiotherapy experiments. (author)
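
    For readers unfamiliar with how an S-factor table of this kind is used downstream, the following minimal sketch applies the standard MIRD relation, absorbed dose D(target) = sum over sources of cumulated activity times S(target <- source). The organ names echo the abstract, but every number is an invented placeholder, not a value from the paper's database.

      # Absorbed dose to one target organ from a table of S-factors.
      cumulated_activity = {"liver": 3.0e8, "left_kidney": 1.2e8}   # A~, Bq s
      s_to_spleen = {"liver": 2.1e-12, "left_kidney": 5.6e-12}      # Gy/(Bq s)

      dose_spleen = sum(a * s_to_spleen[organ]
                        for organ, a in cumulated_activity.items())
      print(f"spleen dose: {1000 * dose_spleen:.3f} mGy")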

  20. Digital Game-Based Learning: A Supplement for Medication Calculation Drills in Nurse Education

    Science.gov (United States)

    Foss, Brynjar; Lokken, Atle; Leland, Arne; Stordalen, Jorn; Mordt, Petter; Oftedal, Bjorg F.

    2014-01-01

    Student nurses, globally, appear to struggle with medication calculations. In order to improve these skills among student nurses, the authors developed The Medication Game--an online computer game that aims to provide simple mathematical and medical calculation drills, and help students practise standard medical units and expressions. The aim of…

  1. Specification of materials Data for Fire Safety Calculations based on ENV 1992-1-2

    DEFF Research Database (Denmark)

    Hertz, Kristian Dahl

    1997-01-01

    capacity of constructions of any concrete exposed to any time of any fire exposure can be calculated.Chapter 4.4 provides information on what should be observed if more general calculation methods are used.Annex A provides some additional information on materials data. This chapter is not a part of the...

  2. On the computation of stress in affine versus nonaffine fibril kinematics within planar collagen network models.

    Science.gov (United States)

    Pence, Thomas J; Monroe, Ryan J; Wright, Neil T

    2008-08-01

    Some recent analyses modeled the response of collagenous tissues, such as epicardium, using a hypothetical network consisting of interconnected springlike fibers. The fibers in the network were organized such that internal nodes served as the connection point between three such collagen springs. The results for assumed affine and nonaffine deformations are contrasted after a homogeneous deformation at the boundary. Affine deformation provides a stiffer mechanical response than nonaffine deformation. In contrast to nonaffine deformation, affine deformation determines the displacement of internal nodes without imposing detailed force balance, thereby complicating the simplest intuitive notion of stress, one based on free body cuts, at the single node scale. The standard notion of stress may then be recovered via average field theory computations based on large micromesh realizations. An alternative and by all indications complementary viewpoint for the determination of stress in these collagen fiber networks is discussed here, one in which stress is defined using elastic energy storage, a notion which is intuitive at the single node scale. It replaces the average field theory computations by an averaging technique over randomly oriented isolated simple elements. The analytical operations do not require large micromesh realizations, but the tedious nature of the mathematical manipulation is clearly aided by symbolic algebra calculation. For the example case of linear elastic deformation, this results in material stiffnesses that relate the infinitesimal strain and stress. The result that the affine case is stiffer than the nonaffine case is recovered, as would be expected. The energy framework also lends itself to the natural inclusion of changes in mechanical response due to the chemical, electrical, or thermal environment. PMID:18601451

  3. A brachytherapy treatment planning system based on dicom images and MCNP5 calculations optimized with artificial neural network

    International Nuclear Information System (INIS)

    Exact dose calculation is an important part of brachytherapy treatment planning systems (TPS). Currently used methods, such as analytic methods or tabulated data, are inexact, as they are based on dose calculation in a homogeneous water medium. Dose calculation systems such as CT-based Monte Carlo simulation are the most exact, but they take too much time to reach the desirable accuracy. The aim of this research is to optimize CT-based Monte Carlo dose calculation for dynamic treatment planning systems by using an Artificial Neural Network (ANN) capable of calculating the dose distribution with the same accuracy as the CT-based Monte Carlo simulation. 80,000 dose distributions, produced by the Best no. 2301 seed source at different positions in the CT scan of the prostate, were calculated by the Monte Carlo N-Particle (MCNP5) code, and these data were used to train the ANN. The ANN was tested on 26,768 cases that were not used in the training step, with an average error of 0.8 percent compared to MCNP5 results. (author)
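
    A hedged sketch of the surrogate idea follows: train a small neural network on Monte Carlo results to map (seed position, voxel position) to dose. It uses scikit-learn's MLPRegressor and a crude synthetic inverse-square dataset in place of the MCNP5 output; the features and network size are illustrative assumptions, not the authors' architecture.

      import numpy as np
      from sklearn.model_selection import train_test_split
      from sklearn.neural_network import MLPRegressor

      rng = np.random.default_rng(0)
      X = rng.uniform(-2.0, 2.0, size=(5000, 6))        # seed xyz + voxel xyz, cm
      r = np.linalg.norm(X[:, :3] - X[:, 3:], axis=1) + 0.1
      y = 1.0 / r**2                                    # stand-in point-source dose

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
      ann = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
      ann.fit(X_tr, y_tr)
      rel_err = np.abs(ann.predict(X_te) - y_te) / y_te
      print(f"mean relative error: {100 * rel_err.mean():.1f}%")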

  4. CARE. A model for radiation exposure calculations based on measured emission rates from nuclear facilities

    International Nuclear Information System (INIS)

    The programme CARE (calculation of the annual radiation exposure) calculates the annual environmental exposure of complex nuclear installations. In the diffusion calculation of pollutants, the real weather conditions of the year concerned are taken into account on an hourly basis together with the associated release rates measured for the various nuclides of individual emitters. According to their location in the plant, the contributions of the time-integrated pollutant concentrations of the individual emitters are superimposed at predefinable receiving points in the vicinity or on the boundary of an installation (plant fencing). In the conception of models for calculating the resultant 50-year dose commitments care was taken to ensure that the programme CARE is capable of treating both individual emissions limited in time and quasi-continuous emissions. The programme CARE can therefore be used also for a subsequent calculation of radiation exposure in the event of accidents. (orig.)

  5. A linear integral-equation-based computer code for self-amplified spontaneous emission calculations of free-electron lasers

    International Nuclear Information System (INIS)

    The linear integral-equation-based computer code 'Roger Oleg Nikolai' (RON), which was recently developed at Argonne National Laboratory, was used to calculate the self-amplified spontaneous emission (SASE) performance of the free-electron laser (FEL) being built at Argonne. Signal growth calculations under different conditions were used to estimate tolerances on actual design parameters and the optimal length of the break sections between undulator segments. Explicit calculation of the radiation field was added recently. The measured magnetic fields of five undulators were used to calculate the gain for the Argonne FEL. The result indicates that the magnetic field errors of the real undulators alone will not significantly degrade the FEL performance. The capability to calculate the small-signal gain for an FEL oscillator is also demonstrated.

  6. Fluence-to-dose conversion coefficients for muons and pions calculated based on ICRP publication 103 using the PHITS code

    International Nuclear Information System (INIS)

    The fluence-to-effective-dose and organ-absorbed-dose conversion coefficients for charged pions and muons were calculated based on the instructions given in ICRP Publication 103. For the calculation, the particle motions in the ICRP/ICRU adult reference computational phantoms were simulated using the PHITS code for four idealized irradiation geometries as well as geometries closely representing cosmic-ray muon exposure. Cosmic-ray pion and muon dose rates over a wide altitude range were estimated using the calculated dose conversion coefficients. The results indicate that the assumption of an isotropic irradiation geometry is suitable for dose estimation for cosmic-ray pions and muons. It is also found that the introduction of ICRP 103 has little impact on pion and muon dosimetry, since the radiation weighting factors assigned to those particles are unchanged in that publication. (author)
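
    As a simple illustration of how such coefficients are applied, the sketch below folds a binned fluence-rate spectrum into an effective dose rate; the muon spectrum and per-bin coefficients are placeholder numbers, not values from the paper.

      import numpy as np

      phi = np.array([4.0, 9.0, 6.0, 2.0])     # fluence rate per bin, cm^-2 s^-1
      coeff = np.array([0.3, 0.5, 0.8, 1.2])   # conversion coefficient, nSv cm^2
      dose_rate = phi @ coeff                  # effective dose rate, nSv/s
      print(f"{3600 * dose_rate:.0f} nSv/h")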

  7. Structures of native and affinity-enhanced WT1 epitopes bound to HLA-A*0201: implications for WT1-based cancer therapeutics

    OpenAIRE

    Borbulevych, Oleg Y.; Do, Priscilla; Baker, Brian M.

    2010-01-01

    Presentation of peptides by class I or class II major histocompatibility complex (MHC) molecules is required for the initiation and propagation of a T cell-mediated immune response. Peptides from the Wilms Tumor 1 transcription factor (WT1), upregulated in many hematopoetic and solid tumors, can be recognized by T cells and numerous efforts are underway to engineer WT1-based cancer vaccines. Here we determined the structures of the class I MHC molecule HLA-A*0201 bound to the native 126–134 e...

  8. Three-dimensional thermal-hydraulic response in LBLOCA based on MARS-KS calculation

    International Nuclear Information System (INIS)

    Three-dimensional (3D) thermal-hydraulic analysis of accidents in nuclear power plants (NPPs) has come into wider use since best-estimate (BE) calculation was allowed for safety analysis. The present study discusses why and how large differences can arise between 1D and 3D thermal-hydraulic calculations for a large-break loss-of-coolant accident (LBLOCA). Calculations are performed using the MARS-KS code with one-dimensional (1D) modeling and with 3D modeling of the reactor vessel of the Advanced Power Reactor (APR1400). For the 3D modeling, the MULTI-D component of the MARS-KS code is applied. In particular, a hot channel the size of one fuel assembly is also simulated. From the comparison of the calculation results, four differences are found: lower blowdown peak cladding temperature (PCT) in the 3D calculation, instantaneous stop of cladding heat-up, a different extent of blowdown quenching, and a milder and longer reflood process in the 3D calculation. The flow distribution in the core in the 3D calculation could be one of the reasons for these differences. From the sensitivity study, the initial temperature at the reactor vessel upper head is found to have a strong effect on the blowdown quenching, and thus on the reflood PCT, and needs careful consideration. (author)

  9. Volume calculation of subsurface structures and traps in hydrocarbon exploration — a comparison between numerical integration and cell based models

    Science.gov (United States)

    Slavinić, Petra; Cvetković, Marko

    2016-01-01

    The volume calculation of geological structures is one of the primary goals when dealing with exploration or production of oil and gas in general. Most of these calculations are done using advanced software packages, but the mathematical workflow (equations) still has to be understood for the initial volume calculation process. In this paper a comparison is given between bulk volume calculations of geological structures using the trapezoidal and Simpson's rules and those obtained from cell-based models. The comparison is illustrated with four models: a dome (half-sphere), an elongated anticline, a stratigraphic trap due to lateral facies change, and a faulted anticline trap. Results show that Simpson's and the trapezoidal rules give a very accurate volume calculation even with few inputs (isopach areas as ordinates). A test of cell-based model volume calculation precision against grid resolution is presented for various cases. For high accuracy, i.e. less than 1% error from coarsening, a cell area has to be about 0.0008% of the reservoir area.
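
    The numerical-integration side of the comparison is easy to reproduce. The sketch below computes a bulk volume from equally spaced isopach areas with both rules via scipy; the contour interval and areas are illustrative, not taken from the paper's four models.

      import numpy as np
      from scipy.integrate import simpson, trapezoid

      dz = 10.0                                                    # contour interval, m
      areas = np.array([4.2e6, 3.1e6, 2.0e6, 1.1e6, 0.3e6, 0.0])   # isopach areas, m^2

      print(f"trapezoidal: {trapezoid(areas, dx=dz):.3e} m^3")
      print(f"Simpson:     {simpson(areas, dx=dz):.3e} m^3")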

  10. Calculation of the yearly energy performance of heating systems based on the European Building Directive and related CEN Standards

    DEFF Research Database (Denmark)

    Olesen, Bjarne W.; Langkilde, Gunnar

    2009-01-01

    and cost-effectiveness. For new and existing buildings this requires a calculation of the energy performance of the building including heating, ventilation, cooling and lighting systems, based on primary energy. Each building must have an energy certificate and regular inspections of heating, cooling...... and ventilation systems must be performed. The present paper will present the method for calculating the energy performance for heating systems. The relevant CEN-standards are presented and a sample calculation of energy performance is made for a small family house in different geographical locations: Stockholm...

  11. Bases of general calculation thermohydrodynamic means (codes) verification and validation methodology for accident analysis at nuclear power plants

    International Nuclear Information System (INIS)

    On the basis of an analysis of previously known approaches, a generalized methodology for the verification and validation (V/V) of thermohydraulic calculation tools (codes) for accident and transient analysis at NPPs is proposed. Taking into account the formulated requirements and principles, the basic V/V procedures, their interrelations and their order are substantiated and discussed. The implementation sequence includes forming a system of criteria for assessing the applicability of the calculation tools, analysing the adequacy of the mathematical models with respect to real processes, developing test databases (including an analysis of the adequacy of test stands to full-scale conditions), and methods for generalizing the results into final assessments of the applicability of the calculation tools to specific tasks on specific equipment.

  12. GPU-based finite-size pencil beam algorithm with 3D-density correction for radiotherapy dose calculation

    OpenAIRE

    Gu, Xuejun; Jelen, Urszula; Li, Jinsheng; Jia, Xun; Jiang, Steve B.

    2011-01-01

    Targeting at the development of an accurate and efficient dose calculation engine for online adaptive radiotherapy, we have implemented a finite size pencil beam (FSPB) algorithm with a 3D-density correction method on GPU. This new GPU-based dose engine is built on our previously published ultrafast FSPB computational framework. Dosimetric evaluations against Monte Carlo dose calculations are conducted on 10 IMRT treatment plans (5 head-and-neck cases and 5 lung cases). For all cases, there i...

  13. Method for stability analysis based on the Floquet theory and Vidyn calculations

    Energy Technology Data Exchange (ETDEWEB)

    Ganander, Hans

    2005-03-01

    This report presents activity 3.7 of the STEM project Aerobig and deals with the aeroelastic stability of the complete wind turbine structure in operation. As a consequence of the increasing size of wind turbines, dynamic couplings are becoming more important for loads and dynamic properties. The steady ambition to increase the cost competitiveness of wind turbine energy by using optimisation methods lowers design margins, which in turn makes questions about the stability of the turbines more important. The main objective of the project is to develop a general stability analysis tool, based on the VIDYN methodology for the turbine dynamic equations and the Floquet theory for the stability analysis. The reason for selecting the Floquet theory is that it is independent of the number of blades and thus can be used for 2- as well as 3-bladed turbines. Although the latter dominate the market, the former have large potential for large offshore turbines. The fact that cyclic and individual blade pitch controls are being developed as a means of fatigue reduction also speaks for general methods such as Floquet. The first step of a general system for stability analysis has been developed, the code VIDSTAB. Together with other methods, such as the snapshot method, the Coleman transformation and the use of Fourier series, eigenfrequencies and modes can be analysed. The approach is general, with no restrictions on the number of blades or the symmetry of the rotor. The derivatives of the aerodynamic forces are calculated numerically in this first version. Later versions would include state-space formulations of these forces, as well as of the controllers of turbine rotation speed, yaw direction and pitch angle.

  14. Current in the Protein Nanowires: Quantum Calculations of the Base States.

    Science.gov (United States)

    Suprun, Anatol D; Shmeleva, Liudmyla V

    2016-12-01

    It is known that the synthesis of adenosine triphosphate in mitochondria can be completed only if the electron pairs created by oxidation processes are transported to the mitochondria. To date, much effort has been devoted to understanding the processes that occur in the course of donor-acceptor electron transport between cellular organelles (that is, between various proteins and protein structures). However, the mechanisms of electron transport along these organelles remain understudied. This paper is dedicated to the investigation of these issues. It has been shown that, regardless of the amino acid inhomogeneity of the primary structure, it is possible to apply a second-quantization (occupation-number) representation to the protein molecule. Based on this representation, it has been established that the primary structure of the protein molecule is effectively a semiconductor nanowire, and that its conduction band, into which an electron is injected as the result of donor-acceptor processes, consists of five sub-bands. Three of these sub-bands have normal dispersion laws, while the remaining two have anomalous (inverted) dispersion laws. A test calculation of the current density was made in the complete absence of factors that may be interpreted as external fields. It has been shown that under such conditions the current density is exactly zero. This is evidence of the correctness of the predicted model of the conduction band of the primary structure of the protein molecule (protein nanowire). At the same time, it makes it possible to apply the results obtained to the realistic situation in which factors that may be interpreted as external fields are present. PMID:26858156

  15. Ray tracing based path-length calculations for polarized light tomographic imaging

    Science.gov (United States)

    Manjappa, Rakesh; Kanhirodan, Rajan

    2015-09-01

    A ray-tracing-based path length calculation is investigated for polarized light transport in a pixel space. Tomographic imaging using polarized light transport is promising for applications in optical projection tomography of small-animal imaging and turbid media with low scattering. Polarized light transport through a medium can have complex effects due to interactions such as optical rotation of linearly polarized light, birefringence, diattenuation and interior refraction. Here we investigate the effects of refraction of polarized light in a non-scattering medium. This step is used to obtain the initial absorption estimate, which can be used as a prior in a Monte Carlo (MC) program that simulates the transport of polarized light through a scattering medium, to assist faster convergence of the final estimate. The reflectances for p-polarized (parallel) and s-polarized (perpendicular) light are different, and hence there is a difference in the intensities that reach the detector. The algorithm computes the length of the ray in each pixel along the refracted path, and this is used to build the weight matrix. This weight matrix with corrected ray path lengths, together with the resultant intensity reaching the detector for each ray, is used in the algebraic reconstruction technique (ART). The proposed method is tested with numerical phantoms for various noise levels. The refraction errors due to regions of different refractive index are discussed and the difference in intensities with polarization is considered. The improvements in reconstruction using the correction so applied are presented.
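
    The reconstruction step can be sketched compactly: each ray's per-pixel path lengths form one row of the weight matrix W, and ART (Kaczmarz-style sweeps) solves W x ~ p. The tiny W and projections below are placeholders; in the paper, W comes from tracing the refracted ray paths.

      import numpy as np

      def art(W, p, n_sweeps=50, relax=0.5):
          """Algebraic reconstruction: relaxed row-by-row Kaczmarz updates."""
          x = np.zeros(W.shape[1])
          row_norms = (W * W).sum(axis=1)
          for _ in range(n_sweeps):
              for i in range(W.shape[0]):
                  if row_norms[i] > 0.0:
                      x += relax * (p[i] - W[i] @ x) / row_norms[i] * W[i]
          return x

      W = np.array([[1.0, 0.4, 0.0],       # path length of ray i in pixel j
                    [0.0, 1.2, 0.9],
                    [0.7, 0.0, 1.1]])
      mu_true = np.array([0.02, 0.05, 0.01])
      print(art(W, W @ mu_true))           # recovers mu_true approximately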

  16. The Astrophysical r-Process: A Comparison of Calculations following Adiabatic Expansion with Classical Calculations Based on Neutron Densities and Temperatures

    International Nuclear Information System (INIS)

    The rapid neutron-capture process (r-process) encounters unstable nuclei far from β-stability. Therefore its observable features, like the abundances, witness (still uncertain) nuclear structure as well as the conditions in the appropriate astrophysical environment. With the remaining lack of a full understanding of its astrophysical origin, parameterized calculations are still needed. We consider two approaches: (1) the classical approach is based on (constant) neutron number densities n_n and temperatures T over duration timescales τ; (2) recent investigations, motivated by the neutrino wind scenario from hot neutron stars after a supernova explosion, followed the expansion of matter with initial entropies S and electron fractions Y_e over expansion timescales τ. In the latter case the freezeout of reactions with declining temperatures and densities can be taken into account explicitly. We compare the similarities and differences between the two approaches with respect to resulting abundance features and their relation to solar r-process abundances, applying for the first time different nuclear mass models in entropy-based calculations. Special emphasis is given to the questions of (a) whether the same nuclear properties far from stability lead to similar abundance patterns and possible deficiencies in (1) and (2), and (b) whether some features can also provide clear constraints on the astrophysical conditions in terms of permitted entropies, Y_e values, and expansion timescales in (2). This relates mostly to the A<110 mass range, where a fit to solar r-abundances in high-entropy supernova scenarios seems to be hard to attain. Possible low-entropy alternatives are presented. Copyright 1999 The American Astronomical Society.

  17. Rational self-affine tiles

    CERN Document Server

    Steiner, Wolfgang

    2012-01-01

    An integral self-affine tile is the solution of a set equation $\mathbf{A} \mathcal{T} = \bigcup_{d \in \mathcal{D}} (\mathcal{T} + d)$, where $\mathbf{A}$ is an $n \times n$ integer matrix and $\mathcal{D}$ is a finite subset of $\mathbb{Z}^n$. In recent decades, these objects and the induced tilings have been studied systematically. We extend this theory to matrices $\mathbf{A} \in \mathbb{Q}^{n \times n}$. We define rational self-affine tiles as compact subsets of the open subring $\mathbb{R}^n \times \prod_\mathfrak{p} K_\mathfrak{p}$ of the adèle ring $\mathbb{A}_K$, where the factors of the (finite) product are certain $\mathfrak{p}$-adic completions of a number field $K$ that is defined in terms of the characteristic polynomial of $\mathbf{A}$. Employing methods from classical algebraic number theory, Fourier analysis in number fields, and results on zero sets of transfer operators, we establish a general tiling theorem for these tiles. We also associate a second kind of tiles with a rational matr...

  18. The affine quantum gravity programme

    International Nuclear Information System (INIS)

    The central principle of affine quantum gravity is securing and maintaining the strict positivity of the matrix $\{\hat{g}_{ab}(x)\}$ composed of the spatial components of the local metric operator. On spectral grounds, canonical commutation relations are incompatible with this principle, and they must be replaced by noncanonical, affine commutation relations. Due to the partial second-class nature of the quantum gravitational constraints, it is advantageous to use the recently developed projection operator method, which treats all quantum constraints on an equal footing. Using this method, enforcement of regularized versions of the gravitational operator constraints is formulated quite naturally by means of a novel and relatively well-defined functional integral involving only the same set of variables that appears in the usual classical formulation. It is anticipated that skills and insight to study this formulation can be developed by studying special, reduced-variable models that still retain some basic characteristics of gravity, specifically a partial second-class constraint operator structure. Although perturbatively nonrenormalizable, gravity may possibly be understood nonperturbatively from a hard-core perspective that has proved valuable for specialized models. Finally, developing a procedure to pass to the genuine physical Hilbert space involves several interconnected steps that require careful coordination

  19. 12 CFR 702.106 - Standard calculation of risk-based net worth requirement.

    Science.gov (United States)

    2010-01-01

    ... AFFECTING CREDIT UNIONS PROMPT CORRECTIVE ACTION Net Worth Classification § 702.106 Standard calculation of...) Allowance. Negative one hundred percent (−100%) of the balance of the Allowance for Loan and Lease...

  20. Calculation procedures for oil free scroll compressors based on mathematical modelling of working process

    Science.gov (United States)

    Paranin, Y.; Burmistrov, A.; Salikeev, S.; Fomina, M.

    2015-08-01

    Basic elements of calculation procedures for oil-free scroll compressor characteristics are presented. It is shown that mathematical modelling of the working process in a scroll compressor makes it possible to take into account factors influencing the working process such as heat and mass exchange, mechanical interaction in the working chambers, leakage through slots, etc. The basic mathematical model may be supplemented by taking into account external heat exchange, elastic deformation of the scrolls, inlet and outlet losses, etc. To evaluate the influence of the procedure on the accuracy of the calculated scroll compressor characteristics, different calculations were carried out. Internal adiabatic efficiency was chosen as the comparative parameter, as it evaluates the quality of the internal thermodynamic and gas-dynamic compressor processes. Calculated characteristics are compared with experimental values obtained for a compressor pilot sample.

  1. A regression model for calculating the boiling point isobars of tetrachloromethane-based binary solutions

    Science.gov (United States)

    Preobrazhenskii, M. P.; Rudakov, O. B.

    2016-01-01

    A regression model for calculating the boiling point isobars of tetrachloromethane-organic solvent binary homogeneous systems is proposed. The parameters of the model proposed were calculated for a series of solutions. The correlation between the nonadditivity parameter of the regression model and the hydrophobicity criterion of the organic solvent is established. The parameter value of the proposed model is shown to allow prediction of the potential formation of azeotropic mixtures of solvents with tetrachloromethane.

  2. Calculation of Effective Freezing Time in Lung Cancer Cryosurgery Based on Godunov Simulation

    OpenAIRE

    Т.G. Kotova; V.I. Kochenov; D.Y. Madai; А.V. Gurin; S.N. Tsybusov

    2016-01-01

    There have been presented the results of lung cancer cryosurgery simulation using numerical solutions of the enthalpy equation according to the Godunov method. For the purpose of improving cryodestruction, we calculated the effective freezing time, taking into account the evolution of the ice ball covering the tumor area. Geometrical transformation parameters of the ice ball have been measured by calculating the temperature distribution and the interface position in biological tissue. Mathem...
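
    A hedged sketch of the enthalpy formulation behind such simulations: a 1D explicit finite-difference stand-in (not the authors' Godunov scheme) that tracks nodal enthalpy, recovers temperature through the latent-heat plateau, and locates the freezing front. All material constants and the probe temperature are illustrative placeholders.

      import numpy as np

      L_f, c, k, rho = 3.3e5, 3.6e3, 0.5, 1000.0  # latent heat, heat capacity, conductivity, density
      T_f = -1.0                                   # freezing temperature, deg C
      dx, dt, n = 1e-3, 0.05, 60                   # grid step (m), time step (s), nodes

      def temperature(H):
          """Invert H(T): sensible branches around the latent plateau at T_f."""
          T = np.where(H < 0.0, T_f + H / (rho * c), T_f)
          return np.where(H > rho * L_f, T_f + (H - rho * L_f) / (rho * c), T)

      H = np.full(n, rho * L_f + rho * c * 36.0)   # tissue initially at 35 deg C
      for _ in range(20000):                       # ~17 minutes of freezing
          T = temperature(H)
          T[0] = -180.0                            # cryoprobe boundary node
          flux = -k * np.diff(T) / dx              # conductive flux at cell faces
          H[1:-1] -= dt * np.diff(flux) / dx       # enthalpy balance, interior nodes
      T = temperature(H)
      T[0] = -180.0
      print("first unfrozen node:", int(np.argmax(T > T_f)))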

  4. On the calculation of the topographic wetness index: evaluation of different methods based on field observations

    OpenAIRE

    Sørensen, R.; Zinko, U.; Seibert, J.

    2005-01-01

    The topographic wetness index (TWI, ln(a/tanβ)), which combines local upslope contributing area and slope, is commonly used to quantify topographic control on hydrological processes. Methods of computing this index differ primarily in the way the upslope contributing area is calculated. In this study we compared a number of calculation methods for TWI and evaluated them in terms of their correlation with the following measured variables: vascular plant species richness, soil pH, groundwa...

  5. Path-integral virial estimator for reaction rate calculation based on the quantum instanton approximation

    OpenAIRE

    Yang, Sandy; Yamamoto, Takeshi; Miller, William H.

    2005-01-01

    The quantum instanton approximation is a type of quantum transition state theory that calculates the chemical reaction rate using the reactive flux correlation function and its low order derivatives at time zero. Here we present several path-integral estimators for the latter quantities, which characterize the initial decay profile of the flux correlation function. As with the internal energy or heat capacity calculation, different estimators yield different variances (and therefore different...

  6. Calculated electronic and magnetic properties of the half-metallic, transition metal based Heusler compounds

    OpenAIRE

    Kandpal, Hem C.; FECHER, GERHARD H.; Felser, Claudia

    2006-01-01

    In this work, results of ab initio band structure calculations for $A_2BC$ Heusler compounds that have $A$ and $B$ sites occupied by transition metals and $C$ by a main group element are presented. This class of materials exhibits some interesting half-metallic and ferromagnetic properties. The calculations have been performed in order to understand the properties of the minority band gap and the peculiar magnetic behavior found in these materials. Among the interesting aspects of the e...

  7. Calculation of Henry constant on the base of critical parameters of adsorbable gas

    International Nuclear Information System (INIS)

    Calculation of the Henry constant using the correlation between the critical parameters P_c and T_c and the adsorption energy, determined by the value of the internal pressure in the molecular field of the adsorbent, has been made. The calculated Henry constants for Ar, Kr and Xe adsorbed by MoS2 and zeolite NaX are compared with the experimental ones. The state of the adsorbed molecules is evaluated

  8. Application of polymer model for calculation of oxides activity in B2O3 based melts

    International Nuclear Information System (INIS)

    The possibility of using the equations of a polymer model for calculation of oxide activity in borosilicate systems is shown. The correlation between calculated and experimental values of MnO activity in MnO-B2O3 and MnO-B2O3-SiO2 melts testifies that the coordination number of boron with respect to oxygen in these systems is constant and equals three. 6 refs., 1 fig., 1 tab

  9. On the calculation of the topographic wetness index: evaluation of different methods based on field observations

    Directory of Open Access Journals (Sweden)

    R. Sørensen

    2006-01-01

    The topographic wetness index (TWI, ln(a/tanβ)), which combines local upslope contributing area and slope, is commonly used to quantify topographic control on hydrological processes. Methods of computing this index differ primarily in the way the upslope contributing area is calculated. In this study we compared a number of calculation methods for TWI and evaluated them in terms of their correlation with the following measured variables: vascular plant species richness, soil pH, groundwater level, soil moisture, and a constructed wetness degree. The TWI was calculated by varying six parameters affecting the distribution of accumulated area among downslope cells and by varying the way the slope was calculated. All possible combinations of these parameters were calculated for two separate boreal forest sites in northern Sweden. We did not find a calculation method that performed best for all measured variables; rather, the best methods seemed to be variable- and site-specific. However, we were able to identify some general characteristics of the best methods for different groups of measured variables. The results provide guiding principles for choosing the best method for estimating species richness, soil pH, groundwater level, and soil moisture by the TWI derived from digital elevation models.
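
    The index itself is simple once the contributing area is known; the differences studied above lie in how that area is routed. A minimal sketch, assuming a specific catchment area grid a and a slope grid beta (in radians) already derived from a DEM:

      import numpy as np

      def twi(a, beta, min_slope=1e-3):
          """TWI = ln(a / tan(beta)); slope floored to avoid division by zero."""
          return np.log(a / np.tan(np.maximum(beta, min_slope)))

      a = np.array([[30.0, 120.0], [600.0, 2400.0]])   # upslope area per unit contour, m
      beta = np.deg2rad([[8.0, 5.0], [2.0, 0.5]])
      print(twi(a, beta))                              # wetter cells score higher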

  10. Further consideration of the phylogeny of some "traditional" heterotrichs (Protista, Ciliophora) of uncertain affinities, based on new sequences of the small subunit rRNA gene.

    Science.gov (United States)

    Miao, Miao; Song, Weibo; Clamp, John C; Al-Rasheid, Khaled A S; Al-Khedhairy, Abdulaziz A; Al-Arifi, Saud

    2009-01-01

    The systematic relationships and taxonomic positions of the traditional heterotrich genera Condylostentor, Climacostomum, Fabrea, Folliculina, Peritromus, and Condylostoma, as well as the licnophorid genus Licnophora, were re-examined using new data from sequences of the gene coding for small subunit ribosomal RNA. Trees constructed using distance-matrix, Bayesian inference, and maximum-parsimony methods all showed the following relationships: (1) the "traditional" heterotrichs consist of several paraphyletic groups, including the current classes Heterotrichea, Armophorea and part of the Spirotrichea; (2) the class Heterotrichea was confirmed as a monophyletic assemblage based on our analyses of 31 taxa, and the genus Peritromus was demonstrated to be a peripheral group; (3) the genus Licnophora occupied an isolated branch on one side of the deepest divergence in the subphylum Intramacronucleata and was closely affiliated with spirotrichs, armophoreans, and clevelandellids; (4) Condylostentor, a recently defined genus with several truly unique morphological features, is more closely related to Condylostoma than to Stentor; (5) Folliculina, Eufolliculina, and Maristentor always clustered together with high bootstrap support; and (6) Climacostomum occupied a paraphyletic position distant from Fabrea, showing a close relationship with Condylostomatidae and Chattonidiidae despite modest support. PMID:19527351

  11. Generation of a panel of high affinity antibodies and development of a biosensor-based immunoassay for the detection of okadaic acid in shellfish.

    Science.gov (United States)

    Le Berre, Marie; Kilcoyne, Michelle; Kane, Marian

    2015-09-01

    Okadaic acid (OA) and its derivatives, DTX-1 and DTX-2, are marine biotoxins associated with diarrhetic shellfish poisoning. Routine monitoring of these toxins relies on the mouse bioassay. However, due to the technical unreliability and animal usage of this bioassay, there is always a need for convenient and reliable alternative assay methods. A panel of monoclonal antibodies against OA was generated and the most suitable was selected for biosensor-based assay development using surface plasmon resonance. The cross reactivity of the selected antibody with DTX-1 was found to be 73%, confirming the antibody suitability for both OA and DTX detection. The OA and derivative assay was designed as an inhibition assay covering the concentrations 1-75 ng/ml, with a sensitivity of 22.4 ng/ml. The assay was highly reproducible and preliminary validation showed no matrix interference from mussel extracts and good recovery of added standard in mussel extracts, with %CV of <9.3%. This assay could provide a useful and convenient screening tool for OA and its derivatives with a comprehensive extraction protocol for shellfish monitoring programmes. PMID:26169671

  12. Aspects of affine Toda field theory

    International Nuclear Information System (INIS)

    The report is devoted to properties of affine Toda field theory, the intention being to highlight a selection of curious properties that should be explicable in terms of the underlying group theory but for which in most cases there is no explanation. The motivation for exploring the ideas contained in this report came principally from the recent work of Zamolodchikov concerning the two-dimensional Ising model at critical temperature perturbed by a magnetic field. Hollowood and Mansfield pointed out that since Toda field theory is conformal, the perturbation considered by Zamolodchikov might well be best regarded as a perturbation of a Toda field theory. This work made it seem plausible that the theory sought by Zamolodchikov was actually affine E8 Toda field theory. However, this connection required an imaginary value of the coupling constant. Investigations here concerning exact S-matrices use a perturbative approach based on real coupling, and the results differ in various ways from those thought to correspond to perturbed conformal field theory. A further motivation is to explore the connection between conformal and perturbed conformal field theories in other contexts using similar ideas. (N.K.)

  13. An iterative approach for symmetrical and asymmetrical Short-circuit calculations with converter-based connected renewable energy sources

    DEFF Research Database (Denmark)

    Göksu, Ömer; Teodorescu, Remus; Bak-Jensen, Birgitte;

    2012-01-01

    As more renewable energy sources, especially more wind turbines, are installed in the power system, analysis of the power system with the renewable energy sources becomes more important. Short-circuit calculation is a well-known fault analysis method which is widely used for early-stage analysis and...... design purposes and tuning of the network protection equipment. However, due to the current-controlled power-converter-based grid connection of the wind turbines, short-circuit calculation cannot be performed in its current form for networks with power-converter-based wind turbines. In this paper, an...... iterative approach for short-circuit calculation of networks with power-converter-based wind turbines is developed for both symmetrical and asymmetrical short-circuit grid faults. As a contribution to existing solutions, negative-sequence current injection from the wind turbines is also taken into account...

  14. Calculation of aqueous solubility of crystalline un-ionized organic chemicals and drugs based on structural similarity and physicochemical descriptors.

    Science.gov (United States)

    Raevsky, Oleg A; Grigor'ev, Veniamin Yu; Polianczyk, Daniel E; Raevskaja, Olga E; Dearden, John C

    2014-02-24

    Solubilities of crystalline organic compounds calculated according to AMP (arithmetic mean property) and LoReP (local one-parameter regression) models based on structural and physicochemical similarities are presented. We used data on the water solubility of 2615 compounds in un-ionized form measured at 25±5 °C. The calculation results were compared with an equation based on experimental data for lipophilicity and melting point. According to statistical criteria, the model based on structural and physicochemical similarities showed a better fit with the experimental data. An additional advantage of this model is that it uses only theoretical descriptors, which provides a means of calculating water solubility for both existing and not-yet-synthesized compounds. PMID:24456022

  15. Metagenomic DNA Sequence Binning Based on Affinity Propagation

    Institute of Scientific and Technical Information of China (English)

    聂鹏宇; 潘玮华; 徐云

    2013-01-01

    With the rapid development of next-generation sequencing technologies, metagenomics has become a new research hotspot. However, research in metagenomics faces the issue of binning: the reference-free identification and taxonomic characterization of NGS short reads from samples containing multiple species. To solve this problem, this paper first analyzes the characteristics of next-generation sequencing technology and the statistical characteristics of metagenomic sequences, and then proposes a new clustering method for DNA sequence binning that combines similarity information and structural information, using affinity propagation for the species clustering. Test results show that this method has very good clustering accuracy and fast execution. We also developed metagenomic binning software, MetaBinning, based on this algorithm.
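
    A hedged sketch of the binning idea: represent each read or contig by its tetranucleotide composition and cluster with affinity propagation, which does not need the number of species in advance. The toy sequences, the k = 4 choice and the non-overlapping k-mer counting are simplifying assumptions, not details of MetaBinning.

      import itertools
      import numpy as np
      from sklearn.cluster import AffinityPropagation

      KMERS = ["".join(p) for p in itertools.product("ACGT", repeat=4)]

      def kmer_profile(seq):
          """Normalized tetranucleotide composition (non-overlapping counts)."""
          counts = np.array([seq.count(m) for m in KMERS], dtype=float)
          return counts / max(counts.sum(), 1.0)

      reads = ["ACGT" * 50, "AACG" * 50, "GGCC" * 50, "GCGC" * 50, "ACGT" * 50]
      X = np.array([kmer_profile(r) for r in reads])
      labels = AffinityPropagation(random_state=0).fit_predict(X)
      print(labels)   # reads with similar composition share a bin label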

  16. First macro Monte Carlo based commercial dose calculation module for electron beam treatment planning—new issues for clinical consideration

    Science.gov (United States)

    Ding, George X.; Duggan, Dennis M.; Coffey, Charles W.; Shokrani, Parvaneh; Cygler, Joanna E.

    2006-06-01

    The purpose of this study is to present our experience of commissioning, testing and use of the first commercial macro Monte Carlo based dose calculation algorithm for electron beam treatment planning and to investigate new issues regarding dose reporting (dose-to-water versus dose-to-medium) as well as statistical uncertainties for the calculations arising when Monte Carlo based systems are used in patient dose calculations. All phantoms studied were obtained by CT scan. The calculated dose distributions and monitor units were validated against measurements with film and ionization chambers in phantoms containing two-dimensional (2D) and three-dimensional (3D) type low- and high-density inhomogeneities at different source-to-surface distances. Beam energies ranged from 6 to 18 MeV. New required experimental input data for commissioning are presented. The validation results show excellent agreement between calculated and measured dose distributions. The calculated monitor units were within 2% of measured values except in the case of a 6 MeV beam and small cutout fields at extended SSDs (>110 cm). The investigation of the new issue of dose reporting demonstrates differences of up to 4% for lung and 12% for bone when 'dose-to-medium' is calculated and reported instead of 'dose-to-water' as done in a conventional system. The accuracy of the Monte Carlo calculation is shown to be clinically acceptable even for very complex 3D-type inhomogeneities. As Monte Carlo based treatment planning systems begin to enter clinical practice, new issues, such as dose reporting and statistical variations, may be clinically significant. Therefore it is imperative that a consistent approach to dose reporting is used.

  18. Relating the shape of protein binding sites to binding affinity profiles: is there an association?

    Directory of Open Access Journals (Sweden)

    Bitter István

    2010-10-01

    Abstract Background Various pattern-based methods exist that use in vitro or in silico affinity profiles for classification and functional examination of proteins. Nevertheless, the connection between the protein affinity profiles and the structural characteristics of the binding sites is still unclear. Our aim was to investigate the association between virtual drug screening results (calculated binding free energy values and the geometry of protein binding sites. Molecular Affinity Fingerprints (MAFs were determined for 154 proteins based on their molecular docking energy results for 1,255 FDA-approved drugs. Protein binding site geometries were characterized by 420 PocketPicker descriptors. The basic underlying component structure of MAFs and binding site geometries, respectively, were examined by principal component analysis; association between principal components extracted from these two sets of variables was then investigated by canonical correlation and redundancy analyses. Results PCA analysis of the MAF variables provided 30 factors which explained 71.4% of the total variance of the energy values while 13 factors were obtained from the PocketPicker descriptors which cumulatively explained 94.1% of the total variance. Canonical correlation analysis resulted in 3 statistically significant canonical factor pairs with correlation values of 0.87, 0.84 and 0.77, respectively. Redundancy analysis indicated that PocketPicker descriptor factors explain 6.9% of the variance of the MAF factor set while MAF factors explain 15.9% of the total variance of PocketPicker descriptor factors. Based on the salient structures of the factor pairs, we identified a clear-cut association between the shape and bulkiness of the drug molecules and the protein binding site descriptors. Conclusions This is the first study to investigate complex multivariate associations between affinity profiles and the geometric properties of protein binding sites. We found that

  19. Python-based framework for coupled MC-TH reactor calculations

    International Nuclear Information System (INIS)

    We have developed a set of Python packages to provide a modern programming interface to codes used for the analysis of nuclear reactors. The Python classes can be grouped by functionality into three layers: low-level interfaces, general model classes and high-level interfaces. A low-level interface handles communication between Python and a particular code. General model classes describe the calculation geometry and the meshes used to represent system variables. High-level interface classes convert geometry described with general model classes into instances of low-level interface classes and put the results of code calculations (read by the low-level interface classes) back into the general model. The implementation of Python interfaces to the Monte Carlo neutronics code MCNP and the thermal-hydraulics code SCF allows efficient description of calculation models and provides a framework for coupled calculations. In this paper we illustrate how these interfaces can be used to describe a pin model, and report results of coupled MCNP-SCF calculations performed for a PWR fuel assembly, organized by means of the interfaces
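
    The three-layer structure described above can be caricatured in a few lines. In the sketch below every class name and the placeholder physics are hypothetical (the real packages wrap MCNP and SCF); it only shows the shape of a general model shared by two code interfaces inside a fixed-point coupling loop.

      class PinModel:
          """General model: one axial mesh whose fields both codes share."""
          def __init__(self, n_axial):
              self.power = [1.0] * n_axial        # relative power per node
              self.t_coolant = [560.0] * n_axial  # coolant temperature, K

      class NeutronicsInterface:       # stand-in for a wrapper around MCNP
          def solve(self, model):
              # placeholder physics: power rises where the coolant is colder
              return [p * 600.0 / t for p, t in zip(model.power, model.t_coolant)]

      class ThermalHydraulicsInterface:  # stand-in for a wrapper around SCF
          def solve(self, model):
              return [550.0 + 20.0 * p for p in model.power]

      model = PinModel(n_axial=10)
      for _ in range(10):              # a few fixed-point (Picard) sweeps
          model.power = NeutronicsInterface().solve(model)
          model.t_coolant = ThermalHydraulicsInterface().solve(model)
      print(model.power[:3])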

  20. Development of Subspace-based Hybrid Monte Carlo-Deterministic Algorithms for Reactor Physics Calculations

    Energy Technology Data Exchange (ETDEWEB)

    Abdel-Khalik, Hany S. [North Carolina State Univ., Raleigh, NC (United States); Zhang, Qiong [North Carolina State Univ., Raleigh, NC (United States)

    2014-05-20

    The development of hybrid Monte Carlo-deterministic (MC-DT) approaches, taking place over the past few decades, has primarily focused on shielding and detection applications where the analysis requires a small number of responses, i.e. at the detector location(s). This work further develops a recently introduced global variance reduction approach, denoted the SUBSPACE approach, which is designed to allow the use of MC simulation, currently limited to benchmarking calculations, for routine engineering calculations. By way of demonstration, the SUBSPACE approach is applied to assembly-level calculations used to generate the few-group homogenized cross-sections. These models are typically expensive and need to be executed on the order of 10^3 - 10^5 times to properly characterize the few-group cross-sections for downstream core-wide calculations. Applicability to k-eigenvalue core-wide models is also demonstrated in this work. Given the favorable results obtained in this work, we believe the applicability of the MC method for reactor analysis calculations could be realized in the near future.

  1. Nonlinear Optimization Method of Ship Floating Condition Calculation in Wave Based on Vector

    Institute of Scientific and Technical Information of China (English)

    丁宁; 余建星

    2014-01-01

    The ship floating condition in regular waves is calculated. New equations controlling any ship's floating condition are proposed by use of the vector operation. This form is a nonlinear optimization problem which can be solved using the penalty function method with constant coefficients, and the solving process is accelerated by dichotomy. During the solving process, the ship's displacement and buoyancy centre are calculated by integration of the ship surface according to the waterline. The ship surface is described using an accumulative chord length theory in order to determine the displacement, the buoyancy centre and the waterline. The draught forming the waterline at each station can be found by calculating the intersection of the ship surface and the wave surface. The results of an example indicate that this method is exact and efficient: it can calculate the ship floating condition in regular waves, simplify the calculation, and improve the computational efficiency and the precision of the results.

  2. Validation of XiO Electron Monte Carlo-based calculations by measurements in a homogeneous phantom and by EGSnrc calculations in a heterogeneous phantom.

    Science.gov (United States)

    Edimo, P; Kwato Njock, M G; Vynckier, S

    2013-11-01

    The purpose of the present study is to perform a clinical validation of a new commercial Monte Carlo (MC) based treatment planning system (TPS) for electron beams, i.e. the XiO 4.60 electron MC (XiO eMC). Firstly, MC models for electron beams (4, 8, 12 and 18 MeV) were simulated using the BEAMnrc user code and validated by measurements in a homogeneous water phantom. Secondly, these BEAMnrc models were set as the reference tool to evaluate the ability of XiO eMC to reproduce dose perturbations in a heterogeneous phantom. In the homogeneous phantom calculations, differences between MC computations (BEAMnrc, XiO eMC) and measurements are less than 2% in the homogeneous dose regions, with less than 1 mm shift in the high-dose-gradient regions. As for the heterogeneous phantom, the accuracy of XiO eMC was benchmarked against the predictions of the BEAMnrc models. In lung tissue, the overall agreement between the two schemes lies under 2.5% for most of the tested dose distributions at 8, 12 and 18 MeV and is better at these energies than at 4 MeV. In non-lung tissue, good agreement was found between BEAMnrc simulation and XiO eMC computation for 8, 12 and 18 MeV. Results are worse for the 4 MeV calculations (discrepancies ≈ 4%). XiO eMC can predict dose perturbations induced by high-density heterogeneities for 8, 12 and 18 MeV. However, significant deviations found in the case of 4 MeV demonstrate that caution is necessary when using XiO eMC at lower electron energies. PMID:23010450

  3. An angular momentum conserving Affine-Particle-In-Cell method

    CERN Document Server

    Jiang, Chenfanfu; Teran, Joseph

    2016-01-01

    We present a new technique for transferring momentum and velocity between particles and grid with Particle-In-Cell (PIC) calculations which we call Affine-Particle-In-Cell (APIC). APIC represents particle velocities as locally affine, rather than locally constant as in traditional PIC. We show that this representation allows APIC to conserve linear and angular momentum across transfers while also dramatically reducing numerical diffusion usually associated with PIC. Notably, conservation is achieved with lumped mass, as opposed to the more commonly used Fluid Implicit Particle (FLIP) transfers which require a 'full' mass matrix for exact conservation. Furthermore, unlike FLIP, APIC retains a filtering property of the original PIC and thus does not accumulate velocity modes on particles as FLIP does. In particular, we demonstrate that APIC does not experience velocity instabilities that are characteristic of FLIP in a number of Material Point Method (MPM) hyperelasticity calculations. Lastly, we demonstrate th...

  4. Semi-empirical Calculation of Detection Efficiency for Voluminous Source Based on Effective Solid Angle Concept

    Energy Technology Data Exchange (ETDEWEB)

    Kang, M. Y.; Kim, J. H.; Choi, H. D.; Sun, G. M. [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2014-10-15

    To calculate the full-energy (FE) absorption peak efficiency for arbitrary volume samples, we developed and verified the Effective Solid Angle (ESA) code. The procedure for semi-empirical determination of the FE efficiency for arbitrary volume sources, and the calculation principles and processes of the ESA code, were described, and the code was validated with an HPGe detector (relative efficiency 32%, n-type) in previous studies. In this study, we use HPGe detectors of different types and efficiencies in order to verify the performance of the ESA code for various detectors. We calculated the efficiency curve of a voluminous source and compared it with experimental data. We will carry out additional validation by measuring CRM volume sources of various media, volumes and shapes with detectors of different efficiencies and types. We will also incorporate the effect of the dead layer of p-type HPGe detectors and a coincidence-summing correction technique in the near future.

  5. Zonal calculation for large scale drought monitoring based on MODIS data

    Science.gov (United States)

    Li, Hongjun; Zheng, Li; Li, Chunqiang; Lei, Yuping

    2006-08-01

    The temperature vegetation dryness index (TVDI) is a simple and effective method for drought monitoring. In this study, the statistical characteristics of MODIS-EVI and MODIS-NDVI at two different times were analyzed and compared. NDVI saturates in well-vegetated areas, whereas EVI has no such shortcoming; we therefore used MODIS-EVI as the vegetation index for TVDI. The analysis of the vegetation index and land surface temperature at different latitudes and times showed a zonal distribution of the land surface parameters, so it is necessary to calculate the TVDI zonally. Compared with TVDI calculated for the whole region, the zonal calculation of TVDI increases the accuracy of the regression equations for the wet and dry edges, improves the correlation between TVDI and measured soil moisture, and improves the effectiveness of large-scale drought monitoring using remote sensing data.
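
    A minimal sketch of the TVDI computation for one zone, assuming NumPy arrays of MODIS EVI and land surface temperature (LST): bin the EVI axis, fit the dry and wet edges by linear regression, then normalise each pixel. Function name and bin settings are ours:

        import numpy as np

        def tvdi(evi, lst, nbins=20):
            """TVDI from the dry/wet edges of the LST-EVI scatter (one zone)."""
            edges = np.linspace(evi.min(), evi.max(), nbins + 1)
            idx = np.clip(np.digitize(evi, edges) - 1, 0, nbins - 1)
            c, tmax, tmin = [], [], []
            for b in range(nbins):
                sel = idx == b
                if sel.sum() < 10:                  # skip sparse bins
                    continue
                c.append(0.5 * (edges[b] + edges[b + 1]))
                tmax.append(lst[sel].max())         # dry-edge sample
                tmin.append(lst[sel].min())         # wet-edge sample
            a1, b1 = np.polyfit(c, tmax, 1)         # dry edge: Ts_max = a1*EVI + b1
            a2, b2 = np.polyfit(c, tmin, 1)         # wet edge: Ts_min = a2*EVI + b2
            return (lst - (a2 * evi + b2)) / ((a1 * evi + b1) - (a2 * evi + b2))

        # zonal use: call tvdi separately on the pixels of each latitude band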

  6. [The calculation of the intraocular lens power based on raytracing methods: a systematic review].

    Science.gov (United States)

    Steiner, Deborah; Hoffmann, Peter; Goldblum, David

    2013-04-01

    A problem in cataract surgery is the preoperative identification of the appropriate intraocular lens (IOL) power. Different calculation approaches have been developed for this purpose; raytracing methods are among the most exact but also mathematically more demanding. This article gives a systematic overview of the different raytracing calculations available and described in the literature and compares their results. It is shown that raytracing uses physical measurements and IOL manufacturing data but no approximations. The prediction error is close to zero, and an essential advantage is the applicability to different conditions without the need for modifications. Compared to the classical formulae, the raytracing methods are more precise overall, but owing to the varying data and reporting situations they are hardly comparable yet. The raytracing calculations are a good alternative to the 3rd generation formulae: they minimize refractive errors, are more widely applicable and provide better results overall, particularly in eyes with preexisting conditions. PMID:23629771
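
    As a caricature of what a raytracing calculation ultimately determines, the paraxial thin-lens sketch below propagates vergence from the cornea to the IOL plane and solves for the power that focuses on the retina; the inputs are invented typical values, and an actual raytrace instead applies Snell's law at measured, possibly aspheric surfaces without these approximations:

        def iol_power(k_cornea=43.0, axial_len=0.0235, acd=0.004, n_aqueous=1.336):
            """Paraxial stand-in for a full raytrace (made-up inputs).
            Vergences in dioptres, distances in metres; object at infinity."""
            v_cornea = k_cornea                       # vergence leaving the cornea
            v_at_iol = n_aqueous / (n_aqueous / v_cornea - acd)
            v_needed = n_aqueous / (axial_len - acd)  # to focus on the retina
            return v_needed - v_at_iol                # thin-lens IOL power

        print(round(iol_power(), 1))                  # about 19.2 D for these inputs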

  7. Fast 3D dosimetric verifications based on an electronic portal imaging device using a GPU calculation engine

    International Nuclear Information System (INIS)

    The aim of this work was to use a graphics processing unit (GPU) calculation engine to implement a fast 3D pre-treatment dosimetric verification procedure based on an electronic portal imaging device (EPID). The GPU algorithm includes the deconvolution and convolution methods for the fluence-map calculations, the collapsed-cone convolution/superposition (CCCS) algorithm for the 3D dose calculations, and the 3D gamma evaluation calculations. The results of the GPU-based CCCS algorithm were compared with those of Monte Carlo simulations. The planned and EPID-based reconstructed dose distributions in overridden-to-water phantoms and in the original patients were compared for 6 MV and 10 MV photon beams in intensity-modulated radiation therapy (IMRT) treatment plans, based on dose differences and gamma analysis. The total single-field dose computation time was less than 8 s, and the gamma evaluation for a 0.1-cm grid resolution was completed in approximately 1 s. The results of the GPU-based CCCS algorithm exhibited good agreement with those of the Monte Carlo simulations. The gamma analysis indicated good agreement between the planned and reconstructed dose distributions for the treatment plans. For the target volume, the differences in the mean dose were less than 1.8%, and the differences in the maximum dose were less than 2.5%. For the critical organs, minor differences were observed between the reconstructed and planned doses. The GPU calculation engine was used to boost the speed of the 3D dose and gamma evaluation calculations, thus offering the possibility of true real-time 3D dosimetric verification.
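
    A minimal CPU sketch of the gamma evaluation mentioned above, for a 1D profile and a global 3%/3 mm criterion; the GPU engine parallelises the same per-point minimisation over a 3D grid:

        import numpy as np

        def gamma_1d(ref, test, x, dd=0.03, dta=3.0):
            """Global gamma on a 1D profile (float arrays): dd as a fraction
            of the maximum reference dose, dta in mm; a point passes if g <= 1."""
            g = np.empty_like(ref)
            for i in range(len(x)):
                dist2 = ((x - x[i]) / dta) ** 2
                dose2 = ((test - ref[i]) / (dd * ref.max())) ** 2
                g[i] = np.sqrt(np.min(dist2 + dose2))
            return g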

  8. Determination of trace alkaline phosphatase by affinity adsorption solid substrate room temperature phosphorimetry based on wheat germ agglutinin labeled with 8-quinolineboronic acid phosphorescent molecular switch and prediction of diseases

    Science.gov (United States)

    Liu, Jia-Ming; Gao, Hui; Li, Fei-Ming; Shi, Xiu-Mei; Lin, Chang-Qing; Lin, Li-Ping; Wang, Xin-Xing; Li, Zhi-Ming

    2010-09-01

    The 8-quinolineboronic acid phosphorescent molecular switch (PMS-8-QBA, where 8-QBA is 8-quinolineboronic acid and PMS stands for phosphorescent molecular switch) was found for the first time. PMS-8-QBA in the "off" state could emit only weak room temperature phosphorescence (RTP) on the acetyl cellulose membrane (ACM). However, after PMS-8-QBA-WGA (WGA is wheat germ agglutinin) was formed by reaction between the -OH of PMS-8-QBA and the -COOH of WGA, PMS-8-QBA turned "on" automatically owing to its changed structure, so that the RTP of 8-QBA in the system increased. More interestingly, the -NH2 of PMS-8-QBA-WGA could react with the -COOH of alkaline phosphatase (AP) to form the affinity adsorption (AA) product WGA-AP-WGA-8-QBA-PMS (containing an -NH-CO- bond), which greatly increased the RTP of the system. Thus, affinity adsorption solid substrate room temperature phosphorimetry using PMS-8-QBA as the labelling reagent (PMS-8-QBA-AA-SSRTP) was established for the determination of trace AP. The method has many advantages, such as high sensitivity (the detection limit (LD) was 2.5 zg spot⁻¹; for a sample volume of 0.40 μl spot⁻¹, the corresponding concentration was 6.2 × 10⁻¹⁸ g ml⁻¹), good selectivity (high allowed concentrations of coexisting materials at a relative error of ±5%) and high accuracy (applied to the detection of the AP content in serum samples, the results coincided with those obtained by enzyme-linked immunoassay), making it suitable for the detection of trace AP in serum samples and the forecasting of human diseases. Meanwhile, the mechanism of PMS-8-QBA-AA-SSRTP is discussed. A new field of analytical application and clinical diagnosis techniques for molecular switches is opened up, based on the phosphorescence characteristics of PMS-8-QBA, the AA reaction between WGA and AP, and the relation between AP content and human diseases. The research results promote the development and interpenetration among molecule
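
    (As a consistency check on the quoted sensitivity: 2.5 zg spot⁻¹ = 2.5 × 10⁻²¹ g per spot, and a 0.40 μl spot is 4.0 × 10⁻⁴ ml, so 2.5 × 10⁻²¹ g / 4.0 × 10⁻⁴ ml ≈ 6.2 × 10⁻¹⁸ g ml⁻¹, in agreement with the concentration stated above.)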

  9. Monte Carlo calculations for design of An accelerator based PGNAA facility

    International Nuclear Information System (INIS)

    Monte Carlo calculations were carried out for the design of a setup for Prompt Gamma Ray Neutron Activation Analysis (PGNAA) with 14 MeV neutrons to analyze cement raw material samples. The calculations were carried out using the code MCNP4B2. Various geometry parameters of the PGNAA experimental setup, such as sample thickness, moderator geometry and detector shielding, were optimized by maximizing the prompt gamma ray yield of the different elements of the sample material. Finally, calibration curves of the PGNAA setup were generated for various concentrations of calcium in the sample material. Results of this simulation are presented. (author)

  10. Monte Carlo calculations for design of An accelerator based PGNAA facility

    Energy Technology Data Exchange (ETDEWEB)

    Nagadi, M.M.; Naqvi, A.A. [King Fahd University of Petroleum and Minerals, Center for Applied Physical Sciences, Dhahran (Saudi Arabia); Rehman, Khateeb-ur; Kidwai, S. [King Fahd University of Petroleum and Minerals, Department of Physics, Dhahran (Saudi Arabia)

    2002-08-01

    Monte Carlo calculations were carried out for the design of a setup for Prompt Gamma Ray Neutron Activation Analysis (PGNAA) with 14 MeV neutrons to analyze cement raw material samples. The calculations were carried out using the code MCNP4B2. Various geometry parameters of the PGNAA experimental setup, such as sample thickness, moderator geometry and detector shielding, were optimized by maximizing the prompt gamma ray yield of the different elements of the sample material. Finally, calibration curves of the PGNAA setup were generated for various concentrations of calcium in the sample material. Results of this simulation are presented. (author)
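
    A sketch of the final calibration step, assuming (as is conventional in PGNAA) an approximately linear relation between the calcium concentration and the prompt gamma-ray peak yield; every number below is invented:

        import numpy as np

        # made-up calibration points: Ca mass fraction vs. prompt-gamma peak yield
        ca_frac = np.array([0.05, 0.10, 0.20, 0.30, 0.40])
        counts = np.array([210.0, 410.0, 830.0, 1220.0, 1640.0])

        slope, intercept = np.polyfit(ca_frac, counts, 1)   # linear calibration
        measured = 1000.0                                   # counts from a sample
        print((measured - intercept) / slope)               # inferred Ca fraction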

  11. Spectral linelist of HD16O molecule based on VTT calculations for atmospheric application

    Science.gov (United States)

    Voronin, B. A.

    2014-11-01

    Three versions of a line list of dipole transitions for the HD16O isotopic modification of the water molecule are presented. The line lists have been created on the basis of the VTT calculations (Voronin, Tennyson, Tolchenov et al., MNRAS, 2010) by adding air- and self-broadening coefficients and temperature exponents for the HD16O-air case. Three cut-off values for the line intensities were used: 1e-30, 1e-32 and 1e-35 cm/molecule. The calculated line lists are available at ftp://ftp.iao.ru/pub/VTT/VTT-296/.
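
    The practical effect of the three cut-off values is simple filtering on the intensity column; the sketch below applies them to invented records of the form (wavenumber, intensity, air-broadening, self-broadening):

        import numpy as np

        # columns: wavenumber (cm-1), intensity (cm/molecule), gamma_air, gamma_self
        lines = np.array([[3650.1, 2.1e-29, 0.071, 0.35],
                          [3651.4, 8.0e-31, 0.068, 0.33],
                          [3652.9, 4.5e-34, 0.074, 0.36]])   # made-up records

        for cutoff in (1e-30, 1e-32, 1e-35):
            kept = lines[lines[:, 1] >= cutoff]
            print(cutoff, len(kept))    # -> 1, 2 and 3 lines survive, respectively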

  12. Torpedo's Search Trajectory Design Based on Acquisition and Hit Probability Calculation

    Institute of Scientific and Technical Information of China (English)

    LI Wen-zhe; ZHANG Yu-wen; FAN Hui; WANG Yong-hu

    2008-01-01

    Focusing on the search-trajectory requirements of a lightweight torpedo attacking a warship, and by analyzing commonly used torpedo search trajectories, an improved torpedo search trajectory is designed, a mathematical model is built, and a simulation calculation taking the MK46 torpedo as an example is carried out. The calculation results show that this method can increase the acquisition probability and hit probability by about 10%-30% in some situations and is feasible for torpedo trajectory design. The research is of great reference value for acoustic homing torpedo trajectory design and torpedo combat efficiency research.
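
    Acquisition-probability figures of this kind are typically estimated by Monte Carlo runs over an assumed target-position error; the sketch below illustrates the idea with an invented Gaussian error model and invented numbers, not the paper's model:

        import numpy as np
        rng = np.random.default_rng(1)

        def acquisition_probability(n=200_000, r_acq=600.0, sigma=900.0):
            """Fraction of runs in which the target lies inside the seeker's
            acquisition radius; target-position error modelled as 2D Gaussian."""
            err = rng.normal(0.0, sigma, size=(n, 2))     # metres, made up
            return float(np.mean(np.hypot(err[:, 0], err[:, 1]) <= r_acq))

        print(acquisition_probability())   # compare candidate search patterns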

  13. MVP Based Calculation of Reactivity Loss Due to Gemstone Irradiation Facility of Thai Research Reactor

    International Nuclear Information System (INIS)

    Full text: The calculation of the initial core criticality of the Thai Research Reactor-1/Modification 1 was performed with the continuous-energy Monte Carlo code MVP using material cross-sections from the JENDL-3.3 continuous-energy library. The model was then extended to include the gemstone irradiation facility in order to calculate the magnitude of the reactivity loss. The results showed that the total reactivity worth of the control system was 10.83, and that the reactivity effect associated with the insertion of the gemstone irradiation facility was about -0.43% δk/k
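
    (For orientation, the reactivity change inferred from two criticality calculations is Δρ = (k₂ − k₁)/(k₁k₂); illustratively, a drop from k₁ = 1.0000 to k₂ = 0.9957 gives Δρ ≈ −0.43% δk/k, the magnitude quoted above. These k values are ours, not from the study.)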

  14. Autoradiography-based, three-dimensional calculation of dose rate for murine, human-tumor xenografts

    International Nuclear Information System (INIS)

    A Fast Fourier Transform method for calculating the three-dimensional dose-rate distribution for murine, human-tumour xenografts is outlined. The required input comprises evenly spaced activity slices which span the tumour; numerical values in these slices are determined by quantitative 125I autoradiography. For the absorbed dose-rate calculation, we assume the activity from both 131I- and 90Y-labeled radiopharmaceuticals would be distributed as measured with the 125I label. Two example cases are presented: an ovarian-carcinoma xenograft with an IgG2ak monoclonal antibody and a neuroblastoma xenograft with meta-iodobenzylguanidine (MIBG). (Author)
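
    A minimal sketch of the FFT step, assuming the activity map and a dose-point kernel are sampled on the same 3D grid with the kernel centred in its array; a production implementation would zero-pad to suppress circular wrap-around:

        import numpy as np

        def dose_rate(activity, kernel):
            """3D dose rate as the convolution of an activity map with a
            radionuclide dose-point kernel (toy version, no zero-padding)."""
            A = np.fft.rfftn(activity)
            K = np.fft.rfftn(np.fft.ifftshift(kernel))  # centre -> origin
            return np.fft.irfftn(A * K, s=activity.shape)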

  15. Communication: Revised electron affinity of SF6 from kinetic data.

    OpenAIRE

    Troe, J.; Miller, T; Viggiano, A.

    2012-01-01

    Previously determined experimental data for thermal attachment of electrons to SF6 and thermal detachment from SF6− over the range 590–670 K are reevaluated by a third-law analysis. Recent high-precision calculations of SF6− harmonic frequencies and anharmonicities (for several of the modes) lead to considerable changes in the modeled vibrational partition functions, which then have to be accommodated by a smaller value of the derived adiabatic electron affinity EA of SF6. The previously...
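
    (Schematically, the third-law analysis rests on the attachment/detachment equilibrium constant, K(T) = k_att/k_det ∝ [Q_vib(SF6⁻)/Q_vib(SF6)] exp(EA/kT); since the measured K(T) is fixed, a larger calculated vibrational partition function for SF6⁻ must be compensated by a smaller EA, which is the revision described above.)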

  16. Affine symmetry in mechanics of collective and internal modes. Part I. Classical models

    CERN Document Server

    Sławianowski, J J; Sławianowska, A; Gołubowska, B; Martens, A; Rożko, E E; Zawistowski, Z J

    2008-01-01

    Discussed is a model of collective and internal degrees of freedom with kinematics based on the affine group and its subgroups. The main novelty in comparison with previous attempts of this kind is that not only the kinematics but also the dynamics is affinely invariant. The relationship with the dynamics of integrable one-dimensional lattices is discussed. It is shown that affinely invariant geodetic models may encode the dynamics of something like elastic vibrations.

  17. Comparison among MCNP-based depletion codes applied to burnup calculations of pebble-bed HTR lattices

    International Nuclear Information System (INIS)

    The double heterogeneity characterising pebble-bed high temperature reactors (HTRs) makes Monte Carlo based calculation tools the most suitable for detailed core analyses. These codes can be successfully used to predict the isotopic evolution of the fuel during irradiation in this kind of core. At the moment, many MCNP-based computational systems are available for performing depletion calculations. All of these systems use MCNP to supply problem-dependent fluxes and/or microscopic cross sections to the depletion module; the latter then calculates the isotopic evolution of the fuel by solving Bateman's equations. In this paper, a comparative analysis of three different MCNP-based depletion codes is performed: Monteburns2.0, MCNPX2.6.0 and BGCore. The Monteburns code can be considered the reference code for HTR calculations, since it has already been verified during the HTR-N and HTR-N1 EU projects. All calculations have been performed on a reference model representing an infinite lattice of thorium-plutonium fuelled pebbles. The evolution of k-inf as a function of burnup has been compared, as well as the inventory of the important actinides. The k-inf comparison among the codes shows good agreement over the entire burnup history, with a maximum difference below 1%. The actinide inventory predictions also agree well; however, a significant discrepancy in the Am and Cm concentrations calculated by MCNPX, as compared to those of Monteburns and BGCore, has been observed. This is mainly due to the different Am-241 (n,γ) branching ratios utilized by the codes. An important advantage of BGCore is its significantly lower execution time for the considered depletion calculations: while providing reasonably accurate results, BGCore runs the depletion problem about two times faster than Monteburns and two to five times faster than MCNPX.
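
    The depletion module's role can be sketched as repeatedly exponentiating a burnup matrix assembled from the Monte Carlo fluxes and cross sections; the two-nuclide chain and rates below are invented, and SciPy's matrix exponential stands in for a real Bateman solver:

        import numpy as np
        from scipy.linalg import expm

        def deplete(n0, A, dt):
            """Advance nuclide densities through one step of dN/dt = A N."""
            return expm(A * dt) @ n0

        # toy chain: capture on nuclide 0 feeds nuclide 1, which decays
        sig_phi, lam = 1.0e-9, 3.0e-10            # made-up rates (1/s)
        A = np.array([[-sig_phi, 0.0],
                      [sig_phi, -lam]])
        print(deplete(np.array([1.0e24, 0.0]), A, 3.15e7))   # one-year step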

  18. Conformal field theory on affine Lie groups

    International Nuclear Information System (INIS)

    Working directly on affine Lie groups, we construct several new formulations of the WZW model, the gauged WZW model, and the generic affine-Virasoro action. In one formulation each of these conformal field theories (CFTs) is expressed as a one-dimensional mechanical system whose variables are coordinates on the affine Lie group. When written in terms of the affine group element, this formulation exhibits a two-dimensional WZW term. In another formulation each CFT is written as a two-dimensional field theory, with a three-dimensional WZW term, whose fields are coordinates on the affine group. On the basis of these equivalent formulations, we develop a translation dictionary in which the new formulations on the affine Lie group are understood as mode formulations of the conventional formulations on the Lie group. Using this dictionary, we also express each CFT as a three-dimensional field theory on the Lie group with a four-dimensional WZW term. 36 refs

  19. Conformal field theory on affine Lie groups

    Energy Technology Data Exchange (ETDEWEB)

    Clubok, K.S.

    1996-04-01

    Working directly on affine Lie groups, we construct several new formulations of the WZW model, the gauged WZW model, and the generic affine-Virasoro action. In one formulation each of these conformal field theories (CFTs) is expressed as a one-dimensional mechanical system whose variables are coordinates on the affine Lie group. When written in terms of the affine group element, this formulation exhibits a two-dimensional WZW term. In another formulation each CFT is written as a two-dimensional field theory, with a three-dimensional WZW term, whose fields are coordinates on the affine group. On the basis of these equivalent formulations, we develop a translation dictionary in which the new formulations on the affine Lie group are understood as mode formulations of the conventional formulations on the Lie group. Using this dictionary, we also express each CFT as a three-dimensional field theory on the Lie group with a four-dimensional WZW term. 36 refs.

  20. Maximin affinity learning of image segmentation

    CERN Document Server

    Turaga, Srinivas C; Helmstaedter, Moritz; Denk, Winfried; Seung, H Sebastian

    2009-01-01

    Images can be segmented by first using a classifier to predict an affinity graph that reflects the degree to which image pixels must be grouped together and then partitioning the graph to yield a segmentation. Machine learning has been applied to the affinity classifier to produce affinity graphs that are good in the sense of minimizing edge misclassification rates. However, this error measure is only indirectly related to the quality of segmentations produced by ultimately partitioning the affinity graph. We present the first machine learning algorithm for training a classifier to produce affinity graphs that are good in the sense of producing segmentations that directly minimize the Rand index, a well known segmentation performance measure. The Rand index measures segmentation performance by quantifying the classification of the connectivity of image pixel pairs after segmentation. By using the simple graph partitioning algorithm of finding the connected components of the thresholded affinity graph, we are ...
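
    A minimal sketch of the partitioning step described above, collapsing edge affinities to a per-pixel map for brevity and using SciPy's connected-components labelling in place of the paper's graph construction:

        import numpy as np
        from scipy.ndimage import label

        def segment(affinity, thresh=0.5):
            """Threshold a per-pixel affinity map and label components."""
            labels, n = label(affinity > thresh)
            return labels

        aff = np.array([[0.9, 0.8, 0.1],
                        [0.9, 0.2, 0.7],
                        [0.1, 0.1, 0.8]])
        print(segment(aff))     # two segments: upper-left and right blobs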