Ga(+) Basicity and Affinity Scales Based on High-Level Ab Initio Calculations.
Brea, Oriana; Mó, Otilia; Yáñez, Manuel
2015-10-26
The structure, relative stability and bonding of complexes formed by the interaction between Ga(+) and a large set of compounds, including hydrocarbons, aromatic systems, and oxygen-, nitrogen-, fluorine-, and sulfur-containing Lewis bases, have been investigated with the high-level composite ab initio Gaussian-4 theory. This allowed us to establish rather accurate Ga(+) cation affinity (GaCA) and Ga(+) cation basicity (GaCB) scales. The bonding analysis of the complexes under scrutiny shows that, even though one of the main ingredients of the Ga(+)-base interaction is electrostatic, it exhibits a non-negligible covalent character triggered by the presence of the low-lying empty 4p orbital of Ga(+), which favors charge donation from occupied orbitals of the base to the metal ion. This partial covalent character, also observed in AlCA scales, lies behind the dissimilarities observed when GaCAs are compared with Li(+) cation affinities, where such covalent contributions are practically nonexistent. Quite unexpectedly, there are also dissimilarities between several Ga(+) complexes and their Al(+) analogues, mainly affecting the relative stability of π-complexes involving aromatic compounds.
Proton Affinity Calculations with High Level Methods.
Kolboe, Stein
2014-08-12
Proton affinities, ranging from small reference compounds up to the methylbenzenes, naphthalene, and anthracene, have been calculated with high-accuracy computational methods, viz. W1BD, G4, G3B3, CBS-QB3, and M06-2X. Computed and currently accepted reference proton affinities are generally in excellent accord, but there are deviations. The literature value for propene appears to be 6-7 kJ/mol too high. Reported proton affinities for the methylbenzenes seem 4-5 kJ/mol too high. G4 and G3 computations generally give results in good accord with the high-level W1BD. Proton affinity values computed with the CBS-QB3 scheme are too low, and the error increases with increasing molecule size, reaching nearly 10 kJ/mol for the xylenes. The functional M06-2X fails markedly for some of the small reference compounds, in particular CO and ketene, but calculates methylbenzene proton affinities with high accuracy.
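The proton affinity discussed here is the negative enthalpy change of B + H+ → BH+, obtainable from the computed 298 K enthalpies of B and BH+ plus the proton's translational/PV enthalpy. A minimal sketch of the arithmetic, with entirely hypothetical enthalpies (not values from the paper):

```python
# Proton affinity from computed total enthalpies, a minimal sketch.
# The enthalpies below are HYPOTHETICAL placeholders, not values from the paper.
HARTREE_TO_KJ = 2625.4996  # 1 hartree in kJ/mol

def proton_affinity(h_base, h_protonated):
    """PA = H(B) + H(H+) - H(BH+), in kJ/mol; inputs in hartree.
    The proton's 298 K enthalpy is purely translational + PV: 5/2 RT = 6.197 kJ/mol."""
    h_proton = 6.197 / HARTREE_TO_KJ  # hartree
    return (h_base + h_proton - h_protonated) * HARTREE_TO_KJ

# Hypothetical 298 K enthalpies (hartree) for an ammonia-like base:
pa = proton_affinity(-56.510, -56.840)  # ~872.6 kJ/mol
```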
Reflectable bases for affine reflection systems
Azam, Saeid; Yousofzadeh, Malihe
2011-01-01
The notion of a "root base" together with its geometry plays a crucial role in the theory of finite and affine Lie algebras. However, it is known that such a notion does not exist for the recent generalizations of finite and affine root systems, such as extended affine root systems and affine reflection systems. As an alternative, we introduce the notion of a "reflectable base": a minimal subset $\Pi$ of roots such that the non-isotropic part of the root system can be recovered by reflecting roots of $\Pi$ relative to the hyperplanes determined by $\Pi$. We give a full characterization of reflectable bases for tame irreducible affine reflection systems of reduced types, excluding types $E_{6,7,8}$. As a byproduct of our results, we show that if the root system under consideration is locally finite, then any reflectable base is an integral base.
Calculations on Lie Algebra of the Group of Affine Symplectomorphisms
Directory of Open Access Journals (Sweden)
Zuhier Altawallbeh
2017-01-01
We find the image of the affine symplectic Lie algebra g_n from the Leibniz homology HL_*(g_n) to the Lie algebra homology H_*^Lie(g_n). The result shows that the image is the exterior algebra ∧*(w_n) generated by the forms w_n = ∑_{i=1}^{n} (∂/∂x_i ∧ ∂/∂y_i). Given the relevance of Hochschild homology to string topology, and to obtain more interesting applications, we show that such a map is of potential interest in string topology and homological algebra by taking into account that the Hochschild homology HH_{*-1}(U(g_n)) is isomorphic to H_{*-1}^Lie(g_n, U(g_n)^ad). Explicitly, we use the alternation of multilinear maps on our elements to carry out certain calculations.
Zou, Tiefang; Peng, Haitao; Cai, Ming; Wu, Hequan; Hu, Lin
2016-09-01
In order to analyze the uncertainty of a reconstructed result, the Interval Algorithm (IA), Affine Arithmetic (AA) and Modified Affine Arithmetic (MAA) are first introduced, and then a Taylor-Affine Arithmetic (TAA) is proposed based on the MAA and Taylor series. Steps of the TAA, especially for analyzing the uncertainty of a simulation result, are given. Its application is demonstrated and its feasibility validated through five numerical cases. Results showed that whether or not the other methods (the IA, the AA, the Upper and Lower Bound Method, the Finite Difference Method) perform well, the TAA works well, even in cases where the MAA fails because of division/root operations in the models. Furthermore, in order to ensure that the result obtained from the TAA is very close to the accurate interval, a simple algorithm was proposed based on the sub-interval technique; its feasibility was validated by two further numerical cases. Finally, a vehicle-pedestrian test was given to demonstrate the application of the TAA in practice. In this test, an impact-velocity interval of [35.5, 39.1] km/h was calculated following the steps of the TAA; such interval information is more useful in accident responsibility identification than a single number. This study provides a new alternative method for uncertainty analysis in accident reconstruction.
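The motivation for affine over plain interval arithmetic, on which the TAA builds, is the dependency problem: intervals forget that two occurrences of the same variable are correlated. A minimal sketch (the `Interval` and `Affine` classes below are illustrative toys, not the paper's implementation):

```python
# Interval vs. affine arithmetic on the expression x - x, a toy illustration
# of the dependency problem that affine forms avoid.

class Interval:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi
    def __sub__(self, other):
        # [a, b] - [c, d] = [a - d, b - c]: ignores correlation between operands
        return Interval(self.lo - other.hi, self.hi - other.lo)

class Affine:
    """x = x0 + sum(c_i * eps_i), eps_i in [-1, 1]; shared eps_i track dependency."""
    def __init__(self, x0, terms):
        self.x0, self.terms = x0, dict(terms)
    def __sub__(self, other):
        t = dict(self.terms)
        for k, v in other.terms.items():
            t[k] = t.get(k, 0.0) - v
        return Affine(self.x0 - other.x0, t)
    def range(self):
        r = sum(abs(c) for c in self.terms.values())
        return (self.x0 - r, self.x0 + r)

x_ia = Interval(3.0, 5.0)
x_aa = Affine(4.0, {"e1": 1.0})   # the same quantity, same range [3, 5]
wide = x_ia - x_ia                # IA: [-2, 2] -- spurious width
tight = (x_aa - x_aa).range()     # AA: (0.0, 0.0) -- exact
```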
Qureshi, M S; Sheikh, Q I; Hill, R; Brown, P E; Dickman, M J; Tzokov, S B; Rice, D W; Gjerde, D T; Hornby, D P
2013-08-01
The isolation of complex macromolecular assemblies at the concentrations required for structural analysis represents a major experimental challenge. Here we present a method that combines the genetic power of site-specific recombination, used to selectively "tag" one or more components of a protein complex, with affinity-based rapid filtration and a final step of capillary-based enrichment. This modified form of tandem affinity purification produces highly purified protein complexes at high concentrations in a highly efficient manner. The application of the method is demonstrated for the yeast Arp2/3 heptameric protein complex, which mediates reorganization of the actin cytoskeleton.
Moment-Based Method to Estimate Image Affine Transform
Institute of Scientific and Technical Information of China (English)
FENG Guo-rui; JIANG Ling-ge
2005-01-01
The estimation of an affine transform is a crucial problem in the image recognition field. This paper exploits properties that are invariant under translation, rotation and scaling, and proposes a simple method to estimate the affine transform kernel of a two-dimensional gray image. Maps applied to the original image produce correlative points that accurately reflect the affine transform feature of the image; from these, the unknown variables in the kernel of the transform are calculated. The whole scheme relies only on first-order moments and therefore has very good stability.
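The role of first-order moments can be illustrated with the centroid: under an affine map, the centroid of a point set is the affine image of the original centroid, so the translation part is recoverable from centroid differences. A toy check (synthetic points and a hypothetical kernel, not the paper's maps):

```python
import numpy as np

# First-order moments under an affine map: the centroid transforms affinely,
# so the translation follows from centroid differences once the kernel is known.
pts = np.array([[0.0, 0.0], [2.0, 0.0], [1.0, 3.0], [4.0, 1.0]])
A = np.array([[1.2, 0.3], [-0.2, 0.9]])   # hypothetical affine kernel
t = np.array([5.0, -1.0])                 # hypothetical translation
mapped = pts @ A.T + t

c_before = pts.mean(axis=0)               # first-order moments / area
c_after = mapped.mean(axis=0)
t_est = c_after - A @ c_before            # recovers t exactly
```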
Betowski, Leon D; Enlow, Mark; Riddick, Lee; Aue, Donald H
2006-11-30
Electron affinities (EAs) and free energies for electron attachment (DeltaGo(a,298K)) have been directly calculated for 45 polynuclear aromatic hydrocarbons (PAHs) and related molecules by a variety of theoretical methods, with standard regression errors of about 0.07 eV (mean unsigned error = 0.05 eV) at the B3LYP/6-31 + G(d,p) level and larger errors with HF or MP2 methods or using Koopmans' Theorem. Comparison of gas-phase free energies with solution-phase reduction potentials provides a measure of solvation energy differences between the radical anion and neutral PAH. A simple Born-charging model approximates the solvation effects on the radical anions, leading to a good correlation with experimental solvation energy differences. This is used to estimate unknown or questionable EAs from reduction potentials. Two independent methods are used to predict DeltaGo(a,298K) values: (1) based upon DFT methods, or (2) based upon reduction potentials and the Born model. They suggest reassignments or a resolution of conflicting experimental EAs for nearly one-half (17 of 38) of the PAH molecules for which experimental EAs have been reported. For the antiaromatic molecules, 1,3,5-tri-tert-butylpentalene and the dithia-substituted cyclobutadiene 1, the reduction potentials lead to estimated EAs close to those expected from DFT calculations and provide a basis for the prediction of the EAs and reduction potentials of pentalene and cyclobutadiene. The Born model has been used to relate the electrostatic solvation energies of PAH and hydrocarbon radical anions, and spherical halide anions, alkali metal cations, and ammonium ions to effective ionic radii from DFT electron-density envelopes. The Born model used for PAHs has been successfully extended here to quantitatively explain the solvation energy of the C60 radical anion.
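The Born-charging model mentioned above estimates the solvation free energy of an ion of charge q and effective radius r in a medium of dielectric constant ε as ΔG = −(q²e²/8πε₀r)(1 − 1/ε). A sketch with an illustrative radius and solvent, not fitted values from the paper:

```python
# Born-charging estimate of an ion's electrostatic solvation free energy.
# Radius and dielectric constant below are ILLUSTRATIVE, not the paper's fits.
COULOMB_KJ_A = 1389.35  # e^2 / (4*pi*eps0) in kJ*angstrom/mol

def born_solvation(q, radius_A, eps):
    """Delta-G_solv = -(q^2 e^2 / (8*pi*eps0*r)) * (1 - 1/eps), in kJ/mol."""
    return -(q ** 2) * COULOMB_KJ_A / (2.0 * radius_A) * (1.0 - 1.0 / eps)

# Hypothetical effective radius of 4.0 A for a radical anion in a solvent
# with eps ~ 36 (acetonitrile-like):
dg = born_solvation(-1, 4.0, 36.0)   # about -169 kJ/mol
```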
Calculating protein-ligand binding affinities with MMPBSA: Method and error analysis.
Wang, Changhao; Nguyen, Peter H; Pham, Kevin; Huynh, Danielle; Le, Thanh-Binh Nancy; Wang, Hongli; Ren, Pengyu; Luo, Ray
2016-10-15
Molecular Mechanics Poisson-Boltzmann Surface Area (MMPBSA) methods have become widely adopted for estimating protein-ligand binding affinities due to their efficiency and high correlation with experiment. Here, different computational alternatives were investigated to assess their impact on the agreement of MMPBSA calculations with experiment. Seven receptor families with both high-quality crystal structures and binding affinities were selected. First, the performance of nonpolar solvation models was studied, and it was found that the modern approach that separately models hydrophobic and dispersion interactions dramatically reduces the RMSDs of computed relative binding affinities. The numerical setup of the Poisson-Boltzmann methods was analyzed next. The data show that the impact of grid spacing on the quality of MMPBSA calculations is small: the numerical error at a grid spacing of 0.5 Å is already negligible. The impact of different atomic radius sets and different molecular surface definitions was further analyzed, and only weak influences on the agreement with experiment were found. The influence of the solute dielectric constant was also analyzed: a higher dielectric constant generally improves the overall agreement with experiment, especially for highly charged binding pockets. The data also showed that converged simulations caused a slight reduction in the agreement with experiment. Finally, the estimation of absolute binding free energies was briefly explored. Upon correction for the binding-induced rearrangement free energy and the loss of binding entropy, the errors in absolute binding affinities were also reduced dramatically when the modern nonpolar solvation model was used, although further developments are apparently necessary to improve the MMPBSA methods further. © 2016 Wiley Periodicals, Inc.
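The bookkeeping behind an MMPBSA estimate can be sketched as follows: each snapshot's effective energy is the sum of molecular-mechanics, polar (PB) and nonpolar solvation terms, averaged over frames for the complex, receptor and ligand separately, with ΔG_bind the difference of the three averages (entropy omitted here; all frame energies are invented for illustration):

```python
import numpy as np

# Schematic MM-PBSA bookkeeping: G = E_MM + G_PB + G_np per frame (entropy
# omitted), averaged over frames, then
#   DG_bind = <G_complex> - <G_receptor> - <G_ligand>.
# All frame energies below are INVENTED numbers in kcal/mol.

def mean_g(frames):
    """frames: iterable of per-frame (E_MM, G_PB, G_np) tuples."""
    return np.asarray(frames).sum(axis=1).mean()

complex_frames  = [(-120.0, -35.0, -8.0), (-118.0, -36.5, -7.8)]
receptor_frames = [(-80.0, -30.0, -5.0), (-79.0, -30.5, -5.1)]
ligand_frames   = [(-15.0, -9.0, -1.5), (-15.2, -8.8, -1.4)]

dg_bind = mean_g(complex_frames) - mean_g(receptor_frames) - mean_g(ligand_frames)
```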
Stocker-Majd, Gisela; Hilbrig, Frank; Freitag, Ruth
2008-06-13
Affinity precipitation was compared to affinity chromatography and batch adsorption as the final purification step in a protocol for the isolation of haemoglobin from human blood, with haptoglobin as the affinity ligand. The first steps of the process were realized by traditional methods (lysis of red blood cells followed by ammonium sulphate precipitation). For affinity chromatography (and batch adsorption) the ligand was linked to Sepharose; for affinity precipitation, to a thermoresponsive polymer, namely poly(N-isopropylacrylamide). Five haptoglobin-poly(N-isopropylacrylamide) bioconjugates (affinity macroligands) were constructed with different polymer:haptoglobin coupling ratios. Conjugation of haptoglobin to the soluble poly(N-isopropylacrylamide) apparently does not change the interaction thermodynamics with haemoglobin, as the haemoglobin binding constants calculated by Scatchard analysis for the affinity macroligands were of the same order of magnitude as those described in the literature for the haemoglobin-haptoglobin complex in solution. Two elution protocols were used for haemoglobin release from the various affinity materials, one at pH 2, the other with 5 M urea at pH 11. Both affinity chromatography and affinity precipitation yielded pure haemoglobin of high quality. Compared to affinity chromatography, affinity precipitation showed a significantly higher ligand efficiency (ratio of the experimental capacity to the theoretical one); the method thus makes better use of the expensive affinity ligand. As affinity precipitation only requires small temperature changes to bring about precipitation/redissolution of the affinity complexes and a centrifugation step for recovery of the precipitate, the method also has advantages in terms of scalability and simplicity.
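The Scatchard analysis used above linearizes single-site binding: B/F = Ka(Bmax − B), so the slope of B/F versus B gives −Ka, i.e. Kd = −1/slope. A sketch on synthetic data generated from an assumed Kd (not the paper's measurements):

```python
import numpy as np

# Scatchard analysis: for single-site binding B/F = Ka*(Bmax - B), so a
# linear fit of B/F against B has slope -Ka, and Kd = -1/slope.
# Data below are SYNTHETIC, generated from an assumed Kd.
Kd_true, Bmax = 2.0, 10.0
F = np.array([0.5, 1.0, 2.0, 4.0, 8.0])     # free ligand concentration
B = Bmax * F / (Kd_true + F)                # bound, single-site isotherm

slope, intercept = np.polyfit(B, B / F, 1)  # Scatchard plot is exactly linear here
Kd_est = -1.0 / slope                       # recovers Kd_true
```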
Fast Affinity Propagation Clustering based on Machine Learning
Directory of Open Access Journals (Sweden)
Shailendra Kumar Shrivastava
2013-01-01
Affinity propagation (AP) was recently introduced as an unsupervised learning algorithm for exemplar-based clustering. In this paper a novel Fast Affinity Propagation clustering approach based on Machine Learning (FAPML) is proposed. FAPML tries to put data points into clusters based on the history of the data points' cluster membership in the early stages. In FAPML we introduce an affinity learning constant and a dispersion constant, which supervise the clustering process. FAPML also enforces exemplar consistency and one-of-N constraints. Experiments conducted on many data sets such as Olivetti faces, Mushroom, Document summarization, Thyroid, Yeast, Wine quality Red, and Balance show that FAPML is up to 54% faster than the original AP with better net similarity.
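For reference, the original AP message-passing loop that FAPML accelerates can be written compactly: responsibilities and availabilities are exchanged with damping until exemplars emerge. The damping value, iteration count, and shared preference below are hand-chosen for a toy 1-D data set, not tuned values from the paper:

```python
import numpy as np

# Compact affinity propagation: damped responsibility/availability updates.
# Exemplars are points k with r(k,k) + a(k,k) > 0 after convergence.
def affinity_propagation(S, damping=0.5, iters=200):
    n = S.shape[0]
    R = np.zeros((n, n))
    A = np.zeros((n, n))
    rows = np.arange(n)
    for _ in range(iters):
        # responsibilities: r(i,k) = s(i,k) - max_{k' != k} [a(i,k') + s(i,k')]
        AS = A + S
        idx = AS.argmax(axis=1)
        first = AS[rows, idx]
        AS[rows, idx] = -np.inf
        second = AS.max(axis=1)
        Rnew = S - first[:, None]
        Rnew[rows, idx] = S[rows, idx] - second
        R = damping * R + (1 - damping) * Rnew
        # availabilities: a(i,k) = min(0, r(k,k) + sum_{i' not in {i,k}} max(0, r(i',k)))
        Rp = np.maximum(R, 0.0)
        np.fill_diagonal(Rp, np.diag(R))
        Anew = Rp.sum(axis=0)[None, :] - Rp
        diag = np.diag(Anew).copy()
        Anew = np.minimum(0.0, Anew)
        np.fill_diagonal(Anew, diag)
        A = damping * A + (1 - damping) * Anew
    return np.where(np.diag(R + A) > 0)[0]

x = np.array([0.0, 0.1, 0.2, 5.0, 5.1, 5.2])      # two obvious clusters
S = -(x[:, None] - x[None, :]) ** 2               # negative squared distance
np.fill_diagonal(S, -1.0)                         # shared preference, hand-chosen
exemplars = affinity_propagation(S)               # one exemplar per cluster
```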
Enhancing Community Detection By Affinity-based Edge Weighting Scheme
Energy Technology Data Exchange (ETDEWEB)
Yoo, Andy [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Sanders, Geoffrey [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Henson, Van [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Vassilevski, Panayot [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)
2015-10-05
Community detection refers to an important graph analytics problem of finding a set of densely-connected subgraphs in a graph and has gained a great deal of interest recently. The performance of current community detection algorithms is limited by an inherent constraint of unweighted graphs that offer very little information on their internal community structures. In this paper, we propose a new scheme to address this issue that weights the edges in a given graph based on recently proposed vertex affinity. The vertex affinity quantifies the proximity between two vertices in terms of their clustering strength, and therefore, it is ideal for graph analytics applications such as community detection. We also demonstrate that the affinity-based edge weighting scheme can improve the performance of community detection algorithms significantly.
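The mechanics of such an edge-weighting scheme can be sketched with a simple neighborhood-overlap (Jaccard) affinity, used here only as a stand-in for the paper's vertex-affinity measure, which is defined differently:

```python
# Edge weighting by neighborhood overlap before community detection.
# Jaccard affinity is a STAND-IN for the paper's vertex affinity measure:
# edges inside dense regions get high weight, bridges get low weight.

def jaccard_weights(adj):
    """adj: {vertex: set(neighbors)}. Returns {(u, v): weight} for u < v."""
    weights = {}
    for u, nbrs in adj.items():
        for v in nbrs:
            if u < v:
                shared = adj[u] & adj[v]
                union = adj[u] | adj[v]
                weights[(u, v)] = len(shared) / len(union)
    return weights

# Two triangles joined by a single bridge edge (2, 3):
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2, 4, 5}, 4: {3, 5}, 5: {3, 4}}
w = jaccard_weights(adj)
# Intra-triangle edges receive positive weight; the bridge (2, 3) gets 0,
# sharpening the community structure for a downstream weighted algorithm.
```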
Brown, Scott P; Muchmore, Steven W
2006-01-01
We have developed a system for performing computations on an enterprise grid, using a freely available grid-computing package that allows us to harvest unused CPU cycles from employee desktop computers. By modifying the traditional formulation of the Molecular Mechanics with Poisson-Boltzmann Surface Area (MM-PBSA) methodology, combined with a coarse-grained parallel implementation suitable for deployment onto our enterprise grid, we show that it is possible to produce rapid physics-based estimates of protein-ligand binding affinities that correlate well with experimental data. This is demonstrated by examining the correlation of our calculated binding affinities with experimental data, and also by comparison with the correlation obtained from traditional MM-PBSA binding-affinity calculations reported in the literature.
Dai, Lu; Li, Weikang; Sun, Fei; Li, Baizhi; Li, Hongrui; Zhang, Hongxing; Zheng, Qingchuan; Liang, Chongyang
2016-09-01
Designing affinity ligands has always been the focus of development in affinity chromatography. Previous antibody affinity ligand designs were mostly based on the crystal structure of protein A (UniProt code: P38507), with the antibody-binding domains modified according to the properties of their amino acid residues. Currently, more effective bioinformatic prediction and experimental validation are being used to improve the design of antibody affinity ligands. In the present study, the complex crystal structure (domain D of protein A with the Fab segment of IgM, PDB code: 1DEE) was used as the model. The vital site that inhibits binding between domain D and IgM was estimated by molecular dynamics (MD) simulation; MM-GBSA calculations were then used to design a mutant of domain D (K46E) with improved affinity at this vital site. Binding analysis using Biacore gave the association and dissociation parameters of the K46E mutant with IgM. The K46E mutant showed increased affinity for IgM, in the order K46E tetramer (KD=6.02×10(-9)M) > K46E mutant (KD=6.66×10(-8)M) > domain D (KD=2.17×10(-7)M). Similar results were obtained when the optimized ligands were immobilized on the chromatography medium. A complete design strategy was validated in this study, which will provide novel insight into designing new ligands for antibody affinity chromatography media.
De Proft, Frank; Sablon, Nick; Tozer, David J; Geerlings, Paul
2007-01-01
An important chemical property emerging from density-functional theory is the hardness, which can be evaluated as half of the difference between the vertical ionisation energy and electron affinity of the system. For many gas phase molecules, however, the electron affinity is negative and standard ways of evaluating this property are troublesome. In this contribution, we investigate an unconventional approximation for the electron affinity, based on the Kohn-Sham orbital energies of the frontier orbitals and the ionisation potential. It is shown that, for a large series of molecules possessing negative electron affinities, this methodology yields reasonable values for this quantity and that the correlation of the computed values with the experimental affinities from electron transmission spectroscopy is superior to other theoretical approaches. In a second part of this contribution, the hardness of a series of stable negative ions is evaluated in aqueous solution.
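The working definition in the abstract, hardness as half the difference between the vertical ionisation energy and the electron affinity, is a one-line calculation; the values below are illustrative (in eV), including a negative EA of the kind the paper's approximation targets:

```python
# Chemical hardness from vertical ionisation energy I and electron affinity A:
#   eta = (I - A) / 2.
# The numbers below are ILLUSTRATIVE, not values from the paper.

def hardness(ionisation_energy, electron_affinity):
    """Both arguments in eV; returns eta in eV."""
    return 0.5 * (ionisation_energy - electron_affinity)

# Hypothetical molecule with I = 10.5 eV and a negative EA of -1.2 eV:
eta = hardness(10.5, -1.2)   # a negative EA *increases* the hardness
```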
Evolution based on chromosome affinity from a network perspective
Monteiro, R. L. S.; Fontoura, J. R. A.; Carneiro, T. K. G.; Moret, M. A.; Pereira, H. B. B.
2014-06-01
Recent studies have focused on models to simulate the complex phenomenon of the evolution of species. Several studies have been performed with theoretical models based on Darwin's theories to relate them to the actual evolution of species. However, none of the existing models includes the affinity between individuals using network properties. In this paper, we present a new model based on the concept of affinity. The model is used to simulate the evolution of species in an ecosystem composed of individuals and their relationships. We propose an evolutionary algorithm that incorporates the degree centrality and efficiency network properties to perform the crossover process and to obtain the target network topology, respectively. Using a real network as a starting point, we simulate its evolution and compare the results with those of 5788 computer-generated networks.
Kong, Gang; Dai, Dao-Qing; Zou, Lu-Min
2008-07-01
In order to remove the artifacts of peripheral digital subtraction angiography (DSA), an affine-transformation-based automatic image registration algorithm is introduced here. The process is as follows: first, rectangular feature templates are constructed, centered on Harris corners extracted from the mask, and the motion vectors of the central feature points are estimated by template matching with maximum histogram energy as the similarity measure. The optimal parameters of the affine transformation are then calculated with the matrix singular value decomposition (SVD) method. Finally, bilinear intensity interpolation is applied to the mask according to the resulting affine transformation. More than 30 peripheral DSA registrations were performed with the presented algorithm; moving artifacts were removed with sub-pixel precision, and the time consumption is low enough to satisfy clinical requirements. Experimental results show the efficiency and robustness of the algorithm.
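The parameter-estimation step, recovering the six affine parameters from matched control points, is a linear least-squares problem; `numpy.linalg.lstsq` solves it via SVD, in the spirit of the SVD solution the abstract describes. The points and transform below are synthetic:

```python
import numpy as np

# Affine registration parameters from matched control points by least squares.
# Each match (x, y) -> (x', y') contributes two linear equations in the six
# affine parameters. Points and transform are SYNTHETIC.
src = np.array([[10.0, 20.0], [40.0, 25.0], [15.0, 60.0], [50.0, 55.0]])
A_true = np.array([[0.98, 0.05], [-0.04, 1.02]])
t_true = np.array([3.0, -2.0])
dst = src @ A_true.T + t_true               # would come from template matching

X = np.hstack([src, np.ones((len(src), 1))])        # design matrix [x, y, 1]
params, *_ = np.linalg.lstsq(X, dst, rcond=None)    # 3x2 solution (uses SVD)
A_est, t_est = params[:2].T, params[2]              # recover kernel and shift
```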
Flexible Molybdenum Electrodes towards Designing Affinity Based Protein Biosensors.
Kamakoti, Vikramshankar; Panneer Selvam, Anjan; Radha Shanmugam, Nandhinee; Muthukumar, Sriram; Prasad, Shalini
2016-07-18
A molybdenum-electrode-based flexible biosensor on a porous polyamide substrate has been fabricated and tested for its functionality as a protein affinity based biosensor. The biosensor performance was evaluated using a key cardiac biomarker, cardiac Troponin-I (cTnI). Molybdenum is a transition metal and exhibits electrochemical behavior upon interaction with an electrolyte. We have leveraged this property of molybdenum to design an affinity based biosensor using electrochemical impedance spectroscopy. We evaluated the feasibility of detecting cTnI in phosphate-buffered saline (PBS) and human serum (HS) by measuring impedance changes over a frequency window from 100 mHz to 1 MHz. Increasing changes in the measured impedance were correlated with increasing doses of cTnI molecules binding to the cTnI antibody-functionalized molybdenum surface. We achieved cTnI detection limits of 10 pg/mL in PBS and 1 ng/mL in HS. The use of flexible substrates for the biosensor demonstrates promise for integration with large-scale batch manufacturing processes.
Identifying Affinity Classes of Inorganic Materials Binding Sequences via a Graph-Based Model.
Du, Nan; Knecht, Marc R; Swihart, Mark T; Tang, Zhenghua; Walsh, Tiffany R; Zhang, Aidong
2015-01-01
Rapid advances in bionanotechnology have recently generated growing interest in identifying peptides that bind to inorganic materials and classifying them based on their inorganic material affinities. However, some distinct characteristics of inorganic-material-binding sequence data limit the performance of many widely used classification methods when applied to this problem. In this paper, we propose a novel framework to predict the affinity classes of peptide sequences with respect to an associated inorganic material. We first generate a large set of simulated peptide sequences based on an amino acid transition matrix tailored for the specific inorganic material. The probability of test sequences belonging to a specific affinity class is then calculated by minimizing an objective function through iterative propagation of probability estimates among sequences and sequence clusters. Results of computational experiments on two real inorganic-material-binding sequence data sets show that the proposed framework is highly effective for identifying the affinity classes of inorganic-material-binding sequences. Moreover, experiments on the Structural Classification of Proteins (SCOP) data set show that the proposed framework is general and can be applied to traditional protein sequences.
Calculation of Absolute Protein-Ligand Binding Affinity Using Path and Endpoint Approaches
2006-02-01
Affinity sensor based on immobilized molecular imprinted synthetic recognition elements.
Lenain, Pieterjan; De Saeger, Sarah; Mattiasson, Bo; Hedström, Martin
2015-07-15
An affinity sensor based on capacitive transduction was developed to detect a model compound, metergoline, in a continuous flow system. This system simulates the monitoring of low-molecular weight organic compounds in natural flowing waters, i.e. rivers and streams. During operation in such scenarios, control of the experimental parameters is not possible, which poses a true analytical challenge. A two-step approach was used to produce a sensor for metergoline. Submicron spherical molecularly imprinted polymers, used as recognition elements, were obtained through emulsion polymerization and subsequently coupled to the sensor surface by electropolymerization. This way, a robust and reusable sensor was obtained that regenerated spontaneously under the natural conditions in a river. Small organic compounds could be analyzed in water without manipulating the binding or regeneration conditions, thereby offering a viable tool for on-site application.
Wasserman, Evgeny; Rustad, James R.; Felmy, Andrew R.
1999-03-01
Calculation of the energy of a charged defect on a surface in supercell geometry is discussed. An important example of such a calculation is evaluation of surface proton affinities and acidities, as adding or removing a proton creates a charged unit cell. Systems with periodic boundary conditions in three spatial directions and a vacuum gap between slabs are demonstrated to be inadequate for unit cells having non-zero ionic charge and uniform neutralizing background. In such a system the calculated energy diverges linearly with the thickness of the vacuum gap. A system periodic in two directions and finite in the direction perpendicular to the surface (2-D PBC) with the neutralizing background distributed as the surface charge density is free from this problem. Furthermore, the correction for the interaction of the charged defect with its own translational images is needed to speed up the convergence to the infinite dilution limit. The expression for the asymptotic correction for the energy of interaction of a charged defect with its translational images in 2-D PBC geometry has been developed in this study. The asymptotic correction is evaluated as the interaction energy of a 2-D translationally periodic array of point charges located above and below the plate of non-uniform dielectric. This is a generalization of the method of M. Leslie and M.J. Gillan [J. Phys. C, 18 (1985) 973] for the calculation of the energy of a charged defect in bulk crystals. The usefulness of this correction was demonstrated on two test cases involving the calculation of proton affinity and acidity at the (012) surface of hematite. The proposed method is likely to be important in ab initio calculations of the energy effect of the surface protonation reactions, where computational limitations dictate a small size for the unit cell.
Structure-based identification of new high-affinity nucleosome binding sequences.
Battistini, Federica; Hunter, Christopher A; Moore, Irene K; Widom, Jonathan
2012-06-29
The substrate for the proteins that express genetic information in the cell is not naked DNA but an assembly of nucleosomes, where the DNA is wrapped around histone proteins. The organization of these nucleosomes on genomic DNA is influenced by the DNA sequence. Here, we present a structure-based computational approach that translates sequence information into the energy required to bend DNA into a nucleosome-bound conformation. The calculations establish the relationship between DNA sequence and histone octamer binding affinity. In silico selection using this model identified several new DNA sequences, which were experimentally found to have histone octamer affinities comparable to the highest-affinity sequences known. The results provide insights into the molecular mechanism through which DNA sequence information encodes its organization. A quantitative appreciation of the thermodynamics of nucleosome positioning and rearrangement will be one of the key factors in understanding the regulation of transcription and in the design of new promoter architectures for the purposes of tuning gene expression dynamics.
DEFF Research Database (Denmark)
Poongavanam, Vasanthanathan; Svendsen, Casper Steinmann; Kongsted, Jacob
2014-01-01
Quantum mechanical (QM) calculations have been used to predict the binding affinity of a set of ligands towards HIV-1 RT-associated RNase H (RNH). The QM-based chelation calculations show improved binding affinity prediction for the inhibitors compared to using an empirical scoring function. Furthermore, full-protein fragment molecular orbital (FMO) calculations were conducted and subsequently analysed for individual residue stabilization/destabilization energy contributions to the overall binding affinity, in order to better understand the true and false predictions. After a successful assessment of the methods based on a training set of molecules, QM-based chelation calculations were used as a filter in virtual screening of compounds in the ZINC database. By this, we find that, compared to regular docking, QM-based chelation calculations significantly reduce the large number of false positives.
Affinity functions: recognizing essential parameters in fuzzy connectedness based image segmentation
Ciesielski, Krzysztof C.; Udupa, Jayaram K.
2009-02-01
Fuzzy connectedness (FC) constitutes an important class of image segmentation schemas. Although affinity functions represent the core aspect (main variability parameter) of FC algorithms, they have not been studied systematically in the literature. In this paper, we present a thorough study to fill this gap. Our analysis is based on the notion of equivalent affinities: if any two equivalent affinities are used in the same FC schema to produce two versions of the algorithm, then these algorithms are equivalent in the sense that they lead to identical segmentations. We give a complete characterization of the affinity equivalence and show that many natural definitions of affinity functions and their parameters used in the literature are redundant in the sense that different definitions and values of such parameters lead to equivalent affinities. We also show that two main affinity types - homogeneity based and object feature based - are equivalent, respectively, to the difference quotient of the intensity function and Rosenfeld's degree of connectivity. In addition, we demonstrate that any segmentation obtained via relative fuzzy connectedness (RFC) algorithm can be viewed as segmentation obtained via absolute fuzzy connectedness (AFC) algorithm with an automatic and adaptive threshold detection. We finish with an analysis of possible ways of combining different component affinities that result in non equivalent affinities.
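A minimal sketch of the machinery analyzed above, assuming a homogeneity-based affinity on a 1-D "image": the connectedness of a pixel is the best (max) over paths from the seed of the weakest (min) affinity along the path, and AFC thresholds that map. On a line this reduces to running minima:

```python
import numpy as np

# Absolute fuzzy connectedness on a toy 1-D image. The Gaussian-of-difference
# affinity and sigma below are illustrative choices, not the paper's.
f = np.array([10.0, 11.0, 10.5, 30.0, 31.0, 30.5])   # two flat regions, one edge
sigma = 2.0
aff = np.exp(-(np.diff(f) ** 2) / (2 * sigma ** 2))  # affinity between neighbors

def connectivity(aff, seed, n):
    """Connectedness = max over paths of the min affinity; on a line this is
    just the running minimum outward from the seed in each direction."""
    mu = np.zeros(n)
    mu[seed] = 1.0
    for i in range(seed + 1, n):          # rightward: weakest link so far
        mu[i] = min(mu[i - 1], aff[i - 1])
    for i in range(seed - 1, -1, -1):     # leftward
        mu[i] = min(mu[i + 1], aff[i])
    return mu

mu = connectivity(aff, seed=1, n=len(f))
segment = mu >= 0.5    # AFC: threshold the connectivity map; RFC would instead
                       # compare connectivity to competing seeds
```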
DEFF Research Database (Denmark)
Poongavanam, Vasanthanathan; Olsen, Lars; Jørgensen, Flemming Steen;
2010-01-01
Predicting binding affinities for receptor-ligand complexes is still one of the challenging processes in computational structure-based ligand design. Many computational methods have been developed to achieve this goal, such as docking and scoring methods, the linear interaction energy (LIE) method, and methods based on statistical mechanics. In the present investigation, we started from an LIE model to predict the binding free energy of structurally diverse compounds of cytochrome P450 1A2 ligands, one of the important human metabolizing isoforms of the cytochrome P450 family. The data set includes both substrates and inhibitors. It appears that the electrostatic contribution to the binding free energy becomes negligible in this particular protein and a simple empirical model was derived, based on a training set of eight compounds. The root mean square error for the training set was 3.7 kJ/mol. Subsequent...
A global benchmark study using affinity-based biosensors
Rich, Rebecca L.; Papalia, Giuseppe A.; Flynn, Peter J.; Furneisen, Jamie; Quinn, John; Klein, Joshua S.; Katsamba, Phini S.; Waddell, M. Brent; Scott, Michael; Thompson, Joshua; Berlier, Judie; Corry, Schuyler; Baltzinger, Mireille; Zeder-Lutz, Gabrielle; Schoenemann, Andreas; Clabbers, Anca; Wieckowski, Sebastien; Murphy, Mary M.; Page, Phillip; Ryan, Thomas E.; Duffner, Jay; Ganguly, Tanmoy; Corbin, John; Gautam, Satyen; Anderluh, Gregor; Bavdek, Andrej; Reichmann, Dana; Yadav, Satya P.; Hommema, Eric; Pol, Ewa; Drake, Andrew; Klakamp, Scott; Chapman, Trevor; Kernaghan, Dawn; Miller, Ken; Schuman, Jason; Lindquist, Kevin; Herlihy, Kara; Murphy, Michael B.; Bohnsack, Richard; Andrien, Bruce; Brandani, Pietro; Terwey, Danny; Millican, Rohn; Darling, Ryan J.; Wang, Liann; Carter, Quincy; Dotzlaf, Joe; Lopez-Sagaseta, Jacinto; Campbell, Islay; Torreri, Paola; Hoos, Sylviane; England, Patrick; Liu, Yang; Abdiche, Yasmina; Malashock, Daniel; Pinkerton, Alanna; Wong, Melanie; Lafer, Eileen; Hinck, Cynthia; Thompson, Kevin; Primo, Carmelo Di; Joyce, Alison; Brooks, Jonathan; Torta, Federico; Bagge Hagel, Anne Birgitte; Krarup, Janus; Pass, Jesper; Ferreira, Monica; Shikov, Sergei; Mikolajczyk, Malgorzata; Abe, Yuki; Barbato, Gaetano; Giannetti, Anthony M.; Krishnamoorthy, Ganeshram; Beusink, Bianca; Satpaev, Daulet; Tsang, Tiffany; Fang, Eric; Partridge, James; Brohawn, Stephen; Horn, James; Pritsch, Otto; Obal, Gonzalo; Nilapwar, Sanjay; Busby, Ben; Gutierrez-Sanchez, Gerardo; Gupta, Ruchira Das; Canepa, Sylvie; Witte, Krista; Nikolovska-Coleska, Zaneta; Cho, Yun Hee; D’Agata, Roberta; Schlick, Kristian; Calvert, Rosy; Munoz, Eva M.; Hernaiz, Maria Jose; Bravman, Tsafir; Dines, Monica; Yang, Min-Hsiang; Puskas, Agnes; Boni, Erica; Li, Jiejin; Wear, Martin; Grinberg, Asya; Baardsnes, Jason; Dolezal, Olan; Gainey, Melicia; Anderson, Henrik; Peng, Jinlin; Lewis, Mark; Spies, Peter; Trinh, Quyhn; Bibikov, Sergei; Raymond, Jill; Yousef, Mohammed; Chandrasekaran, Vidya; 
Feng, Yuguo; Emerick, Anne; Mundodo, Suparna; Guimaraes, Rejane; McGirr, Katy; Li, Yue-Ji; Hughes, Heather; Mantz, Hubert; Skrabana, Rostislav; Witmer, Mark; Ballard, Joshua; Martin, Loic; Skladal, Petr; Korza, George; Laird-Offringa, Ite; Lee, Charlene S.; Khadir, Abdelkrim; Podlaski, Frank; Neuner, Phillippe; Rothacker, Julie; Rafique, Ashique; Dankbar, Nico; Kainz, Peter; Gedig, Erk; Vuyisich, Momchilo; Boozer, Christina; Ly, Nguyen; Toews, Mark; Uren, Aykut; Kalyuzhniy, Oleksandr; Lewis, Kenneth; Chomey, Eugene; Pak, Brian J.; Myszka, David G.
2013-01-01
To explore the variability in biosensor studies, 150 participants from 20 countries were given the same protein samples and asked to determine kinetic rate constants for the interaction. We chose a protein system that was amenable to analysis using different biosensor platforms as well as by users of different expertise levels. The two proteins (a 50-kDa Fab and a 60-kDa glutathione S-transferase [GST] antigen) form a relatively high-affinity complex, so participants needed to optimize several experimental parameters, including ligand immobilization and regeneration conditions as well as analyte concentrations and injection/dissociation times. Although most participants collected binding responses that could be fit to yield kinetic parameters, the quality of a few data sets could have been improved by optimizing the assay design. Once these outliers were removed, the average reported affinity across the remaining panel of participants was 620 pM with a standard deviation of 980 pM. These results demonstrate that when this biosensor assay was designed and executed appropriately, the reported rate constants were consistent, and independent of which protein was immobilized and which biosensor was used. PMID:19133223
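For a 1:1 interaction, the affinity reported by the participants is tied to the kinetic rate constants by KD = kd/ka. A small Python sketch of the association-phase response model such biosensor experiments are fit to; the rate constants here are illustrative values chosen to reproduce a 620 pM affinity, not numbers from the study:

```python
import math

# Hypothetical 1:1 Langmuir binding parameters (illustrative only):
ka = 1.0e6   # association rate constant, 1/(M*s)
kd = 6.2e-4  # dissociation rate constant, 1/s

# Equilibrium dissociation constant KD = kd / ka -> 6.2e-10 M = 620 pM.
KD = kd / ka

def response(t, conc, Rmax=100.0):
    """Association-phase biosensor response for a 1:1 interaction model."""
    Req = Rmax * conc / (conc + KD)   # equilibrium plateau for this analyte conc.
    kobs = ka * conc + kd             # observed rate of approach to equilibrium
    return Req * (1.0 - math.exp(-kobs * t))
```

The response rises monotonically toward the concentration-dependent plateau Req; fitting curves of this shape at several analyte concentrations is what yields ka, kd, and hence KD.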
Indian Academy of Sciences (India)
Younes Valadbeigi; Hossein Farrokhpour; Mahmoud Tabrizchi
2014-07-01
The proton affinities, gas-phase basicities, adiabatic ionization energies, and electron affinities of some important hydroxylamines and alkanolamines were calculated using the B3LYP, CBS-Q and G4MP2 methods. The B3LYP method was also used to calculate vertical ionization energies and electron affinities of the molecules. The calculated ionization energies are in the range of 8-10.5 eV and decrease as the number of carbon atoms increases. Computational results and an ion mobility spectrometry study confirm that some alkanolamines lose a water molecule upon protonation at the oxygen site and form cationic cyclic compounds. The effect of different substituents on the cyclization of ethanolamine was also studied theoretically.
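At 298 K, a proton affinity follows from the computed enthalpies of the base and its protonated form plus the translational enthalpy of the free proton (5/2 RT). A sketch of the bookkeeping with hypothetical enthalpy values, not results from the paper:

```python
# Hypothetical composite-method enthalpies at 298 K, in hartree
# (illustrative numbers only, not from the paper):
H_B  = -210.500000   # neutral base B
H_BH = -210.850000   # protonated base BH+

HARTREE_TO_KJ = 2625.4996   # kJ/mol per hartree
H_PROTON = 6.197            # kJ/mol: H(H+) = 5/2 * R * T at 298.15 K

# PA(B) = -dH for B + H+ -> BH+ = H(B) + H(H+) - H(BH+)
PA = (H_B - H_BH) * HARTREE_TO_KJ + H_PROTON
```

The gas-phase basicity is the corresponding Gibbs-energy quantity, obtained the same way from free energies instead of enthalpies.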
An Affinity Propagation-Based DNA Motif Discovery Algorithm
Directory of Open Access Journals (Sweden)
Chunxiao Sun
2015-01-01
The planted (l, d) motif search (PMS) is one of the fundamental problems in bioinformatics, which plays an important role in locating transcription factor binding sites (TFBSs) in DNA sequences. Nowadays, identifying weak motifs and reducing the effect of local optimum are still important but challenging tasks for motif discovery. To solve the tasks, we propose a new algorithm, APMotif, which first applies the Affinity Propagation (AP) clustering in DNA sequences to produce informative and good candidate motifs and then employs Expectation Maximization (EM) refinement to obtain the optimal motifs from the candidate motifs. Experimental results both on simulated data sets and real biological data sets show that APMotif usually outperforms four other widely used algorithms in terms of high prediction accuracy.
An Affinity Propagation-Based DNA Motif Discovery Algorithm.
Sun, Chunxiao; Huo, Hongwei; Yu, Qiang; Guo, Haitao; Sun, Zhigang
2015-01-01
The planted (l, d) motif search (PMS) is one of the fundamental problems in bioinformatics, which plays an important role in locating transcription factor binding sites (TFBSs) in DNA sequences. Nowadays, identifying weak motifs and reducing the effect of local optimum are still important but challenging tasks for motif discovery. To solve the tasks, we propose a new algorithm, APMotif, which first applies the Affinity Propagation (AP) clustering in DNA sequences to produce informative and good candidate motifs and then employs Expectation Maximization (EM) refinement to obtain the optimal motifs from the candidate motifs. Experimental results both on simulated data sets and real biological data sets show that APMotif usually outperforms four other widely used algorithms in terms of high prediction accuracy.
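The (l, d) motif search problem being solved can be stated concretely: find an l-mer that occurs in every input sequence with at most d mismatches. On tiny inputs it can be solved by exhaustive enumeration, which makes the problem definition precise even though it is hopeless at realistic scale (unlike APMotif). A brute-force Python sketch:

```python
from itertools import product

def hamming(a, b):
    """Number of mismatched positions between two equal-length strings."""
    return sum(x != y for x, y in zip(a, b))

def pms(sequences, l, d):
    """Brute-force planted (l, d) motif search: return every l-mer that occurs
    in each sequence with at most d mismatches. Runs in O(4**l) candidates,
    so it is only an illustration of the problem statement."""
    hits = []
    for motif in product("ACGT", repeat=l):
        m = "".join(motif)
        if all(any(hamming(m, s[i:i + l]) <= d for i in range(len(s) - l + 1))
               for s in sequences):
            hits.append(m)
    return hits
```

With a motif such as ACGT planted with one mismatch per sequence, `pms(seqs, 4, 1)` recovers it among the candidates.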
Li, Peng; He, Tingting; Hu, Xiaohua; Zhao, Junmin; Shen, Xianjun; Zhang, Ming; Wang, Yan
2014-06-01
A novel algorithm based on Connected Affinity Clique Extension (CACE) for mining overlapping functional modules in protein interaction networks is proposed in this paper. In this approach, the protein connected affinity, which is inferred from protein complexes, is interpreted as the reliability and possibility of interaction. The protein interaction network is constructed as a weighted graph, with the weight dependent on the connected affinity coefficient. The experimental results of CACE on two test data sets show that it detects functional modules much more effectively and accurately than the state-of-the-art algorithms CPM and IPC-MCE.
Quantum image encryption based on generalized affine transform and logistic map
Liang, Hao-Ran; Tao, Xiang-Yang; Zhou, Nan-Run
2016-07-01
Quantum circuits of the generalized affine transform are devised based on the novel enhanced quantum representation of digital images. A novel quantum image encryption algorithm combining the generalized affine transform with logistic map is suggested. The gray-level information of the quantum image is encrypted by the XOR operation with a key generator controlled by the logistic map, while the position information of the quantum image is encoded by the generalized affine transform. The encryption keys include the independent control parameters used in the generalized affine transform and the logistic map. Thus, the key space is large enough to frustrate the possible brute-force attack. Numerical simulations and analyses indicate that the proposed algorithm is realizable, robust and has a better performance than its classical counterpart in terms of computational complexity.
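The key-generator idea can be sketched classically: a logistic-map orbit is quantized into a byte keystream and XORed with the gray values, so applying the same operation twice restores the image. A minimal Python sketch; the parameters x0 and mu are illustrative, and the paper's actual scheme operates on quantum image representations and additionally scrambles pixel positions with the generalized affine transform:

```python
def logistic_keystream(x0, mu, n):
    """Generate n key bytes by iterating the logistic map x <- mu*x*(1-x)
    and quantizing each iterate to a byte."""
    x, out = x0, []
    for _ in range(n):
        x = mu * x * (1.0 - x)
        out.append(int(x * 256) % 256)
    return out

def xor_image(pixels, x0=0.3141, mu=3.99):
    """XOR a flat list of 8-bit gray values with the logistic keystream.
    XOR is an involution: applying it twice with the same key decrypts."""
    key = logistic_keystream(x0, mu, len(pixels))
    return [p ^ k for p, k in zip(pixels, key)]
```

The encryption key is the pair (x0, mu); sensitivity of the logistic map to these parameters is what makes the key space hard to search by brute force.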
Data base to compare calculations and observations
Energy Technology Data Exchange (ETDEWEB)
Tichler, J.L.
1985-01-01
Meteorological and climatological data bases were compared with known tritium release points and diffusion calculations to determine whether calculated concentrations could replace measured concentrations at the monitoring stations. Daily tritium concentrations were monitored at 8 stations and 16 possible receptors. Automated data retrieval strategies are listed. (PSB)
High affinity, bioavailable 3-amino-1,4-benzodiazepine-based gamma-secretase inhibitors.
Owens, Andrew P; Nadin, Alan; Talbot, Adam C; Clarke, Earl E; Harrison, Timothy; Lewis, Huw D; Reilly, Michael; Wrigley, Jonathan D J; Castro, José L
2003-11-17
In this paper, we describe the development of a novel series of high affinity, orally bioavailable 3-amino-1,4-benzodiazepine-based gamma-secretase inhibitors for the potential treatment of Alzheimer's disease. We disclose structure-activity relationships around the 1-, 3- and 5-positions of the benzodiazepine core structure.
Targeting Anti-Cancer Active Compounds: Affinity-Based Chromatographic Assays
de Moraes, Marcela Cristina; Cardoso, Carmen Lucia; Seidl, Claudia; Moaddel, Ruin; Cass, Quezia Bezerra
2016-01-01
Affinity-based chromatography assays encompass the use of solid supports containing immobilized biological targets to monitor binding events in the isolation, identification and/or characterization of bioactive compounds. This powerful bioanalytical technique allows the screening of potential binders through fast analyses that can be performed directly on isolated substances or complex matrices. An overview of recent research on frontal and zonal affinity-based chromatographic screening assays, which have been used as tools in the identification and characterization of new anti-cancer agents, is presented. In addition, a critical evaluation of the recently emerged ligand-fishing assays in complex mixtures is also discussed. PMID:27306095
Walkup, Ward G; Kennedy, Mary B
2014-06-01
PDZ (PSD-95, DiscsLarge, ZO1) domains function in nature as protein binding domains within scaffold and membrane-associated proteins. They comprise ∼90 residues and make specific, high affinity interactions with complementary C-terminal peptide sequences, with other PDZ domains, and with phospholipids. We hypothesized that the specific, strong interactions of PDZ domains with their ligands would make them well suited for use in affinity chromatography. Here we describe a novel affinity chromatography method applicable for the purification of proteins that contain PDZ domain-binding ligands, either naturally or introduced by genetic engineering. We created a series of affinity resins comprised of PDZ domains from the scaffold protein PSD-95, or from neuronal nitric oxide synthase (nNOS), coupled to solid supports. We used them to purify heterologously expressed neuronal proteins or protein domains containing endogenous PDZ domain ligands, eluting the proteins with free PDZ domain peptide ligands. We show that Proteins of Interest (POIs) lacking endogenous PDZ domain ligands can be engineered as fusion products containing C-terminal PDZ domain ligand peptides or internal, N- or C-terminal PDZ domains and then can be purified by the same method. Using this method, we recovered recombinant GFP fused to a PDZ domain ligand in active form as verified by fluorescence yield. Similarly, chloramphenicol acetyltransferase (CAT) and β-Galactosidase (LacZ) fused to a C-terminal PDZ domain ligand or an N-terminal PDZ domain were purified in active form as assessed by enzymatic assay. In general, PDZ domains and ligands derived from PSD-95 were superior to those from nNOS for this method. PDZ Domain Affinity Chromatography promises to be a versatile and effective method for purification of a wide variety of natural and recombinant proteins.
Towards full Quantum-Mechanics-based Protein-Ligand Binding Affinities.
Ehrlich, Stephan; Göller, Andreas H; Grimme, Stefan
2017-01-29
Computational methods play a key role in modern drug design in the pharmaceutical industry but are mostly based on force fields, which are limited in accuracy when describing non-classical binding effects, proton transfer, or metal coordination. Here, we propose a general fully quantum mechanical (QM) scheme for the computation of protein-ligand affinities. It works on a single protein cutout (of about 1000 atoms) and evaluates all contributions (interaction energy, solvation, thermostatistical) to the absolute binding free energy at the highest feasible QM level. The methodology is tested on two different protein targets: activated serine protease factor X (FXa) and tyrosine-protein kinase 2 (TYK2). We demonstrate that the geometry of the model systems can be efficiently energy-minimized by using general purpose graphics processing units, resulting in structures that are close to the co-crystallized protein-ligand structures. Our best calculations at a hybrid DFT level (PBEh-3c composite method) for the FXa ligand set result in an overall mean absolute deviation as low as 2.1 kcal mol(-1). Though very encouraging, an analysis of outliers indicates that the structure optimization level, conformational sampling, and solvation treatment require further improvement.
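For context, binding free energies map exponentially onto dissociation constants via dG = RT ln Kd, so a 2.1 kcal/mol mean deviation corresponds to roughly a 35-fold uncertainty in Kd at 298 K. A quick Python sketch of the conversion:

```python
import math

R = 1.98720425864083e-3  # gas constant, kcal/(mol*K)
T = 298.15               # temperature, K

def kd_from_dg(dg_kcal):
    """Dissociation constant (M) from binding free energy: dG = RT ln Kd."""
    return math.exp(dg_kcal / (R * T))

def dg_from_kd(kd):
    """Inverse conversion: binding free energy (kcal/mol) from Kd (M)."""
    return R * T * math.log(kd)
```

An error of 2.1 kcal/mol shifts Kd by a factor of exp(2.1/RT), about 35, which is why chemical accuracy (~1 kcal/mol) is the usual target for affinity prediction.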
Pašteka, L. F.; Eliav, E.; Borschevsky, A.; Kaldor, U.; Schwerdtfeger, P.
2017-01-01
The first ionization potential (IP) and electron affinity (EA) of the gold atom have been determined to an unprecedented accuracy using relativistic coupled cluster calculations up to the pentuple excitation level including the Breit and QED contributions. We reach meV accuracy (with respect to the experimental values) by carefully accounting for all individual contributions beyond the standard relativistic coupled cluster approach. Thus, we are able to resolve the long-standing discrepancy between experimental and theoretical IP and EA of gold.
Affinity-based methodologies and ligands for antibody purification: advances and perspectives.
Roque, Ana C A; Silva, Cláudia S O; Taipa, M Angela
2007-08-10
Many successful, recent therapies for life-threatening diseases such as cancer and rheumatoid arthritis are based on the recognition between native or genetically engineered antibodies and cell-surface receptors. Although naturally produced by the immune system, the need for antibodies with unique specificities and designed for single application, has encouraged the search for novel antibody purification strategies. The availability of these products to the end-consumer is strictly related to manufacture costs, particularly those attributed to downstream processing. Over the last decades, academia and industry have developed different types of interactions and separation techniques for antibody purification, affinity-based strategies being the most common and efficient methodologies. The affinity ligands utilized range from biological to synthetic designed molecules with enhanced resistance and stability. Despite the successes achieved, the purification "paradigm" still moves interests and efforts in the continuous demand for improved separation performances. This review will focus on recent advances and perspectives in antibody purification by affinity interactions using different techniques, with particular emphasis on affinity chromatography.
Numerical inductance calculations based on first principles.
Shatz, Lisa F; Christensen, Craig W
2014-01-01
A method of calculating inductances based on first principles is presented, which has the advantage over the more popular simulators in that fundamental formulas are explicitly used so that a deeper understanding of the inductance calculation is obtained with no need for explicit discretization of the inductor. It also has the advantage over the traditional method of formulas or table lookups in that it can be used for a wider range of configurations. It relies on the use of fast computers with a sophisticated mathematical computing language such as Mathematica to perform the required integration numerically so that the researcher can focus on the physics of the inductance calculation and not on the numerical integration.
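The approach can be illustrated on a textbook case: the mutual inductance of two coaxial circular loops, obtained by evaluating Neumann's double line integral M = (mu0/4pi) * double-integral of dl1.dl2/r directly. The article performs such integrations in Mathematica; this Python sketch uses a plain midpoint rule instead, and the geometry is a hypothetical example:

```python
import math

MU0 = 4e-7 * math.pi  # vacuum permeability, H/m

def mutual_inductance(a, b, d, n=400):
    """Mutual inductance of two coaxial circular loops (radii a and b,
    axial separation d) from Neumann's formula, with the two line
    integrals evaluated by an n-point midpoint rule each -- no lookup
    tables and no discretization of the conductor geometry."""
    h = 2.0 * math.pi / n
    total = 0.0
    for i in range(n):
        p1 = (i + 0.5) * h
        for j in range(n):
            p2 = (j + 0.5) * h
            dl_dot = a * b * math.cos(p1 - p2)  # dl1 . dl2 integrand factor
            r = math.sqrt(d * d + a * a + b * b - 2 * a * b * math.cos(p1 - p2))
            total += dl_dot / r
    return MU0 / (4.0 * math.pi) * total * h * h
```

For loops far apart the result approaches the magnetic-dipole limit M = mu0*pi*a^2*b^2/(2*d^3), a useful sanity check on the numerical integration.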
Directory of Open Access Journals (Sweden)
Vasanthanathan Poongavanam
Quantum mechanical (QM) calculations have been used to predict the binding affinity of a set of ligands towards HIV-1 RT associated RNase H (RNH). The QM based chelation calculations show improved binding affinity prediction for the inhibitors compared to using an empirical scoring function. Furthermore, full protein fragment molecular orbital (FMO) calculations were conducted and subsequently analysed for individual residue stabilization/destabilization energy contributions to the overall binding affinity in order to better understand the true and false predictions. After a successful assessment of the methods based on the use of a training set of molecules, QM based chelation calculations were used as filter in virtual screening of compounds in the ZINC database. By this, we find, compared to regular docking, QM based chelation calculations to significantly reduce the large number of false positives. Thus, the computational models tested in this study could be useful as high throughput filters for searching HIV-1 RNase H active-site molecules in the virtual screening process.
Clustering of Symbolic Data based on Affinity Coefficient: Application to a Real Data Set
Sousa, Áurea; Bacelar-Nicolau,Helena; Nicolau, Fernando C.; Silva, Osvaldo
2013-01-01
Copyright © 2013 Walter de Gruyter GmbH. In this paper, we illustrate an application of Ascendant Hierarchical Cluster Analysis (AHCA) to complex data taken from the literature (interval data), based on the standardized weighted generalized affinity coefficient, by the method of Wald and Wolfowitz. The probabilistic aggregation criteria used belong to a parametric family of methods under the probabilistic approach of AHCA, named VL methodology. Finally, we compare the results achieved usin...
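The classical affinity coefficient underlying the standardized weighted generalized version used in the paper is, for two frequency distributions, the sum of geometric means of matched relative frequencies (Matusita's coefficient). A minimal Python sketch of that scalar core; the paper's symbolic/interval-data generalization and VL aggregation criteria are not reproduced here:

```python
import math

def affinity_coefficient(p, q):
    """Basic affinity coefficient between two frequency distributions:
    sum of sqrt(p_i * q_i) after normalizing each vector to sum to 1.
    Equals 1 for identical distributions and 0 for disjoint support."""
    sp, sq = sum(p), sum(q)
    return sum(math.sqrt((a / sp) * (b / sq)) for a, b in zip(p, q))
```

Because the coefficient is bounded in [0, 1], it behaves as a similarity measure that hierarchical aggregation criteria can act on directly.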
EXACT LINEARIZATION BASED MULTIPLE-SUBSPACE ITERATIVE RESOLUTION TO AFFINE NONLINEAR CONTROL SYSTEM
Institute of Scientific and Technical Information of China (English)
XU Zi-xiang; ZHOU De-yun; DENG Zi-chen
2006-01-01
For the optimal control problem of an affine nonlinear system, exact feedback linearization based on differential geometry theory was applied. Then, starting from the analogy between computational structural mechanics and optimal control, a multiple-substructure method was introduced to solve the linearized optimal control problem, and finally the solution to the original nonlinear system was recovered. Compared with the classical linearization by Taylor expansion, this approach avoids the growth of truncation error as the region of operation enlarges.
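For a single-input system in control-affine form x' = f(x) + g(x)u, exact (feedback) linearization cancels the nonlinearity through the input, after which any linear design applies without Taylor-truncation error. A scalar Python sketch with a hypothetical f and g; the paper treats the general multivariable case via differential geometry:

```python
# Hypothetical control-affine plant: x' = f(x) + g(x) * u
f = lambda x: x**3          # nonlinear drift
g = lambda x: 1.0 + x**2    # input gain (never zero, so linearization is valid)

def linearizing_control(x, v):
    """Cancel the nonlinearity so the closed loop is exactly x' = v."""
    return (v - f(x)) / g(x)

def simulate(x0, steps=2000, dt=1e-3):
    """Euler-integrate the true nonlinear plant under the linearizing
    feedback plus a simple linear design v = -5x on the linearized system."""
    x = x0
    for _ in range(steps):
        v = -5.0 * x
        u = linearizing_control(x, v)
        x += dt * (f(x) + g(x) * u)   # plant sees the true dynamics
    return x
```

With the feedback in place the closed loop is exactly x' = -5x over the whole state space, so the state decays to zero regardless of how strong the nonlinearity is.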
Brownian-motion based simulation of stochastic reaction-diffusion systems for affinity based sensors
Tulzer, Gerhard; Heitzinger, Clemens
2016-04-01
In this work, we develop a 2D algorithm for stochastic reaction-diffusion systems describing the binding and unbinding of target molecules at the surfaces of affinity-based sensors. In particular, we simulate the detection of DNA oligomers using silicon-nanowire field-effect biosensors. Since these devices are uniform along the nanowire, two dimensions are sufficient to capture the kinetic effects. The model combines a stochastic ordinary differential equation for the binding and unbinding of target molecules with a diffusion equation for their transport in the liquid. A Brownian-motion based algorithm simulates the diffusion process, which is linked to a stochastic-simulation algorithm for association at and dissociation from the surface. The simulation data show that the shape of the cross section of the sensor yields areas with significantly different target-molecule coverage. Different initial conditions are investigated as well in order to aid rational sensor design. A comparison of the association/hybridization behavior for different receptor densities allows optimization of the functionalization setup depending on the target-molecule density.
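The coupling described, Brownian transport in the liquid linked to a stochastic association/dissociation process at the surface, can be caricatured in one dimension with a lattice random walk. A deliberately simplified Python sketch; all parameters are illustrative, and the paper's model is a 2D cross-section with DNA-oligomer kinetics:

```python
import random

def simulate_sensor(n_mol=200, steps=4000, height=50, p_bind=0.5,
                    p_unbind=1e-3, seed=1):
    """1D caricature of an affinity sensor: molecules random-walk between a
    reflecting top wall and the sensor surface at 0, where they bind with
    probability p_bind and, once bound, dissociate with probability p_unbind
    per step. Returns the number of bound molecules at the end."""
    rng = random.Random(seed)
    pos = [rng.randrange(1, height) for _ in range(n_mol)]  # distance from surface
    bound = [False] * n_mol
    for _ in range(steps):
        for i in range(n_mol):
            if bound[i]:
                if rng.random() < p_unbind:      # stochastic dissociation event
                    bound[i] = False
                    pos[i] = 1                   # released just above the surface
            else:
                pos[i] += rng.choice((-1, 1))    # Brownian (lattice) step
                pos[i] = min(pos[i], height - 1) # reflecting wall at the top
                if pos[i] <= 0:                  # reached the sensor surface
                    if rng.random() < p_bind:
                        bound[i] = True
                    pos[i] = 1
    return sum(bound)
```

Even this caricature reproduces the qualitative behavior that matters for sensor design: the steady-state coverage depends on the competition between transport to the surface and the binding/unbinding rates.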
Serizawa, Takeshi; Fukuta, Hiroki; Date, Takaaki; Sawada, Toshiki
2016-02-01
Peptides with affinities for the target segments of polymer hydrogels were identified by biological screening using phage-displayed peptide libraries, and these peptides exhibited an affinity-based release capability from hydrogels. The results from cell culture assays demonstrated the sustained anticancer effects of the drug-conjugated peptides that were released from the hydrogels.
Novel cyclen-based linear polymer as a high-affinity binding material for DNA condensation
Institute of Scientific and Technical Information of China (English)
(no author listed)
2009-01-01
A novel cyclen-based linear polyamine (POGEC) was designed and synthesized from the reaction between 1,3-propanediol diglycidyl ether and 1,7-bis(diethoxyphosphoryl)-1,4,7,10-tetraazacyclododecane. High-affinity binding between POGEC and DNA was demonstrated by agarose gel electrophoresis and scanning electron microscopy (SEM). Moreover, the formed POGEC/DNA complex (termed polyplex) could be dissociated to release the free DNA through addition of NaCl solution at physiological concentration. Fluorescence spectroscopy was used to measure the high-affinity binding and DNA condensation capability of POGEC. Circular dichroism (CD) spectra indicate that the DNA conformation did not change after binding to POGEC.
Data-based identification and control of nonlinear systems via piecewise affine approximation.
Lai, Chow Yin; Xiang, Cheng; Lee, Tong Heng
2011-12-01
The piecewise affine (PWA) model represents an attractive model structure for approximating nonlinear systems. In this paper, a procedure for obtaining PWA autoregressive exogenous (PWARX) models (autoregressive models with exogenous inputs) of nonlinear systems is proposed. The two key parameters defining a PWARX model, namely, the parameters of the locally affine subsystems and the partition of the regressor space, are estimated: the former through a least-squares-based identification method using multiple models, and the latter using standard procedures such as a neural network classifier or a support vector machine classifier. Having obtained the PWARX model of the nonlinear system, a controller is then derived to control the system for reference tracking. Both simulation and experimental studies show that the proposed algorithm can indeed provide accurate PWA approximation of nonlinear systems, and the designed controller provides good tracking performance.
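The parameter-estimation step can be seen in a toy one-dimensional example: with the partition of the regressor space fixed (the paper estimates it with a neural-network or SVM classifier), each affine submodel is recovered by ordinary least squares. A minimal noise-free Python sketch with a hypothetical PWA map and an assumed known partition at x = 0:

```python
def fit_affine(pts):
    """Ordinary least squares for y = a*x + b on a list of (x, y) pairs."""
    n = len(pts)
    sx = sum(x for x, _ in pts)
    sy = sum(y for _, y in pts)
    sxx = sum(x * x for x, _ in pts)
    sxy = sum(x * y for x, y in pts)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    return a, (sy - a * sx) / n

# Toy PWARX-style identification: partition assumed known (x < 0 vs x >= 0).
xs = [i / 10.0 for i in range(-20, 21)]
ys = [(2.0 * x + 1.0) if x < 0 else (-1.0 * x + 1.0) for x in xs]  # true PWA map
left = fit_affine([(x, y) for x, y in zip(xs, ys) if x < 0])
right = fit_affine([(x, y) for x, y in zip(xs, ys) if x >= 0])
```

On noise-free data the per-region fits recover the true slopes and intercepts exactly; with noise and an unknown partition, the classification step becomes the hard part, which is the paper's focus.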
Novel cyclen-based linear polymer as a high-affinity binding material for DNA condensation
Institute of Scientific and Technical Information of China (English)
XIANG YongZhe; WANG Na; ZHANG Ji; LI Kun; ZHANG ZhongWei; LIN HongHui; YU XiaoQi
2009-01-01
A novel cyclen-based linear polyamine (POGEC) was designed and synthesized from the reaction between 1,3-propanediol diglycidyl ether and 1,7-bis(diethoxyphosphoryl)-1,4,7,10-tetraazacyclododecane. High-affinity binding between POGEC and DNA was demonstrated by agarose gel electrophoresis and scanning electron microscopy (SEM). Moreover, the formed POGEC/DNA complex (termed polyplex) could be dissociated to release the free DNA through addition of NaCl solution at physiological concentration. Fluorescence spectroscopy was used to measure the high-affinity binding and DNA condensation capability of POGEC. Circular dichroism (CD) spectra indicate that the DNA conformation did not change after binding to POGEC.
Target identification of natural products and bioactive compounds using affinity-based probes.
Pan, Sijun; Zhang, Hailong; Wang, Chenyu; Yao, Samantha C L; Yao, Shao Q
2016-05-04
Covering: 2010 to 2014. Advances in isolation, synthesis and screening strategies have made many bioactive substances available. However, in most cases their putative biological targets remain unknown. Herein, we highlight recent advances in target identification of natural products and bioactive compounds by using affinity-based probes. Aided by photoaffinity labelling, this strategy can capture potential cellular targets (on- and off-targets) of a natural product or bioactive compound in live cells directly, even when the compound-target interaction is reversible with moderate affinity. The knowledge of these targets may help uncover molecular pathways and new therapeutics for currently untreatable diseases. In this highlight, we introduce the development of various photoactivatable groups, their synthesis and applications in target identification of natural products and bioactive compounds, with a focus on work done in recent years and from our laboratory. We further discuss the strengths and weaknesses of each group and the outlook for this novel proteome-wide profiling strategy.
Development of an epoxy-based monolith used for the affinity capturing of Escherichia coli bacteria.
Peskoller, Caroline; Niessner, Reinhard; Seidel, Michael
2009-05-01
An epoxy-based monolith has been developed for use as a hydrophilic support in bioseparation. This monolith is produced by self-polymerization of polyglycerol-3-glycidyl ether in organic solvents as porogens at room temperature within 1 h. The result is a highly cross-linked structure with useful mechanical properties. The porosity and pore diameter can be controlled by varying the composition of the porogen. In this work, an epoxy-based monolith with a high porosity (79%) and large pore size (22 microm) is prepared and used in affinity capturing of bacterial cells. These features allow the passage of bacterial cells through the column. Polymyxin B is used as the affinity ligand, which allows the binding of gram-negative bacteria. The efficiency of the monolithic affinity column is studied with Escherichia coli spiked into water. Bacterial cells are concentrated on the column at pH 4 and eluted with a recovery of 97+/-3% in 200 microL by changing the pH value, without impairing the viability of the bacteria. The dynamic capacity of the monolithic column is nearly independent of the flow rate (4x10(9) cells/column). It is thereby possible to separate and enrich gram-negative bacterial cells, such as E. coli, at high flow rates (10 mL/min) and low back pressure (<1 bar) in a volume as low as 200 microL, compatible with real-time polymerase chain reaction, microarray formats, and biosensors.
Vasilescu, Alina; Nunes, Gilvanda; Hayat, Akhtar; Latif, Usman; Marty, Jean-Louis
2016-11-05
Food allergens are proteins from nuts and tree nuts, fish, shellfish, wheat, soy, eggs or milk which trigger severe adverse reactions in the human body, involving IgE-type antibodies. Sensitive detection of allergens in a large variety of food matrices has become increasingly important considering the emergence of functional foods and new food manufacturing technologies. For example, proteins such as casein from milk or lysozyme and ovalbumin from eggs are sometimes used as fining agents in the wine industry. Nonetheless, allergen detection in processed foods is a challenging endeavor, as allergen proteins are degraded during food processing steps involving heating or fermentation. Detection of food allergens was primarily achieved via the enzyme-linked immunosorbent assay (ELISA) or by chromatographic methods. With the advent of biosensors, electrochemical affinity-based biosensors such as those incorporating antibodies and aptamers as biorecognition elements were also reported in the literature. In this review paper, we highlight the success achieved in the design of electrochemical affinity biosensors based on disposable screen-printed electrodes for the detection of protein allergens. We discuss the analytical figures of merit of various disposable screen-printed affinity sensors in relation to the methodologies employed for immobilization of the bioreceptors on the transducer surface.
Li, Feng; Dong, Ping-Jun; Zhuang, Qian-Fen
2009-05-15
A novel column-based chromatographic protein refolding strategy was developed using dye-ligand affinity chromatography (DLAC) based on a macroporous biomaterial. Chitosan-silica (CS-silica) biomaterial with a macroporous surface was used as the supporting matrix for the preparation of the DLAC material. The dye ligand Cibacron Blue F3GA (CBF) was selected as the affinity handle and could be covalently immobilized, using the reactivity of the NH(2) groups on the CS-silica biomaterial, to form the dye-ligand affinity adsorbent (CBF-CS-silica). After the model protein catalase was denatured with 6 mol/L urea, the denaturant could be rapidly removed and catalase successfully refolded, facilitated by adsorption on CBF-CS-silica. The urea denaturation process and the elution conditions for the chromatographic refolding were optimized by measuring the tryptophan fluorescence and activity of catalase. The refolding performance of the proposed DLAC was compared with dilution refolding: the protein concentration during chromatographic refolding increased by a factor of 20 without reducing the yield achieved by dilution refolding. This column-based strategy, using dye-ligand affinity chromatography on a porous biomaterial matrix, shows potential for the chromatographic refolding of proteins.
Bi, Xiaodong; Liu, Zhen
2014-12-16
Enzyme activity assays are an important method in clinical diagnostics. However, conventional enzyme activity assays suffer from apparent interference from the sample matrix. Herein, we present a new format of enzyme activity assay that can effectively eliminate the effects of the sample matrix. The key is a 96-well microplate modified with a molecularly imprinted polymer (MIP) prepared according to a newly proposed method called boronate affinity-based oriented surface imprinting. Alkaline phosphatase (ALP), a glycoprotein enzyme routinely used as an indicator for several diseases in clinical tests, was taken as a representative target enzyme. The prepared MIP exhibited strong affinity toward the template enzyme (with a dissociation constant of 10^(-10) M) as well as superb tolerance for interference. Thus, the enzyme molecules in a complicated sample matrix could be specifically captured and cleaned up for the activity assay, which eliminated the interference from the sample matrix. On the other hand, because the boronate affinity MIP retains the enzymatic activity of glycoprotein enzymes, the enzyme captured by the MIP was directly used for the activity assay. Thus, additional assay time and the possible enzyme or activity loss associated with the enzyme release step required by other methods were avoided. Assay of ALP in human serum was successfully demonstrated, suggesting a promising prospect for the proposed method in real-world applications.
Marchenko, N Yu; Sikorskaya, E V; Marchenkov, V V; Kashparov, I A; Semisotnov, G V
2016-03-01
Molecular chaperones are involved in the folding, oligomerization, transport, and degradation of numerous cellular proteins. Most chaperones are heat-shock proteins (HSPs). A number of diseases in various organisms are accompanied by changes in the structure and functional activity of chaperones, revealing their vital importance. One of the fundamental properties of chaperones is their ability to bind polypeptides lacking a rigid spatial structure. Here, we demonstrate that affinity chromatography using sorbents with covalently attached denatured proteins allows effective purification and quantitative assessment of the protein partners they bind. Using pure Escherichia coli chaperone GroEL (Hsp60), the capacities of affinity sorbents based on denatured pepsin or lysozyme were evaluated as 1 mg and 1.4 mg of GroEL per 1 ml of sorbent, respectively. Cell lysates of bacteria (E. coli, Thermus thermophilus, and Yersinia pseudotuberculosis) and archaea (Halorubrum lacusprofundi), as well as a lysate of rat liver mitochondria, were analyzed using the affinity carrier with denatured lysozyme. It was found that, apart from Hsp60, other proteins with molecular weights of about 100, 50, 40, and 20 kDa are able to interact with denatured lysozyme.
Affinity analysis for biomolecular interactions based on magneto-optical relaxation measurements
Aurich, Konstanze; Nagel, Stefan; Heister, Elena; Weitschies, Werner
2008-12-01
Magneto-optical relaxation measurements of magnetically labelled biomolecules are a promising tool for immunometric analyses. Carcinoembryonic antigen (CEA) and its polyclonal and monoclonal antibodies (anti-CEA) were utilized as a model system for affinity analysis of the interaction between antibody and antigen. For this purpose antibodies were coupled with magnetic nanoparticles (MNPs). Aggregation of these antibody sensors due to interactions with the CEA was observed subsequently by measuring the relaxation time of the birefringence of a transmitted laser beam that occurs in a pulsed magnetic field. A kinetic model of chain-like aggregation developed for these purposes enables the rapid and simple calculation of the kinetic parameters of the underlying protein interaction. From the known antigen concentration and the increase in particle size during the interaction we are able to estimate the unknown parameters with standard methods for the statistical description of stepwise polymerization. This novel affinity analysis was successfully applied for the antigen-antibody interaction described herein and can be applied to other biomolecular interactions. First efforts have been made to establish magneto-optical relaxation measurements in body fluids.
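The chain-like aggregation model above maps antibody-antigen binding onto stepwise polymerization statistics. A minimal sketch of that idea, assuming ideal second-order step growth and Flory statistics (the paper's actual kinetic model may include size-dependent terms, so all names and relations here are illustrative):

```python
# Hedged sketch: stepwise (Flory-type) aggregation statistics relating the
# mean aggregate size measured magneto-optically to a rate constant.
# The simple second-order kinetics is an assumption, not the authors' model.

def conversion(k, c0, t):
    """Fraction of reacted binding sites for ideal second-order step growth."""
    return k * c0 * t / (1.0 + k * c0 * t)

def mean_aggregate_size(k, c0, t):
    """Number-average degree of aggregation (Flory: Xn = 1 / (1 - p))."""
    return 1.0 / (1.0 - conversion(k, c0, t))

def rate_constant_from_size(xn, c0, t):
    """Invert the relation to estimate k from an observed mean size."""
    p = 1.0 - 1.0 / xn
    return p / ((1.0 - p) * c0 * t)
```

Given the initial sensor concentration c0 and the particle-size growth measured via birefringence relaxation, the inversion yields the effective aggregation rate constant.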
Affine Invariant, Model-Based Object Recognition Using Robust Metrics and Bayesian Statistics
Zografos, Vasileios; 10.1007/11559573_51
2010-01-01
We revisit the problem of model-based object recognition for intensity images and attempt to address some of the shortcomings of existing Bayesian methods, such as unsuitable priors and the treatment of residuals with a non-robust error norm. We do so by using a reformulation of the Huber metric and carefully chosen prior distributions. Our proposed method is invariant to 2-dimensional affine transformations and, because it is relatively easy to train and use, it is suited for general object matching problems.
High-throughput and multiplexed regeneration buffer scouting for affinity-based interactions.
Geuijen, Karin P M; Schasfoort, Richard B; Wijffels, Rene H; Eppink, Michel H M
2014-06-01
Affinity-based analyses on biosensors depend partly on regeneration between measurements. Regeneration is performed with a buffer that efficiently breaks all interactions between ligand and analyte while maintaining the active binding site of the ligand. We demonstrated regeneration buffer scouting using a continuous flow microspotter combined with a surface plasmon resonance imaging platform to simultaneously test 48 different regeneration buffers on a single biosensor. Optimal regeneration conditions are found within hours, and the method consumes only small amounts of buffer, analyte, and ligand. This workflow can be applied to any ligand that is coupled through amine, thiol, or streptavidin immobilization.
Inulin-based glycopolymer: its preparation, lectin affinity and gelation property.
Izawa, Kazumi; Akiyama, Kento; Abe, Haruka; Togashi, Yosuke; Hasegawa, Teruaki
2013-06-01
A glycopolymer composed of an inulin scaffold and pendent β-lactosides was developed from commercially available inulin through sequential chemical modifications: tosylation, azidation, and subsequent Huisgen cycloaddition with an alkyne-terminated β-lactoside. The resultant inulin-based glycopolymer has a unique dual affinity towards β-galactoside- and α-glucoside-specific lectins, attributable to its pendent β-lactosides and terminal α-glucoside. Its gelation properties were also assessed, revealing that the inulin-based glycopolymer forms hydrogels whose critical gelation concentration (CGC) is lower than that required for hydrogels made from native inulin. The drug release properties of the inulin-based glycopolymer are also discussed in this paper.
Energy Technology Data Exchange (ETDEWEB)
Moaddel, Ruin [Gerontology Research Center, National Institute on Aging, National Institutes of Health, 5600 Nathan Shock Drive, Baltimore, MD 21224-6825 (United States); Wainer, Irving W. [Gerontology Research Center, National Institute on Aging, National Institutes of Health, 5600 Nathan Shock Drive, Baltimore, MD 21224-6825 (United States)]. E-mail: Wainerir@grc.nia.nih.gov
2006-03-30
Membranes obtained from cell lines that express or do not express a target membrane-bound protein have been immobilized on a silica-based liquid chromatographic support or on the surface of an activated glass capillary. The resulting chromatographic columns have been placed in liquid chromatographic systems and used to characterize the target proteins and to identify small molecules that bind to the target. Membranes containing ligand-gated ion channels, G-protein coupled receptors, and drug transporters have been prepared and characterized. If a marker ligand has been identified for the target protein, frontal or zonal displacement chromatographic techniques can be used to determine binding affinities (K_d values), and non-linear chromatography can be used to assess the association (k_on) and dissociation (k_off) rate constants and the thermodynamics of the binding process. Membrane-based affinity columns have been created using membranes from a cell line that does not express the target protein (control) and the same cell line that expresses the target protein after genomic transfection (experimental). The resulting columns can be placed in a parallel chromatography system, and the differential retention between the control and experimental columns can be used to identify small molecules and proteins that bind to the target protein. These applications will be illustrated using columns created from cellular membranes containing nicotinic acetylcholine receptors and the drug transporter P-glycoprotein.
Nguyen, Ha P; Koutsoukas, Alexios; Mohd Fauzi, Fazlin; Drakakis, Georgios; Maciejewski, Mateusz; Glen, Robert C; Bender, Andreas
2013-09-01
Diversity selection is a frequently applied strategy for assembling high-throughput screening libraries, on the assumption that a diverse compound set increases the chances of finding bioactive molecules. Based on previous work on experimental 'affinity fingerprints', in this study a novel diversity selection method is benchmarked that utilizes predicted bioactivity profiles as descriptors. Compounds were selected based on their predicted activity against half of the targets (training set), and diversity was assessed based on coverage of the remaining (test set) targets. Simultaneously, fingerprint-based diversity selection was performed. An original version of the method exhibited on average a 5% increase, and an improved version on average a 10% increase, in target-space coverage compared with the fingerprint-based methods. As a typical case, bioactivity-based selection of 231 compounds (2%) from a particular data set ('Cutoff-40') resulted in 47.0% and 50.1% coverage, while fingerprint-based selection only achieved 38.4% target coverage for the same subset size. In conclusion, the novel bioactivity-based selection method outperformed the fingerprint-based method in sampling bioactive chemical space on the data sets considered. The structures retrieved were structurally more acceptable to medicinal chemists while at the same time being more lipophilic; hence, bioactivity-based diversity selection of compounds would best be combined with physicochemical property filters in practice.
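The coverage-based benchmark above can be sketched as a greedy maximum-coverage selection over predicted bioactivity profiles; the data layout and the greedy criterion below are illustrative assumptions, not the authors' exact protocol:

```python
# Hedged sketch of bioactivity-profile diversity selection: greedily pick
# compounds that cover the most not-yet-covered predicted targets.

def select_diverse(profiles, n_select):
    """profiles: dict mapping compound id -> set of predicted active targets."""
    covered, chosen = set(), []
    remaining = dict(profiles)
    for _ in range(min(n_select, len(profiles))):
        # pick the compound adding the most new targets to the covered set
        best = max(remaining, key=lambda c: len(remaining[c] - covered))
        chosen.append(best)
        covered |= remaining.pop(best)
    return chosen, covered
```

Target-space coverage of a subset is then simply `len(covered)` divided by the total number of test-set targets.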
A multiobjective evolutionary algorithm to find community structures based on affinity propagation
Shang, Ronghua; Luo, Shuang; Zhang, Weitong; Stolkin, Rustam; Jiao, Licheng
2016-07-01
Community detection plays an important role in reflecting and understanding the topological structure of complex networks, and can be used to help mine the potential information in networks. This paper presents a Multiobjective Evolutionary Algorithm based on Affinity Propagation (APMOEA) which improves the accuracy of community detection. Firstly, APMOEA uses affinity propagation (AP) to initially divide the network. To accelerate convergence, the multiobjective evolutionary algorithm selects nondominated solutions from the preliminary partitioning results as its initial population. Secondly, the multiobjective evolutionary algorithm finds solutions approximating the true Pareto-optimal front by repeatedly selecting nondominated solutions from the population after crossover and mutation in each iteration, which overcomes the tendency of data clustering methods to fall into local optima. Finally, APMOEA uses an elitist strategy, called "external archive", to prevent degeneration during the multiobjective evolutionary search. Under this strategy, the preliminary partitioning results obtained by AP are archived and participate in the final selection of Pareto-optimal solutions. Experiments on benchmark test data, including both computer-generated networks and eight real-world networks, show that the proposed algorithm achieves more accurate results and faster convergence than seven other state-of-the-art algorithms.
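The repeated selection of nondominated solutions that APMOEA performs after crossover and mutation is standard Pareto filtering. A minimal sketch for two objectives to be minimized (e.g. negative modularity plus a community-separation cost; the objective pair is an assumption for illustration):

```python
# Hedged sketch of the Pareto (nondominated) filter used in multiobjective
# evolutionary algorithms. Solutions are tuples of objective values,
# all to be minimized.

def dominates(a, b):
    """a dominates b if it is no worse in every objective and better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def nondominated(solutions):
    """Return solutions not dominated by any other solution."""
    return [s for s in solutions
            if not any(dominates(t, s) for t in solutions if t is not s)]
```

In APMOEA this filter would be applied to the offspring population each generation, and the surviving solutions fed into the external archive.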
Barron, Mace G.
2017-01-01
The flexible hydrophobic ligand binding pocket (LBP) of estrogen receptor α (ERα) allows the binding of a wide variety of endocrine disruptors. Upon ligand binding, the LBP reshapes around the contours of the ligand and stabilizes the complex by complementary hydrophobic interactions and specific hydrogen bonds with the ligand. Here we present a framework for quantitative analysis of the steric and electronic features of the human ERα-ligand complex using three dimensional (3D) protein-ligand interaction description combined with 3D-QSAR approach. An empirical hydrophobicity density field is applied to account for hydrophobic contacts of ligand within the LBP. The obtained 3D-QSAR model revealed that hydrophobic contacts primarily determine binding affinity and govern binding mode with hydrogen bonds. Several residues of the LBP appear to be quite flexible and adopt a spectrum of conformations in various ERα-ligand complexes, in particular His524. The 3D-QSAR was combined with molecular docking based on three receptor conformations to accommodate receptor flexibility. The model indicates that the dynamic character of the LBP allows accommodation and stable binding of structurally diverse ligands, and proper representation of the protein flexibility is critical for reasonable description of binding of the ligands. Our results provide a quantitative and mechanistic understanding of binding affinity and mode of ERα agonists and antagonists that may be applicable to other nuclear receptors. PMID:28061508
Batch affinity adsorption of His-tagged proteins with EDTA-based chitosan.
Hua, Weiwei; Lou, Yimin; Xu, Weiyuan; Cheng, Zhixian; Gong, Xingwen; Huang, Jianying
2016-01-01
Affinity adsorption purification of hexahistidine-tagged (His-tagged) proteins using EDTA-chitosan-based adsorbents was designed and carried out. Chitosan was elaborated with ethylenediaminetetraacetic acid (EDTA), and the resulting polymer was characterized by FTIR, TGA, and TEM. Different metals, including Ni(2+), Cu(2+), and Zn(2+), were immobilized on EDTA-chitosan, and their capability for specific adsorption of His-tagged proteins was then investigated. The results showed that Ni(2+)-EDTA-chitosan and Zn(2+)-EDTA-chitosan had high affinity toward His-tagged proteins, thus isolating them from protein mixtures. The target fluorescent-labeled hexahistidine protein retained its fluorescence throughout the purification procedure when Zn(2+)-EDTA-chitosan was used as the sorbent, allowing real-time monitoring of the migration of the fluorescent-labeled His-tagged protein. Comparatively, Zn(2+)-EDTA-chitosan showed more specific binding for the target protein, but with lower binding capacity. It was further shown that this purification system could be recovered and reused at least 5 times and could be run on large scales. The presented M(2+)-EDTA-chitosan system, with its capability to specifically bind His-tagged proteins, makes the purification of His-tagged proteins easy to handle, avoids tedious preliminary treatment, and offers the possibility of continuous processing and reduced operational costs relative to conventional processes.
Gene Therapy Vectors with Enhanced Transfection Based on Hydrogels Modified with Affinity Peptides
Shepard, Jaclyn A.; Wesson, Paul J.; Wang, Christine E.; Stevans, Alyson C.; Holland, Samantha J.; Shikanov, Ariella; Grzybowski, Bartosz A.; Shea, Lonnie D.
2011-01-01
Regenerative strategies for damaged tissue aim to present biochemical cues that recruit and direct progenitor cell migration and differentiation. Hydrogels capable of localized gene delivery are being developed to provide a support for tissue growth and as a versatile method to induce the expression of inductive proteins; however, the duration, level, and localization of expression are often insufficient for regeneration. We thus investigated the modification of hydrogels with affinity peptides to enhance vector retention and increase transfection within the matrix. PEG hydrogels were modified with lysine-based repeats (K4, K8), which retained approximately 25% more vector than control peptides. Transfection increased 5- to 15-fold with K8 and K4, respectively, over the RDG control peptide. K8- and K4-modified hydrogels bound similar quantities of vector, yet the vector dissociation rate was reduced for K8, suggesting excessive binding that limited transfection. These hydrogels were subsequently applied to an in vitro co-culture model to induce NGF expression and promote neurite outgrowth. K4-modified hydrogels promoted maximal neurite outgrowth, likely due to retention of both the vector and the NGF. Thus, hydrogels modified with affinity peptides enhanced vector retention and increased gene delivery, and these hydrogels may provide a versatile scaffold for numerous regenerative medicine applications. PMID:21514659
Spreadsheet Based Scaling Calculations and Membrane Performance
Energy Technology Data Exchange (ETDEWEB)
Wolfe, T D; Bourcier, W L; Speth, T F
2000-12-28
Many membrane element manufacturers provide a computer program to aid buyers in the use of their elements. However, to date there are few examples of fully integrated public-domain software for calculating reverse osmosis and nanofiltration system performance. The Total Flux and Scaling Program (TFSP), written for Excel 97 and above, provides designers and operators with new tools to predict membrane system performance, including scaling and fouling parameters, for a wide variety of membrane system configurations and feedwaters. TFSP development was funded under EPA contract 9C-R193-NTSX. It is freely downloadable at www.reverseosmosis.com/download/TFSP.zip. TFSP includes detailed calculations of reverse osmosis and nanofiltration system performance. Of special significance, the program provides scaling calculations for mineral species not normally addressed in commercial programs, including aluminum, iron, and phosphate species. In addition, ASTM calculations for common species such as calcium sulfate (CaSO4·2H2O), BaSO4, SrSO4, SiO2, and LSI are also provided. Scaling calculations in commercial membrane design programs are normally limited to the common minerals and typically follow basic ASTM methods, which are for the most part graphical approaches adapted to curves. In TFSP, the scaling calculations for the less common minerals use subsets of the USGS PHREEQE and WATEQ4F databases and follow the same general computational approach as PHREEQE and WATEQ4F. The activities of ion complexes are calculated iteratively. Complexes that are unlikely to form in significant concentration were eliminated to simplify the calculations. The calculation provides the distribution of ions and ion complexes, which is used to calculate an effective ion product Q. The effective ion product is then compared to temperature-adjusted solubility products (Ksp's) of solids in order to calculate a Saturation Index (SI).
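The final comparison described above, the effective ion product Q against the temperature-adjusted solubility product, can be sketched in a few lines. The gypsum Ksp below is an assumed illustrative number, and this sketch omits the activity-coefficient and complexation corrections that TFSP computes iteratively:

```python
import math

# Hedged sketch of the saturation-index comparison: SI = log10(Q / Ksp),
# with SI > 0 indicating a scaling (supersaturation) tendency.

def saturation_index(ion_product_q, ksp):
    return math.log10(ion_product_q / ksp)

# Example for gypsum, CaSO4·2H2O (Ksp ~ 3.1e-5 at 25 C, an assumed value):
si = saturation_index(ion_product_q=6.2e-5, ksp=3.1e-5)
```

Here Q is twice Ksp, giving SI = log10(2), i.e. a mild supersaturation.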
Functional group based Ligand binding affinity scoring function at atomic environmental level
Varadwaj, Pritish Kumar; Lahiri, Tapobrata
2009-01-01
Use of knowledge-based scoring functions (KBSFs) for virtual screening and molecular docking has become an established method in drug discovery. The lack of a precise and reliable free-energy function describing the several interactions involved, including water-mediated atomic interactions between amino-acid residues and the ligand, leaves distance-based statistical measures as the only alternative. Until now, all distance-based scoring functions in the KBSF arena have used the atom-singularity concept, which neglects the environmental effect on the atom under consideration. We have developed a novel knowledge-based statistical energy function for protein-ligand complexes that takes the atomic environment into account, treating the functional group as the singular entity. The proposed knowledge-based scoring function is fast, simple to construct, and easy to use; moreover, it tackles the existing problem of handling molecular orientation in the active-site pocket. We designed and used a Functional group based Ligand retrieval (FBLR) system that can identify and detect the orientation of functional groups in ligands. This decoy searching was used to build the above KBSF to quantify the activity and affinity of high-resolution protein-ligand complexes. We propose the probable use of these decoys in molecular build-up as a de novo drug design approach. We also discuss the possible use of the KBSF in pharmacophore fragment detection and pseudo-center-based fragment alignment procedures. PMID:19255647
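A common way to build such a distance-based statistical energy, which may approximate what the KBSF does per functional-group pair, is the inverse-Boltzmann transform of observed versus reference contact counts. The counts, the repulsive cap for empty bins, and the per-bin layout are illustrative assumptions:

```python
import math

# Hedged sketch of a knowledge-based pair potential: for each distance bin r,
# E(r) = -kT * ln(n_obs(r) / n_ref(r)). Bins observed more often than the
# reference expectation score favorably (negative energy).

KT = 0.593  # kcal/mol at ~298 K

def pair_potential(n_obs, n_ref):
    """Per-bin statistical energies; unobserved bins get a repulsive cap."""
    return [(-KT * math.log(o / r)) if o > 0 else 3.0
            for o, r in zip(n_obs, n_ref)]
```

Summing the per-bin energies over all functional-group pairs in a docked pose would give a score analogous to the one described above.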
Danikiewicz, Witold
2009-08-01
Gas-phase proton affinities (PAs) of a series of 25 small aliphatic carbanions were computed using different Gaussian-3 methods (G3, G3(B3LYP), G3(MP2), and G3(MP2, B3LYP)) and Complete Basis Set extrapolation methods (CBS-4M, CBS-Q, CBS-QB3, and CBS-APNO). The results were compared with critically selected experimental data. The analysis shows that, for the majority of the studied molecules, all compound methods (Gaussian-3 and CBS) except CBS-4M give comparable results, which differ by no more than ±2 kcal mol^-1 from the experimental data. Taking calculation time into account, the G3(MP2) and G3(MP2, B3LYP) methods offer the best compromise between accuracy and computational cost. As an additional check, the results obtained by these two methods were compared with values obtained using the CCSD(T) ab initio method with a large basis set. It was also found that some of the published experimental data are erroneous and should be corrected. The results described in this work show that, for the majority of the studied compounds, PA values calculated using compound methods can be used with the same or even higher confidence than the experimental ones, because even the largest differences between the Gaussian-3 and CBS methods listed above are still comparable with the accuracy of typical PA measurements.
Improving Network Performance with Affinity based Mobility Model in Opportunistic Network
Batabyal, Suvadip; 10.5121/ijwmn.2012.4213
2012-01-01
An opportunistic network is a type of delay-tolerant network characterized by intermittent connectivity among nodes, in which communication largely depends upon the mobility of the participating nodes. Because the network is highly dynamic, traditional MANET protocols cannot be applied, and nodes must adhere to a store-carry-forward mechanism. Nodes do not have information about the network topology, the number of participating nodes, or the location of the destination node. Hence, message transfer reliability largely depends upon the mobility pattern of the nodes. In this paper we investigate the impact of RWP (random waypoint) mobility on the packet delivery ratio. We estimate mobility factors such as the number of node encounters, contact duration (link time), and inter-contact time, which in turn depend upon parameters such as the playfield area (total network area), number of nodes, node velocity, bit rate, and RF range of the nodes. We also propose a restricted form of the RWP mobility model, called the affinity-based mobility model.
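The mobility factors listed above (node encounters as a function of playfield area, node count, velocity, and RF range) can be estimated from a minimal RWP simulation. All parameters below are illustrative defaults, not the paper's settings:

```python
import math
import random

# Hedged sketch: a minimal random-waypoint (RWP) simulation that counts the
# encounter events used as mobility factors in opportunistic networks.

def rwp_encounters(n_nodes=10, field=100.0, speed=2.0, rf_range=10.0,
                   steps=500, seed=42):
    rng = random.Random(seed)
    pos = [[rng.uniform(0, field), rng.uniform(0, field)] for _ in range(n_nodes)]
    dst = [[rng.uniform(0, field), rng.uniform(0, field)] for _ in range(n_nodes)]
    in_contact = set()
    encounters = 0
    for _ in range(steps):
        # move every node toward its waypoint; pick a new one when reached
        for i, (p, d) in enumerate(zip(pos, dst)):
            dx, dy = d[0] - p[0], d[1] - p[1]
            dist = math.hypot(dx, dy)
            if dist < speed:
                dst[i] = [rng.uniform(0, field), rng.uniform(0, field)]
            else:
                p[0] += speed * dx / dist
                p[1] += speed * dy / dist
        # count new contacts: a pair entering RF range is one encounter event
        for i in range(n_nodes):
            for j in range(i + 1, n_nodes):
                near = math.hypot(pos[i][0] - pos[j][0],
                                  pos[i][1] - pos[j][1]) <= rf_range
                if near and (i, j) not in in_contact:
                    encounters += 1
                    in_contact.add((i, j))
                elif not near:
                    in_contact.discard((i, j))
    return encounters
```

Tracking entry and exit times per pair in the same loop would additionally yield contact duration and inter-contact time.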
Zhang, Yu-Chen; Zhang, Shao-Wu; Liu, Lian; Liu, Hui; Zhang, Lin; Cui, Xiaodong; Huang, Yufei; Meng, Jia
2015-01-01
With the development of new sequencing technology, the entire N6-methyladenosine (m6A) RNA methylome can now be profiled without bias using the methylated RNA immunoprecipitation sequencing technique (MeRIP-Seq), making it possible to detect differential methylation states of RNA between two conditions, for example, between normal and cancerous tissue. However, as an affinity-based method, MeRIP-Seq has not yet provided base-pair resolution; that is, a single methylation site determined from MeRIP-Seq data can in practice contain multiple RNA methylation residues, some of which can be regulated by different enzymes and thus differentially methylated between two conditions. Since existing peak-based methods cannot effectively differentiate multiple methylation residues located within a single methylation site, we propose a hidden Markov model (HMM) based approach to address this issue. Specifically, the detected RNA methylation site is divided into multiple adjacent small bins and then scanned at higher resolution using a hidden Markov model, which models the dependency between spatially adjacent bins for improved accuracy. We tested the proposed algorithm on both simulated and real data. The results suggest that the proposed algorithm clearly outperforms the existing peak-based approach on simulated systems and detects differential methylation regions with higher statistical significance on the real dataset.
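The bin-scanning step can be sketched as Viterbi decoding over a two-state chain (background vs. differential), where the transition probability enforces the dependency between adjacent bins. The emission scores below are assumed per-bin log-likelihood ratios, not the paper's actual emission model:

```python
import math

# Hedged sketch of bin-wise HMM decoding: llr[i] is an assumed log-likelihood
# ratio favoring "differential" in bin i; p_stay makes adjacent bins agree.

def viterbi(llr, p_stay=0.9):
    stay, switch = math.log(p_stay), math.log(1.0 - p_stay)
    # state 0 = background, state 1 = differential
    score = [0.0, llr[0]]
    back = []
    for x in llr[1:]:
        nxt, ptr = [0.0, 0.0], [0, 0]
        for s in (0, 1):
            cands = [(score[t] + (stay if t == s else switch), t) for t in (0, 1)]
            best, ptr[s] = max(cands)
            nxt[s] = best + (x if s == 1 else 0.0)
        score, back = nxt, back + [ptr]
    # trace back the best state path
    path = [0 if score[0] >= score[1] else 1]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return path[::-1]
```

Note how a single weakly negative bin inside a run of strong positive bins is smoothed into the differential region, which is exactly the adjacency effect the abstract describes.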
Calculating Traffic based on Road Sensor Data
Bisseling, Rob; Gao, Fengnan; Hafkenscheid, Patrick; Idema, Reijer; Jetka, Tomasz; Guerra Ones, Valia; Sikora, Monika
2014-01-01
Road sensors gather a great deal of statistical data about traffic. In this paper, we discuss how a measure of the amount of traffic on the roads can be derived from these data such that the measure is independent of the number and placement of sensors, and the calculations can be performed quickly for large datasets.
Energy Technology Data Exchange (ETDEWEB)
Haruyama, Tetsuya; Wakabayashi, Ryo; Cho, Takeshi; Matsuyama, Sho-taro, E-mail: haruyama@life.kyutech.as.jp [Kyushu Institute of Technology, Department of Biological Functions and Engineering, Kitakyushu Science and Research Park, Hibikino, Kitakyushu, Fukuoka 808-0196 (Japan)
2011-10-29
Photo-excited current can be generated at a molecular interface between photo-excited molecules and a semiconductive material under appropriate conditions. Such systems are recognized as the basis of photo-energy devices such as organic dye-sensitized solar cells. Photo-current generation depends entirely on interfacial energy-transfer reactions, which occur in a highly fluctuating interfacial environment. The authors investigated the photo-excited current reaction to develop a smart affinity detection method. However, in order to perform both an affinity reaction and a photo-excited current reaction at a molecular interface, ordered fabrication of the functional (affinity, photo-excitation, etc.) molecular layer on a semiconductive surface is required. In the present work, we present the fabrication and performance of a photo-excited current-based affinity assay device and its application to the detection of endocrine-disrupting chemicals. On the FTO surface, a fluorescent pigment-labeled affinity peptide was immobilized through the EC-tag (electrochemical tag) method. The modified FTO produced a current when irradiated with diode laser light. However, the photo-current decreased drastically when estrogen (ES) was present in the reaction solution. In this case, the immobilized affinity probe molecules formed a complex with ES and the estrogen receptor (ER). The result strongly suggests that photo-excited current transduction between the probe-molecule-labeled cyanine pigment and the FTO surface was partly inhibited by the complex formed at the affinity oligopeptide region of the probe molecule on the FTO electrode. The bound bulky complex may impede smooth transduction of the photo-excited current at the molecular interface. The present system is a new type of photo-reaction-based analysis and can be used to perform simple, highly sensitive homogeneous assays.
Gultekin, Kemal
2015-01-01
In this study, we give a thorough analysis of a general affine gravity with torsion. After a brief exposition of the affine gravities considered by Eddington and Schroedinger, we construct and analyze different affine gravities based on determinants of the Ricci tensor, torsion tensor, Riemann tensor and their combinations. In each case we reduce equations of motion to their simplest forms and give a detailed analysis of their solutions. Our analyses lead to construction of the affine connection in terms of curvature and torsion tensors. Our solutions of the dynamical equations show that curvature tensors at different points are correlated via non-local, exponential rescaling factors determined by the torsion tensor.
Rational tailoring of substrate and inhibitor affinity via ATRP polymer-based protein engineering.
Murata, Hironobu; Cummings, Chad S; Koepsel, Richard R; Russell, Alan J
2014-07-14
Atom transfer radical polymerization (ATRP)-based protein engineering of chymotrypsin with a cationic polymer was used to tune the substrate specificity and inhibitor binding. Poly(quaternary ammonium) was grown from the surface of the enzyme using ATRP after covalent attachment of a protein reactive, water-soluble ATRP-initiator. This "grafting from" conjugation approach generated a high density of cationic ammonium ions around the biocatalytic core. Modification increased the surface area of the protein over 40-fold, and the density of modification on the protein surface was approximately one chain per 4 nm(2). After modification, bioactivity was increased at low pH relative to the activity of the native enzyme. In addition, the affinity of the enzyme for a peptide substrate was increased over a wide pH range. The massively cationic chymotrypsin, which included up to 2000 additional positive charges per molecule of enzyme, was also more stable at extremes of temperature and pH. Most interestingly, we were able to rationally control the binding of two oppositely charged polypeptide protease inhibitors, aprotinin and the Bowman-Birk trypsin-chymotrypsin inhibitor from Glycine max, to the cationic derivative of chymotrypsin. This study expands upon our efforts to use polymer-based protein engineering to predictably engineer enzyme properties without the need for molecular biology.
High-affinity DNA base analogs as supramolecular, nanoscale promoters of macroscopic adhesion.
Anderson, Cyrus A; Jones, Amanda R; Briggs, Ellen M; Novitsky, Eric J; Kuykendall, Darrell W; Sottos, Nancy R; Zimmerman, Steven C
2013-05-15
Adhesion phenomena are essential to many biological processes and to synthetic adhesives and manufactured coatings and composites. Supramolecular interactions are often implicated in various adhesion mechanisms. Recently, supramolecular building blocks, such as synthetic DNA base-pair mimics, have drawn attention in the context of molecular recognition, self-assembly, and supramolecular polymers. These reversible, hydrogen-bonding interactions have been studied extensively for their adhesive capabilities at the nano- and microscale; however, much less is known about their utility for practical adhesion in macroscopic systems. Herein, we report the preparation and evaluation of supramolecular coupling agents based on high-affinity, high-fidelity quadruple hydrogen-bonding units (e.g., DAN·DeUG, K_assoc = 10^8 M^-1 in chloroform). Macroscopic adhesion between polystyrene films and glass surfaces modified with 2,7-diamidonaphthyridine (DAN) and ureido-7-deazaguanine (DeUG) units was evaluated by mechanical testing. Structure-property relationships indicate that the designed supramolecular interaction at the nanoscale plays a key role in the observed macroscopic adhesive response. Experiments probing the reversible adhesion and self-healing properties of bulk samples indicate that significant recovery of the initial strength can be realized after failure, but that the designed noncovalent interaction does not lead to healing during the process of adhesion loss.
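For a quadruple hydrogen-bonding pair like DAN·DeUG, the association constant translates directly into the fraction of complex formed at a given concentration. A sketch for the 1:1 equilibrium A + B = AB with equal total concentrations (the concentration chosen in the example is illustrative):

```python
import math

# Hedged sketch: fraction of heterodimer formed for A + B <-> AB with equal
# total concentrations c, given association constant Ka. Solves
# Ka = x / (c - x)^2 for the complex concentration x.

def fraction_bound(ka, c):
    a, b, q = ka, -(2.0 * ka * c + 1.0), ka * c * c
    x = (-b - math.sqrt(b * b - 4.0 * a * q)) / (2.0 * a)  # physical root, x <= c
    return x / c
```

With Ka ~ 10^8 M^-1, even millimolar solutions are essentially fully paired, which is consistent with the high-fidelity pairing the abstract invokes.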
Determining the ice-binding planes of antifreeze proteins by fluorescence-based ice plane affinity.
Basu, Koli; Garnham, Christopher P; Nishimiya, Yoshiyuki; Tsuda, Sakae; Braslavsky, Ido; Davies, Peter
2014-01-15
Antifreeze proteins (AFPs) are expressed in a variety of cold-hardy organisms to prevent or slow internal ice growth. AFPs bind to specific planes of ice through their ice-binding surfaces. Fluorescence-based ice plane affinity (FIPA) analysis is a modified technique used to determine the ice planes to which the AFPs bind. FIPA is based on the original ice-etching method for determining AFP-bound ice-planes. It produces clearer images in a shortened experimental time. In FIPA analysis, AFPs are fluorescently labeled with a chimeric tag or a covalent dye then slowly incorporated into a macroscopic single ice crystal, which has been preformed into a hemisphere and oriented to determine the a- and c-axes. The AFP-bound ice hemisphere is imaged under UV light to visualize AFP-bound planes using filters to block out nonspecific light. Fluorescent labeling of the AFPs allows real-time monitoring of AFP adsorption into ice. The labels have been found not to influence the planes to which AFPs bind. FIPA analysis also introduces the option to bind more than one differently tagged AFP on the same single ice crystal to help differentiate their binding planes. These applications of FIPA are helping to advance our understanding of how AFPs bind to ice to halt its growth and why many AFP-producing organisms express multiple AFP isoforms.
Affinity-Based Network Interfaces for Efficient Communication on Multicore Architectures
Institute of Scientific and Technical Information of China (English)
Andrés Ortiz; Julio Ortega; Antonio F. Díaz; Alberto Prieto
2013-01-01
Improving network interface performance is demanded by applications with high communication requirements (for example, some multimedia, real-time, and high-performance computing applications), and by the availability of network links providing multiple gigabits per second of bandwidth that can require many processor cycles for communication tasks. Multicore architectures, the current trend in microprocessor development to cope with the difficulties of further increasing clock frequencies and microarchitecture efficiencies, provide new opportunities to exploit the parallelism available in the nodes for designing efficient communication architectures. Nevertheless, although present OS network stacks include multiple threads that make it possible to execute network tasks concurrently in the kernel, implementations of packet-based or connection-based parallelism are not trivial, as they have to take into account issues related to the cost of synchronization in the access to shared resources and the efficient use of caches. Therefore, a common trend in much recent research on this topic is to assign network interrupts and the corresponding protocol and network application processing to the same core, as with this affinity scheduling it is possible to reduce contention for shared resources and cache misses. In this paper we propose and analyze several configurations to distribute the network interface among the different cores available in the server. These alternatives have been devised according to the affinity of the corresponding communication tasks with the location (proximity to the memories where the different data structures are stored) and characteristics of the processing core. As this approach uses several cores to accelerate the communication path of a given connection, it can be seen as complementary to those that use several cores to simultaneously process packets belonging to either the same or different connections. Message passing …
Cell-based proteome profiling of potential dasatinib targets by use of affinity-based probes.
Shi, Haibin; Zhang, Chong-Jing; Chen, Grace Y J; Yao, Shao Q
2012-02-15
Protein kinases (PKs) play an important role in the development and progression of cancer by regulating cell growth, survival, invasion, metastasis, and angiogenesis. Dasatinib (BMS-354825), a dual Src/Abl inhibitor, is a promising therapeutic agent with oral bioavailability. It has been used for the treatment of imatinib-resistant chronic myelogenous leukemia (CML). Most kinase inhibitors, including Dasatinib, inhibit multiple cellular targets and do not possess exquisite cellular specificity. Recent efforts in kinase research thus focus on the development of large-scale, proteome-wide chemical profiling methods capable of rapid identification of potential cellular (on- and off-) targets of kinase inhibitors. Most existing approaches, however, are still problematic and in many cases not compatible with live-cell studies. In this work, we have successfully developed a cell-permeable kinase probe (DA-2) capable of proteome-wide profiling of potential cellular targets of Dasatinib. In this way, highly regulated, compartmentalized kinase-drug interactions were maintained. By comparing results obtained from different proteomic setups (live cells, cell lysates, and immobilized affinity matrix), we found DA-2 was able to identify significantly more putative kinase targets. In addition to Abl and Src family tyrosine kinases, a number of previously unknown Dasatinib targets have been identified, including several serine/threonine kinases (PCTK3, STK25, eIF-2A, PIM-3, PKA C-α, and PKN2). They were further validated by pull-down/immunoblotting experiments as well as kinase inhibition assays. Further studies are needed to better understand the exact relevance of Dasatinib and its pharmacological effects in relation to these newly identified cellular targets. The approach developed herein should be amenable to the study of many of the existing reversible drugs/drug candidates.
Development of an aptamer-based affinity purification method for vascular endothelial growth factor
Directory of Open Access Journals (Sweden)
Maren Lönne
2015-12-01
Since aptamers bind their targets with high affinity and specificity, they are promising alternative ligands for protein affinity purification. As aptamers are chemically synthesized oligonucleotides, they can easily be produced in large quantities under GMP conditions, allowing their application in protein production for therapeutic purposes. Several advantages of aptamers compared to antibodies are described in general within this paper. Here, an aptamer directed against the human Vascular Endothelial Growth Factor (VEGF) was used as affinity ligand for establishing a small-scale purification platform for VEGF. The aptamer was covalently immobilized on magnetic beads in a controlled orientation, resulting in a functionally active affinity matrix. Target binding was optimized by introduction of spacer molecules and variation of aptamer density. Further, salt-induced target elution was demonstrated, as well as VEGF purification from a complex protein mixture, proving the specificity of protein-aptamer binding.
Physics-based enzyme design: predicting binding affinity and catalytic activity.
Sirin, Sarah; Pearlman, David A; Sherman, Woody
2014-12-01
Computational enzyme design is an emerging field that has yielded promising success stories, but where numerous challenges remain. Accurate methods to rapidly evaluate possible enzyme design variants could provide significant value when combined with experimental efforts by reducing the number of variants needed to be synthesized and speeding the time to reach the desired endpoint of the design. To that end, extending our computational methods to model the fundamental physical-chemical principles that regulate activity in a protocol that is automated and accessible to a broad population of enzyme design researchers is essential. Here, we apply a physics-based implicit solvent MM-GBSA scoring approach to enzyme design and benchmark the computational predictions against experimentally determined activities. Specifically, we evaluate the ability of MM-GBSA to predict changes in affinity for a steroid binder protein, catalytic turnover for a Kemp eliminase, and catalytic activity for α-Gliadin peptidase variants. Using the enzyme design framework developed here, we accurately rank the most experimentally active enzyme variants, suggesting that this approach could provide enrichment of active variants in real-world enzyme design applications.
Image-Moment Based Affine Invariant Watermarking Scheme Utilizing Neural Networks
Institute of Scientific and Technical Information of China (English)
无
2006-01-01
A new image watermarking scheme is proposed to resist rotation, scaling, and translation (RST) attacks. Six combined low-order image moments are utilized to represent image information under rotation, scaling, and translation. Affine transform parameters are registered by feedforward neural networks. The watermark is adaptively embedded in the discrete wavelet transform (DWT) domain, while watermark extraction is carried out without the original image after the attacked watermarked image has been synchronized by applying the inverse transform with parameters learned by the neural networks. Experimental results show that the proposed scheme can effectively register affine transform parameters, embed the watermark robustly, and resist geometric attacks as well as JPEG2000 compression.
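As a generic illustration of the moment machinery such schemes rely on (the paper's six combined moments are not specified in the abstract, so this is not their construction): central moments, i.e. raw moments taken about the intensity centroid, are invariant to translation, which is the property RST-resilient watermark synchronization builds on.

```python
import numpy as np

def raw_moment(img, p, q):
    """Raw image moment M_pq = sum over pixels of x^p * y^q * I(y, x)."""
    y, x = np.mgrid[:img.shape[0], :img.shape[1]]
    return np.sum((x ** p) * (y ** q) * img)

def central_moment(img, p, q):
    """Central moment mu_pq, taken about the intensity centroid (cx, cy)."""
    m00 = raw_moment(img, 0, 0)
    cx = raw_moment(img, 1, 0) / m00
    cy = raw_moment(img, 0, 1) / m00
    y, x = np.mgrid[:img.shape[0], :img.shape[1]]
    return np.sum(((x - cx) ** p) * ((y - cy) ** q) * img)

# A small test image and a translated copy (shift stays inside the frame).
img = np.zeros((32, 32))
img[8:14, 10:20] = 1.0
shifted = np.roll(np.roll(img, 5, axis=0), 3, axis=1)
```

Because the centroid shifts together with the image content, `central_moment` returns the same values for `img` and `shifted`, whereas the raw moments differ.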
Synthesis of Glutamic Acid-based Cluster Galactosides and Their Binding Affinities with Liver Cells
Institute of Scientific and Technical Information of China (English)
ZHANG, Xiao-Ru(张晓茹); LI, Ying-Xia(李英霞); CHU, Shi-Dong(褚世栋); DING, Ning(丁宁); LI, Chun-Xia(李春霞); GUAN, Hua-Shi(管华诗)
2004-01-01
Structurally well-defined di-, tri- and tetravalent cluster galactosides were synthesized in a convenient way. Oligo-glutamic acids were assembled as scaffolds. The amine groups present in these three ligands are expected to allow coupling with drugs or genes for delivery. The binding affinities of these cluster galactosides to liver cells were determined by in vitro binding studies. Among them, the tetravalent cluster galactoside (19) showed the highest affinity to liver cells. It is therefore a promising targeting device for the specific delivery of drugs or genes to parenchymal liver cells.
Nonlinear scoring functions for similarity-based ligand docking and binding affinity prediction.
Brylinski, Michal
2013-11-25
A common strategy for virtual screening considers a systematic docking of a large library of organic compounds into the target sites in protein receptors, with promising leads selected based on favorable intermolecular interactions. Despite continuous progress in the modeling of protein-ligand interactions for pharmaceutical design, important challenges remain, and thus the development of novel techniques is required. In this communication, we describe eSimDock, a new approach to ligand docking and binding affinity prediction. eSimDock employs nonlinear machine-learning-based scoring functions to improve the accuracy of ligand ranking and similarity-based binding pose prediction, and to increase the tolerance to structural imperfections in the target structures. In large-scale benchmarking using the Astex/CCDC data set, we show that 53.9% (67.9%) of the predicted ligand poses have RMSD of <2 Å (<3 Å). Moreover, using binding sites predicted by the recently developed eFindSite, eSimDock models ligand binding poses with an RMSD of 4 Å for 50.0-39.7% of the complexes at protein homology levels limited to 80-40%. Simulations against non-native receptor structures, whose mean backbone rearrangements vary from 0.5 to 5.0 Å Cα-RMSD, show that the ratio of docking accuracy to the estimated upper bound stays at a constant level of ∼0.65. The Pearson correlation coefficient between experimental Ki values and those predicted by eSimDock for a large data set of crystal structures of protein-ligand complexes from BindingDB is 0.58, and it decreases only to 0.46 when target structures distorted to 3.0 Å Cα-RMSD are used. Finally, two case studies demonstrate that eSimDock can be customized to specific applications as well. These encouraging results show that the performance of eSimDock is largely unaffected by deformations of ligand binding regions, and thus it represents a practical strategy for across-proteome virtual screening using protein models. eSimDock is freely
Button, D K; Robertson, Betsy; Gustafson, Elizabeth; Zhao, Xiaoming
2004-09-01
A theory for solute uptake by whole cells was derived with a focus on the ability of oligobacteria to sequester nutrients. It provided a general relationship that was used to obtain the kinetic constants for in situ marine populations in the presence of naturally occurring substrates. The in situ affinities found, 0.9 to 400 liters (g of cells)^-1 h^-1, were up to 10^3 times smaller than those of a "Marinobacter arcticus" isolate, but springtime values were greatly increased by warming. Affinities of the isolate for usual polar substrates, but not for hydrocarbons, were diminished by ionophores. A kinetic curve or Monod plot was constructed from the best available data for cytoarchitectural components of the isolate by using the theory together with concepts and calculations from first principles. The order of effect of these components on specific affinity was membrane potential > cytoplasmic enzyme concentration > cytoplasmic enzyme affinity > permease concentration > area of the permease site > translation coefficient > porin concentration. Component balance was influential as well; a small increase in cytoplasmic enzyme concentration gave a large increase in the effect of permease concentration. The effect of permease concentration on specific affinity was large, while the effect on K_m was small. These results are in contrast to the Michaelis-Menten theory as applied by Monod, which has uptake kinetics dependent on the quality of the permease molecules, with K_m as an independent measure of affinity. Calculations demonstrated that most oligobacteria in the environment must use multiple substrates simultaneously to attain sufficient energy and material for growth, a requirement consistent with communities largely comprising few species.
Swart, Marcel; Bickelhaupt, F Matthias
2006-03-01
We have carried out an extensive exploration of the gas-phase basicity of archetypal anionic bases across the periodic system using the generalized gradient approximation of density functional theory (DFT) at BP86/QZ4P//BP86/TZ2P. First, we validate DFT as a reliable tool for computing proton affinities and related thermochemical quantities: BP86/QZ4P//BP86/TZ2P is shown to yield a mean absolute deviation of 1.6 kcal/mol for the proton affinity at 0 K with respect to high-level ab initio benchmark data. The main purpose of this work is to provide the proton affinities (and corresponding entropies) at 298 K of the anionic conjugate bases of all main-group-element hydrides of groups 14-17 and periods 2-6. We have also studied the effect of stepwise methylation of the protophilic center of the second- and third-period bases.
Fractal-Based Exponential Distribution of Urban Density and Self-Affine Fractal Forms of Cities
Chen, Yanguang
2016-01-01
Urban population density always follows the exponential distribution and can be described with Clark's model. Because of this, the spatial distribution of urban population used to be regarded as a non-fractal pattern. However, Clark's model differs from the plain exponential function in mathematics, because urban population is distributed on the fractal support of landform and land-use form. By using mathematical transforms and empirical evidence, we argue that there are self-affine scaling relations and local power laws behind the exponential distribution of urban density. The scale parameter of Clark's model, indicating the characteristic radius of cities, is not a real constant but depends on the urban field we defined. So the exponential model suggests local fractal structure with two kinds of fractal parameters. The parameters can be used to characterize urban space filling, spatial correlation, self-affine properties, and self-organized evolution. The case study of the city of Hangzhou, China, is employed to ...
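For readers unfamiliar with Clark's model, the parameter estimation it refers to can be sketched as a log-linear least-squares fit (a generic illustration with made-up parameter values, not the Hangzhou data):

```python
import numpy as np

# Clark's model: rho(r) = rho_0 * exp(-r / r_0), with r the distance from the
# city center, rho_0 the central density, and r_0 the characteristic radius.
# Synthetic data with hypothetical values (persons/km^2, km):
rho0_true, r0_true = 5000.0, 8.0
r = np.linspace(0.5, 30.0, 60)
rho = rho0_true * np.exp(-r / r0_true)

# Taking logs turns the model into a straight line,
#   ln(rho) = ln(rho_0) - r / r_0,
# so both parameters follow from an ordinary least-squares line fit.
slope, intercept = np.polyfit(r, np.log(rho), 1)
rho0_est = float(np.exp(intercept))
r0_est = -1.0 / float(slope)
```

On noise-free data the fit recovers rho_0 and r_0 exactly; the paper's point is precisely that on real cities the recovered r_0 is not constant but depends on how the urban field is delimited.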
Passive Fault Tolerant Control of Piecewise Affine Systems Based on H Infinity Synthesis
DEFF Research Database (Denmark)
Gholami, Mehdi; Cocquempot, Vincent; Schiøler, Henrik
2011-01-01
In this paper we design a passive fault-tolerant controller against actuator faults for discrete-time piecewise affine (PWA) systems. By using dissipativity theory and H∞ analysis, fault-tolerant state feedback controller design is expressed as a set of Linear Matrix Inequalities (LMIs). In the current paper, the PWA system switches not only due to the state but also due to the control input. The method is applied to a large-scale livestock ventilation model.
Directory of Open Access Journals (Sweden)
Haiquan Zhao
2014-01-01
An improved memorized improved proportionate affine projection algorithm (IMIPAPA) is proposed to improve the convergence performance of sparse system identification; it incorporates the l0-norm as a measure of sparseness into the recently proposed MIPAPA algorithm. In addition, a simplified implementation of the IMIPAPA (SIMIPAPA) with low computational burden is presented that maintains the same convergence performance. The simulation results demonstrate that the IMIPAPA and SIMIPAPA algorithms outperform the MIPAPA algorithm for sparse system identification.
Uzun, Lokman; Yavuz, Handan; Osman, Bilgen; Celik, Hamdi; Denizli, Adil
2010-07-01
The preparation of a polymeric membrane using affinity technology for application in blood filtration devices is described here. A DNA-attached poly(hydroxyethyl methacrylate) (PHEMA) based microporous affinity membrane was prepared for the selective removal of anti-dsDNA antibodies from systemic lupus erythematosus (SLE) patient plasma in vitro. In order to further increase the blood compatibility of the affinity membrane, the amino acid-based comonomer N-methacryloyl-L-alanine (MAAL) was included in the polymerization recipe. The PHEMAAL membrane was produced by a photopolymerization technique and then characterized by swelling tests and scanning electron microscopy (SEM) studies. Blood-compatibility tests were also performed. The water swelling ratio of the PHEMAAL membrane increased significantly (133.2%) compared with PHEMA (58%). The PHEMAAL membrane has large pores in the range of 5-10 μm. All the clotting times increased when compared with the PHEMA membrane. Loss of platelets and leukocytes was very low. DNA loading was 7.8 mg/g. There was very low anti-dsDNA-antibody adsorption onto the plain PHEMAAL membrane, about 78 IU/g. The PHEMAAL-DNA membrane adsorbed anti-dsDNA antibody in the range of 10-68 × 10^3 IU/g from SLE plasma. Anti-dsDNA-antibody concentration decreased significantly from 875 to 144 IU/ml with time. Anti-dsDNA antibodies could be repeatedly adsorbed and eluted without noticeable loss in the anti-dsDNA-antibody adsorption capacity.
Calculation of electromagnetic parameter based on interpolation algorithm
Energy Technology Data Exchange (ETDEWEB)
Zhang, Wenqiang, E-mail: zwqcau@gmail.com [College of Engineering, China Agricultural University, Beijing 100083 (China); Bionic and Micro/Nano/Bio Manufacturing Technology Research Center, Beihang University, Beijing 100191 (China); Yuan, Liming; Zhang, Deyuan [Bionic and Micro/Nano/Bio Manufacturing Technology Research Center, Beihang University, Beijing 100191 (China)
2015-11-01
Wave-absorbing material is an important functional material for electromagnetic protection. Its wave-absorbing characteristics depend on the electromagnetic parameters of the mixed media. In order to accurately predict the electromagnetic parameters of mixed media and facilitate the design of wave-absorbing material, based on the electromagnetic parameters of spherical and flaky carbonyl iron mixed into a paraffin base, this paper studied two different interpolation methods for electromagnetic parameters: Lagrange interpolation and Hermite interpolation. The results showed that Hermite interpolation is more accurate than Lagrange interpolation, and the reflectance calculated with the electromagnetic parameters obtained by interpolation is on the whole consistent with that obtained through experiment. - Highlights: • We use an interpolation algorithm to calculate EM parameters with limited samples. • Interpolation can predict EM parameters well with different particles added. • Hermite interpolation is more accurate than Lagrange interpolation. • RL calculated from interpolated parameters is consistent with RL from experiment.
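The two interpolation schemes compared can be sketched as follows (a generic illustration on polynomial data, not the paper's measured permittivity/permeability samples; Hermite additionally consumes derivative information, which is where its extra accuracy comes from):

```python
def lagrange_eval(xs, ys, x):
    """Evaluate the Lagrange interpolating polynomial through (xs, ys) at x."""
    total = 0.0
    for i in range(len(xs)):
        term = ys[i]
        for j in range(len(xs)):
            if j != i:
                term *= (x - xs[j]) / (xs[i] - xs[j])
        total += term
    return total

def hermite_eval(x0, x1, y0, y1, d0, d1, x):
    """Cubic Hermite interpolation on [x0, x1] from endpoint values y0, y1
    and endpoint derivatives d0, d1 (standard Hermite basis polynomials)."""
    h = x1 - x0
    t = (x - x0) / h
    h00 = 2 * t**3 - 3 * t**2 + 1
    h10 = t**3 - 2 * t**2 + t
    h01 = -2 * t**3 + 3 * t**2
    h11 = t**3 - t**2
    return h00 * y0 + h10 * h * d0 + h01 * y1 + h11 * h * d1
```

Lagrange reproduces any quadratic exactly from three samples; cubic Hermite reproduces any cubic from only two samples because it also matches the derivatives there, which is the qualitative reason Hermite can win at equal sample counts.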
Energy Technology Data Exchange (ETDEWEB)
Gueltekin, Kemal [Izmir Institute of Technology, Department of Physics, Izmir (Turkey)
2016-03-15
In this study, we give a thorough analysis of a general affine gravity with torsion. After a brief exposition of the affine gravities considered by Eddington and Schroedinger, we construct and analyze different affine gravities based on the determinants of the Ricci tensor, the torsion tensor, the Riemann tensor, and their combinations. In each case we reduce equations of motion to their simplest forms and give a detailed analysis of their solutions. Our analyses lead to the construction of the affine connection in terms of the curvature and torsion tensors. Our solutions of the dynamical equations show that the curvature tensors at different points are correlated via non-local, exponential rescaling factors determined by the torsion tensor. (orig.)
Li, Shangyong; Wang, Linna; Xu, Ximing; Lin, Shengxiang; Wang, Yuejun; Hao, Jianhua; Sun, Mi
2016-01-01
Metalloproteases are emerging as useful agents in the treatment of many diseases, including arthritis, cancer, cardiovascular diseases, and fibrosis. Studies that could shed light on the pharmaceutical applications of metalloproteases require the pure enzyme. Here, we report the structure-based design and synthesis of an affinity medium for the efficient purification of metalloprotease, using 4-aminophenylboronic acid (4-APBA) as affinity ligand coupled to Sepharose 6B via cyanuric chloride as spacer. Molecular docking analysis showed that the boron atom interacts with the hydroxyl group of the Ser176 residue, whereas the hydroxyl group of the boronic moiety is oriented toward the Leu175 and His177 residues. In addition to the covalent bond between the boron atom and the hydroxyl group of Ser176, the spacer between the boronic acid derivative and the medium beads contributes to the formation of an enzyme-medium complex. With this synthesized medium, we developed and optimized a one-step purification procedure and applied it to the affinity purification of metalloproteases from three commercial enzyme products. The native metalloproteases were purified to high homogeneity, with more than 95% purity. The novel purification method developed in this work provides new opportunities for scientific, industrial, and pharmaceutical projects. PMID:28036010
Wang, Zheng; Zhao, Jin-cheng; Lian, Hong-zhen; Chen, Hong-yuan
2015-06-01
A novel strategy for preparing an aptamer-based organic-silica hybrid monolithic column was developed via "thiol-ene" click chemistry. Due to the large specific surface area of the hybrid matrix and the simplicity, rapidity, and high efficiency of the "thiol-ene" click reaction, the average coverage density of aptamer on the organic-silica hybrid monolith reached 420 pmol μL^-1. Human α-thrombin can be captured on the prepared affinity monolithic column with high specificity and eluted by NaClO4 solution. N-p-tosyl-Gly-Pro-Arg p-nitroanilide acetate was used as the sensitive chromogenic substrate of thrombin. The thrombin enriched by this affinity column was detected by spectrophotometry with a detection limit of 0.01 μM. Furthermore, the extraction recovery of thrombin at 0.15 μM in human serum was 91.8%, with a relative standard deviation of 4.0%. These results indicated that "thiol-ene" click chemistry provides a promising technique for immobilizing aptamers on organic-inorganic hybrid monoliths, and that the easily assembled affinity monolithic material could be used to realize highly selective recognition of trace proteins.
Institute of Scientific and Technical Information of China (English)
无
2011-01-01
In the paper, we investigate the problem of finding a piecewise output feedback control law for an uncertain affine system such that the resulting closed-loop output satisfies a desired linear temporal logic (LTL) specification. A two-level hierarchical approach is proposed to solve the problem in a triangularized output space. In the lower level, we explore whether there exists a robust output feedback control law that makes the output starting in a simplex either remain in it or leave via a specific facet. In t...
Innovative Product Design Based on Customer Requirement Weight Calculation Model
Institute of Scientific and Technical Information of China (English)
Chen-Guang Guo; Yong-Xian Liu; Shou-Ming Hou; Wei Wang
2010-01-01
In the process of product innovation and design, it is important for designers to find and capture customers' focus through customer requirement weight calculation and ranking. Based on fuzzy set theory and Euclidean space distance, this paper puts forward a method for customer requirement weight calculation called the Euclidean space distance weighting ranking method. This method is used in the fuzzy analytic hierarchy process that satisfies the additive consistent fuzzy matrix. A model for the weight calculation steps is constructed; meanwhile, a product innovation design module on the basis of the customer requirement weight calculation model is developed. Finally, combined with the instance of titanium sponge production, the customer requirement weight calculation model is validated. Through the innovation design module, the structure of the titanium sponge reactor has been improved and made innovative.
Web based brain volume calculation for magnetic resonance images.
Karsch, Kevin; Grinstead, Brian; He, Qing; Duan, Ye
2008-01-01
Brain volume calculations are crucial in modern medical research, especially in the study of neurodevelopmental disorders. In this paper, we present an algorithm for calculating two classifications of brain volume, total brain volume (TBV) and intracranial volume (ICV). Our algorithm takes MRI data as input, performs several preprocessing and intermediate steps, and then returns each of the two calculated volumes. To simplify this process and make our algorithm publicly accessible to anyone, we have created a web-based interface that allows users to upload their own MRI data and calculate the TBV and ICV for the given data. This interface provides a simple and efficient method for calculating these two classifications of brain volume, and it also removes the need for the user to download or install any applications.
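The abstract does not spell out the final volume computation; under the common convention that segmentation yields a binary voxel mask, the TBV/ICV numbers reduce to voxel counting times voxel volume (a sketch under that assumption; the function and variable names are ours, not the paper's):

```python
import numpy as np

def volume_ml(mask, voxel_dims_mm):
    """Volume of a binary segmentation mask in millilitres.

    mask: 3-D boolean array (True where tissue is labeled, e.g. brain or
          intracranial space from the segmentation step).
    voxel_dims_mm: (dx, dy, dz) voxel edge lengths in millimetres, taken
          from the MRI header. 1 mL = 1 cm^3 = 1000 mm^3.
    """
    voxel_mm3 = float(np.prod(voxel_dims_mm))
    return mask.sum() * voxel_mm3 / 1000.0

# Hypothetical example: a 10x10x10 block of 1 mm isotropic voxels = 1 mL.
mask = np.zeros((64, 64, 64), dtype=bool)
mask[10:20, 10:20, 10:20] = True
```

With this convention, TBV and ICV differ only in which mask is passed in (brain tissue vs. the whole intracranial cavity).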
Institute of Scientific and Technical Information of China (English)
LIU Jia-Ming; LIU Zhen-Bo; LI Zhi-Ming; HE Hang-Xia; LIN Wei-Nü; HUANG Ya-Hong; WANG Fang-Mei
2008-01-01
In the presence of the heavy-atom perturber Pb2+, silicon dioxide nanoparticles containing fluorescein isothiocyanate (FITC-SiO2) could emit a strong and stable room temperature phosphorescence (RTP) signal on the surface of an acetyl cellulose membrane (ACM). It was found in this research that a quantitative specific affinity adsorption (AA) reaction between Triticum vulgare lectin (WGA) labeled with the luminescent nanoparticle and glucose (G) could be carried out on the surface of the ACM. The product (WGA-G-WGA-FITC-SiO2) of the reaction could emit a stronger RTP signal, and the ΔIp had a linear correlation with the content of G. Based on these facts, a new method to determine G by affinity adsorption solid substrate room temperature phosphorimetry (AA-SS-RTP) was established, based on WGA labeled with FITC-SiO2. The detection limit (LD) of this method, calculated by 3Sb/k, was 0.47 pg·spot^-1 (corresponding to a concentration of 1.2 × 10^-9 g·mL^-1, namely 5.3 × 10^-9 mol·L^-1), so the sensitivity was high. Meanwhile, the mechanism for the determination of G by AA-SS-RTP is discussed.
Adjoint affine fusion and tadpoles
Urichuk, Andrew; Walton, Mark A.
2016-06-01
We study affine fusion with the adjoint representation. For simple Lie algebras, elementary and universal formulas determine the decomposition of a tensor product of an integrable highest-weight representation with the adjoint representation. Using the (refined) affine depth rule, we prove that equally striking results apply to adjoint affine fusion. For diagonal fusion, a coefficient equals the number of nonzero Dynkin labels of the relevant affine highest weight, minus 1. A nice lattice-polytope interpretation follows and allows the straightforward calculation of the genus-1 1-point adjoint Verlinde dimension, the adjoint affine fusion tadpole. Explicit formulas, (piecewise) polynomial in the level, are written for the adjoint tadpoles of all classical Lie algebras. We show that off-diagonal adjoint affine fusion is obtained from the corresponding tensor product by simply dropping non-dominant representations.
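A quick sanity check of the stated counting rule, worked out here for affine su(2) at level k (our illustration; the abstract itself gives no worked example):

```latex
% Affine su(2) at level k: highest weight \hat\lambda = [k-\lambda_1, \lambda_1],
% adjoint \hat\theta = [k-2, 2] (requires k >= 2). The stated rule says the
% diagonal adjoint fusion coefficient equals the number of nonzero affine
% Dynkin labels of \hat\lambda, minus 1:
%
%   N_{\hat\lambda\,\hat\theta}{}^{\hat\lambda}
%     \;=\; \#\{\, i : \lambda_i \neq 0 \,\} - 1
%     \;=\; \begin{cases} 1, & 0 < \lambda_1 < k, \\ 0, & \lambda_1 \in \{0, k\}. \end{cases}
%
% This agrees with the Verlinde fusion for spins j = \lambda_1 / 2 and j' = 1:
%   (j) \times (1) = (j-1) + (j) + (j+1), truncated to j'' \le k - j - 1,
% which contains (j) exactly once iff 0 < j < k/2, i.e. iff both affine
% labels of \hat\lambda are nonzero.
```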
A current perspective on applications of macrocyclic‐peptide‐based high‐affinity ligands
Leenheer, Daniël; ten Dijke, Peter
2016-01-01
Abstract Monoclonal antibodies can bind with high affinity and high selectivity to their targets. As a tool in therapeutics or diagnostics, however, their large size (∼150 kDa) reduces penetration into tissue and prevents passive cellular uptake. To overcome these and other problems, minimized protein scaffolds have been chosen or engineered, with care taken to not compromise binding affinity or specificity. An alternate approach is to begin with a minimal non‐antibody scaffold and select functional ligands from a de novo library. We will discuss the structure, production, applications, strengths, and weaknesses of several classes of antibody‐derived ligands, that is, antibodies, intrabodies, and nanobodies, and nonantibody‐derived ligands, that is, monobodies, affibodies, and macrocyclic peptides. In particular, this review is focussed on macrocyclic peptides produced by the Random non‐standard Peptides Integrated Discovery (RaPID) system that are small in size (typically ∼2 kDa), but are able to perform tasks typically handled by larger proteinaceous ligands. PMID:27352774
Peptide array-based characterization and design of ZnO-high affinity peptides.
Okochi, Mina; Sugita, Tomoya; Furusawa, Seiji; Umetsu, Mitsuo; Adschiri, Tadafumi; Honda, Hiroyuki
2010-08-15
Peptides with both an affinity for ZnO and the ability to generate ZnO nanoparticles have attracted attention for the self-assembly and templating of nanoscale building blocks under ambient conditions with compositional uniformity. In this study, we have analyzed the specific binding sites of the ZnO-binding peptide, EAHVMHKVAPRP, which was identified using a phage display peptide library. The peptide binding assay against ZnO nanoparticles was performed using peptides synthesized on a cellulose membrane using the spot method. Using randomized rotation of amino acids in the ZnO-binding peptide, 125 spot-synthesized peptides were assayed. The peptide binding activity against ZnO nanoparticles varied greatly. This indicates that ZnO binding does not depend on total hydrophobicity or other physical parameters of these peptides, but rather that ZnO recognizes the specific amino acid alignment of these peptides. In addition, several peptides were found to show higher binding ability compared with that of the original peptides. Identification of important binding sites in the EAHVMHKVAPRP peptide was investigated by shortened, stepwise sequence from both termini. Interestingly, two ZnO-binding sites were found as 6-mer peptides: HVMHKV and HKVAPR. The peptides identified by amino acid substitution of HKVAPR were found to show high affinity and specificity for ZnO nanoparticles.
On the calculation of percentile-based bibliometric indicators
Waltman, Ludo
2012-01-01
A percentile-based bibliometric indicator is an indicator that values publications based on their position within the citation distribution of their field. The most straightforward percentile-based indicator is the proportion of frequently cited publications, for instance the proportion of publications that belong to the top 10% most frequently cited of their field. Recently, more complex percentile-based indicators were proposed. A difficulty in the calculation of percentile-based indicators is caused by the discrete nature of citation distributions combined with the presence of many publications with the same number of citations. We introduce an approach to calculating percentile-based indicators that deals with this difficulty in a more satisfactory way than earlier approaches suggested in the literature. We show in a formal mathematical framework that our approach leads to indicators that do not suffer from biases in favor of or against particular fields of science.
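The tie-handling difficulty described above can be made concrete with a small fractional-counting sketch (an illustration of the general idea, not necessarily Waltman's exact algorithm): publications strictly above the top-10% citation threshold count fully, and publications exactly at the threshold share the remaining weight, so the scores always sum to 10% of the publications regardless of ties.

```python
from collections import Counter

def top_share_scores(citations, top_fraction=0.10):
    """Fractional membership of each publication in the top `top_fraction`
    most cited of its field. Ties at the threshold share the remaining
    weight, so scores sum to exactly top_fraction * len(citations)."""
    target = top_fraction * len(citations)   # total weight to distribute
    freq = Counter(citations)
    above = 0                                # publications strictly above the threshold
    for threshold in sorted(freq, reverse=True):
        if above + freq[threshold] >= target:
            tie_score = (target - above) / freq[threshold]
            break
        above += freq[threshold]
    return [1.0 if c > threshold else (tie_score if c == threshold else 0.0)
            for c in citations]

scores = top_share_scores([12, 7, 7, 7, 3, 2, 1, 1, 0, 0])
# the single "slot" (10% of 10 papers) goes entirely to the most cited paper
```

Because the per-publication scores are constructed to sum to the target, the field-level indicator (the mean score) is unbiased across fields with different citation distributions, which is the property the paper establishes formally.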
A Poisson-based adaptive affinity propagation clustering for SAGE data.
Tang, DongMing; Zhu, QingXin; Yang, Fan
2010-02-01
Serial analysis of gene expression (SAGE) is a powerful tool for obtaining gene expression profiles, and clustering is a valuable technique for analyzing SAGE data. In this paper, we propose an adaptive clustering method for SAGE data analysis, namely PoissonAPS. The method incorporates a novel clustering algorithm, affinity propagation (AP). While the AP algorithm has demonstrated good performance on many different data sets, it also faces several limitations. PoissonAPS overcomes the limitations of AP by using a clustering validation measure as the cost function for merging and splitting; as a result, it can automatically cluster SAGE data without user-specified parameters. We evaluated PoissonAPS and compared its performance with other methods on several real-life SAGE datasets. The experimental results show that PoissonAPS can produce meaningful and interpretable clusters for SAGE data.
Rivera-Delgado, Edgardo; Sadeghi, Zhina; Wang, Nick X; Kenyon, Jonathan; Satyanarayan, Sapna; Kavran, Michael; Flask, Chris; Hijaz, Adonis Z; von Recum, Horst A
2016-04-21
The protein chemokine (C-C motif) ligand 7 (CCL7) is significantly over-expressed in urethral and vaginal tissues immediately following vaginal distention in a rat model of stress urinary incontinence (SUI). Further evidence, in this and other clinical scenarios, indicates that CCL7 stimulates stem cell homing for regenerative repair. This CCL7 gradient is likely absent or compromised in the natural repair process of women who continue to suffer from SUI into advanced age. We evaluated the feasibility of locally providing this missing CCL7 gradient by means of an affinity-based implantable polymer. To engineer these polymers we screened the affinity of different proteoglycans for use as CCL7-binding hosts. We found heparin to be the strongest-binding host for CCL7, with a 0.323 nM dissociation constant. Our experimental approach indicates that conjugation of heparin to a polymer backbone (using either bovine serum albumin or poly(ethylene glycol) as the base polymer) can be used as a delivery system capable of providing sustained concentrations of CCL7 in a therapeutically useful range for up to a month in vitro. With this approach we are able to detect, after polymer implantation, a significant increase in CCL7 in the urethral tissue directly surrounding the polymer implants, with only trace amounts of human CCL7 present in the blood of the animals. Whole-animal serial sectioning shows evidence of retention of locally injected human mesenchymal stem cells (hMSCs) only in animals with sustained CCL7 delivery, 2 weeks after the affinity polymers were implanted.
Tse, Jenny; Wang, Yuanyuan; Zengeya, Thomas; Rozners, Eriks; Tan-Wilson, Anna
2015-02-01
We describe a new method for protein affinity purification that capitalizes on the high affinity of streptavidin for biotin but does not require dissociation of the biotin-streptavidin complex for protein retrieval. Conventional reagents place both the selectively reacting group (the "warhead") and the biotin on the same molecule. We place the warhead and the biotin on separate molecules, each linked to a short strand of peptide nucleic acid (PNA), synthetic polymers that use the same bases as DNA but attached to a backbone that is resistant to attack by proteases and nucleases. As in DNA, PNA strands with complementary base sequences hybridize. In conditions that favor PNA duplex formation, the warhead strand (carrying the tagged protein) and the biotin strand form a complex that is held onto immobilized streptavidin. As in DNA, the PNA duplex dissociates at moderately elevated temperature; therefore, retrieval of the tagged protein is accomplished by a brief exposure to heat. Using iodoacetate as the warhead, 8-base PNA strands, biotin, and streptavidin-coated magnetic beads, we demonstrate retrieval of the cysteine protease papain. We were also able to use our iodoacetyl-PNA:PNA-biotin probe for retrieval and identification of a thiol reductase and a glutathione transferase from soybean seedling cotyledons.
Directory of Open Access Journals (Sweden)
Jieun Han
Repeat proteins are increasingly attracting attention as alternative scaffolds to immunoglobulin antibodies due to their unique structural features. Nonetheless, engineering the interaction interface and understanding the molecular basis for affinity maturation of repeat proteins remain a challenge. Here, we present a structure-based rational design of a repeat protein with high binding affinity for a target protein. As a model repeat protein, a Toll-like receptor 4 (TLR4) decoy receptor composed of leucine-rich repeat (LRR) modules was used, and its interaction interface was rationally engineered to increase the binding affinity for myeloid differentiation protein 2 (MD2). Based on the complex crystal structure of the decoy receptor with MD2, we first designed single amino acid substitutions in the decoy receptor and obtained three variants showing a binding affinity (KD) one order of magnitude higher than the wild-type decoy receptor. The interacting modes and contributions of individual residues were elucidated by analyzing the crystal structures of the single variants. To further increase the binding affinity, single positive mutations were combined, and two double mutants were shown to have about 3000- and 565-fold higher binding affinities than the wild-type decoy receptor. Molecular dynamics simulations and energetic analysis indicate that an additive effect of two mutations occurring at nearby modules was the major contributor to the remarkable increase in binding affinity.
Yuan, Mingquan; Alocilja, Evangelyn C; Chakrabartty, Shantanu
2016-08-01
This paper presents a wireless, self-powered, affinity-based biosensor based on the integration of paper-based microfluidics with our previously reported method for self-assembling radio-frequency (RF) antennas. At the core of the proposed approach is a silver-enhancement technique that grows portions of a RF antenna in regions where target antigens hybridize with target specific affinity probes. The hybridization regions are defined by a network of nitrocellulose based microfluidic channels which implement a self-powered approach to sample the reagent and control its flow and mixing. The integration substrate for the biosensor has been constructed using polyethylene and the patterning of the antenna on the substrate has been achieved using a low-cost ink-jet printing technique. The substrate has been integrated with passive radio-frequency identification (RFID) tags to demonstrate that the resulting sensor-tag can be used for continuous monitoring in a food supply-chain where direct measurement of analytes is typically considered to be impractical. We validate the proof-of-concept operation of the proposed sensor-tag using IgG as a model analyte and using a 915 MHz Ultra-high-frequency (UHF) RFID tagging technology.
Software-Based Visual Loan Calculator For Banking Industry
Isizoh, A. N.; Anazia, A. E.; Okide, S. O. 3; Onyeyili, T. I.; Okwaraoka, C. A. P.
2012-03-01
A software-based loan calculator for the banking industry is very necessary in the modern banking system, which employs many design techniques for security reasons. This paper thus presents the software-based design and implementation of a visual loan calculator for the banking industry using Visual Basic .Net (VB.Net). The fundamental approach is to develop a Graphical User Interface (GUI) using VB.Net operating tools and then to develop a working program which calculates the interest on any loan obtained. The VB.Net program was written and implemented, and the software proved satisfactory.
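The abstract does not state which interest formula the tool implements; as a hedged illustration (in Python rather than VB.Net), a loan calculator of this kind typically computes either simple interest or the standard amortized level monthly payment:

```python
def simple_interest(principal, annual_rate, years):
    """Total interest for a loan repaid in one lump sum: I = P * r * t."""
    return principal * annual_rate * years

def monthly_payment(principal, annual_rate, months):
    """Level payment for a fully amortized loan (standard annuity formula):
    payment = P * r / (1 - (1 + r)^-n), with r the monthly rate."""
    r = annual_rate / 12.0
    if r == 0:
        return principal / months        # interest-free edge case
    return principal * r / (1.0 - (1.0 + r) ** -months)

# e.g. a 1000-unit loan at 12% per year repaid over 12 months
payment = monthly_payment(1000, 0.12, 12)   # ≈ 88.85 per month
```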
Directory of Open Access Journals (Sweden)
Masato Kiyoshi
The optimization of antibodies is a desirable goal towards the development of better therapeutic strategies. The antibody 11K2 was previously developed as a therapeutic tool for inflammatory diseases and displays very high affinity (4.6 pM) for its antigen, the chemokine MCP-1 (monocyte chemo-attractant protein-1). We have employed a virtual library of mutations of 11K2 to identify antibody variants of potentially higher affinity, and to establish benchmarks in the engineering of a mature therapeutic antibody. The most promising candidates identified in the virtual screening were examined by surface plasmon resonance to validate the computational predictions and to characterize their binding affinity and key thermodynamic properties in detail. Only mutations in the light chain of the antibody are effective at enhancing its affinity for the antigen in vitro, suggesting that the interaction surface of the heavy chain (dominated by the hot-spot residue Phe101) is not amenable to optimization. The single mutation with the highest affinity is L-N31R (4.6-fold higher affinity than the wild-type antibody). Importantly, all the single mutations showing increased affinity incorporate a charged residue (Arg, Asp, or Glu). The characterization of the relevant thermodynamic parameters clarifies the energetic mechanism. Essentially, the formation of new electrostatic interactions early in the binding reaction coordinate (transition state or earlier) benefits the durability of the antibody-antigen complex. The combination of in silico calculations and thermodynamic analysis is an effective strategy to improve the affinity of a matured therapeutic antibody.
The Dac-tag, an affinity tag based on penicillin-binding protein 5.
Lee, David Wei; Peggie, Mark; Deak, Maria; Toth, Rachel; Gage, Zoe Olivia; Wood, Nicola; Schilde, Christina; Kurz, Thimo; Knebel, Axel
2012-09-01
Penicillin-binding protein 5 (PBP5), a product of the Escherichia coli gene dacA, possesses some β-lactamase activity. On binding to penicillin or related antibiotics via an ester bond, it deacylates and destroys them functionally by opening the β-lactam ring. This process takes several minutes. We exploited this process and showed that a fragment of PBP5 can be used as a reversible and monomeric affinity tag. At ambient temperature (e.g., 22°C), a PBP5 fragment binds rapidly and specifically to ampicillin Sepharose. Release can be facilitated either by eluting with 10mM ampicillin or in a ligand-free manner by incubation in the cold (1-10°C) in the presence of 5% glycerol. The "Dac-tag", named with reference to the gene dacA, allows the isolation of remarkably pure fusion protein from a wide variety of expression systems, including (in particular) eukaryotic expression systems.
An extended affinity propagation clustering method based on different data density types.
Zhao, XiuLi; Xu, WeiXiang
2015-01-01
Affinity propagation (AP), as a novel clustering method, does not require users to specify initial cluster centers in advance; it regards all data points equally as potential exemplars (cluster centers) and groups them entirely by the degree of similarity among the data points. But in many cases there exist areas of different density within the same data set, meaning that the data set is not distributed homogeneously. In such situations the AP algorithm cannot group the data points into ideal clusters. In this paper, we propose an extended AP clustering algorithm to deal with this problem. There are two steps in our method: first, the data set is partitioned into several data density types according to the nearest distances of each data point; then the AP clustering method is used to group the data points into clusters within each density type. Two experiments are carried out to evaluate the performance of our algorithm: one uses an artificial data set and the other a real seismic data set. The experimental results show that our algorithm obtains groups more accurately than OPTICS and the AP clustering algorithm itself.
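The first step of the method described above (partitioning the data into density types by nearest-neighbor distance before running AP on each part) can be sketched as follows. The largest-gap threshold and the two-way split are assumptions for illustration only; the abstract does not give the paper's exact partitioning rule.

```python
import math

def nearest_distance(points):
    """Distance from each point to its nearest neighbor."""
    return [min(math.dist(p, q) for j, q in enumerate(points) if j != i)
            for i, p in enumerate(points)]

def split_density_types(points):
    """Split the data set into two density types at the largest jump in
    sorted nearest-neighbor distance (illustrative rule). AP clustering
    would then be run separately on each returned part."""
    nn = nearest_distance(points)
    order = sorted(range(len(points)), key=lambda i: nn[i])
    gaps = [nn[order[k + 1]] - nn[order[k]] for k in range(len(order) - 1)]
    cut = gaps.index(max(gaps)) + 1      # boundary just after the largest gap
    dense = [points[i] for i in order[:cut]]
    sparse = [points[i] for i in order[cut:]]
    return dense, sparse
```

Running standard AP (e.g. scikit-learn's `AffinityPropagation`) on `dense` and `sparse` separately, rather than on the pooled set, is what lets each region keep exemplars appropriate to its own density.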
Affinity Purification of Insulin by Peptide-Ligand Affinity Chromatography
Institute of Scientific and Technical Information of China (English)
无
2007-01-01
The affinity heptapeptide (HWWWPAS) for insulin, selected from a phage display library, was coupled to EAH Sepharose 4B gel and packed into a 1-mL column. The column was used for the affinity purification of insulin from a protein mixture and a commercial insulin preparation. It was observed that the minor impurity in the commercial insulin was removed by the affinity chromatography. Nearly 40 mg of insulin could be purified with the 1-mL affinity column. The results revealed the high specificity and capacity of the affinity column for insulin purification. Moreover, based on analysis of the amino acids in the peptide sequence, shorter peptides were designed and synthesized for insulin chromatography. As a result, HWWPS was found to be a good alternative to HWWWPAS, while the other two peptides, with three or four amino acids, showed weak affinity for insulin. The results indicated that the HWWWPAS peptide sequence is quite conserved with respect to specific binding of insulin.
Directory of Open Access Journals (Sweden)
Yansheng Li
2016-08-01
With the urgent demand for automatic management of large numbers of high-resolution remote sensing images, content-based high-resolution remote sensing image retrieval (CB-HRRS-IR) has attracted much research interest. Accordingly, this paper proposes a novel high-resolution remote sensing image retrieval approach via multiple feature representation and collaborative affinity metric fusion (IRMFRCAMF). In IRMFRCAMF, we design four unsupervised convolutional neural networks with different layers to generate four types of unsupervised features, from the fine level to the coarse level. In addition to these four types of unsupervised features, we also implement four traditional feature descriptors: local binary patterns (LBP), the gray-level co-occurrence matrix (GLCM), maximal response 8 (MR8), and the scale-invariant feature transform (SIFT). In order to fully incorporate the complementary information among the multiple features of one image and the mutual information across auxiliary images in the image dataset, this paper advocates collaborative affinity metric fusion to measure the similarity between images. The performance evaluation of high-resolution remote sensing image retrieval is implemented on two public datasets, the UC Merced (UCM) dataset and the Wuhan University (WH) dataset. A large number of experiments show that our proposed IRMFRCAMF significantly outperforms state-of-the-art approaches.
D.F. Schrager
2006-01-01
We propose a new model for stochastic mortality. The model is based on the literature on affine term structure models. It satisfies three important requirements for application in practice: analytical tractability, clear interpretation of the factors, and compatibility with financial option pricing models.
Caso, Giuseppe; de Nardis, Luca; di Benedetto, Maria-Gabriella
2015-10-30
The weighted k-nearest neighbors (WkNN) algorithm is by far the most popular choice in the design of fingerprinting indoor positioning systems based on WiFi received signal strength (RSS). WkNN estimates the position of a target device by selecting k reference points (RPs) based on the similarity of their fingerprints with the measured RSS values. The position of the target device is then obtained as a weighted sum of the positions of the k RPs. Two-step WkNN positioning algorithms were recently proposed, in which RPs are divided into clusters using the affinity propagation clustering algorithm, and one representative for each cluster is selected. Only cluster representatives are then considered during the position estimation, leading to a significant computational complexity reduction compared to traditional, flat WkNN. Flat and two-step WkNN share the issue of properly selecting the similarity metric so as to guarantee good positioning accuracy: in two-step WkNN, in particular, the metric impacts three different steps in the position estimation, that is, cluster formation, cluster selection, and RP selection and weighting. So far, however, the only similarity metric considered in the literature was the one proposed in the original formulation of the affinity propagation algorithm. This paper fills this gap by comparing different metrics and, based on this comparison, proposes a novel mixed approach in which different metrics are adopted in the different steps of the position estimation procedure. The analysis is supported by an extensive experimental campaign carried out in a multi-floor 3D indoor positioning testbed. The impact of similarity metrics and their combinations on the structure and size of the resulting clusters, 3D positioning accuracy and computational complexity are investigated. Results show that the adoption of metrics different from the one proposed in the original affinity propagation algorithm and, in particular, the combination of different metrics across the steps of the procedure can improve positioning accuracy.
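As a concrete, hedged sketch of the flat WkNN baseline described above, using Euclidean RSS distance as the similarity metric and inverse-distance weights (both common choices, but, as the paper stresses, not the only ones):

```python
import math

def wknn_estimate(measured_rss, reference_points, k=3, eps=1e-9):
    """Flat WkNN: select the k reference points (RPs) whose stored RSS
    fingerprints are closest to the measured RSS vector, then return the
    inverse-distance-weighted average of their known positions."""
    ranked = sorted(reference_points,
                    key=lambda rp: math.dist(measured_rss, rp["rss"]))
    nearest = ranked[:k]
    weights = [1.0 / (math.dist(measured_rss, rp["rss"]) + eps) for rp in nearest]
    total = sum(weights)
    x = sum(w * rp["pos"][0] for w, rp in zip(weights, nearest)) / total
    y = sum(w * rp["pos"][1] for w, rp in zip(weights, nearest)) / total
    return (x, y)
```

The two-step variant differs only in that `reference_points` is first reduced to cluster representatives (e.g. by affinity propagation) before this estimate is computed, which is where the choice of metric enters three times rather than once.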
New Products and Technologies, Based on Calculations Developed Areas
Directory of Open Access Journals (Sweden)
Gheorghe Vertan
2013-09-01
According to statistics, only countries that possess and intensively exploit large natural resources and/or mass-produce and export products based on corresponding patented inventions are currently prosperous and have a high GDP per capita. Without great natural wealth, and with the lowest GDP per capita in the EU, Romania will prosper only with such products. Starting from top experience in the country, some of it patented, new and competitive technologies and patentable, exportable products can be developed based on exact calculations of developed areas, such as double-shell welded assemblies and the plating of ships' propellers and of pump and hydraulic turbine blades.
An improved LMI-based approach for stability of piecewise affine time-delay systems with uncertainty
Duan, Shiming; Ni, Jun; Galip Ulsoy, A.
2012-09-01
The stability problem for uncertain piecewise affine (PWA) time-delay systems is investigated in this article. It is assumed that there exists a known constant time delay in the system and that the uncertainty is norm-bounded. Sufficient conditions for the stability of the nominal system and the stability of systems subject to uncertainty are derived using a Lyapunov-Krasovskii functional with a triple integration term. This approach handles switching based on the delayed states (in addition to the states) for a PWA time-delay system, considers structured as well as unstructured uncertainty, and reduces the conservativeness of previous approaches. The effectiveness of the proposed approach is demonstrated by comparison with existing methods through numerical examples.
Han, Wenhua; Shen, Xiaohui; Xu, Jun; Wang, Ping; Tian, Guiyun; Wu, Zhengyang
2014-09-04
Magnetic flux leakage (MFL) inspection is one of the most important and sensitive nondestructive testing approaches. For online MFL inspection of a long-range railway track or oil pipeline, a fast and effective defect profile estimating method based on a multi-power affine projection algorithm (MAPA) is proposed, where the depth of a sampling point is related with not only the MFL signals before it, but also the ones after it, and all of the sampling points related to one point appear as serials or multi-power. Defect profile estimation has two steps: regulating a weight vector in an MAPA filter and estimating a defect profile with the MAPA filter. Both simulation and experimental data are used to test the performance of the proposed method. The results demonstrate that the proposed method exhibits high speed while maintaining the estimated profiles clearly close to the desired ones in a noisy environment, thereby meeting the demand of accurate online inspection.
Calculating track-based observables for the LHC.
Chang, Hsi-Ming; Procura, Massimiliano; Thaler, Jesse; Waalewijn, Wouter J
2013-09-06
By using observables that only depend on charged particles (tracks), one can efficiently suppress pileup contamination at the LHC. Such measurements are not infrared safe in perturbation theory, so any calculation of track-based observables must account for hadronization effects. We develop a formalism to perform these calculations in QCD, by matching partonic cross sections onto new nonperturbative objects called track functions which absorb infrared divergences. The track function Ti(x) describes the energy fraction x of a hard parton i which is converted into charged hadrons. We give a field-theoretic definition of the track function and derive its renormalization group evolution, which is in excellent agreement with the pythia parton shower. We then perform a next-to-leading order calculation of the total energy fraction of charged particles in e+ e-→ hadrons. To demonstrate the implications of our framework for the LHC, we match the pythia parton shower onto a set of track functions to describe the track mass distribution in Higgs plus one jet events. We also show how to reduce smearing due to hadronization fluctuations by measuring dimensionless track-based ratios.
Functional group based Ligand binding affinity scoring function at atomic environmental level
Varadwaj, Pritish Kumar; Lahiri, Tapobrata
2009-01-01
The use of knowledge-based scoring functions (KBSFs) for virtual screening and molecular docking has become an established method for drug discovery. The lack of a precise and reliable free energy function that describes several interactions, including water-mediated atomic interactions between amino-acid residues and the ligand, makes distance-based statistical measures the only alternative. Until now, all distance-based scoring functions in the KBSF arena have used the atom singularity concept, which neglects the environment of the atom.
Calculation of the debris flow concentration based on clay content
Institute of Scientific and Technical Information of China (English)
CHEN Ningsheng; CUI Peng; LIU Zhonggang; WEI Fangqiang
2003-01-01
The clay content of a debris flow has a tremendous influence on its concentration (γC). It has been reported that the concentration can be calculated from a polynomial in the clay content. Here, one polynomial model and one logarithmic model for calculating the concentration from the clay content are obtained, for ordinary debris flow and viscous debris flow respectively. The result derives from statistics and analysis of the relationship between debris flow concentration and clay content at 45 debris flow sites in southwest China. The models can be applied to calculate the concentration of debris flows that are impossible to observe. The models are valid because clay content affects debris flow formation, movement and suspended particle diameter, so the mechanism relating clay content and concentration is clear and reliable. Analyzing the trend on the basis of this relationship, a debris flow is usually micro-viscous when the clay content is low (<3%); indeed, the lower the clay content, the lower the concentration for most debris flows, and the debris flow tends to become a water-rock flow or a hyperconcentrated flow as the clay content decreases. Statistically, soil is apt to transform into a viscous debris flow when the clay content is in the range 3%-18%; the concentration increases with increasing clay content between 5% and 10%, but decreases with increasing clay content between 10% and 18%. Soil is apt to transform into a mudflow when the clay content exceeds 18%. The concentration of a mudflow usually decreases as the clay content increases, a trend opposite to that of micro-viscous debris flow.
Qi, Wenjing; Liu, Zhongyuan; Zhang, Wei; Halawa, Mohamed Ibrahim; Xu, Guobao
2016-01-01
Zr(IV) can form a phosphate-Zr(IV) (-PO3(2-)-Zr(4+)-) complex owing to the high affinity between Zr(IV) and phosphate. Zr(IV) can induce the aggregation of gold nanoparticles (AuNPs), while adenosine triphosphate (ATP) can prevent Zr(IV)-induced aggregation of AuNPs. Herein, a visual and plasmon resonance absorption (PRA) sensor for ATP has been developed using AuNPs, based on the high affinity between Zr(IV) and ATP. AuNPs aggregate in the presence of certain concentrations of Zr(IV). After the addition of ATP, ATP reacts with Zr(IV) and prevents the AuNPs from aggregating, enabling the detection of ATP. Because of the fast interaction of ATP with Zr(IV), ATP can be detected with a detection limit of 0.5 μM within 2 min by the naked eye. Moreover, ATP can be detected by the PRA technique with higher sensitivity. The A520nm/A650nm values in PRA spectra increase linearly with the concentration of ATP from 0.1 μM to 15 μM (r = 0.9945), with a detection limit of 28 nM. The proposed visual and PRA sensor exhibits good selectivity against adenosine, adenosine monophosphate, guanosine triphosphate, cytidine triphosphate and uridine triphosphate. The recoveries for the analysis of ATP in synthetic samples range from 95.3% to 102.0%. Therefore, the proposed novel sensor for ATP is promising for real-time or on-site detection of ATP. PMID:27754349
Sheth, Rahul D; Bhut, Bharat V; Jin, Mi; Li, Zhengjian; Chen, Wilfred; Cramer, Steven M
2014-12-20
In this work, a proof-of-concept elastin-like polypeptide-Z domain fusion (ELP-Z) based monoclonal antibody (mAb) affinity precipitation process is developed using scaled-down filtration techniques. Tangential flow filtration (TFF) is examined for the recovery of ELP-Z-mAb precipitates formed during the mAb binding step and the ELP-Z precipitates formed during the mAb elution step. TFF results in complete precipitate recovery during both stages of the process and high host cell protein and DNA impurity clearance after diafiltration. Total recycle TFF experiments are then employed to determine permeate flux as a function of the precipitate concentration for both stages of the process. While the ELP-Z-mAb precipitate recovery step resulted in high permeate flux (550-600 L/m2/h/bar), the ELP-Z precipitates are shown to severely foul the TFF membrane, causing rapid flux decay. Confocal microscopy of the ELP-Z-mAb and ELP-Z precipitates suggests significant differences in the morphology and the kinetics of formation of these precipitates, which is likely responsible for their different behavior during TFF. Finally, an alternative normal flow filtration strategy is developed for the ELP-Z precipitate recovery step during mAb elution, using a combination of 5 μm and 0.45/0.2 μm filters. Using this approach, the ELP-Z precipitates are separated from the final mAb elution pool at high volumetric throughputs and high ELP-Z recovery (96%) is obtained after resolubilization from the filter. This study demonstrates that the ELP-Z affinity precipitation process can be readily scaled up using conventional membrane processing.
Luo, Jing; Huang, Jing; Cong, Jiaojiao; Wei, Wei; Liu, Xiaoya
2017-03-01
Specific recognition and separation of glycoproteins from complex biological solutions is very important in clinical diagnostics considering the close relationship between glycoproteins with the occurrence of diverse diseases, but the lack of materials with high selectivity and superior capture capacity still makes it a challenge. In this work, graphene oxide (GO) based molecularly imprinted polymers (MIPs) possessing double recognition abilities have been synthesized and applied as highly efficient adsorbents for glycoprotein recognition and separation. Boronic acid functionalized graphene oxide (GO-APBA) was first prepared and a template glycoprotein (ovalbumin, OVA) was then immobilized onto the surface of GO-APBA through boronate affinity. An imprinting layer was subsequently deposited onto GO-APBA surface by a sol-gel polymerization of organic silanes in aqueous solution. After the removal of the template glycoprotein, 3D cavities with double recognition abilities toward OVA were obtained in the as-prepared imprinted materials (GO-APBA/MIPs) because of the combination of boronate affinity and molecularly imprinted spatial matched cavities. The obtained GO-APBA/MIPs exhibited superior specific recognition toward OVA with imprinted factor (α) as high as 9.5, significantly higher than the corresponding value (4.0) of GO/MIPs without the introduction of boronic acid groups. Meanwhile, because of the synergetic effect of large surface area of graphene and surface imprinting, high binding capacity and fast adsorption/elution rate of GO-APBA/MIPs toward OVA were demonstrated and the saturation binding capacity of GO-APBA/MIPs could reach 278 mg/g within 40 min. The outstanding recognizing behavior (high adsorption capacity, highly specific recognition, and rapid binding rate) coupled to the facile and environmentally friendly preparation procedure makes GO-APBA/MIPs promising in the recognition, separation, and analysis of glycoproteins in clinics in the future.
A density gradient theory based method for surface tension calculations
DEFF Research Database (Denmark)
Liang, Xiaodong; Michelsen, Michael Locht; Kontogeorgis, Georgios
2016-01-01
The density gradient theory has become a widely used framework for calculating surface tension, within which the same equation of state is used for the interface and bulk phases, because it is a theoretically sound, consistent and computationally affordable approach. Based on the observation...... systems, from non-polar binary mixtures to complex multicomponent associating fluids, combined with the Peng-Robinson and the Cubic Plus Association equations of state. From an overall point of view, the approximation method with the density path profile passing the saddle point and the full density...
Novel and high affinity fluorescent ligands for the serotonin transporter based on (s)-citalopram
DEFF Research Database (Denmark)
Kumar, Vivek; Rahbek-Clemmensen, Troels; Billesbølle, Christian B
2014-01-01
Novel rhodamine-labeled ligands, based on (S)-citalopram, were synthesized and evaluated for uptake inhibition at the human serotonin, dopamine, and norepinephrine transporters (hSERT, hDAT, and hNET, respectively) and for binding at SERT, in transiently transfected COS7 cells. Compound 14 demons...
Data-based fault-tolerant control for affine nonlinear systems with actuator faults.
Xie, Chun-Hua; Yang, Guang-Hong
2016-09-01
This paper investigates the fault-tolerant control (FTC) problem for unknown nonlinear systems with actuator faults including stuck, outage, bias and loss-of-effectiveness faults. The upper bounds of the stuck, bias and loss-of-effectiveness faults are unknown. A new data-based FTC scheme is proposed, consisting of online estimations of the bounds and a state-dependent function. The estimations are adjusted online to automatically compensate for the actuator faults. The state-dependent function, solved by using real system data, helps to stabilize the system. Furthermore, all signals in the resulting closed-loop system are uniformly bounded and the states converge asymptotically to zero. Compared with existing results, the proposed approach is data-based. Finally, two simulation examples are provided to show the effectiveness of the proposed approach.
Finite-difference calculation of traveltimes based on rectangular grid
Institute of Scientific and Technical Information of China (English)
李振春; 刘玉莲; 张建磊; 马在田; 王华忠
2004-01-01
For most velocity fields, the first-break traveltimes of seismic waves propagating along rays can be computed on a 2-D or 3-D numerical grid by finite-difference extrapolation. To improve computational efficiency and adaptability while maintaining accuracy, a finite-difference first-arrival traveltime calculation method is derived that works on an arbitrary rectangular grid and uses a local plane-wavefront approximation. In addition, head waves and scattered waves are properly treated, and the shadow and caustic zones that appear in traditional ray tracing are avoided. Tests on two simple models and on the complex Marmousi model show that the method has high accuracy and adapts well to complex structures with strong vertical and lateral velocity variation, and that Kirchhoff prestack depth migration based on this method can essentially match the positioning of major structures and targets achieved by wave-equation prestack depth migration. Because the energy of later arrivals is not taken into account, its amplitude preservation is worse than that of the wave-equation method, but its computational efficiency is higher than that of the total Green's function method and the wave-equation method.
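The first-arrival principle underlying such traveltime calculations can be illustrated with a much simpler graph analogue: treat the rectangular grid as a weighted graph and run Dijkstra's algorithm, so every grid point receives the minimum traveltime over all discrete paths from the source. This is only a hedged sketch for illustration; the 8-neighbour stencil, the averaged-slowness edge weights and the grid spacings are assumptions, not the paper's local plane-wavefront scheme.

```python
import heapq

def first_arrival_traveltimes(slowness, dx, dy, src):
    """First-arrival traveltimes on a rectangular grid via Dijkstra
    shortest paths (a graph analogue of finite-difference traveltime
    extrapolation; illustrative only)."""
    ny, nx = len(slowness), len(slowness[0])
    INF = float("inf")
    t = [[INF] * nx for _ in range(ny)]
    t[src[0]][src[1]] = 0.0
    heap = [(0.0, src[0], src[1])]
    # 8-neighbour stencil with true geometric step lengths
    nbrs = [(di, dj, ((di * dy) ** 2 + (dj * dx) ** 2) ** 0.5)
            for di in (-1, 0, 1) for dj in (-1, 0, 1) if (di, dj) != (0, 0)]
    while heap:
        ti, i, j = heapq.heappop(heap)
        if ti > t[i][j]:
            continue  # stale heap entry
        for di, dj, dist in nbrs:
            ni, nj = i + di, j + dj
            if 0 <= ni < ny and 0 <= nj < nx:
                # edge traveltime = step length * average slowness
                tn = ti + dist * 0.5 * (slowness[i][j] + slowness[ni][nj])
                if tn < t[ni][nj]:
                    t[ni][nj] = tn
                    heapq.heappush(heap, (tn, ni, nj))
    return t
```

In a homogeneous medium this reproduces straight-ray traveltimes exactly along the stencil directions; the angular discretization error of the 8-neighbour stencil is what the plane-wavefront approximation in the paper avoids.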
Lifeline system network reliability calculation based on GIS and FTA
Institute of Scientific and Technical Information of China (English)
TANG Ai-ping; OU Jin-ping; LU Qin-nian; ZHANG Ke-xu
2006-01-01
Lifelines, such as pipeline, transportation, communication, electric transmission and medical rescue systems, are complicated networks that are usually distributed spatially over large geological and geographic units. Quantifying their reliability under an earthquake deserves close attention, because the performance of these systems during a destructive earthquake is vital for estimating the direct and indirect economic losses from lifeline failures, and is also relevant to laying out a rescue plan. The research in this paper aims to develop a new earthquake reliability calculation methodology for lifeline systems. The network reliability methodology is based on fault tree analysis (FTA) and geographic information systems (GIS), and the interactions existing in a lifeline system are taken into account. The lifeline systems are idealized as equivalent networks consisting of nodes and links, and are described by network analysis in GIS. First, nodes are divided into two types, simple and complicated, where the reliability of a complicated node is calculated by FTA and interaction is regarded as one factor affecting node performance. The reliability of simple nodes and links is evaluated by code. Then, the reliability of the entire network is assessed based on GIS and FTA. Lastly, an illustration is given to demonstrate the methodology.
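The FTA side of such a methodology can be sketched in a few lines: the failure probability of a complicated node is composed from basic-event probabilities through AND/OR gates (assuming independent events), and a series path through the network survives only if every component survives. The gate structure and the probabilities below are hypothetical illustrations, not the paper's case study.

```python
from math import prod

def and_gate(failure_probs):
    # AND gate: the top event occurs only if all independent basic events occur
    return prod(failure_probs)

def or_gate(failure_probs):
    # OR gate: the top event occurs if any independent basic event occurs
    return 1.0 - prod(1.0 - p for p in failure_probs)

def series_reliability(component_reliabilities):
    # A series path of nodes and links survives only if every component survives
    return prod(component_reliabilities)

# Hypothetical complicated node:
# fails if (pump fails AND backup fails) OR power supply fails
node_failure = or_gate([and_gate([0.05, 0.10]), 0.02])
node_reliability = 1.0 - node_failure
```

The same two gate formulas, applied bottom-up through the fault tree, give the node reliabilities that the GIS network analysis then combines over the whole lifeline system.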
An Empirically Based Calculation of the Extragalactic Infrared Background
Malkan, M A
1998-01-01
Using the excellent observed correlations among various infrared wavebands with 12 and 60 micron luminosities, we calculate the 2-300 micron spectra of galaxies as a function of luminosity. We then use 12 micron and 60 micron galaxy luminosity functions derived from IRAS data, together with recent data on the redshift evolution of galaxy emissivity, to derive a new, empirically based IR background spectrum from stellar and dust emission in galaxies. Our best estimate for the IR background is of order 2-3 nW/m^2/sr with a peak around 200 microns reaching 6-8 nW/m^2/sr. Our empirically derived background spectrum is fairly flat in the mid-IR, as opposed to spectra based on modeling with discrete temperatures which exhibit a "valley" in the mid-IR. We also derive a conservative lower limit to the IR background which is more than a factor of 2 lower than our derived flux.
Coderch, Claire; Tang, Yong; Klett, Javier; Zhang, Shu-En; Ma, Yun-Tao; Shaorong, Wang; Matesanz, Ruth; Pera, Benet; Canales, Angeles; Jiménez-Barbero, Jesús; Morreale, Antonio; Díaz, J Fernando; Fang, Wei-Shuo; Gago, Federico
2013-05-14
Ten novel taxanes bearing modifications at the C2 and C13 positions of the baccatin core have been synthesized and their binding affinities for mammalian tubulin have been experimentally measured. The design strategy was guided by (i) calculation of interaction energy maps with carbon, nitrogen and oxygen probes within the taxane-binding site of β-tubulin, and (ii) the prospective use of a structure-based QSAR (COMBINE) model derived from an earlier series comprising 47 congeneric taxanes. The tubulin-binding affinity displayed by one of the new compounds (CTX63) proved to be higher than that of docetaxel, and an updated COMBINE model provided a good correlation between the experimental binding free energies and a set of weighted residue-based ligand-receptor interaction energies for 54 out of the 57 compounds studied. The remaining three outliers from the original training series have in common a large unfavourable entropic contribution to the binding free energy that we attribute to taxane preorganization in aqueous solution in a conformation different from that compatible with tubulin binding. Support for this proposal was obtained from solution NMR experiments and molecular dynamics simulations in explicit water. Our results shed additional light on the determinants of tubulin-binding affinity for this important class of antitumour agents and pave the way for further rational structural modifications.
Generation of monospecific antibodies based on affinity capture of polyclonal antibodies.
Hjelm, Barbara; Forsström, Björn; Igel, Ulrika; Johannesson, Henrik; Stadler, Charlotte; Lundberg, Emma; Ponten, Fredrik; Sjöberg, Anna; Rockberg, Johan; Schwenk, Jochen M; Nilsson, Peter; Johansson, Christine; Uhlén, Mathias
2011-11-01
A method is described to generate and validate antibodies based on mapping the linear epitopes of a polyclonal antibody followed by sequential epitope-specific capture using synthetic peptides. Polyclonal antibodies directed towards four proteins RBM3, SATB2, ANLN, and CNDP1, potentially involved in human cancers, were selected and antibodies to several non-overlapping epitopes were generated and subsequently validated by Western blot, immunohistochemistry, and immunofluorescence. For all four proteins, a dramatic difference in functionality could be observed for these monospecific antibodies directed to the different epitopes. In each case, at least one antibody was obtained with full functionality across all applications, while other epitope-specific fractions showed no or little functionality. These results present a path forward to use the mapped binding sites of polyclonal antibodies to generate epitope-specific antibodies, providing an attractive approach for large-scale efforts to characterize the human proteome by antibodies.
Hu, Fangxin; Chen, Shihong; Wang, Chengyan; Yuan, Ruo; Xiang, Yun; Wang, Cun
2012-04-15
In this paper, a novel method for detecting concanavalin A (Con A) was developed based on lectin-carbohydrate biospecific interactions. Multi-wall carbon nanotube-polyaniline (MWNT-PANI) nanocomposites, synthesized by in situ polymerization, were chosen to immobilize D-glucose through the Schiff-base reaction. The immobilized D-glucose showed high binding sensitivity and excellent selectivity toward its target lectin, Con A. Cyclic voltammetry (CV), electrochemical impedance spectroscopy (EIS), transmission electron microscopy (TEM) and atomic force microscopy (AFM) were applied to characterize the assembly process of the modified electrode. Owing to the high affinity of Con A for D-glucose and the high stability of the proposed sensing platform, the fabricated biosensor achieved ultrasensitive detection of Con A with good sensitivity, acceptable reproducibility and stability. The changes in response current were proportional to Con A concentration from 3.3 pM to 9.3 nM, with a detection limit of 1.0 pM. Therefore, the combination of MWNT-PANI nanocomposites and the specific binding between lectin and carbohydrate provides an efficient and promising platform for the fabrication of bioelectrochemical devices.
Naeni, Leila M; Craig, Hugh; Berretta, Regina; Moscato, Pablo
2016-01-01
In this study we propose a novel, unsupervised clustering methodology for analyzing large datasets. This new, efficient methodology converts the general clustering problem into a community detection problem in a graph by using the Jensen-Shannon distance, a dissimilarity measure originating in information theory. Moreover, we use graph-theoretic concepts for the generation and analysis of proximity graphs. Our methodology is based on a newly proposed memetic algorithm (iMA-Net) for discovering clusters of data elements by maximizing the modularity function in proximity graphs of literary works. To test the effectiveness of this general methodology, we apply it to a text corpus dataset containing the frequencies of approximately 55,114 unique words across all 168 plays written in the Shakespearean era (16th and 17th centuries), to analyze and detect clusters of similar plays. Experimental results and comparison with state-of-the-art clustering methods demonstrate the strong performance of our new method in identifying high-quality clusters that reflect commonalities in the literary style of the plays.
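The dissimilarity measure named above is standard and compact: for two word-frequency distributions p and q, the Jensen-Shannon distance is the square root of the Jensen-Shannon divergence. A minimal stdlib sketch (base-2 logarithms, so the distance lies in [0, 1]; the toy vectors in the test are invented, not the corpus data):

```python
from math import log, sqrt

def js_distance(p, q):
    """Jensen-Shannon distance between two discrete probability distributions."""
    m = [(a + b) / 2 for a, b in zip(p, q)]  # the mixture distribution

    def kl(x, y):
        # Kullback-Leibler divergence with base-2 logs; 0*log(0) treated as 0
        return sum(a * log(a / b, 2) for a, b in zip(x, y) if a > 0)

    return sqrt(0.5 * kl(p, m) + 0.5 * kl(q, m))
```

In a methodology like the one above, one such distance per pair of texts yields the dissimilarity matrix from which the proximity graph is built before community detection.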
Zhang, Tao; Han, Shengli; Liu, Qi; Guo, Ying; He, Langchong
2014-11-01
An affinity two-dimensional chromatography method was developed for the recognition, separation, and identification of allergenic components from tubeimu saponin extracts, a preparation often injected to treat various conditions indicated by traditional Chinese medicine. Rat basophilic leukemia-2H3 (RBL-2H3) cell membranes were used as the stationary phase of a membrane affinity chromatography column to capture components with affinity for mast cells that could be involved in a degranulation reaction. The retained components were enriched and analyzed by membrane affinity chromatography coupled with liquid chromatography and mass spectrometry via a port switch valve. The suitability and reliability of the method were investigated using appropriate standards, and the method was then applied to identify components retained from tubeimu saponin extracts. Tubeimoside A was identified in this way as a potential allergen, and degranulation assays confirmed that tubeimoside A induces RBL-2H3 cell degranulation in a dose-dependent manner. An increase in Ca(2+) influx indicated that degranulation induced by tubeimoside A is likely Ca(2+) dependent. Coupled with the degranulation assay, RBL-2H3 cell-based affinity chromatography coupled with liquid chromatography and mass spectrometry is an effective method for screening and identifying allergenic components from tubeimu saponin extracts.
Schwartz, S A
1981-02-01
Control and bromodeoxyuridine-containing rat-embryo-cell DNA were digested by the restriction endonucleases Hpa II and Msp I and were subsequently analyzed by agarose-gel electrophoresis as well as DNA-affinity chromatography. By the former technique, it appeared that no substantial differences existed between the two DNA samples with respect to the amount or distribution of methylcytosine. On the other hand, it was obvious following base-specific DNA chromatography that the virogenic analog was markedly concentrated in particular nucleotide sequences which demonstrated a proportionately greater affinity for the (A-T)-specific adsorbent irrespective of digestion by either restriction endonuclease.
Institute of Scientific and Technical Information of China (English)
Anonymous
2000-01-01
Combinatorial peptide libraries have become powerful tools for screening functional ligands by the principle of affinity selection. We screened a phage peptide library to identify potential peptide affinity ligands for the purification of human granulocyte colony-stimulating factor (hG-CSF). Peptide ligands are promising replacements for monoclonal antibodies, as they offer high stability, efficiency and selectivity at low cost.
Kawato, Tatsuya; Mizohata, Eiichi; Shimizu, Yohei; Meshizuka, Tomohiro; Yamamoto, Tomohiro; Takasu, Noriaki; Matsuoka, Masahiro; Matsumura, Hiroyoshi; Kodama, Tatsuhiko; Kanai, Motomu; Doi, Hirofumi; Inoue, Tsuyoshi; Sugiyama, Akira
2015-01-01
The streptavidin/biotin interaction has been widely used as a useful tool in research fields. For application to a pre-targeting system, we previously developed a streptavidin mutant that binds to an iminobiotin analog while abolishing affinity for natural biocytin. Here, we design a bivalent iminobiotin analog that shows 1000-fold higher affinity than before, and determine its crystal structure complexed with the mutant protein.
Hierarchical Affinity Propagation
Givoni, Inmar; Frey, Brendan J
2012-01-01
Affinity propagation is an exemplar-based clustering algorithm that finds a set of data-points that best exemplify the data, and associates each datapoint with one exemplar. We extend affinity propagation in a principled way to solve the hierarchical clustering problem, which arises in a variety of domains including biology, sensor networks and decision making in operational research. We derive an inference algorithm that operates by propagating information up and down the hierarchy, and is efficient despite the high-order potentials required for the graphical model formulation. We demonstrate that our method outperforms greedy techniques that cluster one layer at a time. We show that on an artificial dataset designed to mimic the HIV-strain mutation dynamics, our method outperforms related methods. For real HIV sequences, where the ground truth is not available, we show our method achieves better results, in terms of the underlying objective function, and show the results correspond meaningfully to geographi...
DEFF Research Database (Denmark)
Pinto, Andrea; Conti, Paola; Grazioso, Giovanni;
2011-01-01
The synthesis of four new isoxazoline-based amino acids being analogues of previously described glutamate receptor ligands is reported and their affinity for ionotropic glutamate receptors is analyzed in comparison with that of selected model compounds. Molecular modelling investigations have been...
Calculation base of flooded type evaporators with finned tubes
Energy Technology Data Exchange (ETDEWEB)
Brod, W.; Slipcevic, B.
1989-03-01
For the construction of flooded type evaporators with halogen refrigerants, the refrigeration industry uses finned tubes. Equations for the thermodynamic calculation of such apparatus are given and explained with the aid of a calculation example.
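The kind of design equation such a text builds on can be sketched with the generic log-mean temperature difference (LMTD) relation Q = U·A·ΔT_lm. The overall coefficient and the temperature differences below are invented placeholders; the article's actual finned-tube correlations are not reproduced here.

```python
from math import log

def lmtd(dt_in, dt_out):
    """Log-mean temperature difference between the two ends of the exchanger (K)."""
    if abs(dt_in - dt_out) < 1e-9:
        return dt_in  # limit case: equal terminal temperature differences
    return (dt_in - dt_out) / log(dt_in / dt_out)

def evaporator_duty(u_w_m2k, area_m2, dt_in, dt_out):
    """Heat duty Q = U * A * LMTD, in watts (hypothetical inputs)."""
    return u_w_m2k * area_m2 * lmtd(dt_in, dt_out)
```

For an evaporator the refrigerant side is nearly isothermal, so the two terminal differences are set by the secondary-fluid inlet and outlet temperatures relative to the evaporation temperature.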
Rossetti, Cecilia; Levernæs, Maren C S; Reubsaet, Léon; Halvorsen, Trine G
2016-11-04
Mass spectrometric assays are now of great relevance for trace compound analysis in complex matrices such as serum and plasma samples. Especially in the quantification of low-abundance protein biomarkers, the choice of sample preparation is crucial. In the present paper, immunocapture and molecularly imprinted polymers (MIPs) were applied to the determination of pro-gastrin-releasing peptide (ProGRP), a small cell lung cancer marker. These affinity-based techniques were compared in terms of matrix effect, limits of detection, repeatability and extraction specificity. In addition, protein precipitation was included for comparison, as it is a typical sample preparation method for biological matrices. The results highlighted differences in the methods' performance and specificity, strongly affecting the outcome of the mass spectrometric determination. Plastic and monoclonal antibodies were confirmed to be sensitive and specific sample preparation tools able to determine ProGRP at clinically relevant concentrations, although only the use of monoclonal antibodies allowed reliable quantification of ProGRP at reference levels (8 pM). In addition, better insight into the specificity of the three sample preparation techniques was gained, which may also be of interest for other biological applications.
Arabzadeh, Abbas; Salimi, Abdollah
2015-12-15
In this study, we report an iminodiacetic acid-copper ion complex (IDA-Cu) immobilized onto a gold nanoparticle (GNP)-modified glassy carbon electrode as a novel electrochemical platform for selective and sensitive determination of lysozyme (Lys). The IDA-Cu complex acted as an efficient recognition element capable of capturing Lys molecules. The GNPs act as a substrate to immobilize the IDA-Cu coordination complex, and its interaction with Lys leads to great signal amplification through measuring changes in the differential pulse voltammetric (DPV) peak current of the [Fe(CN)6](3-/4-) redox probe. Upon binding of Lys to IDA-Cu, the peak current decreased owing to the hindered electron transfer reaction on the electrode surface. Under optimum conditions, the proposed method could detect Lys over a wide linear concentration range (0.1 pM to 0.10 mM) with a detection limit of 60 fM. Furthermore, electrochemical impedance spectroscopy (EIS) detection of Lys was demonstrated as a simple and rapid alternative analytical technique, with a detection limit of 80 fM over a concentration range up to 0.1 mM. In addition, the proposed sensor was satisfactorily applied to the determination of Lys in real samples such as hen egg white. The proposed modified electrode, with its high selectivity, good sensitivity and stability toward Lys detection, may hold great promise for developing other electrochemical sensors based on metal-chelate affinity complexes.
Aronoff-Spencer, Eliah; Venkatesh, A G; Sun, Alex; Brickner, Howard; Looney, David; Hall, Drew A
2016-12-15
Yeast cell lines were genetically engineered to display Hepatitis C virus (HCV) core antigen linked to gold binding peptide (GBP) as a dual-affinity biobrick chimera. These multifunctional yeast cells adhere to the gold sensor surface while simultaneously acting as a "renewable" capture reagent for anti-HCV core antibody. This streamlined functionalization and detection strategy removes the need for traditional purification and immobilization techniques. With this biobrick construct, both optical and electrochemical immunoassays were developed. The optical immunoassays demonstrated detection of anti-HCV core antibody down to 12.3 pM concentrations, while the electrochemical assay demonstrated higher binding constants and dynamic range. The electrochemical format and a custom, low-cost smartphone-based potentiostat ($20 USD) yielded results comparable to assays performed on a state-of-the-art electrochemical workstation. We propose that this combination of synthetic biology and scalable, point-of-care sensing has the potential to provide low-cost, cutting-edge diagnostic capability for many pathogens in a variety of settings.
Directory of Open Access Journals (Sweden)
Adel A.A. Emara
2014-12-01
Oxygen absorption-desorption processes for square planar Mn(II), Co(II) and Ni(II) complexes of tetradentate Schiff base ligands in DMF and chloroform solvents were investigated. The tetradentate Schiff base ligands were obtained by the condensation reaction of ethylenediamine with salicylaldehyde, o-hydroxyacetophenone or acetylacetone in a 1:2 molar ratio. The square planar complexes were prepared by the reaction of the Schiff base ligands with Mn(II) acetate, Co(II) nitrate and Ni(II) nitrate in dry ethanol under a nitrogen atmosphere. The sorption processes were studied in the presence and absence of a pyridine axial base in a 1:1 molar ratio of pyridine to metal(II) complex. The complexes show significantly higher oxygen affinity in DMF than in chloroform. The Co(II) complexes showed more pronounced sorption than the Mn(II) and Ni(II) complexes. The presence of the pyridine axial base clearly increases the oxygen affinity.
Oriented angles in affine space
Directory of Open Access Journals (Sweden)
Włodzimierz Waliszewski
2004-05-01
The concept of a smooth oriented angle in an arbitrary affine space is introduced. This concept is based on the kinematic concept of a run. A concept of an oriented angle in such a space is also considered. It is then shown that the adequacy of these concepts holds if and only if the affine space in question is of dimension 2 or 1.
Electric field calculations in brain stimulation based on finite elements
DEFF Research Database (Denmark)
Windhoff, Mirko; Opitz, Alexander; Thielscher, Axel
2013-01-01
, allowing for the creation of tetrahedral volume head meshes that can finally be used in the numerical calculations. The pipeline integrates and extends established (and mainly free) software for neuroimaging, computer graphics, and FEM calculations into one easy-to-use solution. We demonstrate...... elements. The latter is crucial to guarantee the numerical robustness of the FEM calculations. The pipeline will be released as open-source, allowing for the first time to perform realistic field calculations at an acceptable methodological complexity and moderate costs....
Application of CFD based wave loads in aeroelastic calculations
DEFF Research Database (Denmark)
Schløer, Signe; Paulsen, Bo Terp; Bredmose, Henrik
2014-01-01
realizations compare well with corresponding surface elevations from laboratory experiments. In aeroelastic calculations of an offshore wind turbine on a monopile foundation the hydrodynamic loads due to the potential flow solver and Morison’s equation and the hydrodynamic loads calculated by the coupled......Two fully nonlinear irregular wave realizations with different significant wave heights are considered. The wave realizations are both calculated in the potential flow solver Ocean-Wave3D and in a coupled domain decomposed potential-flow CFD solver. The surface elevations of the calculated wave...
Ice flood velocity calculating approach based on single view metrology
Wu, X.; Xu, L.
2017-02-01
The Yellow River is the river in which ice floods occur most frequently in China; hence, ice flood forecasting is of great significance for river flood prevention. In the various ice flood forecast models, the flow velocity is one of the most important parameters. Despite its importance, its acquisition has relied heavily on manual observation or empirical formulas. In recent years, with the rapid development of video surveillance technology and wireless transmission networks, the Yellow River Conservancy Commission set up an ice situation monitoring system in which live video can be transmitted to the monitoring center through 3G mobile networks. In this paper, an approach for obtaining the ice velocity based on single view metrology and motion tracking, using monitoring videos as input data, is proposed. First, the river surface can be approximated as a plane. Under this assumption, we analyze the geometric relationship between object space and image space and present the principle for measuring object-space lengths from the image. Second, we use pyramidal Lucas-Kanade (LK) optical flow to track the drifting ice. Combining the results of camera calibration and single view metrology, we propose a workflow to calculate the real velocity of the ice flood. Finally, we implement a prototype system and use it to test the reliability and rationality of the whole solution.
Lee, Kyungmin; Cho, Soohyun
2017-01-26
Mathematics anxiety (MA) refers to the experience of negative affect when engaging in mathematical activity. According to Ashcraft and Kirk (2001), MA selectively affects calculation with high working memory (WM) demand. On the other hand, Maloney, Ansari, and Fugelsang (2011) claim that MA affects all mathematical activities, including even the most basic ones such as magnitude comparison. The two theories make opposing predictions on the negative effect of MA on magnitude processing and simple calculation that make minimal demands on WM. We propose that MA has a selective impact on mathematical problem solving that likely involves processing of magnitude representations. Based on our hypothesis, MA will impinge upon magnitude processing even though it makes minimal demand on WM, but will spare retrieval-based, simple calculation, because it does not require magnitude processing. Our hypothesis can reconcile opposing predictions on the negative effect of MA on magnitude processing and simple calculation. In the present study, we observed a negative relationship between MA and performance on magnitude comparison and calculation with high but not low WM demand. These results demonstrate that MA has an impact on a wide range of mathematical performance, which depends on one's sense of magnitude, but spares over-practiced, retrieval-based calculation.
Polepally, Prabhakar R.; Huben, Krzysztof; Vardy, Eyal; Setola, Vincent; Mosier, Philip D.; Roth, Bryan L.; Zjawiony, Jordan K.
2014-01-01
The neoclerodane diterpenoid salvinorin A is a major secondary metabolite isolated from the psychoactive plant Salvia divinorum. Salvinorin A has been shown to have high affinity and selectivity for the κ-opioid receptor (KOR). To study the ligand–receptor interactions that occur between salvinorin A and the KOR, a new series of salvinorin A derivatives bearing potentially reactive Michael acceptor functional groups at C-2 was synthesized and used to probe the salvinorin A binding site. The κ-, δ-, and μ-opioid receptor (KOR, DOR and MOR, respectively) binding affinities and KOR efficacies were measured for the new compounds. Although none showed wash-resistant irreversible binding, most of them showed high affinity for the KOR, and some exhibited dual affinity to KOR and MOR. Molecular modeling techniques based on the recently-determined crystal structure of the KOR combined with results from mutagenesis studies, competitive binding, functional assays and structure–activity relationships, and previous salvinorin A–KOR interaction models were used to identify putative interaction modes of the new compounds with the KOR and MOR. PMID:25193297
Measurement of the electron affinity of lanthanum
Energy Technology Data Exchange (ETDEWEB)
Covington, A.M.; Calabrese, D.; Thompson, J.S. [Department of Physics and Chemical Physics Programme, University of Nevada, Reno, NV 89557-0058 (United States); Kvale, T.J. [Department of Physics and Astronomy, University of Toledo, OH 43606-3390 (United States)
1998-10-28
The electron affinity of lanthanum has been measured using laser photoelectron energy spectroscopy. This is the first electron affinity measurement for lanthanum and one of the first measurements of an electron affinity of a rare-earth series element. The electron affinity of lanthanum was measured to be 0.47 ± 0.02 eV. At least one bound excited state of La(-) was also observed in the photoelectron spectra, and its binding energy relative to the ground state of lanthanum was measured as 0.17 ± 0.02 eV. The present experimental measurements are compared to a recent calculation. Letter to the editor.
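In laser photoelectron (photodetachment) spectroscopy, each binding energy follows from energy conservation, hν = eKE + BE, so every peak in the electron kinetic-energy spectrum maps directly to a binding energy. A trivial sketch of that arithmetic; the photon energy and peak positions below are invented examples, not the values used in this experiment:

```python
def binding_energy(photon_energy_ev, electron_ke_ev):
    """Energy conservation in photodetachment: h*nu = eKE + binding energy (eV)."""
    return photon_energy_ev - electron_ke_ev

# Hypothetical spectrum: two peaks from the same (assumed) 2.33 eV photon energy
ground_state_ea = binding_energy(2.33, 1.86)   # ground-state electron affinity
excited_state_be = binding_energy(2.33, 2.03)  # bound excited anion state
splitting = ground_state_ea - excited_state_be  # excitation energy of the anion
```

Two peaks measured with the same photon energy thus give the anion excited-state splitting directly as the difference of the two kinetic energies.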
Martin, Brent R; Giepmans, Ben N G; Adams, Stephen R; Tsien, Roger Y
2005-01-01
Membrane-permeant biarsenical dyes such as FlAsH and ReAsH fluoresce upon binding to genetically encoded tetracysteine motifs expressed in living cells, yet spontaneous nonspecific background staining can prevent detection of weakly expressed or dilute proteins. If the affinity of the tetracysteine
[CUDA-based fast dose calculation in radiotherapy].
Wang, Xianliang; Liu, Cao; Hou, Qing
2011-10-01
Dose calculation plays a key role in treatment planning for radiotherapy, and dose calculation algorithms require both high accuracy and computational efficiency. The finite size pencil beam (FSPB) algorithm is a method commonly adopted in radiotherapy treatment planning systems. However, improving its computational efficiency is still desirable for purposes such as real-time treatment planning. In this paper, we present an implementation of the FSPB in which the most time-consuming parts of the algorithm are parallelized and ported to the graphics processing unit (GPU). Compared with the FSPB running entirely on the central processing unit (CPU), the GPU-implemented FSPB speeds up the dose calculation by a factor of 25-35 on a low-priced GPU (GeForce GT320) and by a factor of 55-100 on a Tesla C1060, indicating that the GPU-implemented FSPB can provide dose calculations fast enough for real-time treatment planning.
Directory of Open Access Journals (Sweden)
Patricio Iturriaga-Vásquez
2013-04-01
A series of novel 2-pyridylbenzimidazole derivatives was rationally designed and synthesized based on our previous studies on benzimidazole 14, a CB1 agonist used as a template for optimization. In the present series, 21 compounds displayed high affinities, with Ki values in the nanomolar range. JM-39 (compound 39) was the most active of the series (Ki(CB1) = 0.53 nM), while compounds 31 and 44 exhibited affinities similar to WIN 55212-2. CoMFA analysis was performed based on the biological data obtained and resulted in a statistically significant CoMFA model with high predictive value (q2 = 0.710, r2 = 0.998, r2pred = 0.823).
Calculation of VPP basing on functional analyzing method
Institute of Scientific and Technical Information of China (English)
无
2007-01-01
VPP can be used to determine the maximum velocities of a sailboard on various sailing routes by establishing the force and moment balance equations for the sail and board in accordance with the principle of maximal drive force. Route selection is the most important issue in upwind sailing, and VPP calculations can provide the basis for determining the optimal routes. VPP calculation of sailboard performance is a complex and difficult research task, and there are few projects in this research field...
Fan affinity laws from a collision model
Bhattacharjee, Shayak
2012-01-01
The performance of a fan is usually estimated from hydrodynamical considerations. The calculations are long and involved and the results are expressed in terms of three affinity laws. In this work we use kinetic theory to attack this problem. A hard sphere collision model is used, and subsequently a correction to account for the flow behaviour of air is incorporated. Our calculations prove the affinity laws and provide numerical estimates of the air delivery, thrust and drag on a rotating fan.
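The three affinity laws referred to above state that, at fixed impeller diameter, volumetric flow scales linearly with rotational speed, pressure rise with its square, and power with its cube. A small sketch (the numeric operating point is invented for illustration):

```python
def fan_affinity(q1, h1, p1, n1, n2):
    """Classical fan affinity laws at fixed diameter, for a speed change
    from n1 to n2: flow ~ N, pressure ~ N^2, power ~ N^3."""
    r = n2 / n1
    return q1 * r, h1 * r**2, p1 * r**3

# doubling fan speed from 1000 to 2000 rpm (hypothetical baseline values)
q2, h2, p2 = fan_affinity(q1=100.0, h1=50.0, p1=10.0, n1=1000.0, n2=2000.0)
# flow doubles, pressure quadruples, power increases eightfold
```

The cubic power law is why small speed reductions yield large energy savings in fan and pump systems.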
Tao, Yi; Zhang, Yufeng; Wang, Yi; Cheng, Yiyu
2013-06-27
A novel kind of immobilized-enzyme affinity selection strategy based on hollow fibers has been developed for screening inhibitors from extracts of medicinal plants. Lipases from porcine pancreas were adsorbed onto the surface of polypropylene hollow fibers to form a stable matrix for ligand fishing, called hollow-fiber-based affinity selection (HF-AS). A variety of factors related to binding capability, including enzyme concentration, incubation time, temperature, buffer pH and ionic strength, were optimized using a known lipase inhibitor, hesperidin. The proposed approach was applied to screening potential lipase-bound ligands from extracts of lotus leaf, followed by rapid characterization of the active compounds using high-performance liquid chromatography-mass spectrometry. Three flavonoids, quercetin-3-O-β-D-arabinopyranosyl-(1→2)-β-D-galactopyranoside, quercetin-3-O-β-D-glucuronide and kaempferol-3-O-β-D-glucuronide, were identified as lipase inhibitors by the proposed HF-AS approach. Our findings suggest that hollow-fiber-based affinity selection could be a rapid and convenient approach for drug discovery from natural product resources.
Directory of Open Access Journals (Sweden)
Steven J. Gortler
2013-12-01
We study the properties of affine rigidity of a hypergraph and prove a variety of fundamental results. First, we show that affine rigidity is a generic property (i.e., it depends only on the hypergraph, not the particular embedding). Then we prove that a graph is generically neighborhood affinely rigid in d-dimensional space if it is (d+1)-vertex-connected. We also show that neighborhood affine rigidity of a graph implies universal rigidity of its squared graph. Our results, and affine rigidity more generally, have natural applications in point registration and localization, as well as connections to manifold learning.
[Cell-ELA-based determination of binding affinity of DNA aptamer against U87-EGFRvIII cell].
Tan, Yan; Liang, Huiyu; Wu, Xidong; Gao, Yubo; Zhang, Xingmei
2013-05-01
A15, a DNA aptamer with binding specificity for U87 glioma cells stably overexpressing the epidermal growth factor receptor variant III (U87-EGFRvIII), was generated by cell systematic evolution of ligands by exponential enrichment (cell-SELEX) using a random nucleotide library. Subsequently, we established a cell enzyme-linked assay (cell-ELA) to determine the affinity of A15 relative to an EGFR antibody. We used A15 as a detection probe and cultured U87-EGFRvIII cells as targets. Our data indicate that the equilibrium dissociation constant (Kd) of A15 was below 100 nmol/L, an affinity similar to that of an EGFR antibody for U87-EGFRvIII. We demonstrated that the cell-ELA is a useful method for determining the equilibrium dissociation constants (Kd) of aptamers generated by cell-SELEX.
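A Kd of this kind is typically extracted by fitting the saturation-binding model S = Bmax·[L]/(Kd + [L]) to titration data. A minimal sketch with synthetic data (the concentrations and the true Kd below are invented for illustration; this is not the paper's assay pipeline):

```python
import numpy as np

def fit_kd(conc, signal, kd_grid=None):
    """Estimate Kd from a saturation-binding curve S = Bmax*L/(Kd + L).
    For each trial Kd the optimal Bmax has a closed least-squares form,
    so a 1-D grid search over Kd suffices (no nonlinear optimizer needed)."""
    if kd_grid is None:
        kd_grid = np.geomspace(1e-3, 1e4, 20000)
    best = None
    for kd in kd_grid:
        x = conc / (kd + conc)            # model shape for this trial Kd
        bmax = (x @ signal) / (x @ x)     # closed-form optimal amplitude
        sse = np.sum((signal - bmax * x) ** 2)
        if best is None or sse < best[0]:
            best = (sse, kd, bmax)
    return best[1], best[2]

# synthetic titration: true Kd = 50 nM, Bmax = 1.0 (illustrative values)
L = np.array([1.0, 3.0, 10.0, 30.0, 100.0, 300.0, 1000.0])
S = 1.0 * L / (50.0 + L)
kd, bmax = fit_kd(L, S)
```

Profiling out Bmax analytically reduces the two-parameter fit to a one-dimensional search, which is robust for noisy small datasets.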
REPRESENTATIONS OF AFFINE HECKE ALGEBRAS OF TYPE G2
Institute of Scientific and Technical Information of China (English)
Xi Nanhua
2009-01-01
Let k be a field and q a nonzero element in k such that the square roots of q are in k. We use Hq to denote an affine Hecke algebra over k of type G2 with parameter q. The purpose of this paper is to study representations of Hq by using based rings of two-sided cells of an affine Weyl group W of type G2. We shall give the classification of irreducible representations of Hq. We also remark that a calculation in [11] actually shows that Theorem 2 in [1] needs a modification, a fact known to Grojnowski and Tanisaki for a long time. In this paper we also show an interesting relation between Hq and a Hecke algebra corresponding to a certain Coxeter group. Apparently the idea in this paper works for all affine Weyl groups, but that is the theme of another paper.
Shiraki, Ryoji; Brantley, Susan L.
1995-04-01
Three affinity-based rate models based upon physical growth mechanisms were used to fit surface-controlled precipitation rate data for calcite, obtained with a continuously stirred tank reactor in NaOH-CaCl2-CO2-H2O solutions at 100 °C and 100 bars total pressure between pH 6.38 and 6.98. At higher stirring speeds, when a_H2CO3* was smaller than 2.33 × 10⁻³, the rate showed a parabolic dependence upon exp(ΔG/RT) for exp(ΔG/RT) < 1.72 and followed a rate law based upon the assumption that surface nucleation is rate-limiting above this value. When a_H2CO3* was greater than 5.07 × 10⁻³, the rate showed a linear dependence upon exp(ΔG/RT), suggesting growth by a simple surface-adsorption mechanism. The rates of these three mechanisms at 100 °C can be expressed by the following equations (R_ppt in mol cm⁻² s⁻¹): (spiral growth) R_ppt = 10^(−9.00±0.15) [exp(ΔG/RT) − 1]^(1.93±0.14); (adsorption) R_ppt = 10^(−8.64±0.07) [exp(ΔG/RT) − 1]^(1.09±0.10); (surface nucleation) R_ppt = 10^(−7.28±0.49) exp[−(2.36±0.21)/(ΔG/RT)]. The mechanistic model of Plummer et al. (1978), given by R_net = k1·a_H+ + k2·a_H2CO3* + k3·a_H2O − k4·a_Ca2+·a_HCO3−, also describes the precipitation rate when growth followed the spiral-growth equation. The rate constant for precipitation, k4, ranges between 7.08 × 10⁻⁴ and 1.01 × 10⁻³ mol cm⁻² s⁻¹ in the a_H2CO3* range studied. This work shows that precipitation at 100 °C in the spiral-growth regime is well fit both by the mechanistic model of Plummer et al. (1978), based on multiple elementary reactions, and by a model derived for growth at screw dislocations. Outside the regime of spiral growth, however, the model of Plummer et al. (1978) fails, suggesting that different elementary reactions control growth in the adsorption or two-dimensional nucleation regimes. However, the model of Plummer et al. (1978), based upon individual elementary reactions, accurately predicts both dissolution and precipitation of calcite under certain conditions; tests of the affinity-based models
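Each of the three fitted laws is a power-law or exponential function of the dimensionless affinity ΔG/RT. They can be evaluated numerically as below (central coefficient values only, uncertainties dropped; the reading of the typeset equations in this record is a best-effort reconstruction, so treat the forms as illustrative):

```python
import math

def r_spiral(dg_rt):
    """Spiral growth: R = 10^-9.00 * (exp(dG/RT) - 1)^1.93, mol cm^-2 s^-1."""
    return 10.0 ** -9.00 * (math.exp(dg_rt) - 1.0) ** 1.93

def r_adsorption(dg_rt):
    """Surface adsorption: R = 10^-8.64 * (exp(dG/RT) - 1)^1.09."""
    return 10.0 ** -8.64 * (math.exp(dg_rt) - 1.0) ** 1.09

def r_nucleation(dg_rt):
    """Surface nucleation: R = 10^-7.28 * exp(-2.36 / (dG/RT))."""
    return 10.0 ** -7.28 * math.exp(-2.36 / dg_rt)

# all three rates grow with increasing supersaturation (dG/RT > 0)
rates = {f.__name__: f(0.5) for f in (r_spiral, r_adsorption, r_nucleation)}
```

The near-parabolic exponent (1.93) in the spiral-growth law and the near-linear exponent (1.09) in the adsorption law are what distinguish the two regimes in the fitted data.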
Petros, Amy K; Reddi, Amit R; Kennedy, Michelle L; Hyslop, Alison G; Gibney, Brian R
2006-12-11
Metal-ligand interactions are critical components of metalloprotein assembly, folding, stability, electrochemistry, and catalytic function. Research over the past three decades on the interaction of metals with peptide and protein ligands has progressed from the characterization of amino acid-metal and polypeptide-metal complexes to the design of folded protein scaffolds containing multiple metal cofactors. De novo metalloprotein design has emerged as a valuable tool both for the modular synthesis of these complex metalloproteins and for revealing the fundamental tenets of metalloprotein structure-function relationships. Our research has focused on using the coordination chemistry of de novo designed metalloproteins to probe the interactions of metal cofactors with protein ligands relevant to biological phenomena. Herein, we present a detailed thermodynamic analysis of Fe(II), Co(II), Zn(II), and [4Fe-4S](2+/+) binding to IGA, a 16-amino-acid peptide ligand containing four cysteine residues, H2N-KLCEGG-CIGCGAC-GGW-CONH2. These studies were conducted to delineate the inherent metal-ion preferences of this unfolded tetrathiolate peptide ligand as well as to evaluate the role of the solution pH in metal-peptide complex speciation. The [4Fe-4S](2+/+)-IGA complex is both an excellent peptide-based synthetic analogue of natural ferredoxins and flexible enough to accommodate mononuclear metal-ion binding. Incorporation of a single ferrous ion provides the FeII-IGA complex, a spectroscopic model of a reduced rubredoxin active site that possesses limited stability in aqueous buffers. As expected based on the Irving-Williams series and hard-soft acid-base theory, the Co(II) and Zn(II) complexes of IGA are significantly more stable than the Fe(II) complex. Direct proton competition experiments, coupled with determinations of the conditional dissociation constants over a range of pH values, fully define the thermodynamic stabilities and speciation of each MII-IGA complex. The...
Calculation of VPP basing on functional analyzing method
Institute of Scientific and Technical Information of China (English)
Bai Kaixiang; Wang Dexun; Han Jiurui
2007-01-01
The establishment and realization of a VPP calculation model based on functional analysis theory are discussed in this paper. The functional analysis method yields a theoretical model of the VPP calculation that can skillfully eliminate the influence of the sizes of the sail and board, so it can be regarded as a concise standard for sailboard VPP results. As a simple hydrodynamic model, the resistance on the board can be regarded as directly proportional to the square of the boat velocity. The boat velocities for six wind velocities (3 m/s-8 m/s) at angles of 25°-180° are obtained by calculation, which provides an important basis for sailing-route selection in upwind sailing.
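With resistance modeled as R = k·v², the steady state in which drive force balances resistance gives a closed-form boat speed, v = sqrt(F/k). A minimal sketch (the force and drag-coefficient values are invented for illustration):

```python
import math

def boat_speed(drive_force, k_resistance):
    """Steady-state speed when quadratic resistance balances drive:
    F = k * v^2  =>  v = sqrt(F / k)."""
    return math.sqrt(drive_force / k_resistance)

# e.g. 200 N of drive force against k = 2 kg/m of quadratic drag
v = boat_speed(200.0, 2.0)  # 10.0 m/s
```

In a full VPP the drive force itself depends on wind speed, course angle, and sail trim, so this balance is solved iteratively for each candidate route.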
First-Principle-Based Calculations of the Hugoniot of Cu
Institute of Scientific and Technical Information of China (English)
XIANG Shi-Kai; CAI Ling-Cang; JING Fu-Qian; WANG Shun-Jin
2005-01-01
The equation of state of face-centred-cubic (fcc) copper crystals at pressures up to 500 GPa and relative volumes down to 0.55 has been evaluated by using the full-potential linear muffin-tin orbital (FPLMTO) total-energy method combined with a mean-field model of the vibrational partition function. The mean field is constructed from the sum of all the pair potentials between the reference atom and the other atoms of the system. The calculated properties are in good agreement with the available shock-wave experimental measurements.
Space resection model calculation based on Random Sample Consensus algorithm
Liu, Xinzhu; Kang, Zhizhong
2016-03-01
Resection has been one of the most important topics in photogrammetry. It aims to recover the position and attitude of the camera at the shooting point. In some cases, however, the observations used for the calculation contain gross errors. This paper presents a robust algorithm using the RANSAC method with the DLT model, which effectively avoids the difficulty of determining initial values that arises when using the collinearity equations. The results also show that our strategy can reject gross errors and leads to an accurate and efficient way of obtaining the elements of exterior orientation.
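The core of such an approach is the RANSAC loop itself: repeatedly fit a model to a random minimal sample and keep the hypothesis supported by the most inliers. The sketch below illustrates the loop on a 2-D line-fitting stand-in (not the paper's DLT camera model, whose minimal sample and residual function differ):

```python
import random

def ransac_line(points, n_iter=200, tol=0.1, seed=0):
    """Minimal RANSAC: fit a line to random 2-point samples and keep the
    hypothesis with the largest inlier set. With a DLT camera model the
    same hypothesize-and-verify loop applies, only the minimal sample
    size and the residual change."""
    rng = random.Random(seed)
    best_inliers, model = [], None
    for _ in range(n_iter):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        if x1 == x2:
            continue  # degenerate sample, cannot define a slope
        a = (y2 - y1) / (x2 - x1)   # slope of the 2-point hypothesis
        b = y1 - a * x1
        inliers = [(x, y) for x, y in points if abs(y - (a * x + b)) < tol]
        if len(inliers) > len(best_inliers):
            best_inliers, model = inliers, (a, b)
    return model, best_inliers

# ten points on y = 2x + 1 plus two gross outliers (synthetic data)
pts = [(float(x), 2.0 * x + 1.0) for x in range(10)] + [(0.0, 50.0), (5.0, -40.0)]
model, inliers = ransac_line(pts)
```

Because the consensus score counts inliers rather than summing residuals, the two gross outliers have no influence on the selected model.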
Tasaki-Handa, Yuiko; Abe, Yukie; Ooi, Kenta; Tanaka, Mikiya; Wakisaka, Akihiro
2014-01-01
In this paper the exchange of lanthanide(III) ions (Ln(3+)) between a solution and a coordination polymer (CP) of di(2-ethylhexyl)phosphoric acid (Hdehp), [Ln(dehp)3], is studied. Kinetic and selectivity studies suggest that the polymeric network of [Ln(dehp)3] has different characteristics than the corresponding monomeric complex. The reaction rate is remarkably slow, requiring over 600 h to reach near-equilibrium, which can be explained by the polymeric crystalline structure and the high valency of Ln(3+). The affinity of the exchange reaction reaches a maximum for the Ln(3+) possessing an ionic radius 7% smaller than that of the central Ln(3+); therefore, the affinity of [Ln(dehp)3] is tunable through the choice of the central metal ion. Such unique affinity, which differs from that of the monomeric complex, can be explained by two factors: coordination preference and steric strain caused by the polymeric structure. The latter likely becomes predominant for Ln(3+) exchange when the ionic radius of the ion in solution is smaller than that of the original Ln(3+) by more than 7%. Structural studies suggest that the incoming Ln(3+) forms a new phase through an exchange reaction, and this could plausibly cause the structural strain.
Ray-based calculations of laser backscatter in ICF targets
Strozzi, D J; Hinkel, D E; Froula, D H; London, R A; Callahan, D A
2008-01-01
A steady-state model for Brillouin and Raman backscatter along a laser ray path is presented. The daughter plasma waves are treated in the strong damping limit, and have amplitudes given by the (linear) kinetic response to the ponderomotive drive. Pump depletion, inverse-bremsstrahlung damping, bremsstrahlung emission, Thomson scattering off density fluctuations, and whole-beam focusing are included. The numerical code Deplete, which implements this model, is described. The model is compared with traditional linear gain calculations, as well as "plane-wave" simulations with the paraxial propagation code pF3D. Comparisons with Brillouin-scattering experiments at the Omega Laser Facility show that laser speckles greatly enhance the reflectivity over the Deplete results. An approximate upper bound on this enhancement is given by doubling the Deplete coupling coefficient. Analysis with Deplete of an ignition design for the National Ignition Facility (NIF), with a peak radiation temperature of 285 eV, shows enco...
Inverse boundary element calculations based on structural modes
DEFF Research Database (Denmark)
Juhl, Peter Møller
2007-01-01
The inverse problem of calculating the flexural velocity of a radiating structure of a general shape from measurements in the field is often solved by combining a Boundary Element Method with the Singular Value Decomposition and a regularization technique. In their standard form these methods solve for the unknown normal velocities of the structure at the relatively large number of nodes in the numerical model. Effectively, the regularization technique smoothes the solution spatially, since a fast spatial variation is associated with high-index singular values, which are filtered out or damped in the regularization. Hence, the effective number of degrees of freedom in the model is often much lower than the number of nodes in the model. The present paper deals with an alternative formulation possible for the subset of radiation problems in which a (structural) modal expansion is known for the structure...
Study on Blur and Affine Combined Invariants Based on Zernike Moment
Institute of Scientific and Technical Information of China (English)
蔡小帅; 张荣国; 李富萍; 刘小君
2014-01-01
Zernike moments, as shape descriptors, have been widely used in image feature extraction and pattern recognition; they have low information redundancy and are insensitive to noise. To improve the shape-description capability for images degraded by combined blur and affine transformation, a new shape descriptor based on Zernike moments is proposed. A normalization method is used to construct affine invariants of the Zernike moments, and combined blur-affine invariants of the Zernike moments are then obtained with the help of the blur invariants. The combined moment invariants are used as a shape descriptor for the shape features of images and are compared, in terms of relative error, with the combined affine and blur invariants based on geometric moments. Experimental results show that the combined blur-affine invariants of Zernike moments provide better shape description and invariance under combined degradation, as well as better robustness to noise.
Institute of Scientific and Technical Information of China (English)
BO Chun-Miao; GONG Bo-Lin; HU Wen-Zhi
2008-01-01
Three hydrophilic immobilized metal affinity chromatographic packings for HPLC have been synthesized by chemical modification of 3.0 μm monodisperse non-porous poly(glycidyl methacrylate-co-ethylene dimethacrylate) (PGMMEDMA) beads. The retention behavior of proteins on the metal-ion-chelated columns loaded with copper(II), nickel(II) and zinc(II) ions was studied. The effect of pH on protein retention was investigated on both the naked and the metal-ion-chelated columns in the range from 4.0 to 9.0. Four proteins were quickly separated within 3.0 min by linear gradient elution at a flow rate of 3.0 mL/min using the synthesized Ni2+-IDA (iminodiacetic acid) packings. The separation time was shorter than for other immobilized metal affinity chromatography columns reported in the literature. Purification of lysozyme from egg white and of trypsin from commercially available trypsin was performed on the naked-IDA and Cu2+-IDA columns, respectively. The purities of the purified trypsin and lysozyme were more than 92% and 95%, respectively.
Säll, Anna; Sjöholm, Kristoffer; Waldemarson, Sofia; Happonen, Lotta; Karlsson, Christofer; Persson, Helena; Malmström, Johan
2015-11-06
Disease and death caused by bacterial infections are global health problems. Effective bacterial strategies are required to promote survival and proliferation within a human host, and it is important to explore how this adaptation occurs. However, the detection and quantification of bacterial virulence factors in complex biological samples are technically demanding challenges. These can be addressed by combining antibody-based targeted affinity enrichment with the sensitivity of liquid chromatography-selected reaction monitoring mass spectrometry (LC-SRM MS). However, many virulence factors have evolved properties that make specific detection by conventional antibodies difficult. Here we present an antibody format that is particularly well suited for detection and analysis of immunoglobulin G (IgG)-binding virulence factors. As proof of concept, we have generated single-chain fragment variable (scFv) antibodies that specifically target the IgG-binding surface proteins M1 and H of Streptococcus pyogenes. The binding ability of the developed scFv is demonstrated against both the recombinant soluble proteins M1 and H and the intact surface proteins on a wild-type S. pyogenes strain. Additionally, the capacity of the developed scFv antibodies to enrich their target proteins from both simple and complex backgrounds, thereby allowing for detection and quantification with LC-SRM MS, was demonstrated. We have established a workflow that allows for affinity enrichment of bacterial virulence factors.
Pashov, A D; Calvez, T; Gilardin, L; Maillère, B; Repessé, Y; Oldenburg, J; Pavlova, A; Kaveri, S V; Lacroix-Desmazes, S
2014-03-01
Forty per cent of haemophilia A (HA) patients have missense mutations in the F8 gene. Yet, all patients with identical mutations are not at the same risk of developing factor VIII (FVIII) inhibitors. In severe HA patients, human leucocyte antigen (HLA) haplotype was identified as a risk factor for onset of FVIII inhibitors. We hypothesized that missense mutations in endogenous FVIII alter the affinity of the mutated peptides for HLA class II, thus skewing FVIII-specific T-cell tolerance and increasing the risk that the corresponding wild-type FVIII-derived peptides induce an anti-FVIII immune response during replacement therapy. Here, we investigated whether affinity for HLA class II of wild-type FVIII-derived peptides that correspond to missense mutations described in the Haemophilia A Mutation, Structure, Test and Resource database is associated with inhibitor development. We predicted the mean affinity for 10 major HLA class II alleles of wild-type FVIII-derived peptides that corresponded to 1456 reported cases of missense mutations. Linear regression analysis confirmed a significant association between the predicted mean peptide affinity and the mutation inhibitory status (P = 0.006). Significance was lost after adjustment on mutation position on FVIII domains. Although analysis of the A1-A2-A3-C1 domains yielded a positive correlation between predicted HLA-binding affinity and inhibitory status (OR = 0.29 [95% CI: 0.14-0.60] for the high affinity tertile, P = 0.002), the C2 domain-restricted analysis indicated an inverse correlation (OR = 3.56 [1.10-11.52], P = 0.03). Our data validate the importance of the affinity of FVIII peptides for HLA alleles to the immunogenicity of therapeutic FVIII in patients with missense mutations.
QED Based Calculation of the Fine Structure Constant
Energy Technology Data Exchange (ETDEWEB)
Lestone, John Paul [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2016-10-13
Quantum electrodynamics is complex and its associated mathematics can appear overwhelming for those not trained in this field. Here, semi-classical approaches are used to obtain a more intuitive feel for what causes electrostatics, and the anomalous magnetic moment of the electron. These intuitive arguments lead to a possible answer to the question of the nature of charge. Virtual photons, with a reduced wavelength of λ, are assumed to interact with isolated electrons with a cross section of πλ². This interaction is assumed to generate time-reversed virtual photons that are capable of seeking out and interacting with other electrons. This exchange of virtual photons between particles is assumed to generate and define the strength of electromagnetism. With the inclusion of near-field effects the model presented here gives a fine structure constant of ~1/137 and an anomalous magnetic moment of the electron of ~0.00116. These calculations support the possibility that near-field corrections are the key to understanding the numerical value of the dimensionless fine structure constant.
Coupled-cluster based basis sets for valence correlation calculations
Claudino, Daniel; Gargano, Ricardo; Bartlett, Rodney J.
2016-03-01
Novel basis sets are generated that target the description of valence correlation in atoms H through Ar. The new contraction coefficients are obtained according to the Atomic Natural Orbital (ANO) procedure from CCSD(T) (coupled-cluster singles and doubles with perturbative triples correction) density matrices, starting from the primitive functions of Dunning et al. [J. Chem. Phys. 90, 1007 (1989); ibid. 98, 1358 (1993); ibid. 100, 2975 (1993)] (correlation consistent polarized valence X-tuple zeta, cc-pVXZ). The exponents of the primitive Gaussian functions are subject to uniform scaling in order to ensure satisfaction of the virial theorem for the corresponding atoms. These new sets, named ANO-VT-XZ (Atomic Natural Orbital Virial Theorem X-tuple Zeta), have the same number of contracted functions as their cc-pVXZ counterparts in each subshell. The performance of these basis sets is assessed by the evaluation of the contraction errors in four distinct computations: correlation energies in atoms, probing the density in different regions of space via ⟨rⁿ⟩ expectation values (−3 ≤ n ≤ 3) in atoms, correlation energies in diatomic molecules, and the quality of fitting potential energy curves as measured by spectroscopic constants. All energy calculations with ANO-VT-QZ have contraction errors within "chemical accuracy" of 1 kcal/mol, which is not true for cc-pVQZ, suggesting some improvement compared to the correlation consistent series of Dunning and co-workers.
Energy Technology Data Exchange (ETDEWEB)
Politi, Regina [Laboratory for Molecular Modeling, Division of Chemical Biology and Medicinal Chemistry, University of North Carolina, Chapel Hill, NC 27599 (United States); Department of Environmental Sciences and Engineering, University of North Carolina, Chapel Hill, NC 27599 (United States); Rusyn, Ivan, E-mail: iir@unc.edu [Department of Environmental Sciences and Engineering, University of North Carolina, Chapel Hill, NC 27599 (United States); Tropsha, Alexander, E-mail: alex_tropsha@unc.edu [Laboratory for Molecular Modeling, Division of Chemical Biology and Medicinal Chemistry, University of North Carolina, Chapel Hill, NC 27599 (United States)
2014-10-01
The thyroid hormone receptor (THR) is an important member of the nuclear receptor family that can be activated by endocrine disrupting chemicals (EDC). Quantitative Structure–Activity Relationship (QSAR) models have been developed to facilitate the prioritization of THR-mediated EDC for the experimental validation. The largest database of binding affinities available at the time of the study for ligand binding domain (LBD) of THRβ was assembled to generate both continuous and classification QSAR models with an external accuracy of R² = 0.55 and CCR = 0.76, respectively. In addition, for the first time a QSAR model was developed to predict binding affinities of antagonists inhibiting the interaction of coactivators with the AF-2 domain of THRβ (R² = 0.70). Furthermore, molecular docking studies were performed for a set of THRβ ligands (57 agonists and 15 antagonists of LBD, 210 antagonists of the AF-2 domain, supplemented by putative decoys/non-binders) using several THRβ structures retrieved from the Protein Data Bank. We found that two agonist-bound THRβ conformations could effectively discriminate their corresponding ligands from presumed non-binders. Moreover, one of the agonist conformations could discriminate agonists from antagonists. Finally, we have conducted virtual screening of a chemical library compiled by the EPA as part of the Tox21 program to identify potential THRβ-mediated EDCs using both QSAR models and docking. We concluded that the library is unlikely to have any EDC that would bind to the THRβ. Models developed in this study can be employed either to identify environmental chemicals interacting with the THR or, conversely, to eliminate the THR-mediated mechanism of action for chemicals of concern. - Highlights: • This is the largest curated dataset for ligand binding domain (LBD) of the THRβ. • We report the first QSAR model for antagonists of AF-2 domain of THRβ. • A combination of QSAR and docking enables
Vertical emission profiles for Europe based on plume rise calculations
Bieser, J.; Aulinger, A.; Matthias, V.; Quante, M.; Denier Van Der Gon, H.A.C.
2011-01-01
The vertical allocation of emissions has a major impact on results of Chemistry Transport Models. However, in Europe it is still common to use fixed vertical profiles based on rough estimates to determine the emission height of point sources. This publication introduces a set of new vertical profile
Chen, Yaqi; Chen, Zhui; Wang, Yi
2015-01-01
Screening and identifying active compounds from traditional Chinese medicine (TCM) and other natural products plays an important role in drug discovery. Here, we describe a magnetic beads-based multi-target affinity selection-mass spectrometry approach for screening bioactive compounds from natural products. Key steps and parameters including activation of magnetic beads, enzyme/protein immobilization, characterization of functional magnetic beads, screening and identifying active compounds from a complex mixture by LC/MS, are illustrated. The proposed approach is rapid and efficient in screening and identification of bioactive compounds from complex natural products.
Institute of Scientific and Technical Information of China (English)
WANG Aijun; AO Qiang; HE Qing; GONG Xiaoming; GONG Kai; GONG Yandao; ZHAO Nanming; ZHANG Xiufang
2006-01-01
Neural stem cells (NSCs) are currently considered as powerful candidate seeding cells for regeneration of both spinal cords and peripheral nerves. In this study, NSCs derived from fetal rat cortices were co-cultured with chitosan to evaluate the cell affinity of this material. The results showed that NSCs grew and proliferated well on chitosan films and most of them differentiated into neuron-like cells after 4 days of culture. Then, molded and braided chitosan conduits were fabricated and characterized for their cytotoxicity, swelling, and mechanical properties. Both types of conduits had no cytotoxic effects on fibroblasts (L929 cells) or neuroblastoma (Neuro-2a) cells. The molded conduits are much softer and more flexible while the braided conduits possess much better mechanical properties, which suggests different potential applications.
Lipani, Luca; Odadzic, Dalibor; Weizel, Lilia; Schwed, Johannes-Stephan; Sadek, Bassem; Stark, Holger
2014-10-30
The histamine H3 receptor (H3R) plays a role in cognitive and memory processes and is involved in different neurological disorders, including Alzheimer's disease, schizophrenia, and narcolepsy. Therefore, several hH3R antagonists/inverse agonists have entered clinical phases for a broad spectrum of mainly centrally occurring diseases. However, many other promising candidates failed due to their pharmacokinetic profile, mostly because of their strong lipophilicity accompanied by low solubility. Analysis of previous potential H3R-selective antagonists/inverse agonists, e.g. pitolisant, revealed promising results concerning physicochemical properties and drug-likeness. Herein, a series of new hH3R ligands 8-20, consisting of piperidin-1-yl or piperidin-1-yl-propoxyphenyl coupled to different uracil, thymine, and 5,6-dimethyluracil related moieties, were synthesized, evaluated for their binding properties at the hH3R, and assessed for different physicochemical and drug-likeness properties. Owing to the coupling at various positions of pyrimidine-2,4(1H,3H)-dione, affinity at hH3Rs and drug-likeness parameters have been improved. For instance, compound 9 showed, in addition to high affinity at the hH3R (pKi (hH3R) = 8.14), clog S, clog P, LE, LipE, and drug-likeness score values of -4.36, 3.47, 0.34, 4.63, and 1.54, respectively. Also, the methyl-substituted analog 17 (pKi (hH3R) = 8.15) revealed clog S, clog P, LE, LipE, and drug-likeness score values of -3.29, 2.47, 0.49, 5.52, and 1.76, respectively.
UAV-based NDVI calculation over grassland: An alternative approach
Mejia-Aguilar, Abraham; Tomelleri, Enrico; Asam, Sarah; Zebisch, Marc
2016-04-01
The Normalised Difference Vegetation Index (NDVI) is one of the most widely used indicators for monitoring and assessing vegetation in remote sensing. The index relies on the reflectance difference between the near infrared (NIR) and red light and is thus able to track variations of structural, phenological, and biophysical parameters for seasonal and long-term monitoring. Conventionally, NDVI is inferred from space-borne spectroradiometers, such as MODIS, with moderate ground resolution of up to 250 m. In recent years, a new generation of miniaturized radiometers and integrated hyperspectral sensors with high resolution became available. Such small and light instruments are particularly well suited to be mounted on airborne unmanned aerial vehicles (UAV) used for monitoring services, reaching ground sampling resolution on the order of centimetres. Nevertheless, such miniaturized radiometers and hyperspectral sensors are still very expensive and involve high upfront capital costs. Therefore, we propose an alternative, considerably cheaper method to calculate NDVI using a camera constellation consisting of two conventional consumer-grade cameras: (i) a modified Ricoh GR camera that acquires the NIR spectrum by removal of the internal infrared filter; a mounted optical filter additionally blocks all wavelengths below 700 nm. (ii) A Ricoh GR in RGB configuration using two optical filters to block wavelengths below 600 nm as well as NIR and ultraviolet (UV) light. To assess the merit of the proposed method, we carry out two comparisons. First, reflectance maps generated by the consumer-grade camera constellation are compared to reflectance maps produced with a hyperspectral camera (Rikola). All imaging data and reflectance maps are processed using the PIX4D software. In the second test, the NDVI at specific points of interest (POI) generated by the consumer-grade camera constellation is compared to NDVI values obtained by ground spectral measurements using a
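The index itself is a per-pixel band ratio, NDVI = (NIR − Red)/(NIR + Red). A minimal sketch of the computation on co-registered reflectance arrays (the band values below are invented for illustration):

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Per-pixel NDVI = (NIR - Red) / (NIR + Red); eps avoids
    division by zero over dark pixels."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + eps)

# three example pixels: dense vegetation, sparse vegetation, bare soil
nir_band = np.array([0.8, 0.5, 0.3])
red_band = np.array([0.1, 0.1, 0.3])
values = ndvi(nir_band, red_band)
```

NDVI values range from −1 to 1; healthy dense vegetation typically gives high positive values, while bare soil sits near zero.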
Dose calculation based on Cone Beam CT images
DEFF Research Database (Denmark)
Slot Thing, Rune
Cone beam CT (CBCT) imaging is frequently used in modern radiotherapy to ensure the proper positioning of the patient prior to each treatment fraction. With the increasing use of CBCT imaging for image guidance, interest has grown in exploring the potential use of these 3- or 4-D medical images … several other factors contributing to the image quality degradation, and while one should, theoretically, be able to obtain CT-like image quality from CBCT scans, clinical image quality is often very far from this ideal realisation. The present thesis describes the investigation of potential image quality improvements in clinical CBCT imaging achieved through post-processing of the clinical image data. A Monte Carlo model was established to predict patient-specific scattered radiation in CBCT imaging, based on anatomical information from the planning CT scan. This allowed the time-consuming Monte Carlo …
Environment-based pin-power reconstruction method for homogeneous core calculations
Energy Technology Data Exchange (ETDEWEB)
Leroyer, H.; Brosselard, C.; Girardi, E. [EDF R and D/SINETICS, 1 av du General de Gaulle, F92141 Clamart Cedex (France)
2012-07-01
Core calculation schemes are usually based on a classical two-step approach associated with assembly and core calculations. During the first step, infinite-lattice assembly calculations relying on a fundamental mode approach are used to generate cross-section libraries for PWR core calculations. This fundamental mode hypothesis may be questioned when dealing with loading patterns involving several types of assemblies (UOX, MOX), burnable poisons, control rods and burn-up gradients. This paper proposes a calculation method able to take into account the heterogeneous environment of the assemblies when using homogeneous core calculations and an appropriate pin-power reconstruction. This methodology is applied to MOX assemblies, computed within an environment of UOX assemblies. The new environment-based pin-power reconstruction is then used on various clusters of 3x3 assemblies showing burn-up gradients and UOX/MOX interfaces, and compared to reference calculations performed with APOLLO-2. The results show that UOX/MOX interfaces are much better calculated with the environment-based calculation scheme than with the usual pin-power reconstruction method. The power peak is always better located and calculated with the environment-based pin-power reconstruction method on every cluster configuration studied. This study shows that taking the environment into account in transport calculations can significantly improve the pin-power reconstruction, as long as it is consistent with the core loading pattern. (authors)
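For context, the usual pin-power reconstruction that the paper improves upon modulates the homogeneous (nodal) power shape with heterogeneous form factors from the infinite-lattice calculation. A minimal sketch of that baseline step, with illustrative numbers (the environment-based correction proposed in the paper is not reproduced here):

```python
import numpy as np

def reconstruct_pin_power(p_hom: np.ndarray, form_factors: np.ndarray) -> np.ndarray:
    """Classical pin-power reconstruction: modulate the homogeneous (nodal)
    power shape by lattice form factors, then renormalise so the
    assembly-integrated power is preserved."""
    p = p_hom * form_factors
    return p * (p_hom.sum() / p.sum())
```

The environment-based variant replaces the infinite-lattice form factors with factors computed in a representative heterogeneous surrounding, which is what improves the UOX/MOX interface results.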
Aeroelastic Calculations Based on Three-Dimensional Euler Analysis
Bakhle, Milind A.; Srivastava, Rakesh; Keith, Theo G., Jr.; Stefko, George L.
1998-01-01
This paper presents representative results from an aeroelastic code (TURBO-AE) based on an Euler/Navier-Stokes unsteady aerodynamic code (TURBO). Unsteady pressure, lift, and moment distributions are presented for a helical fan test configuration which is used to verify the code by comparison to two-dimensional linear potential (flat plate) theory. The results are for pitching and plunging motions over a range of phase angles. Good agreement with linear theory is seen for all phase angles except those near acoustic resonances. The agreement is better for pitching motions than for plunging motions. The reason for this difference is not understood at present. Numerical checks have been performed to ensure that solutions are independent of time step, converged to periodicity, and linearly dependent on amplitude of blade motion. The paper concludes with an evaluation of the current state of development of the TURBO-AE code and presents some plans for its further development and validation.
Glass viscosity calculation based on a global statistical modelling approach
Energy Technology Data Exchange (ETDEWEB)
Fluegel, Alex
2007-02-01
A global statistical glass viscosity model was developed for predicting the complete viscosity curve, based on more than 2200 composition-property data of silicate glasses from the scientific literature, including soda-lime-silica container and float glasses, TV panel glasses, borosilicate fiber wool and E-type glasses, low-expansion borosilicate glasses, glasses for nuclear waste vitrification, lead crystal glasses, binary alkali silicates, and various further compositions from over half a century. It is shown that, within a measurement series from a specific laboratory, the reported viscosity values are often over-estimated at higher temperatures due to alkali and boron oxide evaporation during the measurement and glass preparation, including data by Lakatos et al. (1972) and the recently published "High temperature glass melt property database for process modeling" by Seward et al. (2005). Similarly, in the glass transition range many experimental data of borosilicate glasses are reported too high due to phase separation effects. The developed global model corrects those errors. The model standard error was 9-17°C, with R^2 = 0.985-0.989. The 95% prediction confidence interval for glass in mass production largely depends on the glass composition of interest, the composition uncertainty, and the viscosity level. New insights into the mixed-alkali effect are provided.
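The abstract does not give the functional form of the global model, but a complete viscosity curve of the kind described is conventionally parameterised by the Vogel-Fulcher-Tammann (VFT) equation, log10 η = A + B/(T − T0). A sketch under that assumption, with illustrative coefficients (not fitted values from the paper):

```python
def vft_log_viscosity(T: float, A: float, B: float, T0: float) -> float:
    """Vogel-Fulcher-Tammann form: log10(viscosity) = A + B / (T - T0)."""
    return A + B / (T - T0)

def isokom_temperature(log_eta: float, A: float, B: float, T0: float) -> float:
    """Invert VFT: the temperature at which a given log10 viscosity is reached."""
    return T0 + B / (log_eta - A)
```

In a statistical model of this type, A, B, and T0 are regressed against composition, after which the full curve and isokom temperatures (e.g. the softening point) follow directly.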
The Calculation of Material Requirements Based on BOP in Assembly Production
Institute of Scientific and Technical Information of China (English)
ZENG Hong-xin; BIN Hong-zan
2005-01-01
A calculation method for material requirements based on BOP (Bill of Process) in assembly production is presented in this paper. Firstly, the BOP of assembly production is constructed. Then, the calculation method based on the BOP is brought forward for material requirements planning.
Directory of Open Access Journals (Sweden)
Tamara Bruna-Larenas
2012-01-01
We report the results of a search for model-based relationships between mu, delta, and kappa opioid receptor binding affinity and molecular structure for a group of molecules sharing a morphine structural core. The wave functions and local reactivity indices were obtained at the ZINDO/1 and B3LYP/6-31 levels of theory for comparison. New developments in the expression for the drug-receptor interaction energy allowed several local atomic reactivity indices to be included, such as the local electronic chemical potential, local hardness, and local electrophilicity. These indices, together with a new proposal for the ordering of the independent variables, were incorporated in the statistical study. We found and discuss several statistically significant relationships for mu, delta, and kappa opioid receptor binding affinity at both levels of theory. Some of the new local reactivity indices incorporated in the theory appear in several equations for the first time in the history of model-based equations. Interaction pharmacophores were generated for the mu, delta, and kappa receptors, and we discuss possible differences regulating binding and selectivity among the opioid receptor subtypes. Unlike purely statistical approaches, this study is able to provide microscopic insight into the mechanisms involved in the binding process.
Indian Academy of Sciences (India)
Brijesh Kumar Sriwastava; Subhadip Basu; Ujjwal Maulik
2015-10-01
Protein–protein interaction (PPI) site prediction helps to ascertain the interface residues that participate in interaction processes. Fuzzy support vector machine (F-SVM) is proposed as an effective method to solve this problem, and we show that the performance of the classical SVM can be enhanced with the help of an interaction-affinity based fuzzy membership function. The performances of both SVM and F-SVM are evaluated on the PPI databases of the Homo sapiens and E. coli organisms, and the statistical significance of the developed method over classical SVM and other fuzzy membership-based SVM methods available in the literature is estimated. Our membership function uses the residue-level interaction affinity scores for each pair of positive and negative sequence fragments. The average AUC scores in the 10-fold cross-validation experiments are measured as 79.94% and 80.48% for the Homo sapiens and E. coli organisms, respectively. On the independent test datasets, AUC scores of 76.59% and 80.17% are obtained for the two organisms, respectively. In almost all cases, the developed F-SVM method improves on the performance obtained by the corresponding classical SVM and the other classifiers available in the literature.
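The AUC figures quoted above can be computed directly from classifier scores via the Mann-Whitney statistic, without tracing an explicit ROC curve. A minimal sketch (the score lists are illustrative, not data from the paper):

```python
def auc(scores_pos, scores_neg):
    """AUC as the probability that a random positive outscores a random
    negative (Mann-Whitney U normalised by n_pos * n_neg); ties count half."""
    wins = sum((p > n) + 0.5 * (p == n) for p in scores_pos for n in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))
```

In k-fold cross-validation, this quantity is computed per held-out fold and averaged, which is how figures such as 79.94% arise.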
Rauthu, Subhash R; Shiao, Tze Chieh; André, Sabine; Miller, Michelle C; Madej, Élodie; Mayo, Kevin H; Gabius, Hans-Joachim; Roy, René
2015-01-01
The emerging significance of lectins for pathophysiological processes provides an incentive for the design of potent inhibitors. To this end, systematic assessment of the contributions to affinity and selectivity made by distinct types of synthetic tailoring of glycosides is a salient step, here taken for aglyconic modifications of two disaccharide core structures. First, we report the synthesis of seven N-linked lactosides and eight O-linked N-acetyllactosamines, each substituted with a 1,2,3-triazole unit, prepared by copper-catalyzed azide-alkyne cycloaddition (CuAAC). The totally regioselective β-D-(1 → 4) galactosylation of a 6-O-TBDPSi-protected N-acetylglucosamine acceptor provided efficient access to the N-acetyllactosamine precursor. The resulting compounds were then systematically tested for lectin reactivity in two binding assays of increasing biorelevance (inhibition of lectin binding to a surface-presented glycoprotein and to cell surfaces). Besides a plant toxin, we also screened the relative inhibitory potential against adhesion/growth-regulatory galectins (eight proteins in total). This type of modification yielded up to 2.5-fold enhancement for prototype proteins, with further increases for galectins-3 and -4. Moreover, the availability of (15)N-labeled proteins and full assignments enabled (1)H,(15)N HSQC-based measurements for human galectins-1, -3, and -7 against p-nitrophenyl lactopyranoside, a frequently tested standard inhibitor containing an aromatic aglycone. The measurements confirmed the highest affinity for galectin-3 and detected chemical shift differences in its hydrophobic core upon ligand binding, besides common alterations around the canonical contact site for the lactoside residue. Having determined what this type of core extension can accomplish in terms of affinity and selectivity, the combined strategy applied here should be instrumental for proceeding with defining structure-activity correlations at other bioinspired
Heegaard, Niels H H
2009-06-01
The journal Electrophoresis has greatly influenced my approaches to biomolecular affinity studies. The methods that I have chosen as my main tools to study interacting biomolecules, native gel and later capillary zone electrophoresis, have been the topic of numerous articles in Electrophoresis. Below, the role of the journal in the development and dissemination of these techniques and applications is reviewed. Many exhaustive reviews on affinity electrophoresis and affinity CE have been published in the last few years and are not in any way replaced by the present deliberations, which are focused on papers published by the journal.
Korkmaz, Nesrin; Aydın, Ali; Karadağ, Ahmet; Yanar, Yusuf; Maaşoğlu, Yelis; Şahin, Ertan; Tekin, Şaban
2017-02-01
Four compounds, two of them (2 and 3) completely new, of composition [Ni(edbea)Ag3(CN)5] (1), [Cu(edbea)Ag2(CN)4]·H2O (2), [Cd(edbea)Ag3(CN)5]·H2O (3) and [Cd(edbea)2][Ag(CN)2]2·H2O (4) {edbea: 2,2‧-(ethylenedioxy)bis(ethylamine)} were synthesized and characterized using elemental analysis, FT-IR, X-ray diffraction (4), thermal analysis, variable-temperature magnetic measurements (1 and 2) and biological techniques. The DNA/BSA binding affinities of 2 and 3 were evaluated by UV-Vis spectrophotometric titrations, ethidium bromide exchange experiments and electrophoretic mobility measurements. Compounds 1 and 4 have previously been characterized and shown to reduce the proliferation and migration of tumor cells; here, their precise mechanism of action on microbial organisms and their variable-temperature magnetic behavior were determined. The crystallographic analyses showed that 4 is built up of [Cd(edbea)2]2+ cations and [Ag2(CN)4]2- anions. The complexes demonstrated remarkable antibacterial (1-4), antifungal (1-4) and antiproliferative (2 and 3) activities against ten human bacterial pathogens, four plant pathogenic fungi and three tumor cell lines (HeLa, HT29, and C6), respectively. Therefore, our results strongly suggest that the observed changes in cell proliferation, cell morphology, Bcl-2, P53 and apoptosis are related to the pharmacological effects of the complexes, making them suitable candidates for clinical trials.
Calculation technique for special tank capacity based on setting-out
Fan, Bai-xing; Li, Zong-chun; Li, Guang-yun; Sun, Qing-wen
2008-12-01
Special tanks are important for energy source storage and transport. Based on a review of current calculation techniques for special tank capacity, a new principle and technique for special tank capacity calculation is discussed, which converts the capacity calculation into a setting-out problem and sets up a tank coordinate system. It completes all the work in an online control mode with a surveying robot. The new technique calculates tank capacity quickly and precisely and avoids the disadvantages of traditional measurement technology. Furthermore, a special software system was developed that can measure, calculate, adjust and create report tables automatically. Finally, the measurement accuracy and efficiency of the measured data are analyzed.
Theoretical proton affinity and fluoride affinity of nerve agent VX.
Bera, Narayan C; Maeda, Satoshi; Morokuma, Keiji; Viggiano, Al A
2010-12-23
Proton affinity and fluoride affinity of the nerve agent VX at all of its possible sites were calculated at the RI-MP2/cc-pVTZ//B3LYP/6-31G* and RI-MP2/aug-cc-pVTZ//B3LYP/6-31+G* levels, respectively. Protonation leads to various unique structures, with H(+) attached to oxygen, nitrogen, and sulfur atoms, among which the nitrogen site possesses the highest proton affinity of -ΔE ∼ 251 kcal/mol, suggesting that this is likely to be the major product. In addition, some H(2) and CH(4) dissociation channels as well as destruction channels have been found, among which the CH(4) + [Et-O-P(═O)(Me)-S-(CH(2))(2)-N(+)(iPr)═CHMe] product and the destruction product forming Et-O-P(═O)(Me)-SMe + CH(2)═N(+)(iPr)(2) are only 9 kcal/mol less stable than the most stable N-protonated product. For fluoridation, the S-P destruction channel to give Et-O-P(═O)(Me)(F) + [S-(CH(2))(2)-N-(iPr)(2)](-) is energetically the most favorable, with a fluoride affinity of -ΔE ∼ 44 kcal/mol. Various F(-) ion-molecule complexes are also found, with the one having F(-) interacting with two hydrogen atoms in different alkyl groups only 9 kcal/mol higher than the above destruction product. These results suggest that VX behaves quite differently from surrogate systems.
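For reference, a proton affinity of the kind tabulated here is the negative reaction energy for B + H+ → BH+; since a bare proton has zero electronic energy, PA reduces at this level to E(B) − E(BH+), converted from hartree to kcal/mol. A sketch with hypothetical electronic energies chosen only to reproduce the ~251 kcal/mol scale (not values from the paper):

```python
HARTREE_TO_KCAL_PER_MOL = 627.5095

def proton_affinity_kcal(E_base: float, E_protonated: float) -> float:
    """PA = -DeltaE for B + H+ -> BH+.  A bare proton carries no electrons,
    so its electronic energy is zero and PA = E(B) - E(BH+) at this level
    (zero-point and thermal corrections omitted for clarity)."""
    return (E_base - E_protonated) * HARTREE_TO_KCAL_PER_MOL

# Hypothetical energies (hartree) illustrating the scale of the N-site PA:
pa = proton_affinity_kcal(-100.0, -100.4)  # about 251 kcal/mol
```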
Ab Initio Calculation on Self-Assembled Base-Functionalized Single-Walled Carbon Nanotubes
Institute of Scientific and Technical Information of China (English)
SONG Chen; XIA Yue-Yuan; ZHAO Ming-Wen; LIU Xiang-Dong; LI Ji-Ling; LI Li-Juan; LI Feng; HUANG Bo-Da
2006-01-01
We perform ab initio calculations on self-assembled base-functionalized single-walled carbon nanotubes (SWNTs), which exhibit a quasi-1D 'ladder' structure. The optimized configuration in the ab initio calculation is very similar to that obtained from molecular dynamics simulation. We also calculate the electronic structures of the self-assembled base-functionalized SWNTs, which exhibit a distinct difference from the single-branch base-functionalized SWNT: a localized state lying just below the Fermi level, which may result from the coupling interaction between the bases accompanying the self-assembly behaviour.
Institute of Scientific and Technical Information of China (English)
LIU Yuan-feng; ZHAO Mei
2005-01-01
An algorithm based on the data-adaptive filtering characteristics of singular spectrum analysis (SSA) is proposed to denoise chaotic data. Firstly, the empirical orthogonal functions (EOFs) and principal components (PCs) of the signal are calculated; the signal is then reconstructed from the EOFs and PCs, and the optimal reconstruction order is chosen based on the singular spectrum to obtain the denoised signal. Noise in the signal degrades the precision with which maximal Lyapunov exponents can be calculated. The proposed denoising algorithm was applied to the calculation of the maximal Lyapunov exponents of two chaotic systems, the Henon map and the Logistic map. Numerical results show that this denoising algorithm improves the calculation precision of the maximal Lyapunov exponent.
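The EOF/PC reconstruction step described above can be sketched as basic singular spectrum analysis: embed the series in a trajectory (Hankel) matrix, truncate its SVD, and diagonally average back to a series. A minimal sketch (window length and rank are illustrative choices, not the paper's optimal order selection):

```python
import numpy as np

def ssa_denoise(x, window, rank):
    """Basic SSA: embed, truncate the SVD, diagonally average (Hankelise)."""
    n = len(x)
    k = n - window + 1
    # Trajectory matrix whose columns are lagged windows of the signal.
    X = np.column_stack([x[i:i + window] for i in range(k)])
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    # Keep only the leading `rank` components (the "signal" subspace).
    Xr = (U[:, :rank] * s[:rank]) @ Vt[:rank]
    # Diagonal averaging maps the rank-reduced matrix back to a series.
    out = np.zeros(n)
    counts = np.zeros(n)
    for j in range(k):
        out[j:j + window] += Xr[:, j]
        counts[j:j + window] += 1
    return out / counts
```

On a noisy sinusoid, keeping the leading pair of components (one oscillatory mode) already removes most of the noise, which is the sense in which SSA acts as a data-adaptive filter.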
Pochet, Lionel; Heus, Ferry; Jonker, Niels; Lingeman, Henk; Smit, August B; Niessen, Wilfried M A; Kool, Jeroen
2011-06-15
A magnetic-bead-based affinity-selection methodology for the screening of acetylcholine binding protein (AChBP) binders in mixtures and pure compound libraries was developed. The methodology works as follows: after in-solution incubation of His-tagged AChBP with potential ligands and subsequent addition of cobalt(II)-coated paramagnetic beads, the formed bead-AChBP-ligand complexes are fetched out of solution by injection and trapping in LC tubing with an external adjustable magnet. Non-binders are then washed to waste, followed by elution of ligands onto an SPE cartridge by flushing with a denaturing solution. Finally, SPE-LC-MS analysis is performed to identify the ligands. The advantages of the current methodology are the in-solution incubation followed by immobilized-AChBP ligand trapping and the capability of using the magnetic bead system as mobile/online transportable affinity SPE material. The system was optimized and then successfully demonstrated for the identification of AChBP ligands injected as pure compounds and for the fishing of ligands from mixtures. The results obtained with AChBP as target protein demonstrated reliable discrimination between binders with pK(i) values ranging from at least 6.26 to 8.46 and non-binders.
Paul, Tanima; Chatterjee, Saptarshi; Bandyopadhyay, Arghya; Chattopadhyay, Dwiptirtha; Basu, Semanti; Sarkar, Keka
2015-08-18
Surface-functionalized adsorbent particles in combination with magnetic separation techniques have received considerable attention in recent years. Selective manipulation of such magnetic nanoparticles permits separation with high affinity in the presence of other suspended solids. Amylase is used extensively in food and allied industries. Purification of amylase from bacterial sources is a matter of concern because most of the industrial demand for amylase is met by microbial sources. Here we report a simple, cost-effective, one-pot purification technique for bacterial amylase directly from fermented broth of Bacillus megaterium utilizing starch-coated superparamagnetic iron oxide nanoparticles (SPION). SPION was prepared by the co-precipitation method and then functionalized by starch coating. The synthesized nanoparticles were characterized by transmission electron microscopy (TEM), a superconducting quantum interference device (SQUID), zeta potential measurements, and ultraviolet-visible (UV-vis) and Fourier-transform infrared (FTIR) spectroscopy. The starch-coated nanoparticles efficiently purified amylase from bacterial fermented broth with 93.22% recovery and 12.57-fold purification. Sodium dodecyl sulfate polyacrylamide gel electrophoresis (SDS-PAGE) revealed that the molecular mass of the purified amylase was 67 kDa, and native gel showed the retention of amylase activity even after purification. The optimum pH and temperature of the purified amylase were 7 and 50°C, respectively, and it was stable over a range of 20°C to 50°C. Hence, an improved one-pot bacterial amylase purification method was developed using starch-coated SPION.
Self-affinity and nonextensivity of sunspots
Energy Technology Data Exchange (ETDEWEB)
Moret, M.A., E-mail: mamoret@gmail.com [Programa de Modelagem Computacional, SENAI, Cimatec, Av. Orlando Gomes, 1845, Piatã, 41650-010 Salvador, Bahia (Brazil); UNEB, Rua Silveira Martins, 2555, Cabula, 41150-000 Salvador, Bahia (Brazil)
2014-01-24
In this paper we study the time series of sunspots by using two different approaches: analyzing its self-affine behavior and studying its distribution. The long-range correlation exponent α was calculated via Detrended Fluctuation Analysis, and the power-law scaling vanishes at time scales greater than 11 years. On the other hand, the distribution of the sunspots obeys a q-exponential decay that suggests non-extensive behavior. This observed characteristic invites an alternative interpretation of sunspot dynamics. The present findings lead us to propose a dynamic model of sunspot formation based on a nonlinear Fokker–Planck equation, whose dynamic process follows the generalized thermostatistical formalism.
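The exponent α mentioned above comes from Detrended Fluctuation Analysis: integrate the mean-subtracted series, remove a local linear trend in windows of size n, and read α as the log-log slope of the fluctuation function F(n). A minimal sketch of standard first-order DFA (the scale choices are illustrative):

```python
import numpy as np

def dfa_alpha(x, scales):
    """Estimate the DFA scaling exponent alpha from F(n) over the given scales."""
    y = np.cumsum(x - np.mean(x))  # integrated profile
    F = []
    for n in scales:
        nseg = len(y) // n
        ms = []
        for i in range(nseg):
            seg = y[i * n:(i + 1) * n]
            t = np.arange(n)
            coef = np.polyfit(t, seg, 1)          # local linear trend
            ms.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        F.append(np.sqrt(np.mean(ms)))
    # alpha is the slope of log F(n) versus log n.
    return np.polyfit(np.log(scales), np.log(F), 1)[0]
```

Uncorrelated noise yields α near 0.5, while persistent long-range correlations push α above 0.5; the loss of a single power law beyond 11 years is what signals the solar-cycle scale.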
Affine and degenerate affine BMW algebras: Actions on tensor space
Daugherty, Zajj; Virk, Rahbar
2012-01-01
The affine and degenerate affine Birman-Murakami-Wenzl (BMW) algebras arise naturally in the context of Schur-Weyl duality for orthogonal and symplectic quantum groups and Lie algebras, respectively. Cyclotomic BMW algebras, affine and cyclotomic Hecke algebras, and their degenerate versions are quotients. In this paper we explain how the affine and degenerate affine BMW algebras are tantalizers (tensor power centralizer algebras) by defining actions of the affine braid group and the degenerate affine braid algebra on tensor space and showing that, in important cases, these actions induce actions of the affine and degenerate affine BMW algebras. We then exploit the connection to quantum groups and Lie algebras to determine universal parameters for the affine and degenerate affine BMW algebras. Finally, we show that the universal parameters are central elements--the higher Casimir elements for orthogonal and symplectic enveloping algebras and quantum groups.
Statistical inference for discrete-time samples from affine stochastic delay differential equations
DEFF Research Database (Denmark)
Küchler, Uwe; Sørensen, Michael
2013-01-01
Statistical inference for discrete time observations of an affine stochastic delay differential equation is considered. The main focus is on maximum pseudo-likelihood estimators, which are easy to calculate in practice. A more general class of prediction-based estimating functions is investigated...
Energy Technology Data Exchange (ETDEWEB)
Song, Chan-Ho; Park, Seung-Kook; Park, Hee-Seong; Moon, Jei-kwon [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)
2014-10-15
KAERI is performing research to calculate coefficients of decommissioning work-unit productivity, in order to estimate the duration and cost of decommissioning work based on decommissioning activity experience data for KRR-2. KAERI calculates decommissioning costs and manages decommissioning activity experience data through systems such as the Decommissioning Information Management System (DECOMMIS), the Decommissioning Facility Characterization DB System (DEFACS), and the Decommissioning Work-unit Productivity Calculation System (DEWOCS). In particular, KAERI bases the data for calculating decommissioning costs on a coded work breakdown structure (WBS) built from the KRR-2 decommissioning activity experience data, and the defined WBS codes are used in each system to calculate decommissioning costs. In this paper, we present a program that can calculate decommissioning costs using the decommissioning experience of KRR-2, UCP, and other countries, through a mapping of similar target facilities between an NPP and KRR-2. This paper is organized as follows: Chapter 2 discusses the decommissioning work productivity calculation method and the mapping method for decommissioning target facilities used in the productivity calculation program. At KAERI, research on various decommissioning methodologies for domestic NPPs will be conducted in the near future. In particular, it is difficult to determine decommissioning costs because NPP facilities involve a large number of variables, such as the material, size, and radiological conditions of the target facility.
Directory of Open Access Journals (Sweden)
Shan Yang
2016-01-01
Power flow calculation and short-circuit calculation are the basis of theoretical research on distribution networks with inverter-based distributed generation. The similarity of the equivalent model for inverter-based distributed generation under normal and fault conditions of the distribution network, and the differences between power flow and short-circuit calculation, are analyzed in this paper. An integrated power flow and short-circuit calculation method for distribution networks with inverter-based distributed generation is then proposed. The proposed method makes the inverter-based distributed generation equivalent to an Iθ bus, which makes it suitable for calculating the power flow of a distribution network with current-limited inverter-based distributed generation. The low-voltage ride-through capability of inverter-based distributed generation can be considered as well. Finally, tests of power flow and short-circuit current calculation are performed on a 33-bus distribution network. The results from the proposed method are contrasted with those of the traditional method and of simulation, and the agreement verifies the effectiveness of the integrated method suggested in this paper.
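The Iθ-bus idea can be illustrated on a two-bus feeder: the inverter-based DG is modelled as a constant-current injection that offsets the load current in the branch. A toy sketch (the impedance and current values are illustrative; the paper's full integrated method on the 33-bus network is not reproduced here):

```python
def bus2_voltage(V1: complex, Z: complex, I_load: complex, I_dg: complex) -> complex:
    """Two-bus feeder: the branch carries the load current minus the DG
    injection (DG modelled as a constant-current source), and the bus-2
    voltage follows from the series impedance drop."""
    return V1 - Z * (I_load - I_dg)
```

When the DG injection matches the load current, the branch carries nothing and bus 2 sits at the source voltage, which is why current-limited inverters fit naturally into this formulation.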
Giovannoli, Cristina; Passini, Cinzia; Volpi, Giorgio; Di Nardo, Fabio; Anfossi, Laura; Baggiani, Claudio
2015-11-01
A suitable sample clean-up is a key point in the development of an analytical method. Peptide-based affinity media have recently gained attention for the selective extraction of defined target analytes from complex samples. In this paper we investigated the thermodynamic and kinetic binding properties of different stationary phases (Amberlite IRC-50, Lewatit CNP105, Toyopearl CM-650 M, porous silica gel beads and micrometric glass beads) functionalized with a hexapeptide sequence binding Ochratoxin A. The highest values of the equilibrium binding constant (Keq) and binding site concentration (Bmax) were obtained for Lewatit CNP105 (Keq: 98.1×10(6) M(-1), Bmax: 30.8 μmol/g), followed by Toyopearl and micrometric glass beads, whereas the worst performances were obtained with Amberlite IRC-50 and porous silica gel beads. The kinetic performance shows the same trend. These results highlight that the chemical nature of the surface plays a key role in the binding properties of solid supports used as affinity media for the selective extraction of well-defined target molecules. Finally, Lewatit CNP105 was compared with Amberlite IRC-50 as the solid support in SPE extraction of OTA from 14 wine samples fortified with OTA at the 2 and 4 μg l(-1) levels. The extracts were analyzed by HPLC with fluorescence detection (λexc 333 nm, λem 460 nm), showing no significant matrix effects, with a LOD and LOQ of 0.45 and 1.5 μg l(-1), respectively, and good recoveries between 71% and 108% for Amberlite IRC-50 and 91% and 101% for Lewatit CNP105. While both supports showed a statistically comparable extraction accuracy, a statistically significant difference was found in terms of extraction precision, confirming that the solid phase based on Lewatit CNP105 performs better than the solid phase based on Amberlite IRC-50.
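The Keq and Bmax values quoted above correspond to a one-site (Langmuir-type) binding isotherm, B = Bmax·Keq·[L]/(1 + Keq·[L]); at a free ligand concentration of 1/Keq the support is half-saturated. A sketch using the Lewatit CNP105 parameters from the abstract (the isotherm form itself is the standard one-site model, assumed here):

```python
def bound(L: float, Keq: float, Bmax: float) -> float:
    """One-site binding isotherm: B = Bmax * Keq * L / (1 + Keq * L)."""
    return Bmax * Keq * L / (1.0 + Keq * L)

# Lewatit CNP105 parameters reported in the abstract:
KEQ = 98.1e6   # M^-1
BMAX = 30.8    # umol/g
```

At [L] = 1/Keq (about 10 nM here), the bound amount is Bmax/2, i.e. 15.4 μmol/g, which is how such fitted constants translate into capacity at working concentrations.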
Bi, Xiaodong; Liu, Zhen
2014-01-01
Molecularly imprinted polymers (MIPs), as inexpensive and stable substitutes of antibodies, have shown great promise in immunoassays. Glycoproteins are of significant diagnostic value. To facilitate the application of MIPs in clinical diagnostics, a general and facile imprinting method toward glycoproteins oriented for an enzyme-linked immunosorbent assay (ELISA) in the form of a 96-well microplate is essential but has not been fully explored yet. In this study, a new method called boronate affinity-based oriented surface imprinting was proposed for facile preparation of glycoprotein-imprinted microplates. A template glycoprotein was first immobilized by a boronic acid-modified microplate through boronate affinity binding, and then, a thin layer of polyaniline was formed to cover the microplate surface via in-water self-copolymerization. After the template was removed by an acidic solution, 3D cavities that can rebind the template were fabricated on the microplate surface. Using horseradish peroxidase (HRP) as a model target, the effects of imprinting conditions as well as the properties and performance of the prepared MIPs were investigated. α-Fetoprotein (AFP)-imprinted microplate was then prepared, and thereby, a MIP-based ELISA method was established. The prepared MIPs exhibited several highly favorable features, including excellent specificity, widely applicable binding pH, superb tolerance for interference, high binding strength, fast equilibrium kinetics, and reusability. The MIP-based ELISA method was finally applied to the analysis of AFP in human serum. The result was in good agreement with that by radioimmunoassay, showing a promising prospect of the proposed method in clinical diagnostics.
Yakar, Ruya; Akten, Ebru Demet
2014-09-01
Novel high affinity compounds for human β2-adrenergic receptor (β2-AR) were searched among the clean drug-like subset of ZINC database consisting of 9,928,465 molecules that satisfy the Lipinski's rule of five. The screening protocol consisted of a high-throughput pharmacophore screening followed by an extensive amount of docking and rescoring. The pharmacophore model was composed of key features shared by all five inactive states of β2-AR in complex with inverse agonists and antagonists. To test the discriminatory power of the pharmacophore model, a small-scale screening was initially performed on a database consisting of 117 compounds of which 53 antagonists were taken as active inhibitors and 64 agonists as inactive inhibitors. Accordingly, 7.3% of the ZINC database subset (729,413 compounds) satisfied the pharmacophore requirements, along with 44 antagonists and 17 agonists. Afterwards, all these hit compounds were docked to the inactive apo form of the receptor using various docking and scoring protocols. Following each docking experiment, the best pose was further evaluated based on the existence of key residues for antagonist binding in its vicinity. After final evaluations based on the human intestinal absorption (HIA) and the blood brain barrier (BBB) penetration properties, 62 hit compounds have been clustered based on their structural similarity and as a result four scaffolds were revealed. Two of these scaffolds were also observed in three high affinity compounds with experimentally known Ki values. Moreover, novel chemical compounds with distinct structures have been determined as potential β2-AR drug candidates.
Study on the Cost Calculation of Local Fixed Telecom Network Based on Unbundled Network Elements
Institute of Scientific and Technical Information of China (English)
XU Liang; LIANG Xiong-jian; HUANG Xiu-qing
2005-01-01
In this paper, based on the actual conditions of local fixed telecom networks and on the realistic total-element long-run incremental cost method, practical methods for dividing the network into elements and for calculating the costs of network elements and services are given, to provide a reference for cost calculation in the telecom industry.
[Design of high performance DSP-based gradient calculation module for MRI].
Pan, Wenyu; Zhang, Fu; Luo, Hai; Zhou, Heqin
2011-05-01
A gradient calculation module based on a high-performance DSP was designed to meet the needs of a digital MRI spectrometer. According to user requirements, the module implements rotation transformation, pre-emphasis, shimming and other gradient calculation functions in a single DSP chip. It then outputs gradient waveform data for channels X, Y and Z and shimming data for channel B0. Experiments show that the design has good versatility and satisfies the functional, speed and accuracy requirements of MRI gradient calculation. It provides a practical gradient calculation solution for the development of digital spectrometers.
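The rotation-transformation step mentioned above maps logical gradient waveforms onto the physical axes with a 3x3 rotation matrix. A generic sketch of that operation (not the DSP firmware described in the paper):

```python
import numpy as np

def rotate_gradients(gx, gy, gz, r):
    """Logical-to-physical gradient rotation: apply the 3x3 rotation
    matrix r to every sample of the three gradient waveforms, as a
    gradient calculation module must do for oblique slices."""
    g = np.vstack([gx, gy, gz])  # shape (3, n_samples)
    return r @ g

# Identity rotation leaves an axial gradient waveform unchanged:
gx = np.array([0.0, 0.5, 1.0, 0.5, 0.0])
gy = np.zeros(5)
gz = np.zeros(5)
out = rotate_gradients(gx, gy, gz, np.eye(3))
```

A 90-degree rotation about z would move the same waveform onto the y channel, which is how oblique-slice gradients are synthesized from logical ones.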
Jung, Kwan Ho; Oh, Eun-Taex; Park, Heon Joo; Lee, Keun-Hyeung
2016-11-15
Developing fluorescent probes for monitoring intracellular Cu(+) is important for human health and disease, yet only a few types of receptors, covering a limited range of binding affinities for Cu(+), have been reported. In the present study, we report the first peptide receptor in a fluorescent probe for the detection of Cu(+). The dansyl-labeled tripeptide probe (Dns-LLC) formed a 1:1 complex with Cu(+) and showed a turn-on fluorescence response to Cu(+) in aqueous buffered solutions. The dissociation constant of Dns-LLC for Cu(+) was determined to be 12 fM, showing that Dns-LLC has a more potent binding affinity for Cu(+) than previously reported chemical probes. The binding-mode study showed that the thiol group of the peptide receptor plays a critical role in potent binding of Cu(+), and that the sulfonamide and amide groups of the probe may cooperate to form a complex with Cu(+). Dns-LLC detected Cu(+) selectively by a turn-on response among various biologically relevant metal ions, including Cu(2+) and Zn(2+). The selectivity of the peptide-based probe for Cu(+) was strongly dependent on the position of the cysteine residue in the peptide receptor part. The fluorescent peptide-based probe penetrated living RKO cells and successfully detected Cu(+) in the Golgi apparatus of live cells by a turn-on response. Given the growing interest in imaging Cu(+) in live cells, this novel peptide receptor of Cu(+) offers potential for developing a variety of fluorescent probes for Cu(+) in the field of copper biochemistry.
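For context, a dissociation constant fixes the occupancy curve of a 1:1 complex. A minimal sketch assuming a simple Langmuir model; the 12 fM value is taken from the abstract, everything else is illustrative:

```python
def fraction_bound(cu_free, kd):
    """Equilibrium fraction of probe bound in a 1:1 probe-Cu(+) complex,
    given the free Cu(+) concentration and the dissociation constant
    (both in mol/L); a simple Langmuir isotherm."""
    return cu_free / (kd + cu_free)

KD = 12e-15  # 12 fM, the value reported for Dns-LLC

# At free Cu(+) equal to KD, exactly half of the probe is bound:
half = fraction_bound(KD, KD)
print(half)  # 0.5
```

The femtomolar KD means the probe is essentially saturated at any physiologically plausible free-Cu(+) level well above that value.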
Model-Based Radar Power Calculations for Ultra-Wideband (UWB) Synthetic Aperture Radar (SAR)
2013-06-01
Traian Dogaru; ARL-TN-0548; June 2013. Fragmentary abstract: "…performance in complex scenarios. Among these scenarios are ground penetrating radar and forward-looking radar for landmine and improvised explosive …"
Kehlenbeck, Matthias; Breitner, Michael H.
Business users define calculated facts based on the dimensions and facts contained in a data warehouse. These business calculation definitions contain the knowledge of quantitative relations that is necessary for deep analyses and for the production of meaningful reports. The definitions are largely independent of implementation and organization, but no automated procedures exist to facilitate their exchange across organization and implementation boundaries. Each organization currently has to map its own business calculations to analysis and reporting tools separately. This paper presents an innovative approach based on standard Semantic Web technologies. The approach facilitates the exchange of business calculation definitions and allows their automatic linking to specific data warehouses through semantic reasoning. A novel standard proxy server which enables the immediate application of exchanged definitions is introduced. Benefits of the approach are shown in a comprehensive case study.
Assessing high affinity binding to HLA-DQ2.5 by a novel peptide library based approach
DEFF Research Database (Denmark)
Jüse, Ulrike; Arntzen, Magnus; Højrup, Peter
2011-01-01
Here we report on a novel peptide library based method for HLA class II binding motif identification. The approach is based on water-soluble HLA class II molecules and soluble dedicated peptide libraries. A high number of different synthetic peptides are competing to interact with a limited amount …
Matsunaga, Ken-Ichiro; Kimoto, Michiko; Hirao, Ichiro
2017-01-11
The novel evolutionary engineering method ExSELEX (genetic alphabet expansion for systematic evolution of ligands by exponential enrichment) provides high-affinity DNA aptamers that specifically bind to target molecules, by introducing an artificial hydrophobic base analogue as a fifth component into DNA aptamers. Here, we present a newer version of ExSELEX, using a library with completely randomized sequences consisting of five components: four natural bases and one unnatural hydrophobic base, 7-(2-thienyl)imidazo[4,5-b]pyridine (Ds). In contrast to the limited number of Ds-containing sequence combinations in our previous library, the increased complexity of the new randomized library could improve the success rates of high-affinity aptamer generation. To this end, we developed a sequencing method for each clone in the enriched library after several rounds of selection. Using the improved library, we generated a Ds-containing DNA aptamer targeting von Willebrand factor A1-domain (vWF) with significantly higher affinity (KD = 75 pM), relative to those generated by the initial version of ExSELEX, as well as that of the known DNA aptamer consisting of only the natural bases. In addition, the Ds-containing DNA aptamer was stabilized by introducing a mini-hairpin DNA resistant to nucleases, without any loss of affinity (KD = 61 pM). This new version is expected to consistently produce high-affinity DNA aptamers.
Calculation of the Stabilization Energies of Oxidatively Damaged Guanine Base Pairs with Guanine
Directory of Open Access Journals (Sweden)
Hiroshi Miyazawa
2012-06-01
DNA is constantly exposed to endogenous and exogenous oxidative stresses. Damaged DNA can cause mutations, which may increase the risk of developing cancer and other diseases. G:C→C:G transversions are caused by various oxidative stresses. 2,2,4-Triamino-5(2H)-oxazolone (Oz), guanidinohydantoin (Gh)/iminoallantoin (Ia) and spiroiminodihydantoin (Sp) are known products of oxidative guanine damage. These damaged bases can base pair with guanine and cause G:C→C:G transversions. In this study, the stabilization energies of these bases paired with guanine were calculated in vacuo and in water. The calculated stabilization energies of the Ia:G base pairs were similar to that of the native C:G base pair, and both base pairs have three hydrogen bonds. By contrast, the calculated stabilization energies of Gh:G, which forms two hydrogen bonds, were lower than those of the Ia:G base pairs, suggesting that the stabilization energy depends on the number of hydrogen bonds. In addition, the Sp:G base pairs were less stable than the Ia:G base pairs. Furthermore, the calculations showed that the Oz:G base pairs were less stable than the Ia:G, Gh:G and Sp:G base pairs, even though experimental results show that incorporation of guanine opposite Oz is more efficient than that opposite Gh/Ia and Sp.
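A base-pair stabilization energy is typically obtained as the energy of the pair minus the energies of the isolated bases. A sketch of that arithmetic with hypothetical total energies, not values from the study:

```python
HARTREE_TO_KCAL = 627.509  # kcal/mol per hartree

def stabilization_energy(e_pair, e_base1, e_base2):
    """Base-pair stabilization energy in kcal/mol from total electronic
    energies (hartree) of the pair and of the two isolated bases.
    A negative value means the pair is bound."""
    return (e_pair - e_base1 - e_base2) * HARTREE_TO_KCAL

# Hypothetical energies for an X:G pair (illustrative numbers only):
e_pair, e_x, e_g = -1021.3456, -479.1200, -542.1834
print(round(stabilization_energy(e_pair, e_x, e_g), 1))  # -26.5
```

Real calculations would additionally correct for basis-set superposition error and, for the in-water values, include a solvation model.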
Altomare, Alessandra; Fasoli, Elisa; Colzani, Mara; Parra, Ximena Maria Paredes; Ferrari, Marina; Cilurzo, Francesco; Rumio, Cristiano; Cannizzaro, Luca; Carini, Marina; Righetti, Pier Giorgio; Aldini, Giancarlo
2016-03-20
Bovine colostrum (BC), the initial milk secreted by the mammary gland immediately after parturition, is widely used for several health applications. We here propose an off-target method based on proteomic analysis to explain at molecular level the potential health benefits of BC. The method is based on the set-up of an exhaustive protein data bank of bovine colostrum, including the minor protein components, followed by a bioinformatic functional analysis. The proteomic approach based on ProteoMiner technology combined to a highly selective affinity chromatography approach for the immunoglobulins depletion, identified 1786 proteins (medium confidence; 634 when setting high confidence), which were then clustered on the basis of their biological function. Protein networks were then created on the basis of the biological functions or health claims as input. A set of 93 proteins involved in the wound healing process was identified. Such an approach also permits the exploration of novel biological functions of BC by searching in the database the presence of proteins characterized by innovative functions. In conclusion an advanced approach based on an in depth proteomic analysis is reported which permits an explanation of the wound healing effect of bovine colostrum at molecular level and allows the search of novel potential beneficial effects.
Directory of Open Access Journals (Sweden)
Frane Delaš
2012-03-01
In the present study, the effects of pH (5.5, 6.0 and 6.5) and of the concentration of the newly synthesized Schiff base 3-[2-aminophenylimino-(p-toluoyl)]-4-hydroxy-6-(p-tolyl)-2H-pyran-2-one on the reduction of the aflatoxin M1 (AFM1) concentration in raw milk contaminated with a known concentration of this toxin were investigated. Experiments were carried out at 4 °C over 35 days. At pH 5.5, a Schiff base concentration of 0.1 µmol/L reduced the concentration of AFM1 after 35 days by 55 %. At pH 6.5, however, the most effective concentration for reducing AFM1 was 0.5 µmol/L. The Schiff base was not effective at pH values of 7 or higher. The ability of the Schiff base to act as an antimycotoxigenic agent provides a new perspective for possibly using this compound to control AFM1 contamination in milk and to extend the shelf life of this food. The toxicity of the investigated Schiff base was assessed using brine shrimp (Artemia salina) larvae as a biological indicator of sensitivity to this chemical agent.
Fan Affinity Laws from a Collision Model
Bhattacharjee, Shayak
2012-01-01
The performance of a fan is usually estimated using hydrodynamical considerations. The calculations are long and involved and the results are expressed in terms of three affinity laws. In this paper we use kinetic theory to attack this problem. A hard sphere collision model is used, and subsequently a correction to account for the flow behaviour…
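The three affinity laws themselves are compact: at fixed impeller diameter and air density, volumetric flow scales linearly with rotational speed, pressure with its square, and shaft power with its cube. A sketch:

```python
def scale_fan(q1, p1, w1, n1, n2):
    """Apply the three fan affinity laws: for a speed change from n1 to
    n2 at constant diameter and density, flow scales with the speed
    ratio r, pressure with r**2 and shaft power with r**3."""
    r = n2 / n1
    return q1 * r, p1 * r**2, w1 * r**3

# Doubling the speed of a fan delivering 10 m^3/s at 500 Pa using 6 kW:
q2, p2, w2 = scale_fan(10.0, 500.0, 6.0, 1000, 2000)
print(q2, p2, w2)  # 20.0 2000.0 48.0
```

The cubic power law is why modest speed reductions yield large fan-energy savings.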
Energy Technology Data Exchange (ETDEWEB)
Zdenek Klika; Lenka Ambruzova; Ivana Sykorova; Jana Seidlerova; Ivan Kolomaznik [VSB-Technical University Ostrava, Ostrava (Czech Republic)
2009-10-15
The affinities of Ga and Ge in lignite were determined using sequential extraction (SE) and element affinity calculation (EAC) based on sink-float data. For this study a bulk lignite sample was fractionated into two sets. The first set of samples (A) consisted of different grain-size fractions; the second set (B) was prepared by density fractionation. Sequential extractions (1) were performed on both sets of fractions, with very good agreement between the determined organic element affinities (the OEA of Ga evaluated from the A data is 32 %, from the B data 35 %; the OEA of Ge evaluated from the A data is 31 % and from the B data 26 %). The data for the B lignite fractions were evaluated using two element affinity calculations: (a) the EAC (I) of Klika and Kolomazník (2) and (b) a newly prepared subroutine, EAC (II), based on the quantitative contents of lignite macerals and minerals. Good agreement was also obtained between the two methods (the OEA of Ga calculated by EAC (I) is 83 % and by EAC (II) 77 %; the OEA of Ge calculated by EAC (I) is 89 % and by EAC (II) 97 %). The significant differences between the organic element affinities of Ga and Ge evaluated by sequential extraction and by element affinity calculation based on sink-float data are discussed. 34 refs., 7 figs., 6 tabs.
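A common way to estimate organic affinity from sink-float data is a linear mixing model fitted across density fractions. The sketch below is a generic illustration with made-up numbers, not the paper's EAC (I) or EAC (II) procedures:

```python
import numpy as np

# Hypothetical sink-float data (not from the paper): ash mass fraction
# and Ge concentration (ppm) in each density fraction of a lignite.
ash = np.array([0.08, 0.15, 0.30, 0.55, 0.80])
ge = np.array([4.68, 4.40, 3.80, 2.80, 1.80])

# Linear mixing model c = c_org*(1 - ash) + c_min*ash: fitting c against
# ash recovers the organically bound and mineral-bound end members.
slope, intercept = np.polyfit(ash, ge, 1)
c_org = intercept          # extrapolated concentration at 0 % ash
c_min = intercept + slope  # extrapolated concentration at 100 % ash
print(round(c_org, 3), round(c_min, 3))  # 5.0 1.0

# Organic element affinity of a bulk sample with ash fraction A:
A = 0.25
oea = c_org * (1 - A) / (c_org * (1 - A) + c_min * A)
```

With these invented numbers the element sits overwhelmingly in the organic matter; real data scatter around the mixing line, which is why least squares is used.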
Minguzzi, E.
2016-11-01
We investigate spacetimes whose light cones could be anisotropic. We prove the equivalence of the structures: (a) Lorentz-Finsler manifold for which the mean Cartan torsion vanishes, (b) Lorentz-Finsler manifold for which the indicatrix (observer space) at each point is a convex hyperbolic affine sphere centered on the zero section, and (c) pair given by a spacetime volume and a sharp convex cone distribution. The equivalence suggests to describe (affine sphere) spacetimes with this structure, so that no algebraic-metrical concept enters the definition. As a result, this work shows how the metric features of spacetime emerge from elementary concepts such as measure and order. Non-relativistic spacetimes are obtained replacing proper spheres with improper spheres, so the distinction does not call for group theoretical elements. In physical terms, in affine sphere spacetimes the light cone distribution and the spacetime measure determine the motion of massive and massless particles (hence the dispersion relation). Furthermore, it is shown that, more generally, for Lorentz-Finsler theories non-differentiable at the cone, the lightlike geodesics and the transport of the particle momentum over them are well defined, though the curve parametrization could be undefined. Causality theory is also well behaved. Several results for affine sphere spacetimes are presented. Some results in Finsler geometry, for instance in the characterization of Randers spaces, are also included.
The Cutting Edge of Affinity Electrophoresis Technology
Directory of Open Access Journals (Sweden)
Eiji Kinoshita
2015-03-01
Affinity electrophoresis is an important technique that is widely used to separate and analyze biomolecules in the fields of biology and medicine. Both quantitative and qualitative information can be gained through affinity electrophoresis. Affinity electrophoresis can be applied through a variety of strategies, such as mobility shift electrophoresis, charge shift electrophoresis or capillary affinity electrophoresis. These strategies are based on changes in the electrophoretic patterns of biological macromolecules that result from interactions or complex-formation processes that induce changes in the size or total charge of the molecules. Nucleic acid fragments can be characterized through their affinity to other molecules, for example transcriptional factor proteins. Hydrophobic membrane proteins can be identified by means of a shift in the mobility induced by a charged detergent. The various strategies have also been used in the estimation of association/disassociation constants. Some of these strategies have similarities to affinity chromatography, in that they use a probe or ligand immobilized on a supported matrix for electrophoresis. Such methods have recently contributed to profiling of major posttranslational modifications of proteins, such as glycosylation or phosphorylation. Here, we describe advances in analytical techniques involving affinity electrophoresis that have appeared during the last five years.
The Cutting Edge of Affinity Electrophoresis Technology
Kinoshita, Eiji; Kinoshita-Kikuta, Emiko; Koike, Tohru
2015-01-01
Development of a DSP-based real-time position calculation circuit for a beta camera
Yamamoto, S; Kanno, I
2000-01-01
A digital signal processor (DSP)-based position calculation circuit was developed and tested for a beta camera. The previous position calculation circuit, which employed flash analog-to-digital (A-D) converters for A-D conversion and ratio calculation, produced significant line artifacts in the image due to the differential non-linearity of the A-D converters. The new position calculation circuit uses four A-D converters for A-D conversion of the analog signals from the position-sensitive photomultiplier tube (PSPMT). The DSP reads the A-D signals and calculates the ratios Xa/(Xa + Xb) and Ya/(Ya + Yb) on an event-by-event basis. The DSP also magnifies the image to fit the useful field of view (FOV) and rejects events outside the FOV. The line artifacts in the image were almost eliminated.
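The event-by-event ratio arithmetic the DSP performs is easy to illustrate. A sketch in which the signal values, the magnification factor and the FOV limits are all illustrative rather than the instrument's actual parameters:

```python
def position(xa, xb, ya, yb, fov_scale=1.0, fov_min=0.0, fov_max=1.0):
    """Event position from the four PSPMT anode signals, computed as
    x = Xa/(Xa+Xb), y = Ya/(Ya+Yb), then magnified by fov_scale.
    Events whose magnified position falls outside the useful FOV are
    rejected (None), mimicking the DSP's event rejection."""
    x = xa / (xa + xb) * fov_scale
    y = ya / (ya + yb) * fov_scale
    if not (fov_min <= x <= fov_max and fov_min <= y <= fov_max):
        return None  # event outside the field of view
    return x, y

print(position(300, 100, 250, 250))  # (0.75, 0.5)
```

Doing this division in software is what removes the differential-non-linearity artifacts of the old flash-converter ratio circuit.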
Asadi, Mozaffar; Asadi, Zahra; Zarei, Leila; Sadi, Somaye Barzegar; Amirghofran, Zahra
2014-12-10
Metal Schiff-base complexes show biological activity but they are usually insoluble in water so four new water-soluble metal Schiff base complexes of Na2[M(5-SO3-1,2-salben]; (5-SO3-1,2-salben denoted N,N'-bis(5-sulphosalicyliden)-1,2-diaminobenzylamine and M=Mg, Mn, Cu, Zn) were synthesized and characterized. The formation constants of the metal complexes were determined by UV-Vis absorption spectroscopy. The interaction of these complexes with bovine serum albumin (BSA) was studied by fluorescence spectroscopy. Type of quenching, binding constants, number of binding sites and binding stoichiometries were determined by fluorescence quenching method. The results showed that the mentioned complexes strongly bound to BSA. Thermodynamic parameters indicated that hydrophobic association was the major binding force and that the interaction was entropy driven and enthalpically disfavoured. The displacement experiment showed that these complexes could bind to the subdomain IIA (site I) of albumin. Furthermore the synchronous fluorescence spectra showed that the microenvironment of the tryptophan residues was not apparently changed. Based on the Förster theory of non-radiation energy transfer, the distance between the donor (Trp residues) and the acceptor metal complexes was obtained. The growth inhibitory effect of complexes toward the K562 cancer cell line was measured.
Lipovsek, Dasa; Lippow, Shaun M; Hackel, Benjamin J; Gregson, Melissa W; Cheng, Paul; Kapila, Atul; Wittrup, K Dane
2007-05-11
The 10th human fibronectin type III domain ((10)Fn3) is one of several protein scaffolds used to design and select families of proteins that bind with high affinity and specificity to macromolecular targets. To date, the highest affinity (10)Fn3 variants have been selected by mRNA display of libraries generated by randomizing all three complementarity-determining-region-like loops of the (10)Fn3 scaffold. The sub-nanomolar affinities of such antibody mimics have been attributed to the extremely large size of the library accessible by mRNA display (10(12) unique sequences). Here we describe the selection and affinity maturation of (10)Fn3-based antibody mimics with dissociation constants as low as 350 pM, selected from significantly smaller libraries (10(7)-10(9) different sequences), which were constructed by randomizing only 14 (10)Fn3 residues. The finding that two adjacent loops in human (10)Fn3 provide a large enough variable surface area to select high-affinity antibody mimics is significant because a smaller deviation from the wild-type (10)Fn3 sequence is associated with a higher stability of selected antibody mimics. Our results also demonstrate the utility of an affinity-maturation strategy that led to a 340-fold improvement in affinity by maximizing sampling of sequence space close to the original selected antibody mimic. A striking feature of the highest affinity antibody mimics selected against lysozyme is a pair of cysteines on adjacent loops, in positions 28 and 77, which are critical for the affinity of the (10)Fn3 variant for its target and are close enough to form a disulfide bond. The selection of this cysteine pair is structurally analogous to the natural evolution of disulfide bonds found in new antigen receptors of cartilaginous fish and in camelid heavy-chain variable domains. We propose that future library designs incorporating such an interloop disulfide will further facilitate the selection of high-affinity, highly stable antibody mimics from
Calculation of response of Chinese hamster cells to ions based on track structure theory
Institute of Scientific and Technical Information of China (English)
Liu Xiao-Wei; Zhang Chun-Xiang
1997-01-01
Considering biological cells as single-target two-hit detectors, an analytic formula for calculating the response of cells to ions is developed based on track structure theory. In the calculation, the splitting of deposition energy between the ion-kill mode and the γ-kill mode is not used. The results of the calculation are in agreement with the experimental data for the response of Chinese hamster cells, whose response to γ rays can be described by the response function of a single-target two-hit detector.
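For a single-target two-hit detector with Poisson-distributed hits, the textbook response function is R(D) = 1 − e^(−D/D0)(1 + D/D0), i.e. the probability of receiving at least two hits. A sketch of that formula alone, not the paper's full track-structure calculation:

```python
import math

def response_two_hit(dose, d0):
    """Response of a single-target two-hit detector: the target fires
    when it receives at least two hits, with hit number Poisson
    distributed with mean dose/d0."""
    m = dose / d0
    return 1.0 - math.exp(-m) * (1.0 + m)

# Response rises sigmoidally with dose (here d0 = 1 in dose units):
for d in (0.5, 1.0, 2.0, 4.0):
    print(round(response_two_hit(d, 1.0), 3))  # 0.09, 0.264, 0.594, 0.908
```

The shoulder at low dose, absent from a single-hit (pure exponential) detector, is what makes the two-hit form a useful model for cell response curves.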
Simple atmospheric transmittance calculation based on a Fourier-transformed Voigt profile.
Kobayashi, Hirokazu
2002-11-20
A method of line-by-line transmission calculation for a homogeneous atmospheric layer that uses the Fourier-transformed Voigt profile is presented. The method is based on a pure Voigt function with no approximation and an interference term that takes into account the line-mixing effect. One can use the method to calculate transmittance, considering each line shape as it is affected by temperature and pressure, with a line database with an arbitrary wave-number range and resolution. To show that the method is feasible for practical model development, we compared the calculated transmittance with that obtained with a conventional model, and good consistency was observed.
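A line-by-line calculation of this kind combines a Voigt profile per line with Beer-Lambert attenuation. The sketch below evaluates the Voigt function in the standard real-space way via the Faddeeva function, not the paper's Fourier-transform route, and all line parameters are arbitrary:

```python
import numpy as np
from scipy.special import wofz

def voigt(nu, nu0, sigma, gamma):
    """Area-normalized Voigt profile at wavenumber nu: a Gaussian of
    standard deviation sigma convolved with a Lorentzian of HWHM gamma,
    evaluated through the Faddeeva function w(z)."""
    z = ((nu - nu0) + 1j * gamma) / (sigma * np.sqrt(2.0))
    return np.real(wofz(z)) / (sigma * np.sqrt(2.0 * np.pi))

def transmittance(nu, lines, column):
    """Beer-Lambert transmittance of a homogeneous layer; `lines` is a
    list of (center, strength, sigma, gamma) tuples and `column` the
    absorber column amount (consistent units assumed)."""
    k = sum(s * voigt(nu, nu0, sg, gm) for nu0, s, sg, gm in lines)
    return np.exp(-column * k)

nu = np.linspace(999.0, 1001.0, 201)
t = transmittance(nu, [(1000.0, 1.0, 0.05, 0.05)], column=0.1)
```

Summing absorption coefficients over all lines before exponentiating is exactly what makes line-by-line models expensive, and why fast evaluations of the Voigt function (Fourier-space or otherwise) matter.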
Calculation Model for Current-voltage Relation of Silicon Quantum-dots-based Nano-memory
Institute of Scientific and Technical Information of China (English)
YANG Hong-guan; DAI Da-kang; YU Biao; SHANG Lin-lin; GUO You-hong
2007-01-01
Based on the capacitive coupling formalism, an analytic model for calculating the drain currents of a quantum-dot floating-gate memory cell is proposed. Using this model, one can numerically calculate the drain currents in the linear, saturation and subthreshold regions of the device with and without charge stored on the floating dots. The read operation of an n-channel Si quantum-dot floating-gate nano-memory cell is discussed after calculating the drain currents versus the drain-to-source voltages and control-gate voltages in the high and low threshold states, respectively.
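The region-wise drain-current picture can be illustrated with a generic square-law model plus the threshold shift caused by stored charge. This is a textbook sketch, not the paper's capacitive-coupling equations:

```python
import math

def drain_current(vgs, vds, vt, k=1e-4, n=1.5, ut=0.026):
    """Piecewise square-law MOSFET drain current (A): exponential
    subthreshold conduction, triode region, and saturation."""
    vov = vgs - vt
    if vov <= 0:                        # subthreshold
        return k * ut**2 * math.exp(vov / (n * ut))
    if vds < vov:                       # linear (triode) region
        return k * (vov * vds - 0.5 * vds**2)
    return 0.5 * k * vov**2             # saturation

def shifted_threshold(vt0, q_stored, c_cg):
    """Threshold voltage after charging the floating dots: stored charge
    q_stored (C), coupled through control-gate capacitance c_cg (F),
    raises Vt by q_stored / c_cg (hypothetical parameter values)."""
    return vt0 + q_stored / c_cg

vt_low = 1.0                                           # erased state
vt_high = shifted_threshold(vt_low, 1.6e-15, 1.0e-15)  # ~10 electrons stored
i_erased = drain_current(2.5, 2.0, vt_low)       # read in low-Vt state
i_programmed = drain_current(2.5, 2.0, vt_high)  # read in high-Vt state
```

The large ratio between the two read currents is what lets a sense amplifier distinguish the charged and uncharged states of the dots.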
Shafting Alignment Calculation Based on APDL
Institute of Scientific and Technical Information of China (English)
张兴涛; 蒋岚岚
2012-01-01
This paper introduces the necessity of shafting alignment and a calculation method based on APDL. By comparing calculation results, it is shown that the method is completely applicable.
Directory of Open Access Journals (Sweden)
José Carlos B. de Lima
2010-01-01
The CBS-4M, CBS-QB3, G2, G2(MP2), G3 and G3(MP2) model chemistry methods have been used to calculate proton and electron affinities for a set of molecular and atomic systems. Agreement with the experimental values for these electronic properties is quite good, considering the uncertainty in the experimental data. A comparison among the six theories using statistical analysis (average value, standard deviation and root mean square) showed the best performance for CBS-QB3 in obtaining these properties.
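The statistics used in such comparisons (average signed error, standard deviation, root mean square) are a few lines of arithmetic. A sketch with made-up affinity values, not the study's data:

```python
import math

def error_stats(calc, expt):
    """Average signed error, standard deviation and RMS error of
    calculated vs. experimental affinities (same units)."""
    dev = [c - e for c, e in zip(calc, expt)]
    n = len(dev)
    mean = sum(dev) / n
    var = sum((d - mean) ** 2 for d in dev) / n
    rms = math.sqrt(sum(d * d for d in dev) / n)
    return mean, math.sqrt(var), rms

# Hypothetical proton affinities (kJ/mol), purely illustrative:
expt = [854.0, 751.2, 680.5]
calc = [855.1, 749.8, 681.0]
mean, sd, rms = error_stats(calc, expt)
```

A small mean with a larger RMS indicates scattered rather than systematic errors; a mean comparable to the RMS points to a systematic bias, which is how method rankings like the one above are usually read.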
Liu, Ying; Matharu, Zimple; Howland, Michael C; Revzin, Alexander; Simonian, Aleksandr L
2012-09-01
The applications of biosensors range from environmental testing and biowarfare agent detection to clinical testing and cell analysis. In recent years, biosensors have become increasingly prevalent in clinical testing and point-of-care testing. This is driven in part by the desire to decrease the cost of health care, to shift some of the analytical tests from centralized facilities to "frontline" physicians and nurses, and to obtain more precise information more quickly about the health status of a patient. This article gives an overview of recent advances in the field of biosensors, focusing on biosensors based on enzymes, aptamers, antibodies, and phages. In addition, this article attempts to describe efforts to apply these biosensors to clinical testing and cell analysis.
Periodic cyclic homology of affine Hecke algebras
Solleveld, Maarten
2009-01-01
This is the author's PhD-thesis, which was written in 2006. The version posted here is identical to the printed one. Instead of an abstract, the short list of contents: Preface 5 1 Introduction 9 2 K-theory and cyclic type homology theories 13 3 Affine Hecke algebras 61 4 Reductive p-adic groups 103 5 Parameter deformations in affine Hecke algebras 129 6 Examples and calculations 169 A Crossed products 223 Bibliography 227 Index 237 Samenvatting 245 Curriculum vitae 253
Affine and degenerate affine BMW algebras: The center
Daugherty, Zajj; Virk, Rahbar
2011-01-01
The degenerate affine and affine BMW algebras arise naturally in the context of Schur-Weyl duality for orthogonal and symplectic Lie algebras and quantum groups, respectively. Cyclotomic BMW algebras, affine Hecke algebras, cyclotomic Hecke algebras, and their degenerate versions are quotients. In this paper the theory is unified by treating the orthogonal and symplectic cases simultaneously; we make an exact parallel between the degenerate affine and affine cases via a new algebra which takes the role of the affine braid group for the degenerate setting. A main result of this paper is an identification of the centers of the affine and degenerate affine BMW algebras in terms of rings of symmetric functions which satisfy a "cancellation property" or "wheel condition" (in the degenerate case, a reformulation of a result of Nazarov). Miraculously, these same rings also arise in Schubert calculus, as the cohomology and K-theory of isotropic Grassmanians and symplectic loop Grassmanians. We also establish new inte...
BRST Cohomology in Quantum Affine Algebra $U_q(\\widehat{sl_2})$
Konno, H
1994-01-01
Using the free field representation of the quantum affine algebra $U_q(\widehat{sl_2})$, we investigate the structure of the Fock modules over $U_q(\widehat{sl_2})$. The analysis is based on a $q$-analog of the BRST formalism given by Bernard and Felder in the affine Kac-Moody algebra $\widehat{sl_2}$. We give an explicit construction of the singular vectors using the BRST charge. By the same cohomology analysis as in the classical case ($q=1$), we obtain the irreducible highest weight representation space as a nontrivial cohomology group. This enables us to calculate a trace of the $q$-vertex operators over this space.
HNA and ANA high-affinity arrays for detections of DNA and RNA single-base mismatches.
Abramov, Mikhail; Schepers, Guy; Van Aerschot, Arthur; Van Hummelen, Paul; Herdewijn, Piet
2008-06-15
DNA microarrays and sensors have become essential tools in the functional analysis of sequence information. Recently we reported that chimeric hexitol (HNA) and altritol (ANA) nucleotide monomers with an anhydrohexitol sugar moiety are easily available, and we proved their chemistry to be compatible with DNA and RNA synthesis. In this communication we describe a novel analytical platform based on HNA and ANA units, used as synthetic oligonucleotide arrays on a glass solid support for match/mismatch detection of DNA and RNA targets. Arrays were fabricated by immobilization of diene-modified oligonucleotides on maleimido-activated glass slides. To demonstrate the selectivity and sensitivity of the HNA/ANA arrays and to compare their properties with regular DNA arrays, sequences in the reverse transcriptase gene (codon 74) and the protease gene of HIV-1 (codon 10) were selected. Both the relative intensity of the signal and the match/mismatch discrimination increased up to fivefold for DNA targets and up to 3-3.5-fold for RNA targets when HNA or ANA arrays were applied (ANA>HNA>DNA). Particularly in the new field of miRNA detection, ANA arrays could prove very beneficial, and their properties should be investigated in more detail.
Directory of Open Access Journals (Sweden)
Nicholas Allen Kinney
Three-dimensional nuclear architecture is important for genome function, but is still poorly understood. In particular, little is known about the role of the "boundary conditions": points of attachment between chromosomes and the nuclear envelope. We describe a method for modeling the 3D organization of the interphase nucleus, and its application to the analysis of chromosome-nuclear envelope (Chr-NE) attachments of polytene (giant) chromosomes in Drosophila melanogaster salivary glands. The model represents chromosomes as self-avoiding polymer chains confined within the nucleus; parameters of the model are taken directly from experiment, and no fitting parameters are introduced. Methods are developed to objectively quantify chromosome territories and intertwining, which are discussed in the context of corresponding experimental observations. In particular, a mathematically rigorous definition of a territory based on the convex hull is proposed. The self-avoiding polymer model is used to re-analyze previous experimental data; the analysis suggests 33 Chr-NE attachments in addition to the 15 already explored. Most of these new Chr-NE attachments correspond to intercalary heterochromatin: gene-poor, dark-staining, late-replicating regions of the genome; however, three correspond to euchromatin: gene-rich, light-staining, early-replicating regions of the genome. The analysis also suggests 5 regions of anti-contact, characterized by aversion for the NE; only two of these correspond to euchromatin. This composition of chromatin suggests that heterochromatin may not be necessary or sufficient for the formation of a Chr-NE attachment. To the extent that the proposed model represents reality, the confinement of the polytene chromosomes in a spherical nucleus alone does not favor the positioning of specific chromosome regions at the NE as seen in experiment; consequently, the 15 experimentally known Chr-NE attachment positions do not …
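The convex-hull territory definition is directly computable from monomer coordinates. A sketch with a mock random chain (hypothetical data, arbitrary units), not the paper's polytene-chromosome configurations:

```python
import numpy as np
from scipy.spatial import ConvexHull

def territory_volume(points):
    """Volume of the convex hull of a chromosome's monomer coordinates,
    a simple computational stand-in for a convex-hull definition of a
    chromosome territory."""
    return ConvexHull(points).volume

rng = np.random.default_rng(0)
chain = rng.uniform(0.0, 1.0, size=(200, 3))  # mock polymer inside a unit cube
v = territory_volume(chain)
```

Comparing hull volumes, and the overlap between hulls of different chromosomes, gives objective measures of territory formation and intertwining of the kind the abstract describes.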
Affinity driven social networks
Ruyú, B.; Kuperman, M. N.
2007-04-01
In this work we present a model for evolving networks in which the driving force is the social affinity between individuals of a population. In the model, a set of individuals, initially arranged on a regular ordered network and thus linked with their closest neighbors, are allowed to rearrange their connections according to a dynamics closely related to that of the stable marriage problem. We show that the behavior of some topological properties of the resulting networks follows a non-trivial pattern.
Skirzewski, Aureliano
2014-01-01
We develop a topological theory of gravity with torsion where the metric has a dynamical rather than a kinematical origin. This approach to gravity resembles pre-geometrical approaches in which a fundamental metric does not exist, but the affine connection gives rise to a local inertial structure. This feature is reminiscent of Mach's principle, which assumes that inertial forces should have a dynamical origin. Additionally, a Newtonian gravitational force is obtained in the non-relativistic limit of the theory.
Case-Based Reasoning Topological Complexity Calculation of Design for Components
Institute of Scientific and Technical Information of China (English)
无
2001-01-01
Directly calculating the topological and geometric complexity from a STEP (standard for the exchange of product model data, ISO 10303) file is a huge task. Therefore, a case-based reasoning approach, based on the similarity between a new component and an old one, is presented to calculate the topological and geometric complexity of new components. In order to index and retrieve components in a historical component database, a new component representation is proposed, and an algorithm is given to extract the topological graph from a component's STEP file. A mathematical model describing how the similarity is compared is discussed. Finally, an example is given to show the result.
Directory of Open Access Journals (Sweden)
Yan Zhenzhen
2016-01-01
In order to obtain the Required Safe Egress Time (RSET) accurately, the traditional engineering calculation method of evacuation time is optimized in this paper. Several principles and practical factors were used to optimize the method, such as the detection principle of the fire detection system, the reaction characteristics of occupants in urgent situations, evacuation queuing theory, the building structure, and congestion at doorways. Taking a three-storey KTV as an example, two methods are used to illustrate the reliability and scientific reasonableness of the calculation result. One result is calculated with the modified engineering calculation method, and the other is given by the Steering model of the Pathfinder evacuation simulation software; the error between the two results (less than 2%) is within an allowable range. The optimized RSET calculation shows good feasibility and accuracy.
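The engineering-calculation structure of RSET can be sketched as a sum of detection, pre-movement, and movement time, with the movement phase limited by whichever is slower: walking or queuing at the exits. All numbers below are illustrative assumptions, not values from the paper.

```python
def rset(t_detect, t_premove, walk_dist, walk_speed, occupants, exit_flow):
    """Required Safe Egress Time as the engineering sum:
    detection + pre-movement + max(walking time, queuing time at the exits).
    All parameters here are illustrative, not data from the study."""
    t_walk = walk_dist / walk_speed   # s, unimpeded walking to the exit
    t_queue = occupants / exit_flow   # s, exits act as a flow bottleneck
    return t_detect + t_premove + max(t_walk, t_queue)

# Hypothetical three-storey venue: 60 s detection, 120 s pre-movement,
# 40 m farthest travel at 1.2 m/s, 200 occupants through 2.4 persons/s of doors.
t = rset(60, 120, 40, 1.2, 200, 2.4)
print(round(t, 1))  # → 263.3 (seconds; queuing dominates the movement phase)
```
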
Institute of Scientific and Technical Information of China (English)
ZHANG Zhi-jie; LIU Yu-hua; LÜ Zhong-yuan; LI Ze-sheng
2009-01-01
The rotational isomeric state (RIS) model was constructed for poly(vinylidene chloride) (PVDC) based on quantum chemistry calculations. The statistical weight parameters were obtained from RIS representations and ab initio energies of conformers for the model molecules 2,2,4,4-tetrachloropentane (TCP) and 2,2,4,4,6,6-hexachloroheptane (HCH). By employing the RIS method, the characteristic ratio C∞ was calculated for PVDC. The calculated characteristic ratio for PVDC is in good agreement with the experimental result. Additionally, we studied the influence of the statistical weight parameters on C∞ by calculating δC∞/δln w. According to the values of δC∞/δln w, the effects of the second-order Cl-CH2 pentane-type interaction and the Cl-Cl long-range interaction on C∞ were found to be important. In contrast, the first-order interaction is unimportant.
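For orientation on what C∞ measures, the freely rotating chain (all torsions equally likely, fixed bond angle) has the closed form C∞ = (1 + cos γ)/(1 − cos γ), where γ is the angle between successive bond vectors. This is only a baseline, not the RIS matrix calculation of the paper; hindered rotation raises C∞ above it.

```python
import math

def c_inf_freely_rotating(bond_angle_deg):
    """Characteristic ratio of a freely rotating chain:
    C_inf = (1 + cos g) / (1 - cos g), where g is the angle between successive
    bond vectors, i.e. the supplement of the backbone bond angle."""
    g = math.radians(180.0 - bond_angle_deg)
    return (1 + math.cos(g)) / (1 - math.cos(g))

# Tetrahedral-like C-C backbone (bond angle ~112 deg): the classic ~2.2
# free-rotation value; RIS treatments of real vinyl polymers give much more.
print(round(c_inf_freely_rotating(112.0), 2))  # → 2.2
```
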
Affine morphisms at zero level
Das, Paramita; Gupta, Ved Prakash
2010-01-01
Given a finite index subfactor, we show that the algebra of {\em affine morphisms at zero level} in the affine category over the planar algebra associated to the subfactor is isomorphic, as a *-algebra, to the fusion algebra of the subfactor.
On the Affine Isoperimetric Inequalities
Indian Academy of Sciences (India)
Wuyang Yu; Gangsong Leng
2011-11-01
We obtain an isoperimetric inequality which estimates the affine invariant $p$-surface area measure on convex bodies. We also establish the reverse version of the $L_p$-Petty projection inequality and an affine isoperimetric inequality for $\Gamma_{-p}K$.
Applying Activity Based Costing (ABC) Method to Calculate Cost Price in Hospital and Remedy Services
Directory of Open Access Journals (Sweden)
A Dabiri
2012-04-01
Background: Activity-Based Costing (ABC) is one of the new methods that began appearing as a costing methodology in the 1990s. It calculates cost price by determining the usage of resources. In this study, the ABC method was used for calculating the cost price of remedial services in hospitals. Methods: To apply the ABC method, Shahid Faghihi Hospital was selected. First, hospital units were divided into three main departments: administrative, diagnostic, and hospitalization. Second, activity centers were defined by the activity analysis method. Third, the costs of the administrative activity centers were allocated to the diagnostic and operational departments based on cost drivers. Finally, with regard to the usage of services from the activity centers by cost objects, the cost price of medical services was calculated. Results: The cost price from the ABC method differs significantly from that of the tariff method. In addition, the high amount of indirect costs in the hospital indicates that the capacities of resources are not used properly. Conclusion: The cost price of remedial services is not properly calculated by the tariff method when compared with the ABC method. ABC calculates cost price by applying suitable allocation mechanisms, whereas the tariff method is based on fixed prices. In addition, ABC provides useful information about the amount and composition of the cost price of services.
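The two-stage allocation the Methods section describes can be sketched numerically: support-center overhead is pushed to operational centers by cost-driver shares, then divided by each center's service volume. All centers, shares, and figures below are invented for illustration.

```python
# Hypothetical two-stage ABC allocation. The administrative pool is allocated
# to operational centers by cost-driver shares (e.g. staff headcount), then
# unit cost price = (direct cost + allocated overhead) / services delivered.
admin_cost = 90_000.0
driver_share = {"radiology": 0.4, "laboratory": 0.6}   # cost-driver shares
direct_cost = {"radiology": 110_000.0, "laboratory": 60_000.0}
volume = {"radiology": 2_000, "laboratory": 8_000}     # services delivered

cost_price = {}
for center in direct_cost:
    total = direct_cost[center] + admin_cost * driver_share[center]
    cost_price[center] = total / volume[center]

print(cost_price)  # unit cost price per service for each center
```

With these figures radiology comes out near 73 per service and the laboratory near 14.25, showing how driver-based allocation separates unit costs that a flat tariff would blur.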
Energy Technology Data Exchange (ETDEWEB)
Noh, Siwan; Kwon, Tae-Eun; Lee, Jai-Ki [Hanyang Univ., Seoul (Korea, Republic of); Lee, Jong-Il; Kim, Jang-Lyul [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)
2014-10-15
In internal dosimetry, intake retention and excretion functions are essential for estimating intake activity from bioassay measurements such as whole-body counting, lung counting, and urine samples. Even though the ICRP (International Commission on Radiological Protection) provides these functions in some of its publications, they generally still need to be calculated, because the published values are given only for a very limited set of times. Thus, computer programs are generally used to calculate intake retention and excretion functions and to estimate intake activity. OIR (Occupational Intakes of Radionuclides), to be published soon by the ICRP, completely replaces the existing internal dosimetry models and relevant data, including the intake retention and excretion functions; a calculation tool for the functions based on OIR is therefore needed. In this study, we developed a calculation module for intake retention and excretion functions based on OIR, using the C++ programming language with the Intel Math Kernel Library.
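The role a retention function plays in intake estimation can be sketched with a drastically simplified one-compartment model; real OIR biokinetic models chain many compartments and are solved with matrix methods. The half-lives below are illustrative assumptions, not OIR data.

```python
import math

def retention(t_days, bio_half_life, rad_half_life):
    """One-compartment sketch of an intake retention function:
    R(t) = exp(-lambda_eff * t), with biological and radioactive removal
    acting in parallel. Rates here are illustrative, not OIR values."""
    lam = math.log(2) / bio_half_life + math.log(2) / rad_half_life
    return math.exp(-lam * t_days)

# Intake estimation from a bioassay result: intake = measured / R(t).
measured_bq, t = 50.0, 10.0
intake = measured_bq / retention(t, bio_half_life=40.0, rad_half_life=8.02)
print(round(intake, 1))  # estimated intake in Bq
```
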
Directory of Open Access Journals (Sweden)
Pirrello Julien
2012-10-01
Background: The phytohormone ethylene is involved in a wide range of developmental processes and in mediating plant responses to biotic and abiotic stresses. Ethylene signalling acts via a linear transduction pathway leading to the activation of Ethylene Response Factor (ERF) genes, which represent one of the largest families of plant transcription factors. How an apparently simple signalling pathway can account for the complex and widely diverse plant responses to ethylene remains an unanswered question. Building on the recent release of the complete tomato genome sequence, the present study aims at gaining better insight into distinctive features among ERF proteins. Results: A set of 28 cDNA clones encoding ERFs in tomato (Solanum lycopersicon) were isolated and shown to fall into nine distinct subclasses characterised by specific conserved motifs, most of which have unknown function. In addition to being able to regulate the transcriptional activity of GCC-box-containing promoters, tomato ERFs are also shown to be active on promoters lacking this canonical ethylene-responsive element. Moreover, the data reveal that ERF affinity for the GCC-box depends on the nucleotide environment surrounding this cis-acting element. Site-directed mutagenesis revealed that the nature of the flanking nucleotides can either enhance or reduce the binding affinity, thus conferring the binding specificity of various ERFs to target promoters. Based on their expression patterns, ERF genes can be clustered into two main clades given their preferential expression in reproductive or vegetative tissues. The regulation of several tomato ERF genes by both ethylene and auxin suggests their potential contribution to the convergence mechanism between the signalling pathways of the two hormones. Conclusions: The data reveal that regions flanking the core GCC-box sequence are part of the discrimination mechanism by which ERFs selectively bind to their target
Institute of Scientific and Technical Information of China (English)
何彬; 刘杨; 孙彦
2004-01-01
Triton X-114, a non-ionic surfactant, was modified with the affinity ligand of trypsin, p-aminobenzamidine (PAB), and the affinity surfactant (PAB-TX) was synthesized. The affinity surfactant was then used to prepare affinity-based colloidal gas aphrons (CGA). The stability of the affinity CGA was investigated at different temperatures and compared with that of CGA prepared from Triton X-114. Compared with the CGA from Triton X-114, the affinity CGA showed highly selective adsorption of trypsin. In the separation of a protein mixture, a recovery yield higher than 74% was achieved for trypsin, and the separation factor reached over 1.5. The results showed that the affinity CGA possessed promising selectivity for separating trypsin from a protein mixture.
Rucci, Michael; Hardie, Russell C; Barnard, Kenneth J
2014-05-01
In this paper, we present a computationally efficient video restoration algorithm to address both blur and noise for a Nyquist sampled imaging system. The proposed method utilizes a temporal Kalman filter followed by a correlation-model based spatial adaptive Wiener filter (AWF). The Kalman filter employs an affine background motion model and novel process-noise variance estimate. We also propose and demonstrate a new multidelay temporal Kalman filter designed to more robustly treat local motion. The AWF is a spatial operation that performs deconvolution and adapts to the spatially varying residual noise left in the Kalman filter stage. In image areas where the temporal Kalman filter is able to provide significant noise reduction, the AWF can be aggressive in its deconvolution. In other areas, where less noise reduction is achieved with the Kalman filter, the AWF balances the deconvolution with spatial noise reduction. In this way, the Kalman filter and AWF work together effectively, but without the computational burden of full joint spatiotemporal processing. We also propose a novel hybrid system that combines a temporal Kalman filter and BM3D processing. To illustrate the efficacy of the proposed methods, we test the algorithms on both simulated imagery and video collected with a visible camera.
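The temporal stage can be illustrated with a scalar Kalman filter applied to one pixel's intensity track under a constant-intensity process model. The process and measurement variances are assumptions for illustration; the paper's affine background motion model and the spatial AWF stage are omitted.

```python
import random

def kalman_1d(measurements, q=1e-4, r=0.25):
    """Scalar Kalman filter with a constant-intensity process model.
    q: process-noise variance, r: measurement-noise variance (assumed)."""
    x, p = measurements[0], 1.0      # state estimate and its error variance
    out = [x]
    for z in measurements[1:]:
        p += q                       # predict: intensity assumed constant
        k = p / (p + r)              # Kalman gain
        x += k * (z - x)             # update with the new frame's pixel value
        p *= 1 - k
        out.append(x)
    return out

random.seed(1)
truth = 100.0
frames = [truth + random.gauss(0, 0.5) for _ in range(200)]  # noisy pixel track
est = kalman_1d(frames)
print(round(est[-1], 2))  # settles close to 100
```

In the full algorithm this temporal smoothing runs per pixel (with motion compensation), and the residual noise variance it leaves behind steers how aggressively the spatial Wiener filter deconvolves.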
Directory of Open Access Journals (Sweden)
Wensheng Lan
2012-06-01
We have developed a novel optical biosensor device using recombinant methyl parathion hydrolase (MPH) enzyme immobilized on agarose by metal-chelate affinity to detect organophosphorus (OP) compounds with a nitrophenyl group. The biosensor principle is based on the optical measurement of the product of OP catalysis by MPH (p-nitrophenol). Briefly, MPH containing six sequential histidines (6×His tag) at its N-terminus was bound to nitrilotriacetic acid (NTA) agarose with Ni ions, resulting in flexible immobilization of the bio-reaction platform. The optical biosensing system consisted of two light-emitting diodes (LEDs) and one photodiode. The LED that emitted light at the wavelength of maximum absorption for p-nitrophenol served as the signal light, while the other LED, at a wavelength showing no absorbance, served as the reference light. The optical sensing system detected absorbance that was linearly correlated with methyl parathion (MP) concentration, and the detection limit was estimated to be 4 μM. Sensor hysteresis was investigated, and the results showed that in the lower MP concentration range the difference obtained from the opposite process curves was very small. With its easy immobilization of enzymes and simple structural design, the system has the potential for development into a practical portable detector for field applications.
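The linear absorbance-concentration readout rests on the Beer-Lambert law, so an unknown is read off a least-squares calibration line. The standards and absorbance values below are invented for illustration, not the paper's calibration data.

```python
def fit_line(xs, ys):
    """Least-squares slope and intercept for the calibration curve."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# Hypothetical p-nitrophenol calibration: absorbance vs. concentration (uM).
conc_uM = [0, 10, 20, 40, 80]
absorb = [0.002, 0.051, 0.100, 0.202, 0.401]
m, b = fit_line(conc_uM, absorb)

# Read an unknown sample off the calibration line: c = (A - b) / m.
unknown_abs = 0.151
print(round((unknown_abs - b) / m, 1))  # estimated concentration in uM
```
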
Opitz, Lars; Hohlweg, Jonas; Reichl, Udo; Wolff, Michael W
2009-11-01
The presented study focuses on the feasibility of immobilized metal affinity chromatography for purification of Madin Darby canine kidney cell culture-derived influenza virus particles. Therefore, influenza virus A/Puerto Rico/8/34 was screened for adsorption to different transition metal ions attached to iminodiacetic acid. Subsequently, capturing of the same virus strain using zinc-modified iminodiacetic acid membrane adsorbers was characterized regarding viral recoveries, host cell nucleic acid and total protein depletion as well as zinc-ion-leaching. In addition, the effect of the imidazole proton pump on virus stability was studied based on the hemagglutination activity. During adsorption in the presence of 1M sodium chloride the majority of virus particles were recovered in the product (64% hemagglutination activity). Host cell nucleic acid and total protein content were reduced to approximately 7 and 26%, respectively. This inexpensive and rapid method was applied reproducibly for influenza virus A/Puerto Rico/8/34 preparations on the laboratory scale. However, preliminary results with other virus strains indicated clearly a strong strain dependency for viral adsorption.
Affine Patches on Positroid Varieties and Affine Pipe Dreams (Thesis)
Snider, Michelle
2010-01-01
The objects of interest in this thesis are positroid varieties in the Grassmannian, which are indexed by juggling patterns. In particular, we study affine patches on these positroid varieties. Our main result identifies these affine patches with Kazhdan-Lusztig varieties in the affine Grassmannian. We develop a new term order and study how these spaces are related to subword complexes and Stanley-Reisner ideals. We define an extension of pipe dreams to the affine case and conclude by showing how our affine pipe dreams are generalizations of Cauchon and Le diagrams.
Affine and quasi-affine frames for rational dilations
DEFF Research Database (Denmark)
Bownik, Marcin; Lemvig, Jakob
2011-01-01
In this paper we extend the investigation of quasi-affine systems, which were originally introduced by Ron and Shen [J. Funct. Anal. 148 (1997), 408-447] for integer, expansive dilations, to the class of rational, expansive dilations. We show that an affine system is a frame if, and only if, the corresponding family of quasi-affine systems are frames with uniform frame bounds. We also prove a similar equivalence result between pairs of dual affine frames and dual quasi-affine frames. Finally, we uncover some fundamental differences between the integer and rational settings by exhibiting an example...
Calculation of thermal expansion coefficient of glasses based on topological constraint theory
Zeng, Huidan; Ye, Feng; Li, Xiang; Wang, Ling; Yang, Bin; Chen, Jianding; Zhang, Xianghua; Sun, Luyi
2016-10-01
In this work, the thermal expansion behavior and the structural configuration evolution of glasses were studied. The degrees of freedom based on topological constraint theory are correlated with configuration evolution; considering the chemical composition and the configuration change, an analytical equation for calculating the thermal expansion coefficient of glasses from the degrees of freedom was derived. The thermal expansion of typical silicate and chalcogenide glasses was examined by calculating their thermal expansion coefficients (TEC) using the approach stated above. The results showed that this approach works well for glass materials and revealed the underlying essence of thermal expansion from the viewpoint of configuration entropy. This work establishes a configuration-based methodology to calculate the thermal expansion coefficient of glasses that lack periodic order.
Optimization Method for Indoor Thermal Comfort Based on Interactive Numerical Calculation
Institute of Scientific and Technical Information of China (English)
无
2008-01-01
In order to implement the optimal design of indoor thermal comfort based on the numerical modeling method, the numerical calculation platform is combined seamlessly with the data-processing platform, and an interactive numerical calculation platform which includes the functions of numerical simulation and optimization is established. The artificial neural network (ANN) and the greedy strategy are introduced into the hill-climbing heuristic search process, so that the optimizing search direction can be predicted from small samples; when searching along that direction using the greedy strategy, the optimal values can be approached quickly. Therefore, excessive external calls to the numerical modeling process can be avoided, and the optimization time is decreased markedly. The experimental results indicate that satisfactory air-conditioning output parameters can be obtained quickly with the interactive numerical calculation platform and the improved search method, and the optimization of indoor thermal comfort can be completed.
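The search scheme can be sketched as surrogate-assisted greedy hill climbing: a cheap predictor trained on the samples gathered so far ranks candidate moves, and only the most promising one triggers an expensive simulation call. The comfort objective, the nearest-neighbour surrogate standing in for the ANN, and all parameters below are assumptions for illustration.

```python
import random

random.seed(2)

def comfort_cost(x):
    """Stand-in for one expensive CFD evaluation: distance of the supply-air
    state x = (temperature degC, velocity m/s) from a hypothetical optimum."""
    return (x[0] - 24.0) ** 2 + 10.0 * (x[1] - 0.15) ** 2

def surrogate(samples, x):
    """Tiny nearest-neighbour stand-in for the ANN trained on small samples."""
    return min(samples, key=lambda s: (s[0][0] - x[0]) ** 2
               + (s[0][1] - x[1]) ** 2)[1]

# Greedy hill climbing: rank candidate neighbours with the cheap surrogate,
# then spend only one expensive evaluation per iteration on the best-ranked.
x = (30.0, 0.5)
cur = comfort_cost(x)
samples = [(x, cur)]
calls = 1
for _ in range(200):
    nbrs = [(x[0] + random.uniform(-1, 1), x[1] + random.uniform(-0.05, 0.05))
            for _ in range(8)]
    best = min(nbrs, key=lambda n: surrogate(samples, n))
    c = comfort_cost(best)     # the single expensive call this iteration
    calls += 1
    samples.append((best, c))
    if c < cur:                # greedy: accept only improvements
        x, cur = best, c
print(round(cur, 2), calls)   # cost falls from the initial 37.22
```

The point of the surrogate is the call budget: 8 candidates per iteration are screened but only one expensive evaluation is spent, mirroring the abstract's claim of avoiding excessive external calls to the numerical model.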
Amadei, A; Apol, MEF; DiNola, A; Berendsen, HJC
1996-01-01
A new theory is presented for calculating the Helmholtz free energy based on the potential energy distribution function. The usual expressions of free energy, internal energy and entropy involving the partition function are rephrased in terms of the potential energy distribution function, which must
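The equivalence between the partition-function route and the energy-distribution route can be checked on a toy system: for a discrete energy distribution p_i, F = -kT ln Z must equal F = <U> - TS with S = -k Σ p_i ln p_i. This two-level example is only a consistency illustration, not the paper's theory.

```python
import math

# Two-level toy system: compare F from the partition function with F from the
# potential-energy distribution (U - T*S). They must agree for Boltzmann p_i.
kT = 1.0
energies = [0.0, 2.0]

Z = sum(math.exp(-e / kT) for e in energies)
F_partition = -kT * math.log(Z)

p = [math.exp(-e / kT) / Z for e in energies]        # energy distribution
U = sum(pi * e for pi, e in zip(p, energies))        # mean potential energy
S_over_k = -sum(pi * math.log(pi) for pi in p)       # Gibbs entropy / k
F_distribution = U - kT * S_over_k

print(round(F_partition, 6) == round(F_distribution, 6))  # → True
```
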
Doiron, Charles; Hencken, Kai
2013-09-01
Computational fluid-dynamic simulations nowadays play a central role in the development of new gas circuit breakers. For these simulations to be reliable, a good knowledge of the pressure and temperature-dependence of the thermodynamic and transport properties of ionized gases is required. A key ingredient in the calculation of thermodynamic properties of thermal plasmas is the calculation of the chemical equilibrium composition of the gas. The general-purpose, open-source software toolkit Cantera provides most functionality required to carry out such thermodynamic calculations. In this contribution, we explain how we tailored Cantera specifically to calculate material properties of plasmas. The highly modular architecture of this framework made it possible to add support for Debye-Hückel non-ideality corrections in the calculation of the chemical equilibrium mixture, as well as to enable the calculation of the key transport parameters needed in CFD-based electric arc simulations: electrical and thermal conductivity, viscosity, and diffusion coefficients. As an example, we discuss the thermodynamic and transport properties of mixtures of carbon dioxide and copper vapor.
Directory of Open Access Journals (Sweden)
Chao Hu
2015-04-01
Slope excavation is one of the most crucial steps in the construction of a hydraulic project. Excavation project quality assessment and excavated volume calculation are critical in construction management. The positioning of excavation projects using traditional instruments is inefficient and may cause errors. To improve the efficiency and precision of calculation and assessment, three-dimensional laser scanning technology was used for slope excavation quality assessment. An efficient data acquisition, processing, and management workflow was presented in this study. Based on the quality control indices, including the average gradient, slope toe elevation, and overbreak and underbreak, cross-sectional quality assessment and holistic quality assessment methods were proposed to assess the slope excavation quality with laser-scanned data. An algorithm was also presented to calculate the excavated volume from laser-scanned data. A field application and a laboratory experiment were carried out to verify the feasibility of these methods for excavation quality assessment and excavated volume calculation. The results show that the quality assessment indices can be obtained rapidly and accurately from design parameters and scanned data, and the results of holistic quality assessment are consistent with those of cross-sectional quality assessment. In addition, the time consumed in excavation project quality assessment with laser scanning technology can be reduced by 70%-90% compared with the traditional method. The excavated volume calculated from the scanned data differs only slightly from measured data, demonstrating the applicability of the excavated volume calculation method presented in this study.
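Once cross-sectional areas have been extracted from the point cloud, a volume estimate follows from the classical average end-area method. The section areas and spacing below are invented; the paper's own volume algorithm may differ.

```python
# Average end-area method: V = sum over adjacent stations of
# (A_i + A_{i+1}) / 2 * spacing. Section areas would come from the scanned
# point cloud; the figures here are illustrative only.
def excavated_volume(areas_m2, spacing_m):
    return sum((a + b) / 2.0 * spacing_m
               for a, b in zip(areas_m2, areas_m2[1:]))

sections = [120.0, 135.0, 128.0, 110.0]  # cross-sectional areas along the slope
print(excavated_volume(sections, 10.0))  # → 3780.0 (m^3)
```
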
N. S. Labidi
2013-01-01
The semiempirical AM1 SCF method is used to study the first static hyperpolarizabilities β of some novel mono-O-hydroxy bidentate Schiff bases in which electron-donating (D) and electron-accepting (A) groups were introduced on either side of the Schiff base ring system. Geometries of all molecules were optimized at the semiempirical AM1 level. The first static hyperpolarizabilities of these molecules were calculated using the HyperChem package. To understand this phenomenon in the context of molecular o...
CCSD(T)/CBS fragment-based calculations of lattice energy of molecular crystals
Červinka, Ctirad; Fulem, Michal; Růžička, Květoslav
2016-02-01
A comparative study of the lattice energy calculations for a data set of 25 molecular crystals is performed using an additive scheme based on the individual energies of up to four-body interactions, calculated using coupled clusters with iterative treatment of single and double excitations and perturbative triples correction (CCSD(T)) with an estimated complete basis set (CBS) description. The CCSD(T)/CBS values of lattice energies are used to estimate sublimation enthalpies, which are compared with critically assessed and thermodynamically consistent experimental values. The average absolute percentage deviation of calculated sublimation enthalpies from experimental values amounts to 13% (corresponding to 4.8 kJ mol-1 on an absolute scale), with an unbiased distribution of positive and negative deviations. As pair interaction energies present a dominant contribution to the lattice energy and CCSD(T)/CBS calculations still remain computationally costly, benchmark calculations of pair interaction energies defined by crystal parameters, involving 17 levels of theory, including recently developed methods with local and explicit treatment of electronic correlation, such as LCC and LCC-F12, are also presented. Locally and explicitly correlated methods are found to be computationally effective and reliable, enabling the application of fragment-based methods to larger systems.
Meng, ZhuXuan; Fan, Hu; Peng, Ke; Zhang, WeiHua; Yang, HuiXin
2016-12-01
This article presents a rapid and accurate aeroheating calculation method for hypersonic vehicles. The main innovation is combining the accuracy of numerical methods with the efficiency of engineering methods, which makes aeroheating simulation both more precise and faster. Based on Prandtl boundary layer theory, the entire flow field is divided into inviscid and viscous flow at the outer edge of the boundary layer. The parameters at the outer edge of the boundary layer are calculated numerically by assuming inviscid flow. The thermodynamic parameters of constant-volume specific heat, constant-pressure specific heat and the specific heat ratio are calculated, the streamlines on the vehicle surface are derived, and the heat flux is then obtained. The results for a double cone show that at 0° and 10° angle of attack, the aeroheating calculation method based on inviscid outer-edge boundary layer parameters reproduces the experimental data better than the engineering method. The proposed method also reproduces the viscous numerical results for the flight vehicle well. Hence, this method provides a promising way to avoid the high cost of full numerical calculation while improving precision.
Institute of Scientific and Technical Information of China (English)
仪晓斌; 陈莹
2016-01-01
In this paper, a frontal face synthesis strategy based on Poisson image fusion and Piecewise Affine Warp (PAW) is proposed to address the large computation cost or severe distortion of existing synthesis methods. The multiple non-frontal input images are mapped to a frontal face template with PAW. The corresponding weight matrices are calculated according to the magnitude of the non-rigid deformation produced by the mapping, and are used to obtain a deformation mask for each mapped image. An iterative fusion strategy then synthesizes one frontal image from the multiple non-frontal inputs: in each step, the PAW-mapped image is used as the foreground image, its deformation mask as the Poisson mask, and the fusion result of the previous step as the background, generating a smooth and natural frontal face image. Experiments show that the synthesized frontal image closely approximates the real frontal face, preserves the individual facial details of the inputs well, and outperforms existing algorithms in both subjective and objective evaluations.
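The core of a piecewise affine warp is that each mesh triangle defines an affine map fixed by its three vertex correspondences. A minimal sketch with invented points, solving the small linear system by Cramer's rule:

```python
def affine_from_triangle(src, dst):
    """Return (a, b, c, d, e, f) with x' = a*x + b*y + c, y' = d*x + e*y + f,
    mapping the three src vertices onto the three dst vertices."""
    (x1, y1), (x2, y2), (x3, y3) = src
    det = (x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1)
    def solve(v1, v2, v3):
        # Cramer's rule on the two difference equations for one coordinate.
        a = ((v2 - v1) * (y3 - y1) - (v3 - v1) * (y2 - y1)) / det
        b = ((v3 - v1) * (x2 - x1) - (v2 - v1) * (x3 - x1)) / det
        return a, b, v1 - a * x1 - b * y1
    a, b, c = solve(dst[0][0], dst[1][0], dst[2][0])
    d, e, f = solve(dst[0][1], dst[1][1], dst[2][1])
    return a, b, c, d, e, f

def apply_affine(t, p):
    a, b, c, d, e, f = t
    return a * p[0] + b * p[1] + c, d * p[0] + e * p[1] + f

src = [(0, 0), (1, 0), (0, 1)]
dst = [(2, 1), (4, 1), (2, 3)]          # scale by 2, translate by (2, 1)
t = affine_from_triangle(src, dst)
print(apply_affine(t, (0.5, 0.5)))      # → (3.0, 2.0)
```

A full PAW applies one such map per triangle of the face mesh, which is where the per-triangle deformation magnitude behind the weight matrices comes from.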
Directory of Open Access Journals (Sweden)
Young Ah Goo
2008-01-01
Recently, several research groups have published methods for the determination of proteomic expression profiles by mass spectrometry without the use of exogenously added stable isotopes or stable isotope dilution theory. These so-called label-free methods have the advantage of allowing data on each sample to be acquired independently from all other samples, to which they can later be compared in silico for the purpose of measuring changes in protein expression between various biological states. We developed label-free software based on direct measurement of the peptide ion current area (PICA) and compared it to two other methods, a simpler label-free method known as spectral counting and the isotope-coded affinity tag (ICAT) method. Data analysis by these methods of a standard mixture containing proteins of known, but varying, concentrations showed that they performed similarly, with a mean squared error of 0.09. Additionally, complex bacterial protein mixtures spiked with known concentrations of standard proteins were analyzed using the PICA label-free method. These results indicated that the PICA method detected all levels of standard spiked proteins at the 90% confidence level in this complex biological sample. This finding confirms that label-free methods, based on direct measurement of the area under a single ion current trace, performed as well as the standard ICAT method. Given that label-free methods allow experimental designs well beyond pair-wise comparison, label-free methods such as our PICA method are well suited for proteomic expression profiling of large numbers of samples, as is needed in clinical analysis.
Rudra, Suparna; Dasmandal, Somnath; Patra, Chiranjit; Kundu, Arjama; Mahapatra, Ambikesh
2016-09-05
The binding interaction of a synthesized Schiff base Fe(II) complex with the biological macromolecules bovine serum albumin (BSA) and calf thymus (ct)-DNA has been investigated using different spectroscopic techniques coupled with viscosity measurements at physiological pH and 298 K. Regular changes in the emission intensities of BSA upon the action of the complex indicate significant interaction between them, and the binding interaction has been characterized by Stern-Volmer plots and thermodynamic binding parameters. On the basis of this quenching technique, one binding site with binding constant Kb = (7.6 ± 0.21) × 10^5 between complex and protein has been obtained at 298 K. Time-resolved fluorescence studies have also been employed to understand the mechanism of quenching induced by the complex. The binding affinities of the complex to the fluorophores of BSA, namely tryptophan (Trp) and tyrosine (Tyr), have been judged by synchronous fluorescence studies. Secondary structural changes of BSA induced by the complex have been revealed by CD spectra. On the other hand, the hypochromicity of the absorption spectra of the complex upon addition of ct-DNA and the gradual reduction in the emission intensities of ethidium bromide-bound ct-DNA in the presence of the complex indicate noticeable interaction between ct-DNA and the complex, with a binding constant of (4.2 ± 0.11) × 10^6 M^-1. Lifetime measurements have been used to determine the relative amplitude of binding of the complex to ct-DNA base pairs. The mode of binding interaction of the complex with ct-DNA has been deciphered by viscosity measurements. CD spectra have also been used to understand the changes in ct-DNA structure upon binding with the metal complex. Density functional theory (DFT) and molecular docking analysis have been employed to highlight the interaction and the binding location of the complex with the macromolecules.
Calculation and Simulation Study on Transient Stability of Power System Based on Matlab/Simulink
Directory of Open Access Journals (Sweden)
Shi Xiu Feng
2016-01-01
The destruction of power system stability causes power outages for large numbers of users and can even lead to the collapse of the whole system, with extremely serious consequences. A single-machine infinite-bus system is analyzed as an example: a two-phase ground fault occurs at point f, and the circuit breakers on both sides of the faulted line trip simultaneously. The transient stability of the system is analyzed by two methods, calculation and simulation; their conclusions are consistent, and the simulation analysis is superior to the calculation analysis.
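The calculation side of such a transient-stability study reduces to integrating the swing equation across the fault and checking that the rotor angle stays bounded. The per-unit values, the fault-clearing time, and the simple Euler integration below are illustrative assumptions, not the paper's case data.

```python
import math

# Single-machine infinite-bus sketch: integrate the swing equation
# M * d2(delta)/dt2 = Pm - Pmax * sin(delta) through a fault and its clearing,
# then check whether the rotor angle stays below pi (first-swing stability).
M, Pm = 0.026, 1.0                      # inertia constant (p.u. s^2/rad), mech. power
P_pre, P_fault, P_post = 2.0, 0.6, 1.8  # transfer limit before/during/after fault
t_clear, dt, t_end = 0.08, 0.0005, 2.0  # clearing time, step, horizon (s)

delta0 = math.asin(Pm / P_pre)          # pre-fault equilibrium angle
delta, omega = delta0, 0.0
t, max_delta = 0.0, delta0
while t < t_end:
    pmax = P_fault if t < t_clear else P_post
    acc = (Pm - pmax * math.sin(delta)) / M
    omega += acc * dt
    delta += omega * dt
    max_delta = max(max_delta, delta)
    t += dt
print("stable" if max_delta < math.pi else "unstable")  # → stable
```

This is the numerical counterpart of the equal-area criterion: the fault accelerates the rotor, and stability depends on whether the post-fault decelerating area can absorb the kinetic energy gained before clearing.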
Calculation of Hugoniot properties for shocked nitromethane based on the improved Tsien's EOS
Zhao, Bo; Cui, Ji-Ping; Fan, Jing
2010-06-01
We have calculated the Hugoniot properties of shocked nitromethane based on the improved Tsien's equation of state (EOS), which was optimized against "exact" numerical molecular dynamics data at high temperatures and pressures. Comparison of the results of the improved Tsien's EOS with the existing experimental data and with direct simulations shows that the improved Tsien's EOS behaves very well in many respects. Because of its simple analytical form, the improved Tsien's EOS can prospectively be used to study condensed explosive detonation coupled with chemical reaction.
Duality-based calculations for transition probabilities in stochastic chemical reactions
Ohkubo, Jun
2017-02-01
An idea for evaluating transition probabilities in chemical reaction systems is proposed, which is efficient for repeated calculations with various rate constants. The idea is based on duality relations; instead of direct time evolutions of the original reaction system, the dual process is dealt with. Usually, if one changes the rate constants of the original reaction system, the direct time evolutions must be performed again using the new rate constants. By contrast, a single solution of an extended dual process can be reused to calculate the transition probabilities for various rate-constant cases. The idea is demonstrated in a parameter estimation problem for the Lotka-Volterra system.
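For contrast with the dual approach, the direct time evolution that must be redone for every rate constant can be sketched for the simplest case, a decay reaction A → ∅ (the reaction and numbers are illustrative, not the paper's Lotka-Volterra example):

```python
import numpy as np

def master_eq_transition(n_max, k, t, n0, steps=20000):
    """Direct Euler time evolution of the chemical master equation for the
    decay reaction A -> 0 with rate constant k, starting from n0 copies.
    Returns the probability vector P(n, t). Changing k means redoing this
    whole evolution, which is what the duality idea avoids."""
    p = np.zeros(n_max + 1)
    p[n0] = 1.0
    dt = t / steps
    n = np.arange(n_max + 1)
    for _ in range(steps):
        outflow = k * n * p                 # leaving state n at rate k*n
        inflow = np.zeros_like(p)
        inflow[:-1] = k * n[1:] * p[1:]     # arriving from state n+1
        p = p + dt * (inflow - outflow)
    return p

p = master_eq_transition(n_max=10, k=1.0, t=0.5, n0=10)
print(round(p.sum(), 4))  # → 1.0 (probability is conserved)
# Analytic check: each molecule survives independently with prob exp(-k*t),
# so P(10, t) should be close to exp(-10*k*t).
```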
Modification method of numerical calculation of heat flux over dome based on turbulence models
Zhang, Daijun; Luo, Haibo; Zhang, Junchao; Zhang, Xiangyue
2016-10-01
For an optical guidance system flying at low altitude and high speed, the calculation of turbulent convective heat transfer over its dome is the key to designing this kind of aircraft. RANS-based turbulence models offer high computational efficiency, and their accuracy can satisfy engineering requirements. But for calculating the flow in the shock layer, where strong entropy and pressure disturbances exist, and especially for aerodynamic heating, some parameters in the RANS energy equation need to be modified. In this paper, we applied turbulence models to the calculation of the heat flux over the dome of a sphere-cone body at zero angle of attack. Based on Billig's results, the shape and position of the detached shock were extracted from the flow field using a multi-block structured grid. The thermal conductivity of the inflow was set to a kinetic-theory model with respect to temperature. When compared with Klein's engineering formula at the stagnation point, the results of the turbulence models were found to be larger. By analysis, we found that the main reason for the larger values was interference from the entropy layer with the boundary layer. The thermal conductivity of the inflow was therefore assigned a fixed value, as an equivalent thermal conductivity, to compensate for the overestimate of the turbulent kinetic energy. Based on the SST model, numerical experiments showed that the value of the equivalent thermal conductivity was related only to the Mach number. The proposed modification approach of an equivalent thermal conductivity for the inflow could also be applied to other turbulence models.
A new fragment-based approach for calculating electronic excitation energies of large systems.
Ma, Yingjin; Liu, Yang; Ma, Haibo
2012-01-14
We present a new fragment-based scheme to calculate the excited states of large systems without the necessity of a Hartree-Fock (HF) solution of the whole system. The method is based on an implementation of the renormalized excitonic method [M. A. Hajj et al., Phys. Rev. B 72, 224412 (2005)] at the ab initio level, which assumes that the excitation of the whole system can be expressed as a linear combination of various local excitations. We decomposed the whole system into several blocks and then constructed the effective Hamiltonians for the intra- and inter-block interactions with block canonical molecular orbitals instead of the widely used localized molecular orbitals. Accordingly, we avoided the prerequisite HF solution and the orbital localization procedure of the popular local correlation methods. Test calculations were performed for hydrogen molecule chains at the full configuration interaction, symmetry adapted cluster/symmetry adapted cluster configuration interaction, and HF/configuration interaction singles (CIS) levels, and for more realistic polyene systems at the HF/CIS level. The calculated vertical excitation energies for the lowest excited states are in reasonable accordance with those determined by calculations on the whole systems with traditional methods, showing that our new fragment-based method can give good estimates of the low-lying energy spectra of both weakly and moderately interacting systems at an economical computational cost.
Li, Junchang; Tu, Han-Yen; Yeh, Wei-Chieh; Gui, Jinbin; Cheng, Chau-Jern
2014-09-20
Based on scalar diffraction theory and the geometric structure of liquid crystal on silicon (LCoS), we study the impulse responses and image depth of focus in a holographic three-dimensional (3D) display system. Theoretical expressions for the impulse response and the depth of focus of reconstructed 3D images are obtained, and experimental verifications of the imaging properties are performed. The results indicate that the images formed by a holographic display based on the LCoS device are periodic image fields surrounding the optical axis. The widths of the image fields are directly proportional to the wavelength and diffraction distance, and inversely proportional to the pixel size of the LCoS device. Based on these features of holographic 3D imaging and focal depth, we enhance currently popular hologram calculation methods for 3D objects to improve their computing speed.
[Risk factor calculator for medical underwriting of life insurers based on the PROCAM study].
Geritse, A; Müller, G; Trompetter, T; Schulte, H; Assmann, G
2008-06-01
For its electronic manual GEM, used to perform medical risk assessment in life insurance, SCOR Global Life Germany has developed an innovative and evidence-based calculator of the mortality risk depending on cardiovascular risk factors. The calculator contains several new findings regarding medical underwriting, which were gained from the analysis of the PROCAM (Prospective Cardiovascular Münster) study. For instance, in the overall consideration of all risk factors of a medically examined applicant, BMI is not an independent risk factor. Further, given sufficient information, the total extra mortality of a person no longer results from adding up the ratings for the single risk factors. In fact, this new approach of risk assessment considers the interdependencies between the different risk factors. The new calculator is expected to improve risk selection and standard acceptances will probably increase.
Hasan, Z.; Qiu, Z.; Johnson, Jackie; Homerick, Uwe
2009-02-01
The potential of three erbium-based solid hosts has been investigated for laser cooling. Absorption and emission spectra have been studied for the low-lying IR transitions of erbium that are relevant to recent reports of cooling using the 4I15/2-4I9/2 and 4I15/2-4I13/2 transitions. Experimental studies have been performed for erbium in three hosts: ZBLAN glass and KPb2Cl5 and Cs2NaYCl6 crystals. In order to estimate the efficiencies of cooling, theoretical calculations have been performed for the cubic elpasolite (Cs2NaYCl6) crystal. These calculations also provide first-principles insight into the cooling efficiency for non-cubic and glassy hosts, where such calculations are not possible.
FragIt: A Tool to Prepare Input Files for Fragment Based Quantum Chemical Calculations
Steinmann, Casper; Hansen, Anne S; Jensen, Jan H
2012-01-01
Near-linear-scaling fragment-based quantum chemical calculations are becoming increasingly popular for treating large systems with high accuracy and are an active field of research. However, it remains difficult to set up these calculations without expert knowledge. To facilitate the use of such methods, software tools are needed to support setup and lower the barrier of entry for non-experts. We present a fragmentation methodology and accompanying tool called FragIt to help set up these calculations. It uses the SMARTS language to find chemically appropriate substructures and is used to prepare input files for the fragment molecular orbital method in the GAMESS program package. We present fragmentation patterns for proteins and polysaccharides, specifically D-galactopyranose for use in cyclodextrins.
Fontanot, Fabio; Silva, Laura; Monaco, Pierluigi; Skibba, Ramin; 10.1111/j.1365-2966.2008.14126.x
2009-01-01
The treatment of dust attenuation is crucial in order to compare the predictions of galaxy formation models with multiwavelength observations. Most past studies have either used simple analytic prescriptions or else full radiative transfer (RT) calculations. Here, we couple star formation histories and morphologies predicted by the semi-analytic galaxy formation model MORGANA with RT calculations from the spectrophotometric and dust code GRASIL to create a library of galaxy SEDs from the UV/optical through the far-infrared, and compare the predictions of the RT calculations with analytic prescriptions. We consider a low- and a high-redshift sample, as well as an additional library constructed with empirical, non-cosmological star formation histories and simple (pure bulge or disc) morphologies. Based on these libraries, we derive fitting formulae for the effective dust optical depth as a function of galaxy physical properties such as metallicity, gas mass, and radius. We show that such fitting formulae can predi...
GPU-based acceleration of free energy calculations in solid state physics
Januszewski, Michał; Crivelli, Dawid; Gardas, Bartłomiej
2014-01-01
Obtaining a thermodynamically accurate phase diagram through numerical calculations is a computationally expensive problem that is crucially important to understanding the complex phenomena of solid state physics, such as superconductivity. In this work we show how this type of analysis can be significantly accelerated through the use of modern GPUs. We illustrate this with a concrete example of free energy calculation in multi-band iron-based superconductors, known to exhibit a superconducting state with oscillating order parameter. Our approach can also be used for classical BCS-type superconductors. With a customized algorithm and compiler tuning we are able to achieve a 19x speedup compared to the CPU (119x compared to a single CPU core), reducing calculation time from minutes to mere seconds, enabling the analysis of larger systems and the elimination of finite size effects.
Calculation Scheme Based on a Weighted Primitive: Application to Image Processing Transforms
Directory of Open Access Journals (Sweden)
Gregorio de Miguel Casado
2007-01-01
Full Text Available This paper presents a method to improve the calculation of functions which demand an especially great amount of computing resources. The method is based on the choice of a weighted primitive which enables the calculation of function values under the scope of a recursive operation. At the design level, the method proves suitable for developing a processor which achieves a satisfying trade-off between time delay, area costs, and stability. The method is particularly suitable for the mathematical transforms used in signal processing applications. A generic calculation scheme is developed for the discrete fast Fourier transform (DFT) and then applied to other integral transforms such as the discrete Hartley transform (DHT), the discrete cosine transform (DCT), and the discrete sine transform (DST). Some comparisons with other well-known proposals are also provided.
Specification of materials Data for Fire Safety Calculations based on ENV 1992-1-2
DEFF Research Database (Denmark)
Hertz, Kristian Dahl
1997-01-01
Part 1-2 of the Eurocode on Concrete deals with Structural Fire Design. In chapter 3, which is partly written by the author of this paper, some data are given for the development of a few material parameters at high temperatures. These data are intended to represent the worst possible concrete ... to experience from tests on structural specimens based on German siliceous concrete subjected to Standard fire exposure until the time of maximum gas temperature. Chapter 4.3, which is written by the author of this paper, provides a simplified calculation method by means of which the load bearing capacity ... of constructions of any concrete exposed to any time of any fire exposure can be calculated. Chapter 4.4 provides information on what should be observed if more general calculation methods are used. Annex A provides some additional information on materials data. This chapter is not a part of the code ...
An automated Monte-Carlo based method for the calculation of cascade summing factors
Jackson, M. J.; Britton, R.; Davies, A. V.; McLarty, J. L.; Goodwin, M.
2016-10-01
A versatile method has been developed to calculate cascade summing factors for use in quantitative gamma-spectrometry analysis procedures. The proposed method is based solely on Evaluated Nuclear Structure Data File (ENSDF) nuclear data, an X-ray energy library, and accurate efficiency characterisations for single detector counting geometries. The algorithm, which accounts for γ-γ, γ-X, γ-511 and γ-e- coincidences, can be applied to any design of gamma spectrometer and can be expanded to incorporate any number of nuclides. Efficiency characterisations can be derived from measured or mathematically modelled functions, and can accommodate both point and volumetric source types. The calculated results are shown to be consistent with an industry standard gamma-spectrometry software package. Additional benefits including calculation of cascade summing factors for all gamma and X-ray emissions, not just the major emission lines, are also highlighted.
Directory of Open Access Journals (Sweden)
Jaime Mella-Raipán
2014-03-01
Full Text Available A 3D-QSAR (CoMFA) study was performed on an extensive series of aminoalkylindole derivatives with affinity for the cannabinoid receptors CB1 and CB2. The aim of the present work was to obtain structure-activity relationships for the aminoalkylindole family in order to explain the affinity and selectivity of the molecules for these receptors. Major differences in both steric and electrostatic fields were found between the CB1 and CB2 CoMFA models. The steric field accounts for the principal contribution to biological activity. These results provide a foundation for the future development of new heterocyclic compounds with high affinity and selectivity for the cannabinoid receptors, with applications in several pathological conditions such as pain, cancer, obesity and immune disorders, among others.
Mella-Raipán, Jaime; Hernández-Pino, Santiago; Morales-Verdejo, César; Pessoa-Mahana, David
2014-03-05
A 3D-QSAR (CoMFA) study was performed on an extensive series of aminoalkylindole derivatives with affinity for the cannabinoid receptors CB1 and CB2. The aim of the present work was to obtain structure-activity relationships for the aminoalkylindole family in order to explain the affinity and selectivity of the molecules for these receptors. Major differences in both steric and electrostatic fields were found between the CB1 and CB2 CoMFA models. The steric field accounts for the principal contribution to biological activity. These results provide a foundation for the future development of new heterocyclic compounds with high affinity and selectivity for the cannabinoid receptors, with applications in several pathological conditions such as pain, cancer, obesity and immune disorders, among others.
Development of a DSP-based real-time position calculation circuit for a beta camera
Energy Technology Data Exchange (ETDEWEB)
Yamamoto, Seiichi E-mail: s-yama@kobe-kosen.ac.jp; Matsuda, Tadashige; Kanno, Iwao
2000-03-01
A digital signal processor (DSP)-based position calculation circuit was developed and tested for a beta camera. The previous position calculation circuit, which employed flash analog-to-digital (A-D) converters for A-D conversion and ratio calculation, produced significant line artifacts in the image due to the differential non-linearity of the A-D converters. The new position calculation circuit uses four A-D converters for A-D conversion of the analog signals from the position-sensitive photomultiplier tube (PSPMT). The DSP reads the A-D signals and calculates the ratios X_a/(X_a+X_b) and Y_a/(Y_a+Y_b) on an event-by-event basis. The DSP also magnifies the image to fit the useful field of view (FOV) and rejects events outside the FOV. The line artifacts in the image were almost eliminated.
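The event-by-event ratio computation, FOV rejection and magnification performed by the DSP can be sketched as follows (the FOV window, image size and signal values are hypothetical, chosen only to illustrate the arithmetic):

```python
def position_from_anode_signals(xa, xb, ya, yb, fov=(0.1, 0.9), size=256):
    """Anger-type position calculation as in the DSP circuit:
    ratios Xa/(Xa+Xb) and Ya/(Ya+Yb), FOV rejection, and scaling
    of the accepted window to an image matrix of `size` pixels."""
    x = xa / (xa + xb)
    y = ya / (ya + yb)
    lo, hi = fov
    if not (lo <= x <= hi and lo <= y <= hi):
        return None  # event outside the useful field of view is rejected
    # magnify the accepted FOV to fill the image matrix
    px = int((x - lo) / (hi - lo) * (size - 1))
    py = int((y - lo) / (hi - lo) * (size - 1))
    return px, py

print(position_from_anode_signals(3.0, 1.0, 2.0, 2.0))  # → (207, 127)
```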
Energy Technology Data Exchange (ETDEWEB)
Murata, Isao [Osaka Univ., Suita (Japan); Mori, Takamasa; Nakagawa, Masayuki; Itakura, Hirofumi
1996-03-01
A method to calculate the neutronics parameters of a core composed of randomly distributed spherical fuels has been developed, based on a statistical geometry model with a continuous-energy Monte Carlo method. This method was implemented in the general-purpose Monte Carlo code MCNP, and a new code, MCNP-CFP, has been developed. This paper describes the model, the method and how to use it, and the validation results. In the Monte Carlo calculation, the location of a spherical fuel is sampled probabilistically along the particle flight path from the spatial probability distribution of spherical fuels, called the nearest neighbor distribution (NND). This sampling method was validated through the following two comparisons: (1) calculations of the inventory of coated fuel particles (CFPs) in a fuel compact by both the track length estimator and the direct evaluation method, and (2) criticality calculations for ordered packed geometries. The method was also confirmed by applying it to an analysis of the critical assembly experiment at VHTRC. The method established in the present study is quite unique in providing a probabilistic model of a geometry with a great number of randomly distributed spherical fuels. With the speed-up offered by vector or parallel computation in the future, it is expected to be widely used in the calculation of nuclear reactor cores, especially HTGR cores. (author).
Qin, Shanshan; Ren, Yiran; Fu, Xu; Shen, Jie; Chen, Xin; Wang, Quan; Bi, Xin; Liu, Wenjing; Li, Lixin; Liang, Guangxin; Yang, Cheng; Shui, Wenqing
2015-07-30
The binding affinity of a small-molecule drug candidate to a therapeutically relevant biomolecular target is regarded as the first determinant of the candidate's efficacy. Although the ultrafiltration-LC/MS (UF-LC/MS) assay enables efficient ligand discovery for a specific target from a mixed pool of compounds, most previous analyses allowed only relative affinity ranking of different ligands. Moreover, the reliability of affinity measurement for multiple ligands with UF-LC/MS has hardly been strictly evaluated. In this study, we examined the accuracy of K(d) determination through UF-LC/MS by comparison with classical ITC measurement. A single-point K(d) calculation method was found to be suitable for affinity measurement of multiple ligands bound to the same target when binding competition is minimized. A second workflow based on analysis of the unbound fraction of compounds was then developed, which simplified sample preparation as well as warranted reliable ligand discovery. The new workflow, implemented in a fragment-mixture screen, afforded rapid and sensitive detection of low-affinity ligands selectively bound to the RNA polymerase NS5B of hepatitis C virus. More importantly, ligand identification and affinity measurement for mixture-based fragment screens by UF-LC/MS were in good accordance with single-ligand evaluation by conventional SPR analysis. This new approach is expected to become a valuable addition to the arsenal of high-throughput screening techniques for fragment-based drug discovery.
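Assuming simple 1:1 binding, a single-point Kd estimate from a measured unbound ligand concentration follows directly from the definition Kd = [P][L]/[PL]; a sketch with invented concentrations (not data from the study):

```python
def single_point_kd(l_total, p_total, l_unbound):
    """Single-point Kd estimate from one ultrafiltration-style measurement
    of the unbound ligand: Kd = [P_free][L_free] / [PL].
    Assumes 1:1 binding and negligible competition between ligands."""
    bound = l_total - l_unbound      # [PL], ligand retained with the protein
    p_free = p_total - bound         # [P_free]
    return p_free * l_unbound / bound

# Illustrative numbers in micromolar: 40% of a 10 uM ligand is retained
kd = single_point_kd(l_total=10.0, p_total=8.0, l_unbound=6.0)
print(kd)  # → 6.0, i.e. (8-4)*6/4 uM
```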
Heavy Ion SEU Cross Section Calculation Based on Proton Experimental Data, and Vice Versa
Wrobel, F; Pouget, V; Dilillo, L; Ecoffet, R; Lorfèvre, E; Bezerra, F; Brugger, M; Saigné, F
2014-01-01
The aim of this work is to provide a method to calculate single event upset (SEU) cross sections by using experimental data. Valuable tools such as PROFIT and SIMPA already focus on the calculation of the proton cross section by using heavy-ion cross-section experiments. However, there is no available tool that calculates heavy-ion cross sections based on measured proton cross sections with no knowledge of the technology. We based our approach on the diffusion-collection model, with the aim of analyzing the characteristics of the transient currents that trigger SEUs. We show that experimental cross sections can be used to characterize the pulses that trigger an SEU. Experimental results also allow an empirical rule to be defined to identify the transient currents that are responsible for an SEU. Then, the SEU cross section can be calculated for any kind of particle and any energy, with no need to know the Spice model of the cell. We applied our method to some technologies (250 nm, 90 nm and 65 nm bulk SRAMs) and we sho...
LWR decay heat calculations using a GRS improved ENDF/B-6 based ORIGEN data library
Energy Technology Data Exchange (ETDEWEB)
Hesse, U.; Hummelsheim, K.I.; Kilger, R.; Moser, F.E.; Langenbuch, S. [Gesellschaft fur Anlagen- und Reaktorsicherheit (GRS) mbH, Forschungsinstitute, Garching (Germany)
2008-07-01
The well-known ORNL ORIGEN code is widely used around the world for inventory, activity and decay heat tasks, either stand-alone or implemented in activation, shielding or burn-up systems. More than 1000 isotopes, with more than six coupled neutron capture and radioactive decay channels, are handled simultaneously by the code. The characteristics of the calculated inventories, e.g., masses, activities, neutron and photon source terms or the decay heat during short or long decay time steps, are obtained by summing over all isotopes characterized in the ORIGEN libraries. An extended nuclear GRS-ORIGENX data library has now been developed for practical application. The library was checked for activation tasks of structural material isotopes and for actinide and fission product burn-up calculations, compared with experiments and standard methods. The paper addresses the LWR decay heat calculation features of the new library and shows the differences between dynamic and time-integrated results of ENDF/B-6 based and older ENDF/B-5 based libraries for decay heat tasks, compared to fission burst experiments, ANS curves and some other published data. A multi-group time-exponential evaluation is given for the fission burst power of 235U, 238U, 239Pu and 241Pu, to be used in quick LWR reactor accident decay heat calculation tools. (authors)
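A multi-group time-exponential evaluation of the kind mentioned has the form f(t) = Σᵢ aᵢ·exp(−λᵢ·t), which is trivial to evaluate once the group constants are fitted; a sketch with placeholder group constants (not the library's fitted values):

```python
import math

def fission_burst_power(t, groups):
    """Decay power per fission at time t after a fission burst, as a sum of
    exponential groups: f(t) = sum(a_i * exp(-lam_i * t)).
    The (a_i, lam_i) pairs are placeholders, not evaluated nuclear data."""
    return sum(a * math.exp(-lam * t) for a, lam in groups)

# hypothetical two-group fit (amplitudes and decay constants invented)
groups = [(0.5, 1.0), (0.1, 0.01)]
print(round(fission_burst_power(2.0, groups), 4))  # → 0.1657
```

A real tool would carry one such group set per fissioning nuclide (235U, 238U, 239Pu, 241Pu) and superpose them weighted by fission rates.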
Calculation of the Instream Ecological Flow of the Wei River Based on Hydrological Variation
Directory of Open Access Journals (Sweden)
Shengzhi Huang
2014-01-01
Full Text Available It is of great significance for watershed management departments to reasonably allocate water resources and ensure the sustainable development of river ecosystems, and a key issue is to accurately calculate the instream ecological flow. In order to do so, flow variation is taken into account in this study. The heuristic segmentation algorithm, which is well suited to detecting the mutation points of a flow series, is employed to identify the change points. In addition, based on the law of tolerance and ecological adaptation theory, the maximum instream ecological flow is calculated as the highest-frequency monthly flow based on the GEV distribution, which is well suited to the healthy development of river ecosystems. Furthermore, in order to guarantee the sustainable development of river ecosystems under adverse circumstances, the minimum instream ecological flow is calculated by a modified Tennant method, improved by replacing the average flow with the highest-frequency flow. Since the modified Tennant method better reflects the flow regime, it has physical significance, and the calculation results are more reasonable.
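The modified Tennant idea, replacing the mean flow with the highest-frequency flow, can be sketched as follows; a simple binned mode stands in for the paper's GEV-based estimate, and the flows, bin width and 10% Tennant fraction are illustrative assumptions:

```python
from collections import Counter

def modified_tennant_min_flow(monthly_flows, fraction=0.10, bin_width=5.0):
    """Modified-Tennant sketch: take a Tennant-style fraction of the most
    frequent (modal) monthly flow instead of the mean annual flow.
    A histogram mode approximates the GEV highest-frequency flow."""
    binned = Counter(int(q // bin_width) for q in monthly_flows)
    mode_bin = max(binned, key=binned.get)
    modal_flow = (mode_bin + 0.5) * bin_width  # bin centre
    return fraction * modal_flow

flows = [12, 14, 13, 55, 80, 120, 95, 60, 30, 18, 15, 13]  # m^3/s, invented
print(modified_tennant_min_flow(flows))  # 10% of the modal low-flow bin
```

With these numbers the low-flow months dominate the histogram, so the minimum ecological flow tracks the frequent dry-season flows rather than being inflated by the flood-season mean.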
Classification of neocortical interneurons using affinity propagation
Santana, Roberto; McGarry, Laura M.; Bielza, Concha; Larrañaga, Pedro; Yuste, Rafael
2013-01-01
In spite of over a century of research on cortical circuits, it is still unknown how many classes of cortical neurons exist. In fact, neuronal classification is a difficult problem because it is unclear how to designate a neuronal cell class and what are the best characteristics to define them. Recently, unsupervised classifications using cluster analysis based on morphological, physiological, or molecular characteristics, have provided quantitative and unbiased identification of distinct neuronal subtypes, when applied to selected datasets. However, better and more robust classification methods are needed for increasingly complex and larger datasets. Here, we explored the use of affinity propagation, a recently developed unsupervised classification algorithm imported from machine learning, which gives a representative example or exemplar for each cluster. As a case study, we applied affinity propagation to a test dataset of 337 interneurons belonging to four subtypes, previously identified based on morphological and physiological characteristics. We found that affinity propagation correctly classified most of the neurons in a blind, non-supervised manner. Affinity propagation outperformed Ward's method, a current standard clustering approach, in classifying the neurons into 4 subtypes. Affinity propagation could therefore be used in future studies to validly classify neurons, as a first step to help reverse engineer neural circuits. PMID:24348339
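A minimal NumPy sketch of the affinity-propagation message passing used here (the responsibility and availability updates of Frey and Dueck; the toy data, damping and preference choices are illustrative, not the interneuron dataset):

```python
import numpy as np

def affinity_propagation(s, damping=0.9, iters=200):
    """Minimal affinity-propagation sketch (Frey-Dueck message passing).
    s: similarity matrix with exemplar preferences on its diagonal.
    Returns, for each point, the index of its chosen exemplar."""
    n = s.shape[0]
    r = np.zeros((n, n))  # responsibility messages
    a = np.zeros((n, n))  # availability messages
    for _ in range(iters):
        # r(i,k) = s(i,k) - max_{k' != k} [a(i,k') + s(i,k')]
        m = a + s
        top = np.argmax(m, axis=1)
        first = m[np.arange(n), top]
        m[np.arange(n), top] = -np.inf
        second = m.max(axis=1)
        r_new = s - first[:, None]
        r_new[np.arange(n), top] = s[np.arange(n), top] - second
        r = damping * r + (1 - damping) * r_new
        # a(i,k) = min(0, r(k,k) + sum_{i' not in {i,k}} max(0, r(i',k)))
        rp = np.maximum(r, 0)
        np.fill_diagonal(rp, r.diagonal())
        a_new = rp.sum(axis=0)[None, :] - rp
        diag = a_new.diagonal().copy()   # a(k,k) = sum of positive r(.,k)
        a_new = np.minimum(a_new, 0)
        np.fill_diagonal(a_new, diag)
        a = damping * a + (1 - damping) * a_new
    return np.argmax(a + r, axis=1)

# two well-separated 1D clusters (illustrative data)
x = np.array([0.0, 0.1, 0.2, 5.0, 5.1, 5.2])
s = -(x[:, None] - x[None, :]) ** 2               # negative squared distance
np.fill_diagonal(s, np.median(s[s < 0]))          # preference = median similarity
labels = affinity_propagation(s)
```

Each cluster's label is the index of its exemplar point, which is the "representative example" property the abstract highlights.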
Jacobi Structures on Affine Bundles
Institute of Scientific and Technical Information of China (English)
J. GRABOWSKI; D. IGLESIAS; J. C. MARRERO; E. PADR(O)N; P. URBA(N)SKI
2007-01-01
We study affine Jacobi structures (brackets) on an affine bundle π: A→M, i.e. Jacobi brackets that close on affine functions. We prove that if the rank of A is non-zero, there is a one-to-one correspondence between affine Jacobi structures on A and Lie algebroid structures on the vector bundle A⁺ = ∪_{p∈M} Aff(A_p, R) of affine functionals. In the case rank A = 0, it is shown that there is a one-to-one correspondence between affine Jacobi structures on A and local Lie algebras on A⁺. Some examples and applications, also for the linear case, are discussed. For a special type of affine Jacobi structures which are canonically exhibited (strongly-affine or affine-homogeneous Jacobi structures) over a real vector space of finite dimension, we describe the leaves of its characteristic foliation as the orbits of an affine representation. These affine Jacobi structures can be viewed as an analog of the Kostant-Arnold-Liouville linear Poisson structure on the dual space of a real finite-dimensional Lie algebra.
Effect of composition on antiphase boundary energy in Ni3Al based alloys: Ab initio calculations
Gorbatov, O. I.; Lomaev, I. L.; Gornostyrev, Yu. N.; Ruban, A. V.; Furrer, D.; Venkatesh, V.; Novikov, D. L.; Burlatsky, S. F.
2016-06-01
The effect of composition on the antiphase boundary (APB) energy of Ni-based L12-ordered alloys is investigated by ab initio calculations employing the coherent potential approximation. The calculated APB energies for the {111} and {001} planes reproduce experimental values of the APB energy. The APB energies for the nonstoichiometric γ' phase increase with Al concentration, in line with experiment. The magnitude of the alloying effect on the APB energy correlates with the variation of the ordering energy of the alloy according to the alloying element's position in the 3d row. Elements from the left side of the 3d row increase the APB energy of the Ni-based L12-ordered alloys, while elements from the right side affect it only slightly, except for Ni. A way to predict the effect of an addition on the {111} APB energy in a multicomponent alloy is discussed.
Tian, Pu
2015-01-01
Free energy is arguably the most important thermodynamic property of physical systems. Despite the fact that free energy is a state function, presently available rigorous methodologies, such as those based on thermodynamic integration (TI) or non-equilibrium work (NEW) analysis, involve energetic calculations on path(s) connecting the starting and the end macrostates. Meanwhile, presently widely utilized approximate end-point free energy methods lack rigorous treatment of conformational variation within end macrostates, and are consequently not sufficiently reliable. Here we present an alternative and rigorous end-point free energy calculation formulation based on microscopic configurational space coarse graining, where the configurational space of a high-dimensional system is divided into a large number of sufficiently fine and uniform elements, termed conformers. It was found that the change of free energy is essentially determined by the change in the number of conformers, with an error term that accounts...
Kernel Affine Projection Algorithms
Directory of Open Access Journals (Sweden)
José C. Príncipe
2008-05-01
Full Text Available The combination of the famed kernel trick and affine projection algorithms (APAs) yields powerful nonlinear extensions, named collectively here KAPA. This paper is a follow-up study of the recently introduced kernel least-mean-square algorithm (KLMS). KAPA inherits the simplicity and online nature of KLMS while reducing its gradient noise, boosting performance. More interestingly, it provides a unifying model for several neural network techniques, including kernel least-mean-square algorithms, kernel adaline, sliding-window kernel recursive-least-squares (KRLS), and regularization networks. Therefore, many insights can be gained into the basic relations among them and the tradeoff between computation complexity and performance. Several simulations illustrate its wide applicability.
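As a hedged illustration of the algorithm family KAPA belongs to, here is its simplest member, kernel least-mean-square (the K = 1 special case that KAPA extends with affine projections), on invented toy data:

```python
import numpy as np

def klms_predict(centers, alphas, x, width=1.0):
    """Prediction of a kernel LMS model: f(x) = sum_i alpha_i * k(c_i, x)
    with a Gaussian kernel of the given width."""
    if not centers:
        return 0.0
    c = np.array(centers)
    k = np.exp(-np.sum((c - x) ** 2, axis=1) / (2 * width ** 2))
    return float(np.dot(alphas, k))

def klms_train(xs, ys, step=0.5, width=1.0):
    """Online kernel LMS: each incoming sample becomes a kernel centre
    weighted by step * (prediction error), growing the network as it learns."""
    centers, alphas = [], []
    for x, y in zip(xs, ys):
        e = y - klms_predict(centers, alphas, x, width)
        centers.append(x)
        alphas.append(step * e)
    return centers, alphas

# learn y = sin(x) online from 200 random samples (illustrative)
rng = np.random.default_rng(0)
xs = rng.uniform(-3, 3, size=(200, 1))
ys = np.sin(xs[:, 0])
centers, alphas = klms_train(xs, ys)
err = abs(np.sin(1.0) - klms_predict(centers, alphas, np.array([1.0])))
```

KAPA replaces the single-sample gradient step with an update over the last K samples, which is the gradient-noise reduction the abstract refers to.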
Lin, Lin
2012-01-01
We describe how to apply the recently developed pole expansion plus selected inversion (PEpSI) technique to Kohn-Sham density functional theory (DFT) electronic structure calculations that are based on atomic orbital discretization. We give analytic expressions for evaluating the charge density, total energy, Helmholtz free energy and atomic forces without using the eigenvalues and eigenvectors of the Kohn-Sham Hamiltonian. We also show how to update the chemical potential without using Kohn-Sham...
Rubagotti, Sara; Croci, Stefania; Ferrari, Erika; Iori, Michele; Capponi, Pier C; Lorenzini, Luca; Calzà, Laura; Versari, Annibale; Asti, Mattia
2016-09-06
Curcumin derivatives labelled with fluorine-18 or technetium-99m have recently shown their potential as diagnostic tools for Alzheimer's disease. Nevertheless, no study exploiting labelling with gallium-68 has been performed so far, in spite of its suitable properties (positron emitter, generator-produced radionuclide). Herein, an evaluation of the affinity for synthetic β-amyloid fibrils and for amyloid plaques of three (nat/68)Ga-labelled curcumin analogues, namely curcumin (CUR), bis-dehydroxy-curcumin (bDHC) and diacetyl-curcumin (DAC), was performed. Affinity and specificity were tested in vitro on amyloid synthetic fibrils by using gallium-68 labelled compounds. Post-mortem brain cryosections from Tg2576 mice were used for the ex vivo visualization of amyloid plaques. The affinity of (68)Ga(CUR)₂⁺, (68)Ga(DAC)₂⁺, and (68)Ga(bDHC)₂⁺ for synthetic β-amyloid fibrils was moderate and their uptake could be observed in vitro. On the other hand, amyloid plaques could not be visualized on brain sections of Tg2576 mice after injection, probably due to the low stability of the complexes in vivo and to a hampered passage through the blood-brain barrier. Like curcumin, all (nat/68)Ga-curcuminoid complexes maintain a high affinity for β-amyloid plaques. However, structural modifications are still needed to improve their applicability as radiotracers in vivo.
Integration based profile likelihood calculation for PDE constrained parameter estimation problems
Boiger, R.; Hasenauer, J.; Hroß, S.; Kaltenbacher, B.
2016-12-01
Partial differential equation (PDE) models are widely used in engineering and natural sciences to describe spatio-temporal processes. The parameters of the considered processes are often unknown and have to be estimated from experimental data. Due to partial observations and measurement noise, these parameter estimates are subject to uncertainty. This uncertainty can be assessed using profile likelihoods, a reliable but computationally intensive approach. In this paper, we present the integration-based approach for profile likelihood calculation developed by Chen and Jennrich (2002 J. Comput. Graph. Stat. 11 714-32) and adapt it to inverse problems with PDE constraints. While existing methods for profile likelihood calculation in parameter estimation problems with PDE constraints rely on repeated optimization, the proposed approach exploits a dynamical system evolving along the likelihood profile. We derive the dynamical system for the unreduced estimation problem, prove convergence and study the properties of the integration-based approach for the PDE case. To evaluate the proposed method, we compare it with state-of-the-art algorithms for a simple reaction-diffusion model of a cellular patterning process. We observe good accuracy of the method as well as a significant speed-up compared to established methods. Integration-based profile calculation facilitates rigorous uncertainty analysis for computationally demanding parameter estimation problems with PDE constraints.
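The core idea of tracking the profile with a dynamical system instead of re-optimizing can be illustrated on a toy quadratic objective: differentiating the inner optimality condition dJ/dθ₂ = 0 along the profile gives an ODE for the nuisance parameter. The 2-parameter objective below is hypothetical and stands in for the paper's PDE-constrained setting:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy quadratic objective J(t1, t2) = 0.5 * [t1 t2] A [t1 t2]^T
A = np.array([[2.0, 0.8],
              [0.8, 1.5]])

def profile_rhs(theta1, theta2):
    """Along the profile, dJ/dtheta2 = 0. Differentiating that condition
    w.r.t. theta1 gives H22 * dtheta2/dtheta1 + H21 = 0."""
    h21, h22 = A[1, 0], A[1, 1]
    return np.array([-h21 / h22])

# Evolve the profile from the optimum (0, 0) out to theta1 = 1.
sol = solve_ivp(profile_rhs, (0.0, 1.0), y0=[0.0], t_eval=[1.0])
theta2_profiled = sol.y[0, -1]   # analytic profile is -(A12/A22) * theta1
```

One ODE solve replaces a sequence of constrained re-optimizations, which is where the reported speed-up over repeated optimization comes from.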
Quantification of confounding factors in MRI-based dose calculations as applied to prostate IMRT
Maspero, Matteo; Seevinck, Peter R.; Schubert, Gerald; Hoesl, Michaela A. U.; van Asselen, Bram; Viergever, Max A.; Lagendijk, Jan J. W.; Meijer, Gert J.; van den Berg, Cornelis A. T.
2017-02-01
Magnetic resonance (MR)-only radiotherapy treatment planning requires pseudo-CT (pCT) images to enable MR-based dose calculations. To verify the accuracy of MR-based dose calculations, institutions interested in introducing MR-only planning will have to compare pCT-based and computed tomography (CT)-based dose calculations. However, interpreting such comparison studies may be challenging, since potential differences arise from a range of confounding factors which are not necessarily specific to MR-only planning. Therefore, the aim of this study is to identify and quantify the contribution of factors confounding dosimetric accuracy estimation in comparison studies between CT and pCT. The following factors were distinguished: set-up and positioning differences between imaging sessions, MR-related geometric inaccuracy, pCT generation, use of specific calibration curves to convert pCT into electron density information, and registration errors. The study comprised fourteen prostate cancer patients who underwent CT/MRI-based treatment planning. To enable pCT generation, a commercial solution (MRCAT, Philips Healthcare, Vantaa, Finland) was adopted. IMRT plans were calculated on CT (gold standard) and pCTs. Dose difference maps in a high dose region (CTV) and in the body volume were evaluated, and the contribution to dose errors of possible confounding factors was individually quantified. We found that the largest confounding factor leading to dose difference was the use of different calibration curves to convert pCT and CT into electron density (0.7%). The second largest factor was the pCT generation, which resulted in a pCT stratified into a fixed number of tissue classes (0.16%). Inter-scan differences due to patient repositioning, MR-related geometric inaccuracy, and registration errors did not significantly contribute to dose differences (0.01%). The proposed approach successfully identified and quantified the factors confounding accurate MRI-based dose calculation in
Poltev, V I; Malenkov, G G; Gonzalez, E J; Teplukhin, A V; Rein, R; Shibata, M; Miller, J H
1996-02-01
Hydration properties of individual nucleic acid bases were calculated and compared with the available experimental data. Three sets of classical potential functions (PF) used in simulations of nucleic acid hydration were juxtaposed: (i) the PF developed by Poltev and Malenkov (PM), (ii) the PF of Weiner and Kollman (WK), which together with Jorgensen's TIP3P water model are widely used in the AMBER program, and (iii) OPLS (optimized potentials for liquid simulations) developed by Jorgensen (J). The global minima of interaction energy of single water molecules with all the natural nucleic acid bases correspond to the formation of two water-base hydrogen bonds (water bridging of two hydrophilic atoms of the base). The energy values of these minima calculated via PM potentials are in somewhat better conformity with mass-spectrometric data than the values calculated via WK PF. OPLS gave much weaker water-base interactions for all compounds considered, thus these PF were not used in further computations. Monte Carlo simulations of the hydration of 9-methyladenine, 1-methyluracil and 1-methylthymine were performed in systems with 400 water molecules and periodic boundary conditions. Results of simulations with PM potentials give better agreement with experimental data on hydration energies than WK PF. Computations with PM PF of the hydration energy of keto and enol tautomers of 9-methylguanine can account for the shift in the tautomeric equilibrium of guanine in aqueous media to a dominance of the keto form in spite of nearly equal intrinsic stability of keto and enol tautomers. The results of guanine hydration computations are discussed in relation to mechanisms of base mispairing errors in nucleic acid biosynthesis. The data presented in this paper along with previous results on simulation of hydration shell structures in DNA duplex grooves provide ample evidence for the advantages of PM PF in studies of nucleic-acid hydration.
Band structure calculation of GaSe-based nanostructures using empirical pseudopotential method
Osadchy, A. V.; Volotovskiy, S. G.; Obraztsova, E. D.; Savin, V. V.; Golovashkin, D. L.
2016-08-01
In this paper we present the results of band structure computer simulation of GaSe-based nanostructures using the empirical pseudopotential method. Calculations were performed using specially developed software that allows simulations to run on a computing cluster. This approach significantly reduces the demands on computing resources compared with traditional ab initio techniques while yielding adequate, comparable results. The use of cluster computing makes it possible to treat structures that require an explicit account of a significant number of atoms, such as quantum dots and quantum pillars.
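The empirical pseudopotential method itself is easy to sketch in one dimension: expand in plane waves and diagonalize a Hamiltonian whose off-diagonal elements are empirical Fourier coefficients of the potential. The toy below uses dimensionless units and an illustrative single coefficient V₁, unrelated to the GaSe parameters used in the paper:

```python
import numpy as np

def epm_bands(k, v_fourier, n_g=5):
    """1D empirical-pseudopotential bands (dimensionless units: lattice
    constant a = 1, hbar^2/2m = 1). v_fourier maps |n| -> V_n for the
    reciprocal lattice vectors G = 2*pi*n."""
    ns = np.arange(-n_g, n_g + 1)
    G = 2 * np.pi * ns
    H = np.diag((k + G) ** 2).astype(float)   # kinetic part, diagonal
    for i, ni in enumerate(ns):
        for j, nj in enumerate(ns):
            if i != j:
                H[i, j] = v_fourier.get(abs(ni - nj), 0.0)  # V(G - G')
    return np.linalg.eigvalsh(H)              # sorted eigenvalues

# A weak potential with a single Fourier component V1 opens a gap of
# about 2*|V1| at the zone boundary k = pi (nearly-free-electron limit).
bands = epm_bands(np.pi, {1: 0.5})
gap = bands[1] - bands[0]
```

Only a handful of empirical Fourier coefficients are fitted to experiment, which is what keeps the cost far below ab initio approaches.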
Automated Calculation of Water-equivalent Diameter (DW) Based on AAPM Task Group 220.
Anam, Choirul; Haryanto, Freddy; Widita, Rena; Arif, Idam; Dougherty, Geoff
2016-07-08
The purpose of this study is to accurately and effectively automate the calculation of the water-equivalent diameter (DW) from 3D CT images for estimating the size-specific dose. DW is the metric that characterizes the patient size and attenuation. In this study, DW was calculated for standard CTDI phantoms and patient images. Two types of phantom were used, one representing the head with a diameter of 16 cm and the other representing the body with a diameter of 32 cm. Images of 63 patients were also taken, 32 who had undergone a CT head examination and 31 who had undergone a CT thorax examination. There are three main parts to our algorithm for automated DW calculation. The first part is to read 3D images and convert the CT data into Hounsfield units (HU). The second part is to find the contour of the phantoms or patients automatically. The third part is to automate the calculation of DW based on the automated contouring for every slice (DW,all). The results of this study show that the automated and manual calculations of DW are in good agreement for phantoms and patients, with differences of less than 0.5%. The results also show that estimating DW,all using DW,n=1 (central slice along the longitudinal axis) produces percentage differences of -0.92% ± 3.37% and 6.75% ± 1.92%, and estimating DW,all using DW,n=9 produces percentage differences of 0.23% ± 0.16% and 0.87% ± 0.36%, for thorax and head examinations, respectively. From this study, the percentage differences between the normalized size-specific dose estimate for every slice (nSSDEall) and nSSDEn=1 are 0.74% ± 2.82% and -4.35% ± 1.18% for thorax and head examinations, respectively; between nSSDEall and nSSDEn=9 they are 0.00% ± 0.46% and -0.60% ± 0.24% for thorax and head examinations, respectively.
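The TG 220 quantity itself is straightforward once the body contour is known: the water-equivalent area sums (HU/1000 + 1) over the contoured region, and DW is the diameter of the circle with that area. A simplified single-slice sketch, where a synthetic mask stands in for the automated contouring step:

```python
import numpy as np

def water_equivalent_diameter(ct_slice_hu, pixel_area_mm2, body_mask):
    """Water-equivalent diameter of one axial slice (AAPM TG 220 style):
    Aw = sum over ROI of (HU/1000 + 1) * pixel_area;  Dw = 2*sqrt(Aw/pi)."""
    hu = ct_slice_hu[body_mask]
    water_equiv_area = np.sum(hu / 1000.0 + 1.0) * pixel_area_mm2
    return 2.0 * np.sqrt(water_equiv_area / np.pi)

# Sanity check: a uniform water cylinder (0 HU) must return its own
# diameter, e.g. a 160 mm circle discretized on a 1 mm pixel grid.
size, radius = 200, 80
y, x = np.ogrid[:size, :size]
mask = (x - size / 2) ** 2 + (y - size / 2) ** 2 <= radius ** 2
slice_hu = np.zeros((size, size))
dw = water_equivalent_diameter(slice_hu, 1.0, mask)
```

Applying the same function slice by slice and averaging yields the DW,all quantity evaluated in the study.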
Shevenell, Lisa
1999-03-01
Values of evapotranspiration are required for a variety of water planning activities in arid and semi-arid climates, yet data requirements are often large, and it is costly to obtain this information. This work presents a method where a few, readily available data (temperature, elevation) are required to estimate potential evapotranspiration (PET). A method using measured temperature and the calculated ratio of total to vertical radiation (after the work of Behnke and Maxey, 1969) to estimate monthly PET was applied for the months of April-October and compared with pan evaporation measurements. The test area used in this work was in Nevada, which has 124 weather stations that record sufficient amounts of temperature data. The calculated PET values were found to be well correlated (R2 = 0.940-0.983, slopes near 1.0) with mean monthly pan evaporation measurements at eight weather stations. In order to extrapolate these calculated PET values to areas without temperature measurements and to sites at differing elevations, the state was divided into five regions based on latitude, and linear regressions of PET versus elevation were calculated for each of these regions. These extrapolated PET values generally compare well with the pan evaporation measurements (R2 = 0.926-0.988, slopes near 1.0). The estimated values are generally somewhat lower than the pan measurements, in part because the effects of wind are not explicitly considered in the calculations, and near-freezing temperatures result in a calculated PET of zero at higher elevations in the spring months. The calculated PET values for April-October are 84-100% of the measured pan evaporation values. Using digital elevation models in a geographical information system, calculated values were adjusted for slope and aspect, and the data were used to construct a series of maps of monthly PET. The resultant maps show a realistic distribution of regional variations in PET throughout Nevada which inversely mimics
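The regional extrapolation step is ordinary linear regression of PET against elevation, fitted per latitude band and then evaluated at ungauged sites. A sketch with hypothetical station data (the numbers below are illustrative, not Nevada measurements):

```python
import numpy as np

# Hypothetical station data for one latitude band:
# elevation (m) vs monthly PET (mm).
elevation = np.array([1200.0, 1500.0, 1800.0, 2100.0, 2400.0])
pet = np.array([180.0, 165.0, 150.0, 135.0, 120.0])

# Fit PET = a * elevation + b; PET typically decreases with elevation,
# so the slope a should come out negative.
a, b = np.polyfit(elevation, pet, 1)

def pet_at(elev_m):
    """Extrapolated monthly PET for an ungauged site in this band."""
    return a * elev_m + b

estimate = pet_at(2000.0)
```

Combining such per-region fits with a DEM is what lets the method produce the statewide monthly PET maps.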
Roos, Katarina; Hogner, Anders; Ogg, Derek; Packer, Martin J.; Hansson, Eva; Granberg, Kenneth L.; Evertsson, Emma; Nordqvist, Anneli
2015-12-01
In drug discovery, prediction of binding affinity ahead of synthesis to aid compound prioritization is still hampered by the low throughput of the more accurate methods and the lack of a single method that fits all systems. Here we show the applicability of a method based on density functional theory, using core fragments and a protein model with only the first-shell residues surrounding the core, to predict the relative binding affinity of a matched series of mineralocorticoid receptor (MR) antagonists. Antagonists of MR are used for treatment of chronic heart failure and hypertension. Marketed MR antagonists, spironolactone and eplerenone, are also believed to be highly efficacious in treatment of chronic kidney disease in diabetes patients, but are contraindicated due to the increased risk of hyperkalemia. These findings and a significant unmet medical need among patients with chronic kidney disease continue to stimulate efforts in the discovery of new MR antagonists with maintained efficacy but low or no risk of hyperkalemia. Applied to a matched series of MR antagonists, the quantum-mechanics-based method gave an R2 = 0.76 for experimental lipophilic ligand efficiency versus relative predicted binding affinity calculated with the M06-2X functional in gas phase, and an R2 = 0.64 for experimental binding affinity versus relative predicted binding affinity calculated with the M06-2X functional including an implicit solvation model. The quantum mechanical approach using core fragments was compared to free energy perturbation calculations using the full-sized compound structures.
A GIS-based method for flooded area calculation and damage evaluation
Institute of Scientific and Technical Information of China (English)
Anonymous
2001-01-01
Using geographic information systems to study flooded areas and damage evaluation has been a hotspot in environmental disaster research for years. In this paper, a model for flooded-area calculation and damage evaluation is presented. Flooding is divided into two types: 'source flood' and 'non-source flood'. The source-flood area calculation is based on a seed spread algorithm. The flood damage evaluation is calculated by overlaying the flooded area range with thematic maps and relating the result to other social and economic data. To raise the operational efficiency of the model, a skipping approach is used to speed up the seed spread algorithm, and all thematic maps are converted to raster format before overlay analysis. The accuracy of flooded-area calculation and damage evaluation is mainly dependent upon the resolution and precision of the digital elevation model (DEM) data, upon the accuracy of registering all raster layers, and upon the quality of economic information. This model has been successfully used in the Zhejiang Province Comprehensive Water Management Information System developed by the authors. The applications show that this model is especially useful for most counties of China and other developing countries.
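A seed spread calculation for 'source flood' can be sketched as a breadth-first flood fill over the DEM grid, spreading from the source cell to neighbours whose ground elevation lies below the water level. The sketch below is simplified (4-connectivity, no skipping speed-up):

```python
from collections import deque

def flooded_area(dem, seed, water_level):
    """Source-flood seed spread: starting from the flood source cell,
    spread to 4-connected neighbours whose elevation is below the
    water level. Returns the set of flooded (row, col) cells."""
    rows, cols = len(dem), len(dem[0])
    flooded, queue = set(), deque([seed])
    while queue:
        r, c = queue.popleft()
        if (r, c) in flooded or not (0 <= r < rows and 0 <= c < cols):
            continue
        if dem[r][c] >= water_level:
            continue                      # dry: ground above water level
        flooded.add((r, c))
        queue.extend([(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)])
    return flooded

# A low 2x2 basin surrounded by high ground; water level 3 floods it.
dem = [[5, 5, 5, 5],
       [5, 1, 2, 5],
       [5, 2, 1, 5],
       [5, 5, 5, 5]]
cells = flooded_area(dem, seed=(1, 1), water_level=3)
```

Overlaying the resulting cell set with rasterized thematic maps then yields the damage evaluation described in the abstract.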
Pennec, Fabienne; Alzina, Arnaud; Tessier-Doyen, Nicolas; Naitali, Benoit; Smith, David S.
2012-11-01
This work concerns the calculation of the thermal conductivity of insulating building materials made from plant particles. To determine the type of raw materials, the particle sizes or the volume fractions of plant and binder, a tool dedicated to calculating the thermal conductivity of heterogeneous materials has been developed, using the discrete element method to generate the representative volume element (RVE) and the finite element method to calculate the homogenized properties. A 3D optical scanner has been used to capture plant particle shapes and convert them into clusters of discrete elements. These aggregates are initially randomly distributed without any overlap, then fall into a container under gravity and collide with neighbouring particles according to a velocity Verlet algorithm. Once the RVE is built, the geometry is exported to the open-source Salome-Meca platform to be meshed. The calculation of the effective thermal conductivity of the heterogeneous volume is then performed using a homogenization technique based on an energy method. To validate the numerical tool, thermal conductivity measurements have been performed on sunflower pith aggregates and on packed beds of the same particles. The experimental values compare satisfactorily with a batch of numerical simulations.
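The velocity Verlet step used to settle the particles is a standard integrator. A minimal sketch for a single degree of freedom (the harmonic force is illustrative, not the particle contact model of the paper):

```python
import numpy as np

def velocity_verlet(x, v, accel, dt, n_steps):
    """Velocity Verlet integration:
    x_{n+1} = x_n + v_n*dt + 0.5*a_n*dt^2
    v_{n+1} = v_n + 0.5*(a_n + a_{n+1})*dt"""
    a = accel(x)
    for _ in range(n_steps):
        x = x + v * dt + 0.5 * a * dt ** 2
        a_new = accel(x)                 # force at the updated position
        v = v + 0.5 * (a + a_new) * dt   # average old and new acceleration
        a = a_new
    return x, v

# Harmonic oscillator a = -x: starting at (x, v) = (1, 0), after one
# period (2*pi time units) the particle should return to x = 1, v = 0.
dt = 0.001
x, v = velocity_verlet(1.0, 0.0, lambda x: -x, dt, int(2 * np.pi / dt))
```

The scheme's good long-term energy behaviour is why it is the usual choice for letting discrete-element packings settle under gravity.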
Calculator: A Hardware Design, Math and Software Programming Project Base Learning
Directory of Open Access Journals (Sweden)
F. Criado
2015-03-01
This paper presents the implementation by students of a complex calculator in hardware. The project meets hardware design goals and also strongly motivates students to apply competences learned in other subjects. The learning process associated with system design is hard enough, because students have to deal with parallel execution, signal delay, synchronization, and so on. To strengthen their knowledge of hardware design, a project-based learning (PBL) methodology is proposed. Moreover, it is also used to reinforce cross-curricular subjects such as math and software programming. This methodology creates a course dynamic closer to a professional environment, where students work with software and mathematics to solve hardware design problems. The students design the functionality of the calculator from scratch. They decide which math operations it can perform, the operand format, and how a complex equation is entered into the calculator. This increases the students' intrinsic motivation. In addition, since these choices have consequences for the reliability of the calculator, students are encouraged to implement in software the decisions about the selected mathematical algorithm. Although math and hardware design are two tough subjects for students, their perception at the end of the course is quite positive.
Morenkov, O S; Fodor, N; Fodor, I
1999-01-01
Two indirect ELISAs for the detection of antibodies against glycoprotein E (gE) of Aujeszky's disease virus (ADV) in sera have been developed. The rec-gE-ELISA is based on the E. coli-expressed recombinant protein containing the N-terminal sequences of gE (aa 1-125) fused with the glutathione S-transferase from Schistosoma japonicum. The affi-gE-ELISA is based on native gE, which was purified from virions by affinity chromatography. The tests were optimised and compared with each other, as well as with the recently developed blocking gE-ELISA (Morenkov et al., 1997b), with respect to specificity and sensitivity. The rec-gE-ELISA was less sensitive in detecting ADV-infected animals than the affi-gE-ELISA (sensitivity 80% and 97%, respectively), which is probably due to the lack of conformation-dependent immunodominant epitopes on the recombinant protein expressed in E. coli. The specificity of the rec-gE-ELISA and affi-gE-ELISA was rather moderate (90% and 94%, respectively), because it was necessary to set cut-off values in the tests that provided a maximum level of sensitivity, which obviously increased the incidence of false positive reactions. Though the indirect ELISAs detect antibodies against many epitopes of gE, the blocking gE-ELISA, which detects antibodies against only one immunodominant epitope of gE, showed better test performance (specificity 99% and sensitivity 98%). This is most probably due to the rather high dilutions of the sera used in the indirect gE-ELISAs (1:30) compared to the serum dilution in the blocking gE-ELISA (1:2). We conclude that the indirect gE-ELISAs are sufficiently specific and sensitive to distinguish ADV-infected swine from those vaccinated with gE-negative vaccine and can be useful, particularly the affi-gE-ELISA, as additional tests for the detection of antibodies to gE.
Wang, Junmei; Hou, Tingjun
2012-05-25
It is of great interest in modern drug design to accurately calculate the free energies of protein-ligand or nucleic acid-ligand binding. MM-PBSA (molecular mechanics Poisson-Boltzmann surface area) and MM-GBSA (molecular mechanics generalized Born surface area) have gained popularity in this field. For both methods, the conformational entropy, which is usually calculated through normal-mode analysis (NMA), is needed to calculate absolute binding free energies. Unfortunately, NMA is computationally demanding and becomes a bottleneck of the MM-PB/GBSA-NMA methods. In this work, we have developed a fast approach to estimate the conformational entropy based upon solvent accessible surface area calculations. In our approach, the conformational entropy of a molecule, S, can be obtained by summing up the contributions of all atoms, whether they are buried or exposed. Each atom has two types of surface areas, solvent accessible surface area (SAS) and buried SAS (BSAS). The two types of surface areas are weighted to estimate the contribution of an atom to S. Atoms having the same atom type share the same weight, and a general parameter k is applied to balance the contributions of the two types of surface areas. This entropy model was parametrized using a large set of small molecules for which the conformational entropies were calculated at the B3LYP/6-31G* level taking the solvent effect into account. The weighted solvent accessible surface area (WSAS) model was extensively evaluated in three tests. For convenience, TS values, the product of temperature T and conformational entropy S, were calculated in those tests. T was always set to 298.15 K throughout the text. First of all, good correlations were achieved between WSAS TS and NMA TS for 44 protein or nucleic acid systems sampled with molecular dynamics simulations (10 snapshots were collected for the entropy calculations): the mean squared correlation coefficient (R²) was 0.56. As to the 20 complexes, the TS
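The WSAS form of the estimate follows directly from the description: each atom contributes its type-specific weight times (SAS + k·BSAS), and the contributions are summed. The weights and k below are illustrative placeholders, not the fitted values from the paper:

```python
def wsas_entropy_term(atoms, weights, k):
    """WSAS-style estimate: each atom contributes
    w_type * (SAS + k * BSAS); the sum approximates T*S.
    Weights here are hypothetical; the published model fits them to
    B3LYP-derived conformational entropies."""
    return sum(weights[a["type"]] * (a["sas"] + k * a["bsas"])
               for a in atoms)

atoms = [
    {"type": "C", "sas": 10.0, "bsas": 2.0},   # exposed carbon
    {"type": "C", "sas": 0.0,  "bsas": 12.0},  # buried carbon
    {"type": "O", "sas": 8.0,  "bsas": 1.0},
]
weights = {"C": 0.05, "O": 0.08}   # illustrative, not fitted values
ts = wsas_entropy_term(atoms, weights, k=0.5)
```

Because buried atoms still contribute through the BSAS term, the estimate does not vanish for compact conformations, which is the key difference from surface-only models.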
Energy Technology Data Exchange (ETDEWEB)
Park, Peter C. [Department of Radiation Physics, University of Texas MD Anderson Cancer Center, Houston, Texas (United States); Schreibmann, Eduard; Roper, Justin; Elder, Eric; Crocker, Ian [Department of Radiation Oncology, Winship Cancer Institute of Emory University, Atlanta, Georgia (United States); Fox, Tim [Varian Medical Systems, Palo Alto, California (United States); Zhu, X. Ronald [Department of Radiation Physics, University of Texas MD Anderson Cancer Center, Houston, Texas (United States); Dong, Lei [Scripps Proton Therapy Center, San Diego, California (United States); Dhabaan, Anees, E-mail: anees.dhabaan@emory.edu [Department of Radiation Oncology, Winship Cancer Institute of Emory University, Atlanta, Georgia (United States)
2015-03-15
Purpose: Computed tomography (CT) artifacts can severely degrade dose calculation accuracy in proton therapy. Prompted by the recently increased popularity of magnetic resonance imaging (MRI) in the radiation therapy clinic, we developed an MRI-based CT artifact correction method for improving the accuracy of proton range calculations. Methods and Materials: The proposed method replaces corrupted CT data by mapping CT Hounsfield units (HU number) from a nearby artifact-free slice, using a coregistered MRI. MRI and CT volumetric images were registered with use of 3-dimensional (3D) deformable image registration (DIR). The registration was fine-tuned on a slice-by-slice basis by using 2D DIR. Based on the intensity of paired MRI pixel values and HU from an artifact-free slice, we performed a comprehensive analysis to predict the correct HU for the corrupted region. For a proof-of-concept validation, metal artifacts were simulated on a reference data set. Proton range was calculated using reference, artifactual, and corrected images to quantify the reduction in proton range error. The correction method was applied to 4 unique clinical cases. Results: The correction method resulted in substantial artifact reduction, both quantitatively and qualitatively. On respective simulated brain and head and neck CT images, the mean error was reduced from 495 and 370 HU to 108 and 92 HU after correction. Correspondingly, the absolute mean proton range errors of 2.4 cm and 1.7 cm were reduced to less than 2 mm in both cases. Conclusions: Our MRI-based CT artifact correction method can improve CT image quality and proton range calculation accuracy for patients with severe CT artifacts.
Monzo, Alex; Olajos, Marcell; De Benedictis, Lorenzo; Rivera, Zuly; Bonn, Guenther K; Guttman, András
2008-09-01
As a continuation of our work on boronic acid lectin affinity chromatography (BLAC), in this paper we introduce an automated affinity micropartitioning approach using combined boronic acid and concanavalin A (BLAC/Con A) resin-filled micropipette tips to isolate and enrich human serum glycoproteins. The N-linked oligosaccharides of the partitioned glycoproteins were removed by PNGase F enzyme digestion, followed by 8-aminopyrene-1,3,6-trisulfonic acid labeling. Capillary gel electrophoresis with blue LED-induced fluorescence detection was applied in a multiplexed format for comparative glycan profiling. The efficiency of BLAC affinity micropartitioning was compared with that of the individual lectin and pseudolectin affinity enrichment. Finally, we report on our findings in glycosylation differences in human serum samples from healthy and prostate cancer patients by applying BLAC/Con A micropipette tip-based enrichment and comparative multicapillary gel electrophoresis analysis of the released and labeled glycans.
Adsorption affinity of anions on metal oxyhydroxides
Pechenyuk, S. I.; Semushina, Yu. P.; Kuz'mich, L. F.
2013-03-01
The dependences of the adsorption affinity of anions (phosphate, carbonate, sulfate, chromate, oxalate, tartrate, and citrate) on their geometric characteristics, acid-base properties, and complex-forming ability are generalized. It is shown that adsorption depends on the nature of the anions as well as on the ionic medium and the adsorbent. It is established that the anions generally follow the series of decreasing adsorption affinity: PO4^3-, CO3^2- > C2O4^2-, C(OH)(CH2)2(COO)3^3- (citrate), (CHOH)2(COO)2^2- (tartrate) > CrO4^2- ≫ SO4^2-.
New opioid affinity labels containing maleoyl moiety.
Szatmári, I; Orosz, G; Rónai, A Z; Makó, E; Medzihradszky, K; Borsodi, A
1999-01-01
Opioid receptor binding properties and pharmacological profiles of novel peptides containing a maleoyl function were determined in order to develop new affinity labels. Based on the enkephalin structure, peptide ligands were synthesized and tested. In both in vitro receptor binding experiments and pharmacological studies, all ligands showed agonist character with relatively high affinity (Ki values in the nanomolar range) and good to moderate selectivity. Replacement of Gly2 in the enkephalin frame with D-Ala led to higher affinities with a small decrease in selectivity. The longer peptide chains resulted in compounds with a high percentage (up to 86%) of irreversible binding. The selectivity pattern of the ligands is in good agreement with the data obtained from the pharmacological assays (guinea pig ileum and mouse vas deferens bioassays). The newly synthesized peptides could be used in further studies to determine more detailed characteristics of the ligand-receptor interaction.
Energy Technology Data Exchange (ETDEWEB)
McGee, K P; Lake, D; Mariappan, Y; Manduca, A; Ehman, R L [Department of Radiology, Mayo Clinic College of Medicine, 200 First Street, SW, Rochester, MN 55905 (United States); Hubmayr, R D [Division of Pulmonary and Critical Care Medicine, Department of Internal Medicine, Mayo Clinic College of Medicine, 200 First Street, SW, Rochester, MN 55905 (United States); Ansell, K, E-mail: mcgee.kiaran@mayo.edu [Schaeffer Academy, 2700 Schaeffer Lane NE, Rochester, MN 55906 (United States)
2011-07-21
Magnetic resonance elastography (MRE) is a non-invasive phase-contrast-based method for quantifying the shear stiffness of biological tissues. Synchronous application of a shear wave source and motion encoding gradient waveforms within the MRE pulse sequence enable visualization of the propagating shear wave throughout the medium under investigation. Encoded shear wave-induced displacements are then processed to calculate the local shear stiffness of each voxel. An important consideration in local shear stiffness estimates is that the algorithms employed typically calculate shear stiffness using relatively high signal-to-noise ratio (SNR) MRE images and have difficulties at an extremely low SNR. A new method of estimating shear stiffness based on the principal spatial frequency of the shear wave displacement map is presented. Finite element simulations were performed to assess the relative insensitivity of this approach to decreases in SNR. Additionally, ex vivo experiments were conducted on normal rat lungs to assess the robustness of this approach in low SNR biological tissue. Simulation and experimental results indicate that calculation of shear stiffness by the principal frequency method is less sensitive to extremely low SNR than previously reported MRE inversion methods but at the expense of loss of spatial information within the region of interest from which the principal frequency estimate is derived.
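The principal-frequency idea can be sketched in one dimension: take the spatial spectrum of the displacement profile, read off the dominant spatial frequency, convert it to a wave speed at the drive frequency, and form stiffness as ρc². The parameters below are illustrative, and the sketch is 1D whereas the paper works on full MRE displacement maps:

```python
import numpy as np

def principal_frequency_stiffness(displacement, dx, drive_freq_hz, rho=1000.0):
    """Estimate shear stiffness from the principal spatial frequency of a
    1D shear-wave displacement profile: wavelength = 1/k_peak,
    speed c = f * wavelength, stiffness mu = rho * c**2."""
    n = len(displacement)
    spectrum = np.abs(np.fft.rfft(displacement * np.hanning(n)))
    freqs = np.fft.rfftfreq(n, d=dx)             # cycles per metre
    k_peak = freqs[np.argmax(spectrum[1:]) + 1]  # skip the DC bin
    wave_speed = drive_freq_hz / k_peak
    return rho * wave_speed ** 2

# Synthetic 60 Hz shear wave travelling at 2 m/s -> mu = 1000 * 4 Pa.
dx = 0.001
x = np.arange(0, 0.2, dx)
wavelength = 2.0 / 60.0          # c / f
u = np.sin(2 * np.pi * x / wavelength)
mu = principal_frequency_stiffness(u, dx, 60.0)
```

Because the estimate rests on the location of a spectral peak rather than on local derivatives of the phase, it degrades more gracefully as noise grows, which is the robustness the paper demonstrates at low SNR.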
Miliordos, Evangelos; Xantheas, Sotiris S
2013-08-15
We propose a general procedure for the numerical calculation of harmonic vibrational frequencies that is based on internal coordinates and Wilson's GF methodology via double differentiation of the energy. The internal coordinates are defined as the geometrical parameters of a Z-matrix structure, thus avoiding issues related to their redundancy. Linear arrangements of atoms are described using a dummy atom of infinite mass. The procedure has been automated in FORTRAN90, and its main advantage lies in the nontrivial reduction of the number of single-point energy calculations needed for the construction of the Hessian matrix when compared to the corresponding number using double differentiation in Cartesian coordinates. For molecules of C1 symmetry the computational savings in the energy calculations amount to 36N - 30, where N is the number of atoms, with additional savings when symmetry is present. Typical applications for small and medium-size molecules in their minimum and transition state geometries as well as hydrogen-bonded clusters (water dimer and trimer) are presented. In all cases the frequencies obtained with internal coordinates differ only marginally from those obtained with Cartesian coordinates.
DEFF Research Database (Denmark)
Rees, Stephen Edward; Rychwicka-Kielek, Beate A; Andersen, Bjarne F
2012-01-01
Abstract Background: Repeated arterial puncture is painful. A mathematical method exists for transforming peripheral venous pH, PCO2 and PO2 to arterial values, eliminating the need for arterial sampling. This study evaluates this method for monitoring acid-base and oxygenation status during admission for exacerbation… were assessed with previously defined rules. Differences between maximal changes of calculated and measured values were compared using a t-test, with trends analysed by inspection of plots. Results: Fifty-four patients, median age 67 years (range 62-75), were studied on average 3 days. Mean values of pH, PCO2 and PO2 were 7.432±0.047, 6.8±1.7 kPa and 9.2±1.5 kPa, respectively. Calculated and measured arterial pH and PCO2 agreed well, differences having small bias and SD (0.000±0.022 pH, -0.06±0.50 kPa PCO2), significantly better than venous blood alone. Calculated PO2 obeyed the clinical rules…
Fernández-Fernández, Mario; Rodríguez-González, Pablo; García Alonso, J Ignacio
2016-10-01
We have developed a novel, rapid and easy calculation procedure for Mass Isotopomer Distribution Analysis based on multiple linear regression which allows the simultaneous calculation of the precursor pool enrichment and the fraction of newly synthesized labelled proteins (fractional synthesis) using linear algebra. To test this approach, we used the peptide RGGGLK as a model tryptic peptide containing three subunits of glycine. We selected glycine labelled in two ¹³C atoms (¹³C₂-glycine) as the labelled amino acid to demonstrate that spectral overlap is not a problem in the proposed methodology. The developed methodology was tested first in vitro by changing the precursor pool enrichment from 10 to 40% of ¹³C₂-glycine. Secondly, a simulated in vivo synthesis of proteins was designed by combining the natural-abundance RGGGLK peptide and 10 or 20% ¹³C₂-glycine at 1:1, 1:3 and 3:1 ratios. Precursor pool enrichments and fractional synthesis values were calculated with satisfactory precision and accuracy using a simple spreadsheet. This novel approach can provide a relatively rapid and easy means to measure protein turnover based on stable isotope tracers. Copyright © 2016 John Wiley & Sons, Ltd.
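The linear-algebra core of the approach can be sketched as an ordinary least-squares problem: the observed isotopomer distribution is modelled as a mixture of basis distributions. The distributions and fractions below are invented for illustration and are not the paper's data.

```python
import numpy as np

# Hypothetical isotopomer distributions (assumptions for illustration only):
# the "basis" distributions of the pre-existing (natural abundance) and newly
# synthesised (labelled-precursor) peptide; entries are abundances M0..M4.
d_natural = np.array([0.80, 0.15, 0.04, 0.01, 0.00])
d_labelled = np.array([0.30, 0.25, 0.25, 0.15, 0.05])

# Simulate an observed spectrum: 70% pre-existing + 30% newly synthesised.
f_true = 0.30
observed = (1 - f_true) * d_natural + f_true * d_labelled

# Multiple linear regression (least squares) recovers the mixing fractions,
# mirroring the spreadsheet-friendly linear-algebra route of the abstract.
A = np.column_stack([d_natural, d_labelled])
coeffs, *_ = np.linalg.lstsq(A, observed, rcond=None)
fractional_synthesis = coeffs[1] / coeffs.sum()
print(round(fractional_synthesis, 3))  # prints 0.3
```

In practice the labelled basis distribution itself depends on the precursor pool enrichment, which is why the paper solves for both quantities simultaneously.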
Calculation of the similarity rate between images based on the local minima present therein
Directory of Open Access Journals (Sweden)
K. Hourany
2016-12-01
Hourany, K., Benmeddour, F., Moulin, E., Assaad, J. and Zaatar, Y. Calculation of the similarity rate between images based on the local minima present therein. 2016. Lebanese Science Journal, 17(2): 177-192. Image processing is a vast field spanning both computer science and applied mathematics. It studies the enhancement and transformation of digital images, permitting the improvement of image quality and the extraction of information. The comparison of digital images is a paramount issue that has been discussed in several studies because of its various applications, especially in the field of control and surveillance, such as Structural Health Monitoring using acoustic waves. Digital representation of images makes it possible to compare them automatically, notably by distinguishing differences between images and quantifying them. In this study we present an algorithm for calculating the similarity rate between images based on the local minima present therein. The algorithm is divided into two main parts: the first explains how to extract the local minima from an image, and the second shows how to calculate the similarity rate between two images.
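A minimal sketch of the two parts of such an algorithm, local-minima extraction followed by a similarity rate, might look as follows. The 4-connected strict-minimum definition and the Jaccard-style rate are assumptions for illustration; the paper's exact definitions may differ.

```python
import numpy as np

def local_minima(img):
    """Return the set of (row, col) positions that are strict local minima
    with respect to their 4-connected neighbours (interior pixels only)."""
    mins = set()
    for i in range(1, img.shape[0] - 1):
        for j in range(1, img.shape[1] - 1):
            v = img[i, j]
            if v < img[i-1, j] and v < img[i+1, j] and v < img[i, j-1] and v < img[i, j+1]:
                mins.add((i, j))
    return mins

def similarity_rate(img_a, img_b):
    """Fraction of local minima common to both images (Jaccard index);
    an illustrative stand-in for the paper's similarity measure."""
    ma, mb = local_minima(img_a), local_minima(img_b)
    if not ma and not mb:
        return 1.0
    return len(ma & mb) / len(ma | mb)

a = np.array([[9, 9, 9, 9],
              [9, 1, 9, 9],
              [9, 9, 9, 9],
              [9, 9, 2, 9]])
print(similarity_rate(a, a))  # identical images share all minima -> 1.0
```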
GPU-based Monte Carlo radiotherapy dose calculation using phase-space sources
Townson, Reid W.; Jia, Xun; Tian, Zhen; Jiang Graves, Yan; Zavgorodni, Sergei; Jiang, Steve B.
2013-06-01
A novel phase-space source implementation has been designed for graphics processing unit (GPU)-based Monte Carlo dose calculation engines. Short of full simulation of the linac head, using a phase-space source is the most accurate method to model a clinical radiation beam in dose calculations. However, in GPU-based Monte Carlo dose calculations where the computation efficiency is very high, the time required to read and process a large phase-space file becomes comparable to the particle transport time. Moreover, due to the parallelized nature of GPU hardware, it is essential to simultaneously transport particles of the same type and similar energies but separated spatially to yield a high efficiency. We present three methods for phase-space implementation that have been integrated into the most recent version of the GPU-based Monte Carlo radiotherapy dose calculation package gDPM v3.0. The first method is to sequentially read particles from a patient-dependent phase-space and sort them on-the-fly based on particle type and energy. The second method supplements this with a simple secondary collimator model and fluence map implementation so that patient-independent phase-space sources can be used. Finally, as the third method (called the phase-space-let, or PSL, method) we introduce a novel source implementation utilizing pre-processed patient-independent phase-spaces that are sorted by particle type, energy and position. Position bins located outside a rectangular region of interest enclosing the treatment field are ignored, substantially decreasing simulation time with little effect on the final dose distribution. The three methods were validated in absolute dose against BEAMnrc/DOSXYZnrc and compared using gamma-index tests (2%/2 mm above the 10% isodose). It was found that the PSL method has the optimal balance between accuracy and efficiency and thus is used as the default method in gDPM v3.0. Using the PSL method, open fields of 4 × 4, 10 × 10 and 30 × 30 cm
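The sorting and region-of-interest ideas behind the PSL method can be sketched with toy data. The record layout, bin edges and ROI below are assumptions for illustration, not the gDPM implementation.

```python
import numpy as np

# Toy phase-space records: (particle type, energy in MeV, x position in cm).
# Values are illustrative; real phase-space files follow standard formats.
rng = np.random.default_rng(0)
n = 1000
ptype = rng.integers(0, 2, n)          # 0 = photon, 1 = electron
energy = rng.uniform(0.0, 6.0, n)
x = rng.uniform(-20.0, 20.0, n)

# Pre-process into "phase-space-lets": sort by type, then energy bin, then
# position bin, so the GPU can transport batches of similar particles together.
e_bin = np.digitize(energy, np.linspace(0.0, 6.0, 7))
x_bin = np.digitize(x, np.linspace(-20.0, 20.0, 9))
order = np.lexsort((x_bin, e_bin, ptype))  # ptype is the primary key

# Discard position bins outside a region of interest enclosing the field,
# which is what gives the PSL method its speedup.
in_roi = np.abs(x[order]) <= 10.0
psl = order[in_roi]
print(len(psl), "of", n, "particles kept")
```

Grouping particles this way reduces thread divergence on the GPU because each warp processes particles undergoing similar physics.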
Fully converged plane-wave-based self-consistent GW calculations of periodic solids
Cao, Huawei; Yu, Zhongyuan; Lu, Pengfei; Wang, Lin-Wang
2017-01-01
The GW approximation is a well-known method to obtain the quasiparticle and spectral properties of systems ranging from molecules to solids. In practice, GW calculations are often employed with many different approximations and truncations. In this work, we describe the implementation of a fully self-consistent GW approach based on the solution of the Dyson equation using a plane wave basis set. Algorithmic, numerical, and technical details of the self-consistent GW approach are presented. The fully self-consistent GW calculations are performed for GaAs, ZnO, and CdS including semicores in the pseudopotentials. No further approximations and truncations apart from the truncation of the plane wave basis set are made in our implementation of the GW calculation. After adopting a special potential technique, a ~100 Ry energy cutoff can be used without loss of accuracy. We found that the self-consistent GW (sc-GW) significantly overestimates the bulk band gaps, and this overestimation is likely due to the underestimation of the macroscopic dielectric constants. On the other hand, the sc-GW accurately predicts the d-state positions, most likely because the d-state screening does not sensitively depend on the macroscopic dielectric constant. Our work indicates the need to include the high-order vertex term in order for many-body perturbation theory to accurately predict semiconductor band gaps. It also sheds some light on why, in some cases, the G0W0 bulk calculation is more accurate than the fully self-consistent GW calculation: the initial density-functional theory has a better dielectric constant compared to experiments.
Hypothesis testing and power calculations for taxonomic-based human microbiome data.
Directory of Open Access Journals (Sweden)
Patricio S La Rosa
This paper presents new biostatistical methods for the analysis of microbiome data based on a fully parametric approach using all the data. The Dirichlet-multinomial distribution allows the analyst to calculate power and sample sizes for experimental design, perform tests of hypotheses (e.g., compare microbiomes across groups), and estimate parameters describing microbiome properties. The use of a fully parametric model for these data has the benefit over alternative non-parametric approaches, such as bootstrapping and permutation testing, that it retains more of the information contained in the data. This paper details the statistical approaches for several tests of hypothesis and power/sample size calculations, and applies them for illustration to taxonomic abundance distribution and rank abundance distribution data, using HMP Jumpstart data on 24 subjects for saliva, subgingival, and supragingival samples. Software for running these analyses is available.
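The parametric model at the heart of these methods can be sketched by simulating Dirichlet-multinomial taxa counts, the building block for the power and sample-size calculations. The taxa count, read depth and alpha parameters below are invented for illustration.

```python
import numpy as np

def rdirichlet_multinomial(rng, n_reads, alpha, n_samples):
    """Draw taxa-count vectors from a Dirichlet-multinomial: each sample's
    taxa proportions come from Dirichlet(alpha), then counts are multinomial."""
    props = rng.dirichlet(alpha, size=n_samples)
    return np.array([rng.multinomial(n_reads, p) for p in props])

# Illustrative parameters (assumed, not from the paper): 5 taxa, overdispersed.
alpha = np.array([10.0, 5.0, 3.0, 1.0, 1.0])
rng = np.random.default_rng(42)
counts = rdirichlet_multinomial(rng, n_reads=500, alpha=alpha, n_samples=200)

# Mean proportions approach alpha / alpha.sum() as the sample count grows,
# the property that the parametric tests and power calculations build on.
print(np.round(counts.mean(axis=0) / 500, 2))
```

A Monte Carlo power calculation would repeat this simulation under two different alpha vectors and count how often the chosen test rejects the null.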
A Force-Based Grid Manipulator for ALE Calculations in a Lobe Pump
Institute of Scientific and Technical Information of China (English)
John Vande Voorde; Jan Vierendeels; Erik Dick
2003-01-01
In this paper, a time-dependent calculation of the flow in a lobe pump is presented. Calculations are performed using the arbitrary Lagrangian-Eulerian (ALE) method. A grid manipulator is needed to move the nodes between time steps. The grid manipulator used is based on the pseudo-force idea: each node is fictitiously connected to its 8 neighbours via fictitious springs, and the equilibrium of the resulting pseudo spring forces defines the altered position of the node. The grid manipulator was coupled with a commercial flow solver, and the combination was tested on the flow through a three-lobe lobe pump. Results were obtained for a rotational speed of 460 rpm with incompressible silicone oil as the fluid.
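The pseudo-force idea can be sketched in a few lines: with equal-stiffness springs to all 8 neighbours, the zero-net-force position of a node is the mean of its neighbours, so relaxing the grid amounts to iterated neighbour averaging. This is a simplified stand-in with fixed boundaries; the paper's manipulator handles moving lobe-pump geometries.

```python
import numpy as np

def spring_relax(x, y, n_iter=200):
    """Relax interior grid nodes to the equilibrium of fictitious
    equal-stiffness springs joining each node to its 8 neighbours;
    boundary nodes stay fixed. (A toy version of the pseudo-force idea.)"""
    for _ in range(n_iter):
        xo, yo = x.copy(), y.copy()
        for new, old in ((x, xo), (y, yo)):
            # zero net spring force <=> node sits at the mean of its 8 neighbours
            new[1:-1, 1:-1] = (old[:-2, :-2] + old[:-2, 1:-1] + old[:-2, 2:] +
                               old[1:-1, :-2] + old[1:-1, 2:] +
                               old[2:, :-2] + old[2:, 1:-1] + old[2:, 2:]) / 8.0
    return x, y

# Distort one interior node of a 5x5 unit grid and let the springs pull it back.
gx, gy = np.meshgrid(np.arange(5.0), np.arange(5.0))
gx[2, 2] += 0.8
gx, gy = spring_relax(gx, gy)
print(round(gx[2, 2], 2))
```

Because linear coordinate fields are fixed points of the 8-neighbour average, the distorted node relaxes back toward the regular grid position.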
Liu, Jicheng; Huang, Kama; Guo, Lanting; Zhang, Hong; Hu, Yayi
2005-04-01
The intent of this paper is to locate the activation point in Transcranial Magnetic Stimulation (TMS) efficiently. A scheme of a torus-shaped coil array is presented to obtain an electromagnetic field distribution with ideal focusing capability. An improved adaptive genetic algorithm (AGA) is then applied to optimize both the magnitude and the phase of the current fed into each coil. Based on the calculated results for the optimized current configurations, the focusing capability is illustrated with contour lines and 3-D mesh charts of the magnitudes of both the magnetic and electric fields within the calculation area. It is shown that the coil array is well able to establish a focused electromagnetic field distribution. In addition, it is also demonstrated that the coil array can focus on two or more targets simultaneously.
GPU-based fast Monte Carlo simulation for radiotherapy dose calculation
Jia, Xun; Graves, Yan Jiang; Folkerts, Michael; Jiang, Steve B
2011-01-01
Monte Carlo (MC) simulation is commonly considered to be the most accurate dose calculation method in radiotherapy. However, its efficiency still requires improvement for many routine clinical applications. In this paper, we present our recent progress towards the development of a GPU-based MC dose calculation package, gDPM v2.0. It utilizes the parallel computation ability of a GPU to achieve high efficiency, while maintaining the same particle transport physics as in the original DPM code and hence the same level of simulation accuracy. In GPU computing, divergence of execution paths between threads can considerably reduce the efficiency. Since photons and electrons undergo different physics and hence follow different execution paths, we use a simulation scheme where photon transport and electron transport are separated to partially relieve the thread divergence issue. A high performance random number generator and hardware linear interpolation are also utilized. We have also developed various components to hand...
Feng, Chi; Li, Dong; Gao, Shan; Daniel, Ketui
2016-11-01
This paper presents CFD (Computational Fluid Dynamics) simulation and experimental results for the reflected radiation error from turbine vanes when measuring turbine blade temperature with a pyrometer. An accurate reflection model based on discrete irregular surfaces is established, and a double contour integral method is used to calculate the view factor between the irregular surfaces. The calculated reflected radiation error was found to change with the relative position between blades and vanes, as the temperature distributions of vanes and blades were simulated using CFD. Simulation results indicated that when the vane suction surface temperature ranged from 860 K to 1060 K and the blade pressure surface average temperature was 805 K, the pyrometer measurement error can reach up to 6.35%. Experimental results show that the maximum pyrometer absolute error for three different targets on the blade decreases from 6.52%, 4.15% and 1.35% to 0.89%, 0.82% and 0.69%, respectively, after error correction.
Phase-only stereoscopic hologram calculation based on Gerchberg-Saxton iterative algorithm
Xia, Xinyi; Xia, Jun
2016-09-01
A phase-only computer-generated holography (CGH) calculation method for stereoscopic holography is proposed in this paper. The two-dimensional (2D) perspective projection views of the three-dimensional (3D) object are generated by computer graphics rendering techniques. Based on these views, a phase-only hologram is calculated using the Gerchberg-Saxton (GS) iterative algorithm. Compared with the non-iterative algorithm used in conventional stereoscopic holography, the proposed method improves the holographic image quality, especially for a phase-only hologram encoded from the complex distribution. Both simulation and optical experiment results demonstrate that the proposed method gives higher-quality reconstruction than the traditional method. Project supported by the National Basic Research Program of China (Grant No. 2013CB328803) and the National High Technology Research and Development Program of China (Grant Nos. 2013AA013904 and 2015AA016301).
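The Gerchberg-Saxton loop used to obtain a phase-only hologram can be sketched as alternating amplitude constraints in the hologram and image planes. The target pattern, grid size and iteration count below are arbitrary choices for illustration; a single FFT stands in for the actual propagation model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Target intensity in the image plane: a bright square on black
# (a stand-in for one stereoscopic perspective view).
target = np.zeros((64, 64))
target[24:40, 24:40] = 1.0
target_amp = np.sqrt(target)

# Gerchberg-Saxton: force unit amplitude (phase-only) in the hologram plane,
# force the target amplitude in the image plane, keep the phases each time.
phase = rng.uniform(0, 2 * np.pi, target.shape)
for _ in range(50):
    field = np.exp(1j * phase)                      # phase-only hologram
    img = np.fft.fft2(field)
    img = target_amp * np.exp(1j * np.angle(img))   # impose target amplitude
    phase = np.angle(np.fft.ifft2(img))             # keep only the phase

recon = np.abs(np.fft.fft2(np.exp(1j * phase)))
recon /= recon.max()
corr = np.corrcoef(recon.ravel(), target_amp.ravel())[0, 1]
print("reconstruction/target correlation:", round(corr, 2))
```

The reconstruction carries speckle, as is typical for phase-only holograms, but correlates strongly with the target after a few tens of iterations.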
Dose calculation using a numerical method based on Haar wavelets integration
Energy Technology Data Exchange (ETDEWEB)
Belkadhi, K., E-mail: khaled.belkadhi@ult-tunisie.com [Unité de Recherche de Physique Nucléaire et des Hautes Énergies, Faculté des Sciences de Tunis, Université Tunis El-Manar (Tunisia); Manai, K. [Unité de Recherche de Physique Nucléaire et des Hautes Énergies, Faculté des Sciences de Tunis, Université Tunis El-Manar (Tunisia); College of Science and Arts, University of Bisha, Bisha (Saudi Arabia)
2016-03-11
This paper deals with the calculation of the absorbed dose in a gamma-ray irradiation cell. Direct measurement and simulation have been shown to be expensive and time consuming. An alternative to these two approaches is a numerical method: a quick and efficient way to estimate the absorbed dose by approximating the photon flux at a specific point in space. To validate the numerical integration method based on Haar wavelets for absorbed dose estimation, a study with many configurations was performed. The results obtained with the Haar wavelet method showed very good agreement with the simulation, highlighting good efficacy and acceptable accuracy. - Highlights: • A numerical integration method using Haar wavelets is detailed. • The absorbed dose is estimated with the Haar wavelet method. • The calculated absorbed dose using Haar wavelets is compared with Monte Carlo simulation using Geant4.
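The reduction that makes Haar-based integration cheap can be sketched directly: every Haar wavelet integrates to zero, so the integral of a level-J Haar approximation collapses to the mean of the function over the dyadic grid times the interval length. The test integrand below is an arbitrary choice, not the paper's flux kernel.

```python
import numpy as np

def haar_integrate(f, a, b, level=10):
    """Estimate the integral of f on [a, b] from its Haar expansion at the
    given resolution. The detail (wavelet) terms integrate to zero, so the
    integral reduces to the mean of the piecewise-constant approximation
    times (b - a); samples are taken at dyadic interval midpoints."""
    n = 2 ** level
    x = a + (b - a) * (np.arange(n) + 0.5) / n
    return (b - a) * np.mean(f(x))

est = haar_integrate(np.sin, 0.0, np.pi)
print(round(est, 4))  # -> 2.0 (exact value of the integral of sin on [0, pi])
```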
Nguyen van Ye, Romain; Del-Castillo-Negrete, Diego; Spong, D.; Hirshman, S.; Farge, M.
2008-11-01
A limitation of particle-based transport calculations is the noise due to limited statistical sampling; thus, a key element for the success of these calculations is the development of efficient denoising methods. Here we discuss denoising techniques based on Proper Orthogonal Decomposition (POD) and Wavelet Decomposition (WD). The goal is the reconstruction of smooth (denoised) particle distribution functions from discrete particle data obtained from Monte Carlo simulations. In 2-D, the POD method is based on low-rank truncations of the singular value decomposition of the data. For 3-D we propose the use of a generalized low-rank matrix approximation technique. The WD denoising is based on the thresholding of empirical wavelet coefficients [Donoho et al., 1996]. The methods are illustrated and tested with Monte Carlo particle simulation data of plasma collisional relaxation, including pitch angle and energy scattering. As an application we consider guiding-center transport with collisions in a magnetically confined plasma in toroidal geometry. The proposed noise reduction methods make it possible to achieve high levels of smoothness in the particle distribution function using significantly fewer particles in the computations.
Proton affinities of candidates for positively charged ambient ions in boreal forests
Ruusuvuori, K.; Kurtén, T.; Ortega, I. K.; Faust, J.; Vehkamäki, H.
2013-10-01
The optimized structures and proton affinities of a total of 81 nitrogen-containing bases, chosen based on field measurements of ambient positive ions, were studied using the CBS-QB3 quantum chemical method. The results were compared to values given in the National Institute of Standards and Technology (NIST) Chemistry WebBook in cases where a value was listed. The computed values show good agreement with the values listed in NIST. Grouping the molecules based on their molecular formula, the largest calculated proton affinities for each group were also compared with experimentally observed ambient cation concentrations in a boreal forest. This comparison allows us to draw qualitative conclusions about the relative ambient concentrations of different nitrogen-containing organic base molecules.
Proton affinities of candidates for positively charged ambient ions in the boreal forest
Directory of Open Access Journals (Sweden)
K. Ruusuvuori
2013-04-01
The optimized structures and proton affinities of a total of 81 nitrogen-containing bases, chosen based on field measurements of ambient positive ions, were studied using the CBS-QB3 quantum chemical method. The results were compared to values given in the National Institute of Standards and Technology (NIST) Chemistry WebBook in cases where a value was listed. The computed values show good agreement with the values listed in NIST. Grouping the molecules based on their molecular formula, the largest calculated proton affinities for each group were also compared with experimentally observed ambient cation concentrations in the boreal forest. This comparison allows us to draw qualitative conclusions about the relative ambient concentrations of different nitrogen-containing organic base molecules.
Affine Transformation Based Image Mosaics
Institute of Scientific and Technical Information of China (English)
王建华
2003-01-01
When an image mosaic is produced, the registration transformation between adjacent frames is traditionally solved by optimization. This approach is slow, computationally heavy, and sometimes gets stuck in local minima. This paper proposes solving the transformation with an affine transformation model. The method automatically produces a solution even for frames with large offsets and greatly speeds up the mosaicking process, making it well suited to fast, real-time image mosaic construction.
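The affine model can be sketched as a linear least-squares fit to point correspondences between adjacent frames; no iterative optimization is needed, which is the source of the speedup described above. The synthetic correspondences below are for illustration (feature matching is assumed to have been done already).

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares fit of a 2-D affine transform mapping src -> dst points.
    Solves dst = src @ A.T + t for the 2x2 matrix A and translation t."""
    n = len(src)
    M = np.hstack([src, np.ones((n, 1))])       # [x, y, 1] design matrix
    params, *_ = np.linalg.lstsq(M, dst, rcond=None)
    A, t = params[:2].T, params[2]
    return A, t

# Synthetic correspondences under a known affine transform (illustrative).
rng = np.random.default_rng(3)
src = rng.uniform(0, 100, (20, 2))
A_true = np.array([[0.9, -0.2], [0.2, 0.9]])
t_true = np.array([5.0, -3.0])
dst = src @ A_true.T + t_true

A_est, t_est = fit_affine(src, dst)
print(np.allclose(A_est, A_true), np.allclose(t_est, t_true))
```

With noisy correspondences the same call returns the best-fit transform in the least-squares sense, which is why the closed-form solve avoids the local minima of iterative registration.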
Effects of CT based Voxel Phantoms on Dose Distribution Calculated with Monte Carlo Method
Institute of Scientific and Technical Information of China (English)
Chen Chaobin; Huang Qunying; Wu Yican
2005-01-01
A few CT-based voxel phantoms were produced to investigate the sensitivity of Monte Carlo simulations of X-ray and electron beams to the proportions of elements and the mass densities of the materials used to represent the patient's anatomical structure. The human body can be well outlined by air, lung, adipose, muscle, soft bone and hard bone for calculating the dose distribution with the Monte Carlo method. Based on our investigation, the effects of the calibration curves established using various CT scanners are not clinically significant. The deviation in the cumulative dose-volume histogram values derived from CT-based voxel phantoms is less than 1% for the given target.
Directory of Open Access Journals (Sweden)
N. S. Labidi
2013-01-01
The semiempirical AM1 SCF method is used to study the first static hyperpolarizabilities β of some novel mono-O-hydroxy bidentate Schiff bases, in which electron-donating (D) and electron-accepting (A) groups were introduced on either side of the Schiff base ring system. The geometries of all molecules were optimized at the semiempirical AM1 level. The first static hyperpolarizabilities of these molecules were calculated using the Hyperchem package. To understand this phenomenon in the context of the molecular orbital picture, we examined the molecular HOMO and LUMO generated via Hyperchem. The study reveals that the mono-O-hydroxy bidentate Schiff bases have large β values and hence may have potential applications in the development of nonlinear optical materials.
Improving iterative surface energy balance convergence for remote sensing based flux calculation
Dhungel, Ramesh; Allen, Richard G.; Trezza, Ricardo
2016-04-01
A modification of the iterative procedure for the surface energy balance is proposed to expedite the convergence of the Monin-Obukhov stability correction utilized in remote-sensing-based flux calculation. This was demonstrated using ground-based weather stations as well as gridded weather data (North American Regional Reanalysis) and remote sensing images (Landsat 5, 7). The study was conducted for different land-use classes in southern Idaho and northern California for multiple satellite overpasses. The convergence behavior of a selected Landsat pixel, as well as of all Landsat pixels within the area of interest, was analyzed. The modified version needed several times fewer iterations than the current iterative technique. At times of low wind speed (~1.3 m/s), the current iterative technique was not able to find a solution of the surface energy balance for all of the Landsat pixels, while the modified version achieved one in a few iterations. The study will help many operational evapotranspiration models avoid non-convergence at low wind speeds, which in turn increases the accuracy of flux calculations.
Calculation of grey level co-occurrence matrix-based seismic attributes in three dimensions
Eichkitz, Christoph Georg; Amtmann, Johannes; Schreilechner, Marcellus Gregor
2013-10-01
Seismic interpretation can be supported by seismic attribute analysis. Common seismic attributes use mathematical relationships based on the geometry and physical properties of the subsurface to reveal features of interest, but they are mostly not capable of describing the spatial arrangement of depositional facies or reservoir properties. Textural attributes such as the grey level co-occurrence matrix (GLCM) and its derived attributes are able to describe the spatial dependencies of seismic facies. The GLCM, primarily used for 2D data, is a measure of how often different combinations of pixel brightness values occur in an image. We present in this paper a workflow for fully three-dimensional calculation of GLCM-based seismic attributes that also considers the structural dip of the seismic data. In our GLCM workflow we consider all 13 possible space directions to determine GLCM-based attributes. The developed workflow is applied to various seismic datasets, and the results of the GLCM calculation are compared to common seismic attributes such as coherence.
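The 2-D building block, a GLCM for one displacement vector plus a derived attribute, can be sketched as follows; the full workflow repeats this for all 13 independent 3-D directions. The toy image and the contrast attribute are illustration choices.

```python
import numpy as np

def glcm(img, di, dj, levels):
    """Grey level co-occurrence matrix for one displacement (di, dj):
    counts how often grey level i co-occurs with grey level j at that
    offset, then normalises to probabilities (2-D case)."""
    m = np.zeros((levels, levels))
    rows, cols = img.shape
    for i in range(rows):
        for j in range(cols):
            i2, j2 = i + di, j + dj
            if 0 <= i2 < rows and 0 <= j2 < cols:
                m[img[i, j], img[i2, j2]] += 1
    return m / m.sum()

img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 2, 2, 2],
                [2, 2, 3, 3]])
p = glcm(img, 0, 1, levels=4)       # horizontal displacement (0, 1)

# A derived textural attribute: contrast = sum_ij (i - j)^2 p(i, j).
ii, jj = np.indices(p.shape)
contrast = np.sum((ii - jj) ** 2 * p)
print(round(contrast, 3))  # -> 0.583 (= 7/12 for this toy image)
```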
GPU-based ultra fast dose calculation using a finite pencil beam model
Gu, Xuejun; Men, Chunhua; Pan, Hubert; Majumdar, Amitava; Jiang, Steve B
2009-01-01
Online adaptive radiation therapy (ART) is an attractive concept that promises the ability to deliver an optimal treatment in response to inter-fraction variability in patient anatomy. However, it has yet to be realized due to technical limitations. Fast dose deposition coefficient calculation is a critical component of the online planning process required for plan optimization of intensity-modulated radiation therapy (IMRT). Computer graphics processing units (GPUs) are well suited to provide the requisite fast performance for the data-parallel nature of dose calculation. In this work, we develop a dose calculation engine based on a finite-size pencil beam (FSPB) algorithm and a GPU parallel computing framework. The developed framework can accommodate any FSPB model. We test our implementation on a case of a water phantom and a case of a prostate cancer patient with varying beamlet and voxel sizes. All testing scenarios achieved speedups ranging from 200 to 400 times when using an NVIDIA Tesla C1060 card...
A Novel Sub-Pixel Matching Algorithm Based on Phase Correlation Using Peak Calculation
Xie, Junfeng; Mo, Fan; Yang, Chao; Li, Pin; Tian, Shiqiang
2016-06-01
The matching accuracy of homonymy points of stereo images is a key issue in photogrammetry, since it influences the geometrical accuracy of the image products. This paper presents a novel sub-pixel matching method, phase correlation using peak calculation, to improve the matching accuracy. The theoretical centre of the peak, corresponding to the sub-pixel deviation, is acquired by Peak Calculation (PC) from the inherent geometrical relationship of the inverse normalized cross-power spectrum. Mismatching points are rejected by two strategies: a window constraint, designed from the matching window and a geometric constraint, and a correlation coefficient threshold, which is effective for removing mismatching points in satellite images. After this step, many high-precision homonymy points remain. Lastly, three experiments are performed to verify the accuracy and efficiency of the presented method. The results show that the presented method outperforms traditional phase correlation matching methods based on surface fitting in both accuracy and efficiency, and the accuracy of the proposed phase correlation matching algorithm can reach 0.1 pixel with higher calculation efficiency.
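The integer-pixel stage of phase correlation, locating the peak of the inverse normalized cross-power spectrum, can be sketched as below; the sub-pixel peak calculation the paper contributes would then interpolate the true peak centre from neighbouring samples. The synthetic image and shift are illustration choices.

```python
import numpy as np

def phase_correlate(a, b):
    """Integer-pixel shift estimate between images a and b via the peak of
    the inverse normalised cross-power spectrum. (Sub-pixel refinement, as
    in the paper, would interpolate around this peak.)"""
    F = np.fft.fft2(a) * np.conj(np.fft.fft2(b))
    F /= np.abs(F) + 1e-12                     # normalise -> pure phase
    corr = np.fft.ifft2(F).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # map peaks in the upper half of each axis back to negative shifts
    shifts = [p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape)]
    return tuple(shifts)

rng = np.random.default_rng(5)
img = rng.random((64, 64))
shifted = np.roll(img, (3, -7), axis=(0, 1))   # known circular shift
print(phase_correlate(shifted, img))           # recovers (3, -7)
```

Because the normalised cross-power spectrum is a pure phase ramp, its inverse transform is a sharp delta at the shift, which is what makes the subsequent peak-centre calculation well posed.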
Institute of Scientific and Technical Information of China (English)
王学滨
2004-01-01
A method for calculating the temperature distribution in an adiabatic shear band is proposed in terms of gradient-dependent plasticity, where the characteristic length describes the interactions among microstructures. First, the increment of the plastic shear strain distribution in the adiabatic shear band is obtained based on gradient-dependent plasticity. Then, the plastic work distribution is derived according to the current flow shear stress and the obtained increment of the plastic shear strain distribution. In light of the well-known assumption that 90% of the plastic work is converted into heat, resulting in a temperature increase in the adiabatic shear band, the increment of the temperature distribution is presented. Next, the average temperature increment in the shear band is calculated to compute the change in flow shear stress due to the thermal softening effect. After the actual flow shear stress accounting for thermal softening is obtained from the Johnson-Cook constitutive relation, the increments of the plastic shear strain distribution, the plastic work and the temperature in the next time step are recalculated, until the total time is consumed. Summing the increments yields the total temperature distribution. The calculated maximum temperature in an adiabatic shear band in titanium agrees with experimental observations. Moreover, the temperature profiles for different flow shear stresses are qualitatively consistent with experimental and numerical results. Effects of some related parameters on the temperature distribution are also predicted.
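A minimal sketch of the heat-generation loop described above, assuming a uniform plastic shear strain increment per step and illustrative titanium-like parameters. The spatial profile from gradient-dependent plasticity, which is the paper's actual contribution, is not modeled; this only shows the work-to-heat-to-softening feedback.

```python
# Per time step: plastic work w = tau * dgamma; 90% is converted to heat.
# All parameter values below are illustrative assumptions, not the paper's.
rho, c = 4510.0, 520.0     # titanium density [kg/m^3], specific heat [J/(kg K)]
beta = 0.9                 # fraction of plastic work converted to heat
tau0, T0 = 500e6, 293.0    # initial flow shear stress [Pa], ambient temp [K]
Tm, m = 1941.0, 1.0        # melting temp [K], thermal-softening exponent

T, tau = T0, tau0
for _ in range(100):
    dgamma = 0.01                          # plastic shear strain increment
    dT = beta * tau * dgamma / (rho * c)   # adiabatic temperature rise
    T += dT
    # Johnson-Cook style thermal softening of the flow stress
    tau = tau0 * (1.0 - ((T - T0) / (Tm - T0)) ** m)
print(round(T, 1))
```

Each pass mirrors one step of the paper's scheme: work from the current flow stress heats the band, and the heated band softens the flow stress used in the next step.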
Energy Technology Data Exchange (ETDEWEB)
Lin, Lin; Chen, Mohan; Yang, Chao; He, Lixin
2012-02-10
We describe how to apply the recently developed pole expansion plus selected inversion (PEpSI) technique to Kohn-Sham density functional theory (DFT) electronic structure calculations that are based on atomic orbital discretization. We give analytic expressions for evaluating the charge density, total energy, Helmholtz free energy and atomic forces without using the eigenvalues and eigenvectors of the Kohn-Sham Hamiltonian. We also show how to update the chemical potential without using Kohn-Sham eigenvalues. The advantage of using PEpSI is that it has a much lower computational complexity than that associated with the matrix diagonalization procedure. We demonstrate the performance gain by comparing the timing of PEpSI with that of diagonalization on insulating and metallic nanotubes. For these quasi-1D systems, the complexity of PEpSI is linear with respect to the number of atoms. This linear scaling can be observed in our computational experiments when the number of atoms in a nanotube is larger than a few hundred. Both the wall clock time and the memory requirement of PEpSI are modest. This even makes it possible to perform Kohn-Sham DFT calculations for 10,000-atom nanotubes on a single processor. We also show that the use of PEpSI does not lead to loss of the accuracy required in a practical DFT calculation.
Excel pour l'ingénieur bases, graphiques, calculs, macros, VBA
Bellan, Philippe
2010-01-01
Excel, used by every owner of a personal computer to perform elementary manipulations of tables and numbers, is in reality a much more powerful tool, with capabilities that are often unrecognized. To all those, science students, engineering students or practising engineers, who thought numerical computation was possible only with heavy and costly software, this book shows that a great number of the engineer's common mathematical problems can be solved numerically using the calculation tools and graphing capability of Excel. To this end, after introducing the basic notions, the book describes the functions available in Excel, then some simple numerical methods for calculating integrals, solving differential equations, obtaining solutions of linear and nonlinear systems, and treating optimization problems... The numerical methods presented, which are very simple, can...
Fast calculation of computer-generated hologram using run-length encoding based recurrence relation.
Nishitsuji, Takashi; Shimobaba, Tomoyoshi; Kakue, Takashi; Ito, Tomoyoshi
2015-04-20
Computer-generated holograms (CGHs) can be generated by superimposing zoneplates. A zoneplate is a grating that concentrates incident light into a point. Since a zoneplate has circular symmetry, we previously reported an algorithm that rapidly generates a zoneplate by drawing concentric circles using computer graphics techniques. However, that algorithm required random memory access, which degraded its computational efficiency. In this study, we propose a fast CGH generation algorithm without random memory access, using a run-length encoding (RLE) based recurrence relation. As a result, we reduced the calculation time by 88% compared with that of the previous work.
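The circular symmetry that both the circle-drawing and the RLE-based algorithms exploit can be seen in a brute-force sketch: the binary zoneplate pattern depends on the squared radius only, and on an integer grid r² follows the recurrence (x+1)² = x² + 2x + 1, the kind of incremental update such methods build on. The wavelength, pixel pitch and distance below are illustrative assumptions.

```python
import numpy as np

# Fresnel zoneplate: phase = pi * r^2 / (wavelength * z). Circular symmetry
# means the pattern is a function of r^2 alone, which is what fast
# zoneplate-drawing and RLE recurrence methods exploit.
N = 256
wavelength = 633e-9   # HeNe-like wavelength [m] (illustrative)
pitch = 10e-6         # pixel pitch [m] (illustrative)
z = 0.1               # distance to the point source [m] (illustrative)

y, x = np.mgrid[-N // 2:N // 2, -N // 2:N // 2]
r2 = (x * pitch) ** 2 + (y * pitch) ** 2
phase = np.pi * r2 / (wavelength * z)
hologram = (np.cos(phase) > 0).astype(np.uint8)   # binary amplitude CGH
print(hologram.shape)
```

A full CGH would superimpose one such zoneplate per object point; the fast algorithms avoid evaluating the transcendental phase at every pixel.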
Energy Technology Data Exchange (ETDEWEB)
Cenerino, G. [CEA Centre d'Etudes de Fontenay-aux-Roses, 92 (France). Dept. de Protection de l'Environnement et des Installations; Chevalier, P.Y.; Fischer, E. [Thermodata, 38 - Saint-Martin-d'Heres (France); Marbeuf, A. [Centre National de la Recherche Scientifique (CNRS), 92 - Meudon-Bellevue (France). Lab. de Magnetisme et de Physique du Solide; Frenk, A. [Ecole Polytechnique Federale, Lausanne (Switzerland); Vahlas, C. [Laboratoire Marcel Mathieu, Centre Helioparc, 64 - Pau (France)
1992-12-31
Since 1974, Thermodata has been working on the development of an Integrated Information System in Inorganic Chemistry. A major effort has been devoted to the thermochemical data assessment of both pure substances and multicomponent solution phases. The available data bases are connected to powerful calculation codes (GEMINI = Gibbs Energy Minimizer), which make it possible to determine the thermodynamic equilibrium state in multicomponent systems. The high interest of such an approach is illustrated by recent applications in fields as varied as semiconductors, chemical vapor deposition, hard alloys and nuclear safety. (author). 26 refs., 6 figs.
The Calculation Model for Operation Cost of Coal Resources Development Based on ANN
Institute of Scientific and Technical Information of China (English)
刘海滨
2004-01-01
Based on an analysis and selection of the factors influencing the operation cost of coal resources development, a fuzzy set method and an artificial neural network (ANN) were adopted to set up a classification analysis model of coal resources, and the collected samples were classified with this model. Meanwhile, a pattern recognition model for classifying the coal resources was built according to the factors influencing operation cost. Building on these results and on the theory of information diffusion, a calculation model for the operation cost of coal resources development is presented and applied in practice, showing that these models are reasonable.
GPU-based fast Monte Carlo dose calculation for proton therapy.
Jia, Xun; Schümann, Jan; Paganetti, Harald; Jiang, Steve B
2012-12-07
Accurate radiation dose calculation is essential for successful proton radiotherapy. Monte Carlo (MC) simulation is considered to be the most accurate method. However, the long computation time prevents its routine clinical application. Recently, graphics processing units (GPUs) have been widely used to accelerate computationally intensive tasks in radiotherapy. We have developed a fast MC dose calculation package, gPMC, for proton dose calculation on a GPU. In gPMC, proton transport is modeled by the class II condensed history simulation scheme with a continuous slowing down approximation. Ionization and elastic and inelastic proton-nucleus interactions are considered. Energy straggling and multiple scattering are modeled. Secondary electrons are not transported and their energies are locally deposited. After an inelastic nuclear interaction event, a variety of products are generated using an empirical model. Among them, charged nuclear fragments are terminated with their energy locally deposited. Secondary protons are stored in a stack and transported after transport of the primary protons is finished, while secondary neutral particles are neglected. gPMC is implemented on the GPU under the CUDA platform. We have validated gPMC using the TOPAS/Geant4 MC code as the gold standard. For various cases, including homogeneous and inhomogeneous phantoms as well as a patient case, good agreement between gPMC and TOPAS/Geant4 is observed. The gamma passing rate for the 2%/2 mm criterion is over 98.7% in the region with dose greater than 10% of the maximum dose in all cases, excluding low-density air regions. With gPMC it takes only 6-22 s to simulate 10 million source protons to achieve ∼1% relative statistical uncertainty, depending on the phantom and energy. This is an extremely high efficiency compared to the computational time of tens of CPU hours for TOPAS/Geant4. Our fast GPU-based code can thus facilitate the routine use of MC dose calculation in proton therapy.
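The 2%/2 mm gamma passing rate used in the validation can be made concrete with a 1-D global gamma index: each reference point searches the evaluated profile for the smallest combined dose-difference and distance-to-agreement. The profiles and the 0.5 mm shift below are illustrative, not gPMC data.

```python
import numpy as np

def gamma_1d(ref, evl, x, dd=0.02, dta=2.0):
    """1-D global gamma index: dd is the dose-difference criterion as a
    fraction of the max reference dose, dta the distance-to-agreement
    criterion in the same units as x. gamma <= 1 means 'pass'."""
    gam = np.empty_like(ref)
    dmax = ref.max()
    for i, (xi, di) in enumerate(zip(x, ref)):
        dose_term = (evl - di) / (dd * dmax)   # normalized dose differences
        dist_term = (x - xi) / dta             # normalized distances
        gam[i] = np.sqrt(dose_term ** 2 + dist_term ** 2).min()
    return gam

x = np.linspace(0.0, 100.0, 201)              # positions [mm]
ref = np.exp(-((x - 50.0) / 15.0) ** 2)       # reference dose profile
evl = np.exp(-((x - 50.5) / 15.0) ** 2)       # evaluated profile, 0.5 mm off
passing = (gamma_1d(ref, evl, x) <= 1.0).mean() * 100.0
print(f"{passing:.1f}% passing")
```

A 0.5 mm shift sits well inside the 2 mm distance-to-agreement, so essentially every point passes; a shift beyond 2 mm would drive the passing rate down in the high-gradient regions.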
Ordinary matter in nonlinear affine gauge theories of gravitation
Tiemblo, A; Tresguerres, R
1994-01-01
We present a general framework to include ordinary fermionic matter in the metric-affine gauge theories of gravity. It is based on a nonlinear gauge realization of the affine group, with the Lorentz group as the classification subgroup of the matter and gravitational fields.
Striving for Empathy: Affinities, Alliances and Peer Sexuality Educators
Fields, Jessica; Copp, Martha
2015-01-01
Peer sexuality educators' accounts of their work reveal two approaches to empathy with their students: affinity and alliance. "Affinity-based empathy" rests on the idea that the more commonalities sexuality educators and students share (or perceive they share), the more they will be able to empathise with one another, while…
High affinity retinoic acid receptor antagonists: analogs of AGN 193109.
Johnson, A T; Wang, L; Gillett, S J; Chandraratna, R A
1999-02-22
A series of high affinity retinoic acid receptor (RAR) antagonists were prepared based upon the known antagonist AGN 193109 (2). Introduction of various phenyl groups revealed a preference for substitution at the para-position relative to the meta-site. Antagonists with the highest affinities for the RARs possessed hydrophobic groups; however, the presence of polar functionality was also well tolerated.
Classification of neocortical interneurons using affinity propagation
Directory of Open Access Journals (Sweden)
Roberto eSantana
2013-12-01
In spite of over a century of research on cortical circuits, it is still unknown how many classes of cortical neurons exist. Neuronal classification has been a difficult problem because it is unclear what a neuronal cell class actually is and what the best characteristics are to define one. Recently, unsupervised classifications using cluster analysis based on morphological, physiological or molecular characteristics, when applied to selected datasets, have provided quantitative and unbiased identification of distinct neuronal subtypes. However, better and more robust classification methods are needed for increasingly complex and larger datasets. We explored the use of affinity propagation, a recently developed unsupervised classification algorithm imported from machine learning, which gives a representative example, or exemplar, for each cluster. As a case study, we applied affinity propagation to a test dataset of 337 interneurons belonging to four subtypes, previously identified based on morphological and physiological characteristics. We found that affinity propagation correctly classified most of the neurons in a blind, non-supervised manner. In fact, using a combined anatomical/physiological dataset, our algorithm differentiated parvalbumin from somatostatin interneurons in 49 out of 50 cases. Affinity propagation could therefore be used in future studies to validly classify neurons, as a first step to help reverse engineer neural circuits.
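Affinity propagation itself is compact enough to sketch: "responsibility" and "availability" messages are exchanged on a similarity matrix until exemplars emerge. Below is a minimal NumPy rendition of the Frey-Dueck updates applied to toy 1-D data standing in for neuron feature vectors; the damping, iteration count and data are illustrative assumptions, not the study's setup.

```python
import numpy as np

def affinity_propagation(S, damping=0.9, iters=200):
    """Minimal affinity propagation sketch. S is a similarity matrix whose
    diagonal holds the 'preference' (willingness to be an exemplar)."""
    n = S.shape[0]
    R = np.zeros((n, n))  # responsibilities r(i,k)
    A = np.zeros((n, n))  # availabilities a(i,k)
    for _ in range(iters):
        # r(i,k) = s(i,k) - max_{k' != k} (a(i,k') + s(i,k'))
        AS = A + S
        idx = np.argmax(AS, axis=1)
        first = AS[np.arange(n), idx].copy()
        AS[np.arange(n), idx] = -np.inf
        second = AS.max(axis=1)
        Rnew = S - first[:, None]
        Rnew[np.arange(n), idx] = S[np.arange(n), idx] - second
        R = damping * R + (1 - damping) * Rnew
        # a(i,k) = min(0, r(k,k) + sum of positive r(i',k)), special diagonal
        Rp = np.maximum(R, 0)
        np.fill_diagonal(Rp, R.diagonal())
        Anew = Rp.sum(axis=0)[None, :] - Rp
        dA = Anew.diagonal().copy()
        Anew = np.minimum(Anew, 0)
        np.fill_diagonal(Anew, dA)
        A = damping * A + (1 - damping) * Anew
    return np.argmax(A + R, axis=1)   # exemplar chosen by each point

# two well-separated 1-D "feature" clusters (illustrative data)
pts = np.array([0.0, 0.1, 0.2, 5.0, 5.1, 5.2])
S = -np.abs(pts[:, None] - pts[None, :]) ** 2   # negative squared distance
np.fill_diagonal(S, np.median(S))               # preference = median similarity
labels = affinity_propagation(S)
print(labels)
```

Unlike k-means, the number of clusters is not fixed in advance; it falls out of the preference values on the diagonal, which is why the method suits datasets where the number of neuron classes is itself unknown.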
SDT: a virus classification tool based on pairwise sequence alignment and identity calculation.
Directory of Open Access Journals (Sweden)
Brejnev Muhizi Muhire
The perpetually increasing rate at which viral full-genome sequences are being determined is creating a pressing demand for computational tools that will aid the objective classification of these genome sequences. Taxonomic classification approaches that are based on pairwise genetic identity measures are potentially highly automatable and are progressively gaining favour with the International Committee on Taxonomy of Viruses (ICTV). There are, however, various issues with the calculation of such measures that could potentially undermine the accuracy and consistency with which they can be applied to virus classification. Firstly, pairwise sequence identities computed from multiple sequence alignments rather than from multiple independent pairwise alignments can lead to the deflation of identity scores with increasing dataset sizes. Also, when gap characters need to be introduced during sequence alignments to account for insertions and deletions, methodological variations in the way that these characters are introduced and handled during pairwise genetic identity calculations can cause high degrees of inconsistency in the way that different methods classify the same sets of sequences. Here we present the Sequence Demarcation Tool (SDT), a free, user-friendly computer program that aims to provide a robust and highly reproducible means of objectively using pairwise genetic identity calculations to classify any set of nucleotide or amino acid sequences. SDT can produce publication-quality pairwise identity plots and colour-coded distance matrices to further aid the classification of sequences according to ICTV-approved taxonomic demarcation criteria. Besides a graphical interface version of the program for Windows computers, command-line versions are available for a variety of operating systems (including a parallel version for cluster computing platforms).
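The gap-handling choices highlighted above can be made concrete with a toy identity function over a single pairwise alignment. The convention used here (columns where both sequences have a gap are ignored, columns with a single gap count as mismatches) is one of several possibilities, not necessarily SDT's exact rule.

```python
def pairwise_identity(a, b):
    """Fraction of identical positions between two aligned sequences.
    Both-gap columns are skipped; single-gap columns count as mismatches.
    This is one gap-handling convention among the several the paper
    notes can change classification results."""
    assert len(a) == len(b), "sequences must be aligned to equal length"
    matches = total = 0
    for x, y in zip(a, b):
        if x == '-' and y == '-':
            continue              # neither sequence has a residue here
        total += 1
        if x == y:
            matches += 1
    return matches / total

# toy aligned nucleotide sequences (illustrative)
print(round(pairwise_identity("ATG-CGTA", "ATGACGTT"), 3))  # → 0.75
```

Counting the single-gap column as a match, or excluding it from the denominator, would yield different identity scores for the same alignment, which is exactly the inconsistency the tool is designed to standardize away.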
A selection method for the calculation of preliminary risk-based remediation goals
Energy Technology Data Exchange (ETDEWEB)
Mahoney, L.A.; Batey, J.C.; Pintenich, J.L. [Eckenfelder Inc., Nashville, TN (United States)
1995-12-31
In the process of deriving acceptable concentrations of chemical constituents (or preliminary risk-based remediation goals, PRGs) for hazardous and other waste sites based on the site risk assessment results, it may be necessary or desirable to select a subset of constituents on which to focus the remainder of the site activities, including the feasibility study and, possibly, remedial design and verification sampling. Use of a focused set of action or clean-up goals offers the benefits of targeting the site areas where efforts should be concentrated and reducing the cost and complexity of clean-up and verification sampling. Although the federal Superfund risk assessment guidance provides methods by which to calculate PRGs, no information is given on how to select the chemicals for which PRGs should be generated. A method for this selection is presented which establishes: the media of interest; the populations for which PRGs should be generated; the relevant exposure route(s) for a given population to be used in calculating PRGs; and the individual constituents for which PRGs should be estimated. To illustrate this selection process, remedial investigation (RI) data and a baseline risk assessment for a hazardous waste site in Mississippi were used. The media of interest were identified as surface water and sediment from a creek that is adjacent to the site, on-site surface water, and groundwater from the uppermost aquifer. Of the 45 constituents detected in site-related waters, this selection process resulted in 16 for which PRGs were calculated, which served to focus the subsequent feasibility study efforts.
Directory of Open Access Journals (Sweden)
Sandeep Kumar Mishra
2017-03-01
The combined utility of many one- and two-dimensional NMR methodologies and DFT-based theoretical calculations has been exploited to detect intramolecular hydrogen bonds (HBs) in a number of different organic fluorine-containing derivatives, viz. benzanilides, hydrazides, imides, benzamides, and diphenyloxamides. The existence of two- and three-centered hydrogen bonds has been convincingly established in the investigated molecules. The NMR spectral parameters, viz., couplings mediated through the hydrogen bond, one-bond NH scalar couplings, and the variation of NH proton chemical shifts with physical parameters, have paved the way for understanding the presence of hydrogen bonds involving organic fluorine in all the investigated molecules. The experimental NMR findings are further corroborated by DFT-based theoretical calculations, including NCI, QTAIM, MD simulations and NBO analysis. Monitoring of H/D exchange by NMR spectroscopy established the effect of the intramolecular HB and the influence of the electronegativity of various substituents on the chemical kinetics in a number of organic building blocks. The utility of the DQ-SQ technique in obtaining information about the HB in various fluorine-substituted molecules has also been convincingly established.
Numerical Calculations of WR-40 Boiler Based on its Zero-Dimensional Model
Directory of Open Access Journals (Sweden)
Hernik Bartłomiej
2014-06-01
Generally, the temperature of flue gases at the furnace outlet is not measured, so a special computation procedure is needed to determine it. This paper presents a method for coordinating the numerical model of a pulverised-fuel boiler furnace chamber with the measuring data in a situation when CFD calculations are made for the furnace only. The paper recommends the use of the classical zero-dimensional balance model of a boiler, based on measuring data. The average temperature of flue gases at the furnace outlet tk'' obtained using this model may be considered highly reliable, and the numerical model has to reproduce the same value of tk''. Calculations are presented for a WR-40 boiler. The CFD model was matched to the zero-dimensional tk'' value by adjusting the furnace wall emissivity. As a result of the CFD modelling, the flue gas temperature and the concentrations of CO, CO2, O2 and NOx were obtained at the furnace chamber outlet. The results of numerical modelling of boiler combustion based on volumetric reactions and using the Finite-Rate/Eddy-Dissipation model are presented.
Directory of Open Access Journals (Sweden)
Feifei Fu
2014-01-01
Life cycle thinking has become widely applied in the assessment of building environmental performance, and various tools have been developed to support the application of the life cycle assessment (LCA) method. This paper focuses on the carbon emission during the building construction stage. A partial LCA framework is established to assess the carbon emission in this phase. Furthermore, five typical LCA tool programs are compared and analyzed to demonstrate the current application of LCA tools and their limitations in the building construction stage. Based on the analysis of existing tools and sustainability demands in building, a new computer calculation system has been developed to calculate the carbon emission and optimize sustainability during the construction stage. The system structure and detailed functions are described in this paper. Finally, a case study is analyzed to demonstrate the designed LCA framework and system functions. The case is based on a typical building in the UK, comparing a masonry-wall plan with a timber-frame plan. The final results show that a timber-frame wall has less embodied carbon emission than a similar masonry structure; a 16% reduction was found in this study.
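The embodied-carbon comparison at the heart of the case study reduces to summing quantity times emission factor over each bill of materials. The masses and factors below are invented placeholders (so the resulting percentage differs from the paper's 16%); a real calculation would use measured quantities and a published emission-factor database.

```python
# Embodied carbon of a wall build-up: mass [t] x factor [tCO2e/t], summed.
# All masses and emission factors are illustrative placeholders.
masonry = {"brick": (12.0, 0.24), "mortar": (2.0, 0.16), "plaster": (1.0, 0.13)}
timber  = {"timber": (4.0, 0.10), "insulation": (1.5, 0.30), "board": (2.0, 0.39)}

def embodied_co2(materials):
    # sum of mass x emission factor over the bill of materials
    return sum(mass * factor for mass, factor in materials.values())

m, t = embodied_co2(masonry), embodied_co2(timber)
print(f"masonry {m:.2f} tCO2e, timber {t:.2f} tCO2e, "
      f"saving {100 * (m - t) / m:.0f}%")
```

The partial-LCA framework in the paper wraps the same arithmetic with system boundaries (which construction-stage processes to include) and data sources for the factors.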
BaTiO3-based nanolayers and nanotubes: first-principles calculations.
Evarestov, Robert A; Bandura, Andrei V; Kuruch, Dmitrii D
2013-01-30
First-principles calculations using a hybrid exchange-correlation functional and a localized atomic basis set are performed for BaTiO3 (BTO) nanolayers and nanotubes (NTs), with structure optimization. Both the cubic and the ferroelectric BTO phases are used for modeling the nanolayers and NTs. It follows from the calculations that nanolayers of the different ferroelectric BTO phases have practically identical surface energies and are more stable than nanolayers of the cubic phase. Thin nanosheets composed of three or more dense layers of the (0 1 0) and (0 1 1̄) faces preserve the ferroelectric displacements inherent to the initial bulk phase. The structure and stability of BTO single-wall NTs depend on the original bulk crystal phase and the wall thickness. The majority of the considered NTs with low formation and strain energies have a mirror plane perpendicular to the tube axis and therefore cannot exhibit ferroelectricity. The NTs folded from (0 1 1̄) layers may show an antiferroelectric arrangement of Ti-O bonds. Comparison of the stability of the BTO-based and SrTiO3-based NTs shows that the former are more stable than the latter.
EXAFS simulations in Zn-doped LiNbO3 based on defect calculations
Valerio, Mário E. G.; Jackson, Robert A.; Bridges, Frank G.
2017-02-01
Lithium niobate, LiNbO3, is an important technological material with good electro-optic, acousto-optic, elasto-optic, piezoelectric and nonlinear properties. EXAFS on Zn-doped LiNbO3 found strong evidence that Zn substitutes primarily at the Li site in highly doped samples. In this work the EXAFS results were revisited using a different approach, in which the models for simulating the EXAFS spectra were obtained from the output of defect calculations. The strategy uses the relaxed positions of the ions surrounding the dopants to generate a cluster from which the EXAFS oscillations can be calculated. The defect involves not only the possible Zn substitution at either the Li or the Nb site but also the charge-compensating defects, when needed. From previous defect modelling, a subset of defects was selected based on the energetics of defect production in the LiNbO3 lattice. From these, all possible clusters were generated and the simulated EXAFS spectra were computed and compared to the EXAFS results available in the literature. Based on this comparison, different models could be proposed to explain the behaviour of Zn in the LiNbO3 matrix.
Directory of Open Access Journals (Sweden)
Hiroaki Miyagawa
2013-07-01
This paper proposes a method for three-dimensional gait analysis using wearable sensors and quaternion calculations. Seven sensor units, each consisting of a tri-axial acceleration sensor and a tri-axial gyro sensor, were fixed to the lower limbs. The acceleration and angular velocity data of each sensor unit were measured during level walking. The initial orientations of the sensor units were estimated using acceleration data during an upright standing posture, and the angular displacements were subsequently estimated using the angular velocity data during gait. An algorithm based on quaternion calculation was implemented for orientation estimation of the sensor units. The orientations of the sensor units were converted to the orientations of the body segments by a rotation matrix obtained from a calibration trial. The body segment orientations were then used to construct a three-dimensional wire-frame animation of the volunteers during gait. Gait analysis was conducted on five volunteers, and the results were compared with those from a camera-based motion analysis system. Comparisons were made for the joint trajectories in the horizontal and sagittal planes. The average RMSE and correlation coefficient (CC) were 10.14 deg and 0.98, 7.88 deg and 0.97, and 9.75 deg and 0.78 for the hip, knee and ankle flexion angles, respectively.
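The quaternion-based orientation update, integrating body-frame angular velocity samples into an orientation estimate, can be sketched as below. A single ideal gyro rotating about one axis is assumed; real gait data would add the accelerometer-based initialization and the calibration-trial rotation described above.

```python
import numpy as np

def quat_mult(q, r):
    """Hamilton product of quaternions stored as [w, x, y, z]."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return np.array([
        w1 * w2 - x1 * x2 - y1 * y2 - z1 * z2,
        w1 * x2 + x1 * w2 + y1 * z2 - z1 * y2,
        w1 * y2 - x1 * z2 + y1 * w2 + z1 * x2,
        w1 * z2 + x1 * y2 - y1 * x2 + z1 * w2])

def integrate_gyro(omega, dt, q0=np.array([1.0, 0.0, 0.0, 0.0])):
    """Propagate orientation from body-frame angular velocity samples
    [rad/s]: q_{k+1} = q_k * dq, with dq the rotation over one step dt."""
    q = q0.copy()
    for w in omega:
        angle = np.linalg.norm(w) * dt
        if angle > 0:
            axis = w / np.linalg.norm(w)
            dq = np.concatenate(([np.cos(angle / 2)], np.sin(angle / 2) * axis))
            q = quat_mult(q, dq)
            q /= np.linalg.norm(q)   # re-normalize against drift
    return q

# 1 s of rotation about the z axis at 90 deg/s, sampled at 100 Hz
omega = np.tile([0.0, 0.0, np.pi / 2], (100, 1))
q = integrate_gyro(omega, dt=0.01)
yaw = 2 * np.arctan2(q[3], q[0])     # recover the rotation angle about z
print(round(np.degrees(yaw), 1))  # → 90.0
```

Quaternions avoid the gimbal-lock singularities of Euler-angle integration, which is why they are the standard choice for this kind of wearable-sensor orientation tracking.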
Calculated thermal performance of solar collectors based on measured weather data from 2001-2010
DEFF Research Database (Denmark)
Dragsted, Janne; Furbo, Simon; Andersen, Elsa;
2015-01-01
This paper presents an investigation of the differences in modeled thermal performance of solar collectors when meteorological reference years are used as input and when multi-year weather data are used as input. The investigation has shown that using the Danish reference year based on the period 1975-1990 will result in deviations of up to 39% compared with the thermal performance calculated with the multi-year measured weather data. For the newer local reference years based on the period 2001-2010 the maximum deviation becomes 25%. The investigation further showed an increase in utilization with an increase in global radiation. This means that besides increasing the thermal performance with increasing solar radiation, the utilization of the solar radiation also becomes better.
Relevant XML Documents - Approach Based on Vectors and Weight Calculation of Terms
Directory of Open Access Journals (Sweden)
Abdeslem DENNAI
2016-10-01
Three classes of documents, based on their data, circulate on the web: unstructured documents (.doc, .html, .pdf ...), semi-structured documents (.xml, .owl ...) and structured documents (database tables, for example). A semi-structured document is organized around tags that are predefined or defined by its author. However, many studies classify documents by taking into account only their textual content, and underestimate their structure. In this paper we propose a representation of these semi-structured web documents based on weighted vectors, allowing their content to be exploited for further processing. The weight of terms is calculated using: the normal frequency for a single document, TF-IDF (Term Frequency - Inverse Document Frequency), and the Boolean (logical) frequency for a set of documents. To assess and demonstrate the relevance of the proposed approach, we carry out several experiments on different corpora.
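The TF-IDF weighting scheme mentioned above can be sketched as follows. The whitespace tokenizer and the length-normalized term frequency are simplifying assumptions; the paper's variant over XML structure is not reproduced.

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Weighted document vectors: weight(t, d) = tf(t, d) * log(N / df(t)),
    where tf is the length-normalized term frequency and df the number of
    documents containing the term."""
    N = len(docs)
    tokenized = [doc.lower().split() for doc in docs]
    df = Counter(t for toks in tokenized for t in set(toks))
    vocab = sorted(df)
    vecs = []
    for toks in tokenized:
        tf = Counter(toks)
        vecs.append([tf[t] / len(toks) * math.log(N / df[t]) for t in vocab])
    return vocab, vecs

# toy corpus (illustrative)
docs = ["xml document structure", "xml schema", "plain text document"]
vocab, vecs = tfidf_vectors(docs)
# a term occurring in every document would get weight log(N/N) = 0,
# so TF-IDF automatically discounts uninformative terms
```

For XML documents, the same weights could be computed per tag or per path rather than over the flat text, which is where the structural information the paper emphasizes would enter.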
Multi-scale calculation based on dual domain material point method combined with molecular dynamics
Energy Technology Data Exchange (ETDEWEB)
Dhakal, Tilak Raj [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2017-02-27
This dissertation combines the dual domain material point method (DDMP) with molecular dynamics (MD) in an attempt to create a multi-scale numerical method to simulate materials undergoing large deformations at high strain rates. In these types of problems, the material is often in a thermodynamically non-equilibrium state, and conventional constitutive relations are often not available. In this method, the closure quantities, such as stress, at each material point are calculated from an MD simulation of a group of atoms surrounding the material point. Rather than restricting the multi-scale simulation to a small spatial region, such as phase interfaces or crack tips, this multi-scale method can be used to consider non-equilibrium thermodynamic effects in a macroscopic domain. The method takes advantage of the fact that the material points communicate only with mesh nodes, not among themselves; therefore MD simulations for material points can be performed independently in parallel. First, using a one-dimensional shock problem as an example, the numerical properties of the original material point method (MPM), the generalized interpolation material point (GIMP) method, the convected particle domain interpolation (CPDI) method, and the DDMP method are investigated. Among these methods, only the DDMP method converges as the number of particles increases, but the large number of particles needed for convergence makes the method very expensive, especially in our multi-scale method where stress in each material point is calculated using an MD simulation. To improve DDMP, the sub-point method is introduced in this dissertation, which provides high-quality numerical solutions with a very small number of particles. The multi-scale method based on DDMP with sub-points is successfully implemented for a one-dimensional problem of shock wave propagation in a cerium crystal. The MD simulation to calculate stress in each material point is performed on a GPU using CUDA to accelerate the
A GPU-based Monte Carlo dose calculation code for photon transport in a voxel phantom
Energy Technology Data Exchange (ETDEWEB)
Bellezzo, M.; Do Nascimento, E.; Yoriyaz, H., E-mail: mbellezzo@gmail.br [Instituto de Pesquisas Energeticas e Nucleares / CNEN, Av. Lineu Prestes 2242, Cidade Universitaria, 05508-000 Sao Paulo (Brazil)
2014-08-15
As the most accurate method for estimating absorbed dose in radiotherapy, the Monte Carlo method has been widely used in radiotherapy treatment planning. Nevertheless, its efficiency can be improved for clinical routine applications. In this paper, we present the CUBMC code, a GPU-based MC photon transport algorithm for dose calculation under the Compute Unified Device Architecture platform. The simulation of physical events is based on the algorithm used in Penelope, and the cross-section table used is the one generated by the Material routine, also present in the Penelope code. Photons are transported in voxel-based geometries with different compositions. To demonstrate the capabilities of the algorithm developed in the present work, four 128 x 128 x 128 voxel phantoms have been considered: one composed of a homogeneous water medium, a second composed of bone, a third composed of lung, and a fourth composed of a heterogeneous bone and vacuum geometry. Simulations were done considering a 6 MeV monoenergetic photon point source. Two distinct approaches were used for transport simulation. The first forces the photon to stop at every voxel boundary; the second is the Woodcock method, in which a stop at a voxel boundary is considered only depending on the change of material along the photon's travel line. Dose calculations using these methods are compared for validation with the Penelope and MCNP5 codes. Speed-up factors are compared using an NVidia GTX 560-Ti GPU card against a 2.27 GHz Intel Xeon CPU processor. (Author)
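The Woodcock alternative to stopping at every voxel boundary can be sketched in 1-D: path lengths are sampled using the majorant attenuation coefficient of the whole geometry, and each tentative collision is accepted with probability mu(x)/mu_max, so boundary crossings need no special handling. This is attenuation-only transport with illustrative coefficients, far simpler than CUBMC's physics.

```python
import math
import random

def woodcock_track(mu, cell_size, n_photons=20000, seed=1):
    """Woodcock (delta) tracking through a 1-D stack of voxels with
    attenuation coefficients mu [1/cm]: sample free paths with the
    majorant mu_max, accept a real collision with prob mu(x)/mu_max.
    Returns the estimated transmission through the whole slab."""
    mu_max = max(mu)
    depth = len(mu) * cell_size
    transmitted = 0
    rng = random.Random(seed)
    for _ in range(n_photons):
        x = 0.0
        while True:
            x += -math.log(rng.random()) / mu_max  # tentative collision site
            if x >= depth:
                transmitted += 1                   # escaped the slab
                break
            if rng.random() < mu[int(x / cell_size)] / mu_max:
                break                              # real collision: absorb
    return transmitted / n_photons

mu = [0.2, 0.02, 0.2, 0.2]    # water / air-like / water voxels (illustrative)
est = woodcock_track(mu, cell_size=1.0)
exact = math.exp(-sum(m * 1.0 for m in mu))       # analytic transmission
print(round(est, 3), round(exact, 3))
```

The rejected ("delta") collisions do nothing, which is exactly what makes the method efficient on a GPU: every photon runs the same branch-free loop regardless of how many voxel boundaries it crosses.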
Affinity is an important determinant of the anti-trypanosome activity of nanobodies.
Directory of Open Access Journals (Sweden)
Guy Caljon
BACKGROUND: The discovery of Nanobodies (Nbs) with a direct toxic activity against African trypanosomes is a recent advance towards a new strategy against these extracellular parasites. The anti-trypanosomal activity relies on perturbing the highly active recycling of the Variant-specific Surface Glycoprotein (VSG) that occurs in the parasite's flagellar pocket. METHODOLOGY/PRINCIPAL FINDINGS: Here we expand the existing panel of Nbs with anti-Trypanosoma brucei potential and identify four categories based on their epitope specificity. We modified the binding properties of the previously identified Nanobodies Nb_An05 and Nb_An33 by site-directed mutagenesis in the paratope and found this to strongly affect trypanotoxicity despite retention of antigen-targeting properties. Affinity measurements for all identified anti-trypanosomal Nbs reveal a strong correlation between trypanotoxicity and affinity (K(D)), suggesting that affinity is a crucial determinant of this activity. A half-maximal effective (50%) affinity of 57 nM was calculated from the non-linear dose-response curves. In line with these observations, Nb humanizing mutations preserved the trypanotoxic activity only if the K(D) remained unaffected. CONCLUSIONS/SIGNIFICANCE: This study reveals that the binding properties of Nanobodies need to be compatible with achieving an occupancy of >95% saturation of the parasite surface VSG in order to exert an anti-trypanosomal activity. As such, Nb-based approaches directed against the VSG target would require binding to an accessible, conserved epitope with high affinity.
Institute of Scientific and Technical Information of China (English)
蒋亚虎
2015-01-01
To overcome the fluctuation of received signal strength indicator (RSSI) values and the large computational load of online measurement in WLAN fingerprint positioning, an indoor localization algorithm based on singularity detection and affinity propagation (AP) clustering is proposed. Noting the individual shortcomings of the Hampel filter and kernel density estimation (KDE) for detecting singular RSSI values, the strengths of the two methods are combined into a single singularity-detection algorithm. AP clustering is then applied to the RSSI measurements collected in the offline training stage, and the user position is estimated by coarse localization against the AP clusters followed by fine localization with a weighted k-nearest-neighbor algorithm. Compared with traditional methods, the proposed algorithm improves localization accuracy and reduces computational complexity.
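The Hampel part of the singularity-detection step can be sketched as a sliding-window outlier test (the paper combines it with kernel density estimation, which is omitted here, and the RSSI values below are invented):

```python
import statistics

def hampel_flags(rssi, half_window=3, n_sigmas=3.0):
    """Flag singular RSSI samples: a value is an outlier when it deviates
    from its window median by more than n_sigmas * 1.4826 * MAD."""
    k = 1.4826          # makes the MAD a consistent estimator of sigma
    flags = []
    for i in range(len(rssi)):
        lo, hi = max(0, i - half_window), min(len(rssi), i + half_window + 1)
        window = rssi[lo:hi]
        med = statistics.median(window)
        mad = statistics.median([abs(v - med) for v in window])
        flags.append(abs(rssi[i] - med) > n_sigmas * k * mad)
    return flags
```

Flagged samples would be discarded or replaced by the window median before the fingerprints are clustered.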
Monte Carlo-based treatment planning system calculation engine for microbeam radiation therapy
Energy Technology Data Exchange (ETDEWEB)
Martinez-Rovira, I.; Sempau, J.; Prezado, Y. [Institut de Tecniques Energetiques, Universitat Politecnica de Catalunya, Diagonal 647, Barcelona E-08028 (Spain) and ID17 Biomedical Beamline, European Synchrotron Radiation Facility (ESRF), 6 rue Jules Horowitz B.P. 220, F-38043 Grenoble Cedex (France); Institut de Tecniques Energetiques, Universitat Politecnica de Catalunya, Diagonal 647, Barcelona E-08028 (Spain); Laboratoire Imagerie et modelisation en neurobiologie et cancerologie, UMR8165, Centre National de la Recherche Scientifique (CNRS), Universites Paris 7 et Paris 11, Bat 440., 15 rue Georges Clemenceau, F-91406 Orsay Cedex (France)
2012-05-15
Purpose: Microbeam radiation therapy (MRT) is a synchrotron radiotherapy technique that explores the limits of the dose-volume effect. Preclinical studies have shown that MRT irradiations (arrays of 25-75 µm wide microbeams spaced by 200-400 µm) are able to eradicate highly aggressive animal tumor models while healthy tissue is preserved. These promising results have provided the basis for the forthcoming clinical trials at the ID17 Biomedical Beamline of the European Synchrotron Radiation Facility (ESRF). The first step includes irradiation of pets (cats and dogs) as a milestone before treatment of human patients. Within this context, accurate dose calculations are required. The distinct features of both beam generation and irradiation geometry in MRT with respect to conventional techniques require the development of a specific MRT treatment planning system (TPS). In particular, a Monte Carlo (MC)-based calculation engine for the MRT TPS has been developed in this work. Experimental verification in heterogeneous phantoms and optimization of the computation time have also been performed. Methods: The penelope/penEasy MC code was used to compute dose distributions from a realistic beam source model. Experimental verification was carried out by means of radiochromic films placed within heterogeneous slab phantoms. Once validation was completed, dose computations in a virtual model of a patient, reconstructed from computed tomography (CT) images, were performed. To this end, the CT image voxel grid (a few cubic millimeters in volume) was decoupled from the dose bin grid, which has micrometer dimensions in the direction transverse to the microbeams. Optimization of the simulation parameters, the use of variance-reduction (VR) techniques, and other methods, such as the parallelization of the simulations, were applied in order to speed up the dose computation. Results: Good agreement between MC simulations and experimental results was achieved, even at
GPU-based fast Monte Carlo simulation for radiotherapy dose calculation.
Jia, Xun; Gu, Xuejun; Graves, Yan Jiang; Folkerts, Michael; Jiang, Steve B
2011-11-21
Monte Carlo (MC) simulation is commonly considered to be the most accurate dose calculation method in radiotherapy. However, its efficiency still requires improvement for many routine clinical applications. In this paper, we present our recent progress toward the development of a graphics processing unit (GPU)-based MC dose calculation package, gDPM v2.0. It utilizes the parallel computation ability of a GPU to achieve high efficiency, while maintaining the same particle transport physics as in the original dose planning method (DPM) code and hence the same level of simulation accuracy. In GPU computing, divergence of execution paths between threads can considerably reduce the efficiency. Since photons and electrons undergo different physics and hence attain different execution paths, we use a simulation scheme where photon transport and electron transport are separated to partially relieve the thread divergence issue. A high-performance random number generator and a hardware linear interpolation are also utilized. We have also developed various components to handle the fluence map and linac geometry, so that gDPM can be used to compute dose distributions for realistic IMRT or VMAT treatment plans. Our gDPM package is tested for its accuracy and efficiency in both phantoms and realistic patient cases. In all cases, the average relative uncertainties are less than 1%. A statistical t-test is performed and the dose difference between the CPU and the GPU results is not found to be statistically significant in over 96% of the high dose region and over 97% of the entire region. Speed-up factors of 69.1 ∼ 87.2 have been observed using an NVIDIA Tesla C2050 GPU card against a 2.27 GHz Intel Xeon CPU processor. For realistic IMRT and VMAT plans, MC dose calculation can be completed with less than 1% standard deviation in 36.1 ∼ 39.6 s using gDPM.
Energy Technology Data Exchange (ETDEWEB)
Davydov, N.G.; Kiselevskiy, Yu.N.
1983-01-01
A computer (EVM) and the ASOI-VSP-SK program complex are used to analyze data from seismic exploration and acoustic logging, with interval-by-interval calculation of the velocity every four meters. Vertical seismic profiling (VSP) results are used to identify all the upper layers as reference layers. The basic reference level, the third, which corresponds to the floor of the carbonate middle to upper Visean series, is not sustained owing to the thinly layered state of the terrigenous section. Based on data from vertical seismic profiling, the reflected wave method (MOV) and the common depth point method (MOGT), the reference 3-a and 6-a levels are identified. Deep reflections of the seventh, 7-a and Rf levels, approximately confined to the roof and floor of the lower Paleozoic deposits and the upper part of the upper reef series, are noted in the series of the Caledonian cap of the Prebaykal massifs on the basis of vertical seismic profiling. Collector levels are noted on the basis of the frequency of the wave spectra and the absorption coefficient in the Testas structure and in other low-amplitude structures. The insufficient depth capability of the common depth point method and the poor state of seismic exploration knowledge of the lower Paleozoic and upper Proterozoic section of the Chu Sarysuyskiy depression are noted.
Directory of Open Access Journals (Sweden)
Chang Wook Jeong
OBJECTIVES: We developed a mobile application-based Seoul National University Prostate Cancer Risk Calculator (SNUPC-RC) that predicts the probability of prostate cancer (PC) at the initial prostate biopsy in a Korean cohort. Additionally, the application was validated and subjected to head-to-head comparisons with internet-based Western risk calculators in a validation cohort. Here, we describe its development and validation. PATIENTS AND METHODS: As a retrospective study, consecutive men who underwent initial prostate biopsy with more than 12 cores at a tertiary center were included. In the development stage, 3,482 cases from May 2003 through November 2010 were analyzed. Clinical variables were evaluated, and the final prediction model was developed using logistic regression. In the validation stage, 1,112 cases from December 2010 through June 2012 were used. SNUPC-RC was compared with the European Randomized Study of Screening for PC Risk Calculator (ERSPC-RC) and the Prostate Cancer Prevention Trial Risk Calculator (PCPT-RC). Predictive accuracy was assessed using the area under the receiver operating characteristic curve (AUC). Clinical value was evaluated using decision curve analysis. RESULTS: PC was diagnosed in 1,240 (35.6%) and 417 (37.5%) men in the development and validation cohorts, respectively. Age, prostate-specific antigen level, prostate size, and abnormality on digital rectal examination or transrectal ultrasonography were significant predictors of PC and were included in the final model. The predictive accuracy in the development cohort was 0.786. In the validation cohort, the AUC was significantly higher for SNUPC-RC (0.811) than for ERSPC-RC (0.768, p<0.001) and PCPT-RC (0.704, p<0.001). Decision curve analysis also showed higher net benefit with SNUPC-RC than with the other calculators. CONCLUSIONS: SNUPC-RC has higher predictive accuracy and clinical benefit than Western risk calculators. Furthermore, it is easy
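A risk calculator of this kind is, at its core, a logistic regression evaluated on the patient's clinical variables. The sketch below uses hypothetical coefficients chosen only to illustrate the mechanics; they are not the published SNUPC-RC parameters:

```python
import math

# Hypothetical coefficients for illustration only -- these are NOT the
# published SNUPC-RC model parameters.
COEF = {"intercept": -6.0, "age": 0.04, "log_psa": 1.2,
        "log_volume": -1.5, "abnormal_exam": 0.9}

def risk(age, psa_ng_ml, prostate_volume_ml, abnormal_exam):
    """Probability of a positive biopsy from a logistic regression:
    p = 1 / (1 + exp(-z)), z = linear predictor over the clinical variables."""
    z = (COEF["intercept"]
         + COEF["age"] * age
         + COEF["log_psa"] * math.log(psa_ng_ml)
         + COEF["log_volume"] * math.log(prostate_volume_ml)
         + COEF["abnormal_exam"] * (1.0 if abnormal_exam else 0.0))
    return 1.0 / (1.0 + math.exp(-z))
```

Packaging such a model in a mobile application is straightforward because evaluating it needs only these few arithmetic operations.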
Low complexity VLSI implementation of CORDIC-based exponent calculation for neural networks
Aggarwal, Supriya; Khare, Kavita
2012-11-01
This article presents a low-hardware-complexity CORDIC-based architecture for exponent calculation. The proposed CORDIC algorithm is designed to overcome the major drawbacks (scale-factor compensation, low range of convergence and optimal selection of micro-rotations) of the conventional CORDIC in the hyperbolic mode of operation. The micro-rotations are identified using leading-one bit detection with unidirectional rotations to eliminate redundant iterations and improve throughput. The efficiency and performance of the processor are independent of whether the rotation angles are known prior to implementation. The eight-stage pipelined architecture requires an 8 × N ROM in the pre-processing unit for storing the initial coordinate values; it no longer requires a ROM for storing the elementary angles. It provides an area-time-efficient VLSI design for calculating exponents in activation functions and Gaussian Potential Functions (GPF) in neural networks. The proposed CORDIC processor requires 32.68% fewer adders and 72.23% fewer registers than the conventional design. When implemented on a Virtex 2P (2vp50ff1148-6) device, the proposed design dissipates 55.58% less power and has a 45.09% smaller total gate count and 16.91% less delay than the Xilinx CORDIC Core. The detailed algorithm design, along with the FPGA implementation and the area and time complexities, is presented.
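For reference, conventional hyperbolic-mode CORDIC computes e^t as cosh t + sinh t. The textbook algorithm is sketched below, with the repeated iterations and explicit scale-factor bookkeeping that the proposed architecture is designed to avoid (this is the baseline, not the paper's improved scheme):

```python
import math

def cordic_exp(t, n=24):
    """e**t by conventional hyperbolic-mode CORDIC (valid for |t| < ~1.1).
    Iterations 4 and 13 are repeated, as convergence requires, and the
    scale factor is accumulated explicitly."""
    x, y, z = 1.0, 0.0, t
    gain = 1.0
    for i in range(1, n + 1):
        for _ in range(2 if i in (4, 13) else 1):
            d = 1.0 if z >= 0.0 else -1.0    # rotate toward z = 0
            e = 2.0 ** -i
            x, y = x + d * y * e, y + d * x * e
            z -= d * math.atanh(e)
            gain *= math.sqrt(1.0 - e * e)   # hyperbolic scale factor
    # now (x, y) = gain * (cosh t, sinh t), so e**t = (x + y) / gain
    return (x + y) / gain
```

In fixed-point hardware each micro-rotation is just shifts and adds; the division by the gain is the scale-factor compensation step that the leading-one-detection scheme eliminates.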
Absorbed Dose Calculations Using Mesh-based Human Phantoms And Monte Carlo Methods
Kramer, Richard
2011-08-01
Health risks attributable to the exposure to ionizing radiation are considered to be a function of the absorbed or equivalent dose to radiosensitive organs and tissues. However, as human tissue cannot express itself in terms of equivalent dose, exposure models have to be used to determine the distribution of equivalent dose throughout the human body. An exposure model, be it physical or computational, consists of a representation of the human body, called a phantom, plus a method for transporting ionizing radiation through the phantom and measuring or calculating the equivalent dose to the organs and tissues of interest. The FASH2 (Female Adult meSH) and MASH2 (Male Adult meSH) computational phantoms have been developed at the University of Pernambuco in Recife/Brazil based on polygon mesh surfaces using open-source software tools and anatomical atlases. Representing standing adults, FASH2 and MASH2 have organ and tissue masses, body height and body mass adjusted to the anatomical data published by the International Commission on Radiological Protection for the reference male and female adult. For the purposes of absorbed dose calculations the phantoms have been coupled to the EGSnrc Monte Carlo code, which can transport photons, electrons and positrons through arbitrary media. This paper reviews the development of the FASH2 and MASH2 phantoms and presents dosimetric applications for X-ray diagnosis and for prostate brachytherapy.
Microcontroller-based network for meteorological sensing and weather forecast calculations
Directory of Open Access Journals (Sweden)
A. Vas
2012-06-01
Weather forecasting needs a lot of computing power. It is generally performed on supercomputers, which are expensive to rent and maintain. In addition, weather services also have to maintain radars and balloons, and pay for worldwide weather data measured by stations and satellites. Weather forecasting computations usually consist of solving differential equations based on the measured parameters; to do that, the computer uses the data of close and distant neighbor points. Accordingly, if small weather stations capable of measurement, calculation and communication are connected through the Internet, they can run weather forecasting calculations the way a supercomputer does. No central server is needed, because the network operates as a distributed system. We chose Microchip's PIC18 microcontroller (μC) platform for the hardware implementation, and the embedded software uses the TCP/IP Stack v5.41 provided by Microchip.
Jacob, D; Palacios, J J
2011-01-28
We study the performance of two different electrode models in quantum transport calculations based on density functional theory: parametrized Bethe lattices and quasi-one-dimensional wires or nanowires. A detailed account of implementation details in both the cases is given. From the systematic study of nanocontacts made of representative metallic elements, we can conclude that the parametrized electrode models represent an excellent compromise between computational cost and electronic structure definition as long as the aim is to compare with experiments where the precise atomic structure of the electrodes is not relevant or defined with precision. The results obtained using parametrized Bethe lattices are essentially similar to the ones obtained with quasi-one-dimensional electrodes for large enough cross-sections of these, adding a natural smearing to the transmission curves that mimics the true nature of polycrystalline electrodes. The latter are more demanding from the computational point of view, but present the advantage of expanding the range of applicability of transport calculations to situations where the electrodes have a well-defined atomic structure, as is the case for carbon nanotubes, graphene nanoribbons, or semiconducting nanowires. All the analysis is done with the help of codes developed by the authors which can be found in the quantum transport toolbox ALACANT and are publicly available.
GMC: a GPU implementation of a Monte Carlo dose calculation based on Geant4.
Jahnke, Lennart; Fleckenstein, Jens; Wenz, Frederik; Hesser, Jürgen
2012-03-07
We present a GPU implementation called GMC (GPU Monte Carlo) of the low-energy electromagnetic part of the Geant4 Monte Carlo code, based on the CUDA programming interface. The classes for electron and photon interactions as well as a new parallel particle transport engine were implemented. A particle is processed not in a history-by-history manner but rather interaction by interaction: every history is divided into steps that are then calculated in parallel by different kernels. The geometry package is currently limited to voxelized geometries. A modified parallel Mersenne twister was used to generate random numbers, and a random-number repetition method on the GPU was introduced. All phantom results showed very good agreement between the GPU and CPU simulations, with gamma indices of >97.5% for a 2%/2 mm gamma criterion. The mean acceleration on one GTX 580 for all cases, compared to Geant4 on one CPU core, was 4860. The mean number of histories per millisecond on the GPU for all cases was 658, leading to a total simulation time for one intensity-modulated radiation therapy dose distribution of 349 s. In conclusion, Geant4-based Monte Carlo dose calculations were significantly accelerated on the GPU.
Energy Technology Data Exchange (ETDEWEB)
Kim, Sung Woo; Kim, Dong Jin [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)
2013-10-15
The highly efficient generation of electricity and the production of massive amounts of hydrogen are possible using a very high temperature gas-cooled reactor (VHTR), one of the Generation IV nuclear power plants. The structural material for the intermediate heat exchanger (IHX), among its numerous components, must endure temperatures of up to 950 °C during long-term operation, and impurities inevitably introduced into the helium coolant accelerate material degradation by corrosion at high temperature. In this work, to develop novel structural materials for the IHX of a VHTR, a more systematic methodology using design of experiments (DOE) and thermodynamic calculations was proposed. For 32 designs of Ni-Cr-Co-Mo alloys with minor additions of W and Ta, the mass fraction of TCP phases and the mechanical properties were calculated, and the chemical composition was finally optimized for further experimental studies by applying the proposed methodology.
Seismic ray-tracing calculation based on parabolic travel-time interpolation
Institute of Scientific and Technical Information of China (English)
周竹生; 张赛民; 陈灵君
2004-01-01
A new seismic ray-tracing method is put forward based on the parabolic travel-time interpolation (PTI) method, which is more accurate than the linear travel-time interpolation (LTI) method. Both the PTI and the LTI methods are used to compute seismic travel times and ray paths in a 2-D grid-cell model. First, some basic concepts are introduced. The calculations of travel time and ray path are carried out only at cell boundaries, so the ray path is always straight within a cell of uniform velocity. Both PTI and LTI proceed in two steps: step 1 computes the travel times and step 2 traces the ray path. Then, the derivation of the LTI formulas is described; because of the presence of the refracted wave in the shot cell, a formula specific to the shot cell is also derived. Finally, the PTI method is presented. The calculation of the PTI method is more complex than that of the LTI method, but the error is limited. The results of numerical modeling show that the PTI method traces ray paths more accurately and efficiently than the LTI method does.
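The PTI building block is a three-point (Lagrange) parabola through travel times at nodes on a cell boundary, where LTI would use a straight line through two nodes. A sketch of the interpolation step (the node layout is illustrative):

```python
def parabolic_t(x, xs, ts):
    """Travel time at position x on a cell boundary, interpolated by the
    Lagrange parabola through three boundary nodes xs with times ts
    (the PTI step; LTI would use a line through two nodes only)."""
    (x0, x1, x2), (t0, t1, t2) = xs, ts
    l0 = (x - x1) * (x - x2) / ((x0 - x1) * (x0 - x2))
    l1 = (x - x0) * (x - x2) / ((x1 - x0) * (x1 - x2))
    l2 = (x - x0) * (x - x1) / ((x2 - x0) * (x2 - x1))
    return t0 * l0 + t1 * l1 + t2 * l2
```

Because the true travel-time field along a boundary is generally curved, the parabola reproduces it to second order, which is where PTI's accuracy advantage over LTI comes from.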
Metadyn View: Fast web-based viewer of free energy surfaces calculated by metadynamics
Hošek, Petr; Spiwok, Vojtěch
2016-01-01
Metadynamics is a highly successful enhanced sampling technique for the simulation of molecular processes and the prediction of their free energy surfaces. An in-depth analysis of data obtained by this method is as important as the simulation itself. Although there are several tools to compute free energy surfaces from metadynamics data, they usually lack user friendliness and a built-in visualization part. Here we introduce Metadyn View as a fast and user-friendly viewer of bias potential/free energy surfaces calculated by metadynamics in the Plumed package. It is based on modern web technologies including HTML5, JavaScript and Cascading Style Sheets (CSS). It can be used by visiting the web site and uploading a HILLS file. It calculates the bias potential/free energy surface on the client side, so it can run online or offline without the need to install additional web engines. Moreover, it includes tools for the measurement of free energies and free energy differences and for data/image export.
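For standard (non-well-tempered) metadynamics, the client-side computation amounts to negating the sum of the deposited Gaussian hills. A one-dimensional sketch, with the HILLS columns reduced to (center, width, height) tuples:

```python
import math

def free_energy(s, hills):
    """F(s) = -V_bias(s), where V_bias is the sum of deposited Gaussian
    hills. `hills` is a list of (center, sigma, height) tuples, the
    essential columns of a one-dimensional HILLS file."""
    v_bias = sum(h * math.exp(-((s - c) ** 2) / (2.0 * w * w))
                 for c, w, h in hills)
    return -v_bias
```

Evaluating this on a grid of collective-variable values gives the surface to plot; well-tempered runs additionally rescale by the bias factor, which is omitted here.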
Visible calculation of mining index based on stope 3D surveying and block modeling
Institute of Scientific and Technical Information of China (English)
Liu Xiaoming; Luo Zhouquan; Yang Biao; Lu Guang; Cao Shengxiang; Jiang Xinjian
2012-01-01
Based on the theory and characteristics of CMS laser scanning, supplementary monitoring of stope N4-5 of the Fankou Lead-Zinc Mine was carried out from two carefully chosen measuring points. A 3D visual model of the cavity was created with the large-scale mining software Surpac after converting the measured data. The stope mine design model, the bottom structure model and the backfill models of the south and north sides of stope N4-5 were established according to the stope design data. On this basis the stope block model was established and the block attributes were estimated. The amounts of remaining ore, mullock, backfill and total mined ore were calculated through solid-model constraints. The stope mining dilution rate and loss rate were 8.2% and 1.47%, respectively. Practice indicates that this visible calculation of mining indexes, based on 3D cavity monitoring and stope block modeling, makes up for the deficiency of carrying out Boolean operations directly on the solid model. The stope mining indexes obtained by this method are accurate and reliable, and can be used to guide actual production management and to assess mining quality.
Yang, Xiaoting; Hu, Yufei; Li, Gongke
2014-05-16
Quantification of monoamine neurotransmitters is very important in the diagnosis and monitoring of patients with neurological disorders. We developed an online analytical method to selectively determine urinary monoamine neurotransmitters, which couples boronate affinity monolithic column micro-solid-phase extraction with high-performance liquid chromatography (HPLC). The boronate affinity monolithic column was prepared by in situ polymerization of vinylphenylboronic acid (VPBA) and N,N'-methylenebisacrylamide (MBAA) in a stainless-steel capillary column. The prepared monolithic column showed good permeability and high extraction selectivity and capacity. The column-to-column reproducibility was satisfactory, and the enrichment factors were 17-243 for the four monoamine neurotransmitters. Parameters that influence the online extraction efficiency, including the pH of the sample solution, the flow rates of extraction and desorption, and the extraction and desorption volumes, were investigated. Under the optimized conditions, the developed method exhibited low limits of detection (0.06-0.80 µg/L) and good linearity (R² between 0.9979 and 0.9993). The recoveries in urine samples were 81.0-105.5% for the four monoamine neurotransmitters, with intra- and inter-day RSDs of 2.1-8.2% and 3.7-10.6%, respectively. The online analytical method was sensitive, accurate, selective, reliable and applicable to the analysis of trace monoamine neurotransmitters in human urine samples.
The proton affinities of saturated and unsaturated heterocyclic molecules
Kabli, Samira; van Beelen, Eric S. E.; Ingemann, Steen; Henriksen, Lars; Hammerum, Steen
2006-03-01
The proton affinities derived from G3-calculations of 23 five-membered ring heteroaromatic molecules agree well with the experimentally determined values available in the literature. The calculated local proton affinities show that the principal site of protonation of the heteroaromatic compounds examined is an atom of the ring, carbon when there is only one heteroatom in the ring, and nitrogen where there are two or more heteroatoms. The experimental proton affinities of non-aromatic cyclic ethers, amines and thioethers are also in excellent agreement with the calculated values, with two exceptions (oxetane, N-methylazetidine). The literature proton affinities of the four simple cyclic ethers, oxetane, tetrahydrofuran, tetrahydropyran and oxepane were confirmed by Fourier Transform Ion Cyclotron Resonance (FT-ICR) mass spectrometry, in order to examine the disagreement between the values predicted by extrapolation or additivity for tetrahydrofuran and tetrahydropyran and those determined by experiment and by calculation. The proton affinity differences between the pairs tetrahydropyran/1,4-dioxane, piperidine/morpholine and related compounds show that introduction of an additional oxygen atom in the ring considerably lowers the basicity.
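For readers reproducing such values, the proton affinity follows from computed 298 K enthalpies as PA(B) = H(B) + H(H+) - H(BH+). A small sketch of the arithmetic (the enthalpies in the test are placeholders, not G3 results):

```python
HARTREE_TO_KJ_MOL = 2625.4996   # 1 hartree in kJ/mol
H_PROTON_298K = 6.197           # H(H+) = 5/2 RT at 298.15 K, in kJ/mol

def proton_affinity(h_base_hartree, h_protonated_hartree):
    """PA(B) = H(B) + H(H+) - H(BH+), with the 298 K enthalpies of B and
    BH+ (e.g. from a G3-type calculation) given in hartree; the electronic
    energy of H+ itself is zero, leaving only its thermal enthalpy.
    Result in kJ/mol."""
    return ((h_base_hartree - h_protonated_hartree) * HARTREE_TO_KJ_MOL
            + H_PROTON_298K)
```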
Realization of Fractal Affine Transformation
Institute of Scientific and Technical Information of China (English)
无
2001-01-01
This paper gives the definition of the fractal affine transformation and presents a specific method for its realization, together with the corresponding mathematical equations, which are essential in fractal image construction.
Wignall, Jessica A.; Shapiro, Andrew J.; Wright, Fred A.; Woodruff, Tracey J.; Chiu, Weihsueh A.; Guyton, Kathryn Z.
2014-01-01
Background: Benchmark dose (BMD) modeling computes the dose associated with a prespecified response level. While offering advantages over traditional points of departure (PODs), such as no-observed-adverse-effect-levels (NOAELs), BMD methods have lacked consistency and transparency in application, interpretation, and reporting in human health assessments of chemicals. Objectives: We aimed to apply a standardized process for conducting BMD modeling to reduce inconsistencies in model fitting and selection. Methods: We evaluated 880 dose–response data sets for 352 environmental chemicals with existing human health assessments. We calculated benchmark doses and their lower limits [10% extra risk, or change in the mean equal to 1 SD (BMD/L10/1SD)] for each chemical in a standardized way with prespecified criteria for model fit acceptance. We identified study design features associated with acceptable model fits. Results: We derived values for 255 (72%) of the chemicals. Batch-calculated BMD/L10/1SD values were significantly and highly correlated (R² of 0.95 and 0.83, respectively, n = 42) with PODs previously used in human health assessments, with values similar to reported NOAELs. Specifically, the median ratio of BMDs10/1SD:NOAELs was 1.96, and the median ratio of BMDLs10/1SD:NOAELs was 0.89. We also observed a significant trend of increasing model viability with increasing number of dose groups. Conclusions: BMD/L10/1SD values can be calculated in a standardized way for use in health assessments on a large number of chemicals and critical effects. This facilitates the exploration of health effects across multiple studies of a given chemical or, when chemicals need to be compared, providing greater transparency and efficiency than current approaches. Citation: Wignall JA, Shapiro AJ, Wright FA, Woodruff TJ, Chiu WA, Guyton KZ, Rusyn I. 2014. Standardizing benchmark dose calculations to improve science-based decisions in human health assessments. Environ Health
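The BMD definition above can be made concrete: given a fitted dose-response model, the BMD is the dose whose extra risk over background equals the benchmark response (here 10%). A sketch with a toy logistic model, which is invented for illustration and is not one of the assessment models used in the study:

```python
import math

def extra_risk(p, p0):
    """Extra risk over background response: (p - p0) / (1 - p0)."""
    return (p - p0) / (1.0 - p0)

def bmd(model, bmr=0.10, lo=0.0, hi=1000.0):
    """Dose at which extra risk over background equals `bmr`, found by
    bisection; `model` maps dose -> response probability and must be
    non-decreasing on [lo, hi]."""
    p0 = model(lo)                     # background response at zero dose
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if extra_risk(model(mid), p0) < bmr:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# toy logistic dose-response, invented for the sketch
toy_model = lambda d: 1.0 / (1.0 + math.exp(-(d - 50.0) / 10.0))
```

The BMDL reported in assessments is the lower confidence limit on this dose, which requires the fit's uncertainty and is beyond this sketch.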
Kim, Kwang Hoe; Ahn, Yeong Hee; Ji, Eun Sun; Lee, Ju Yeon; Kim, Jin Young; An, Hyun Joo; Yoo, Jong Shin
2015-07-02
Multiple reaction monitoring (MRM) is commonly used for the quantitative analysis of proteins by mass spectrometry (MS), and has excellent specificity and sensitivity for an analyte in a complex sample. In this study, a pseudo-MRM method for the quantitative analysis of low-abundance serological proteins was developed using hybrid quadrupole time-of-flight (hybrid Q-TOF) MS and peptide affinity-based enrichment. First, a pseudo-MRM-based analysis using hybrid Q-TOF MS was performed for synthetic peptides selected as targets and spiked into tryptic digests of human serum. By integrating the multiple transition signals corresponding to fragment ions in the full-scan MS/MS spectrum of a precursor ion of the target peptide, the pseudo-MRM analysis of the target peptide showed an increased signal-to-noise (S/N) ratio and sensitivity, as well as improved reproducibility. The pseudo-MRM method was then used for the quantitative analysis of the tryptic peptides of two low-abundance serological proteins, tissue inhibitor of metalloproteinase 1 (TIMP1) and tissue-type protein tyrosine phosphatase kappa (PTPκ), which were prepared by peptide affinity-based enrichment from human serum. Finally, this method was used to detect femtomolar amounts of target peptides derived from TIMP1 and PTPκ, with good coefficients of variation (CVs of 2.7% and 9.8%, respectively), using a few microliters of human serum from colorectal cancer patients. The results suggest that pseudo-MRM using hybrid Q-TOF MS, combined with peptide affinity-based enrichment, could become a promising alternative for the quantitative analysis of low-abundance target proteins of interest in complex serum samples that avoids protein depletion.
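The S/N benefit of integrating multiple transitions can be checked numerically: for equal-strength signals with independent noise, summing n transitions improves S/N by roughly sqrt(n). A small Monte Carlo sketch on purely synthetic data:

```python
import random
import statistics

def snr_gain(n_transitions, trials=2000, seed=1):
    """Monte Carlo estimate of the S/N improvement from summing
    n equal-strength transition signals with independent unit noise,
    relative to using a single transition."""
    rng = random.Random(seed)
    single, summed = [], []
    for _ in range(trials):
        noise = [rng.gauss(0.0, 1.0) for _ in range(n_transitions)]
        single.append(1.0 + noise[0])                 # one transition
        summed.append(n_transitions + sum(noise))     # integrated signal
    snr1 = statistics.mean(single) / statistics.stdev(single)
    snrn = statistics.mean(summed) / statistics.stdev(summed)
    return snrn / snr1
```

Real transition intensities are unequal and their noise is partially correlated, so the practical gain is smaller, but the direction of the effect is the same.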
Evaluation of MLACF based calculated attenuation brain PET imaging for FDG patient studies
Bal, Harshali; Panin, Vladimir Y.; Platsch, Guenther; Defrise, Michel; Hayden, Charles; Hutton, Chloe; Serrano, Benjamin; Paulmier, Benoit; Casey, Michael E.
2017-04-01
Calculating attenuation correction for brain PET imaging rather than using CT presents opportunities for low radiation dose applications such as pediatric imaging and serial scans to monitor disease progression. Our goal is to evaluate the iterative time-of-flight based maximum-likelihood activity and attenuation correction factors estimation (MLACF) method for clinical FDG brain PET imaging. FDG PET/CT brain studies were performed in 57 patients using the Biograph mCT (Siemens) four-ring scanner. The time-of-flight PET sinograms were acquired using the standard clinical protocol consisting of a CT scan followed by 10 min of single-bed PET acquisition. Images were reconstructed using CT-based attenuation correction (CTAC) and used as a gold standard for comparison. Two methods were compared with respect to CTAC: a calculated brain attenuation correction (CBAC) and MLACF based PET reconstruction. Plane-by-plane scaling was performed for MLACF images in order to fix the variable axial scaling observed. The noise structure of the MLACF images was different compared to those obtained using CTAC and the reconstruction required a higher number of iterations to obtain comparable image quality. To analyze the pooled data, each dataset was registered to a standard template and standard regions of interest were extracted. An SUVr analysis of the brain regions of interest showed that CBAC and MLACF were each well correlated with CTAC SUVrs. A plane-by-plane error analysis indicated that there were local differences for both CBAC and MLACF images with respect to CTAC. Mean relative error in the standard regions of interest was less than 5% for both methods and the mean absolute relative errors for both methods were similar (3.4% ± 3.1% for CBAC and 3.5% ± 3.1% for MLACF). However, the MLACF method recovered activity adjoining the frontal sinus regions more accurately than CBAC method. The use of plane-by-plane scaling of MLACF images was found to be a
Affinity chromatography of phosphorylated proteins.
Tchaga, Grigoriy S
2008-01-01
This chapter covers the use of immobilized metal ion affinity chromatography (IMAC) for enrichment of phosphorylated proteins. Some requirements for successful enrichment of these types of proteins are discussed. An experimental protocol and a set of application data are included to enable the scientist to obtain high-yield results in a very short time with pre-packed phospho-specific metal ion affinity resin (PMAC).
Calculation of watershed flow concentration based on the grid drop concept
Institute of Scientific and Technical Information of China (English)
Rui Xiaofang; Yu Mei; Liu Fanggui; Gong Xinglong
2008-01-01
The grid drop concept is introduced and used to develop a micromechanism-based methodology for calculating watershed flow concentration. The flow path and the distance traveled by a grid drop to the outlet of the watershed are obtained using a digital elevation model (DEM). Regarding the slope as an uneven carpet through which the grid drop passes, a formula for overland flow velocity is proposed that differs from both Manning's formula for stream flow and Darcy's formula for pore flow. Compared with the commonly used unit hydrograph and isochrone methods, this new methodology has the outstanding advantages that it considers the influences of the slope velocity field and of the heterogeneity of the spatial distribution of rainfall on the flow concentration process, and that it includes only one parameter that needs to be calibrated. The method can also be applied effectively to the prediction of hydrologic processes in ungauged basins.
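The flow path and travel distance of a grid drop can be traced by steepest descent on the DEM. A minimal sketch using 4-neighbour descent on a toy elevation grid (the paper's full treatment uses the slope velocity field as well, which is omitted here):

```python
def flow_path_length(dem, start, cell=1.0):
    """Trace a grid drop downhill by steepest descent (4-neighbour descent
    for brevity) and return the distance travelled until it reaches a pit
    or can no longer descend. `dem` is a 2-D list of elevations."""
    rows, cols = len(dem), len(dem[0])
    r, c = start
    dist = 0.0
    while True:
        nbrs = [(r + dr, c + dc) for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
                if 0 <= r + dr < rows and 0 <= c + dc < cols]
        nr, nc = min(nbrs, key=lambda p: dem[p[0]][p[1]])  # steepest neighbour
        if dem[nr][nc] >= dem[r][c]:
            return dist                # local pit: the drop stops here
        dist += cell
        r, c = nr, nc
```

Dividing each segment's length by the local overland flow velocity would turn this distance into the drop's travel time, the quantity the flow concentration calculation needs.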
Park, Hee Su; Sharma, Aditya
2016-12-01
We calculate the operation wavelength range of polarization controllers based on rotating wave plates, such as paddle-type optical fiber devices. The coverage of arbitrary polarization conversion or arbitrary birefringence compensation is estimated numerically. The results establish the acceptable phase retardation range of polarization controllers composed of two quarter-wave plates or a quarter-half-quarter-wave-plate combination, and thereby determine the operation wavelength range of a given design. We further prove that a quarter-quarter-half-wave-plate combination is an arbitrary birefringence compensator, just as the conventional quarter-half-quarter-wave-plate combination is, and show that the two configurations have identical ranges of acceptable phase retardance within the uncertainty of our numerical method.
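Numerical estimates of this kind rest on composing Jones matrices of rotated wave plates. A minimal sketch (not the authors' code; the paddle angles and sign conventions are illustrative):

```python
import numpy as np

def rot(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def retarder(delta, theta):
    """Jones matrix of a wave plate with phase retardance `delta`
    whose fast axis is rotated by angle `theta`."""
    core = np.diag([np.exp(-1j * delta / 2), np.exp(1j * delta / 2)])
    return rot(theta) @ core @ rot(-theta)

def qhq(t1, t2, t3):
    """Quarter-half-quarter stack, each paddle at its own rotation angle."""
    return (retarder(np.pi / 2, t3) @ retarder(np.pi, t2)
            @ retarder(np.pi / 2, t1))

# Sanity check: a quarter-wave plate at 45 deg turns horizontal light circular
out = retarder(np.pi / 2, np.pi / 4) @ np.array([1.0, 0.0])
print(np.round(np.abs(out), 6))   # equal magnitudes -> circular polarization
```

Sweeping the three angles of `qhq` over their ranges, with `delta` detuned from its design value, is one way to probe how far the retardance may deviate before some polarization conversions become unreachable.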
Development of a Monte-Carlo based method for calculating the effect of stationary fluctuations
DEFF Research Database (Denmark)
Pettersen, E. E.; Demazire, C.; Jareteg, K.;
2015-01-01
This paper deals with the development of a novel method for performing Monte Carlo calculations of the effect, on the neutron flux, of stationary fluctuations in macroscopic cross-sections. The basic principle relies on the formulation of two equivalent problems in the frequency domain: one...... equivalent problems nevertheless requires the possibility to modify the macroscopic cross-sections, and we use the work of Kuijper, van der Marck and Hogenbirk to define group-wise macroscopic cross-sections in MCNP [1]. The method is illustrated in this paper at a frequency of 1 Hz, for which only the real...... part of the neutron balance plays a significant role and for driving fluctuations leading to neutron sources having the same sign in the two equivalent sub-critical problems. A semi-analytical diffusion-based solution is used to verify the implementation of the method on a test case representative...
The solar silicon abundance based on 3D non-LTE calculations
Amarsi, A. M.; Asplund, M.
2017-01-01
We present 3D non-local thermodynamic equilibrium (non-LTE) radiative transfer calculations for silicon in the solar photosphere, using an extensive model atom that includes recent, realistic neutral hydrogen collisional cross-sections. We find that photon losses in the Si I lines give rise to slightly negative non-LTE abundance corrections of the order of -0.01 dex. We infer a 3D non-LTE-based solar silicon abundance of log ε_Si = 7.51 dex. With silicon commonly chosen to be the anchor between the photospheric and meteoritic abundances, we find that the meteoritic abundance scale remains unchanged compared with the Asplund et al. and Lodders et al. results.
The solar silicon abundance based on 3D non-LTE calculations
Amarsi, A M
2016-01-01
We present three-dimensional (3D) non-local thermodynamic equilibrium (non-LTE) radiative transfer calculations for silicon in the solar photosphere, using an extensive model atom that includes recent, realistic neutral hydrogen collisional cross-sections. We find that photon losses in the SiI lines give rise to slightly negative non-LTE abundance corrections of the order -0.01 dex. We infer a 3D non-LTE based solar silicon abundance of 7.51 dex. With silicon commonly chosen to be the anchor between the photospheric and meteoritic abundances, we find that the meteoritic abundance scale remains unchanged compared with the Asplund et al. (2009) and Lodders et al. (2009) results.
Vidal, David; Thormann, Michael; Pons, Miquel
2005-01-01
SMILES strings are the most compact text-based molecular representations. Implicitly they contain the information needed to compute all kinds of molecular structures and, thus, molecular properties derived from these structures. We show that this implicit information can be accessed directly at SMILES string level without the need to apply explicit time-consuming conversion of the SMILES strings into molecular graphs or 3D structures with subsequent 2D or 3D QSPR calculations. Our method is based on the fragmentation of SMILES strings into overlapping substrings of a defined size that we call LINGOs. The integral set of LINGOs derived from a given SMILES string, the LINGO profile, is a hologram of the SMILES representation of the molecule described. LINGO profiles provide input for QSPR models and the calculation of intermolecular similarities at very low computational cost. The octanol/water partition coefficient (LlogP) QSPR model achieved a correlation coefficient R² = 0.93, a root-mean-square error RRMS = 0.49 log units, a goodness-of-prediction correlation coefficient Q² = 0.89 and a QRMS = 0.61 log units. The intrinsic aqueous solubility (LlogS) QSPR model achieved correlation coefficient values of R² = 0.91, Q² = 0.82, and RRMS = 0.60 and QRMS = 0.89 log units. Integral Tanimoto coefficients computed from LINGO profiles provided sharp discrimination between random and bioisostere pairs extracted from the Accelrys Bioster Database. Average similarities (LINGOsim) were 0.07 for the random pairs and 0.36 for the bioisosteric pairs.
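The LINGO fragmentation and the integral Tanimoto comparison can be sketched in a few lines. This is a simplified illustration: the published method also normalises ring-closure digits and fixes the substring length, details omitted here.

```python
from collections import Counter

def lingos(smiles, q=4):
    """Fragment a SMILES string into overlapping substrings of length q."""
    return [smiles[i:i + q] for i in range(len(smiles) - q + 1)]

def lingo_tanimoto(a, b, q=4):
    """Integral Tanimoto coefficient between two LINGO multiset profiles:
    shared substring counts over total distinct substring counts."""
    pa, pb = Counter(lingos(a, q)), Counter(lingos(b, q))
    common = sum(min(pa[k], pb[k]) for k in pa)
    total = sum(pa.values()) + sum(pb.values()) - common
    return common / total if total else 0.0

print(lingos("CCO", q=2))                              # ['CC', 'CO']
print(round(lingo_tanimoto("CCCO", "CCCN", q=2), 3))   # 0.5
```

Because a profile is just a bag of substrings, similarity screening reduces to counter intersection, which is what keeps the computational cost so low compared with graph- or 3D-based methods.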
Institute of Scientific and Technical Information of China (English)
Ye Xiao-Qiu; Luo De-Li; Sang Ge; Ao Bing-Yun
2011-01-01
The alanates (complex aluminohydrides) have relatively high gravimetric hydrogen densities and are among the most promising solid-state hydrogen-storage materials. In this work, the electronic structures and the formation enthalpies of seven typical aluminum-based deuterides have been calculated by the plane-wave pseudopotential method, these being AlD3, LiAlD4, Li3AlD6, BaAlD5, Ba2AlD7, LiMg(AlD4)3 and LiMgAlD6. The results show that all these compounds are large band gap insulators at 0 K with estimated band gaps from 2.31 eV in AlD3 to 4.96 eV in LiMg(AlD4)3. The band gaps are reduced when the coordination of Al varies from 4 to 6. Two peaks present in the valence bands are the common characteristics of aluminum-based deuterides containing AlD4 subunits, while three peaks are the common characteristics of those containing AlD6 subunits. The electronic structures of these compounds are determined mainly by the aluminum deuteride complexes (AlD4 or AlD6) and their mutual interactions. The predicted formation enthalpies are presented for the studied aluminum-based deuterides.
A Quadrilateral Element-based Method for Calculation of Multi-scale Temperature Field
Institute of Scientific and Technical Information of China (English)
Sun Zhigang; Zhou Chaoxian; Gao Xiguang; Song Yingdong
2010-01-01
In the analysis of functionally graded materials (FGMs), the uncoupled approach is used broadly; it is based on homogenized material properties and ignores the effect of local micro-structural interaction. The higher-order theory for FGMs (HOTFGM) is a coupled approach that explicitly takes into account the effect of micro-structural gradation and the local interaction of the spatially variable inclusion phase. Based on the HOTFGM, this article presents a quadrilateral element-based method for the calculation of multi-scale temperature fields (QTF). In this method, the discrete cells are quadrilateral (including rectangular), while the surface-averaged quantities are the primary variables, which replace the coefficients employed in the temperature function. In contrast with the HOTFGM, this method improves the efficiency, eliminates the restriction to rectangular cells and expands the solution scale. The presented results illustrate the efficiency of the QTF and its advantages in analyzing FGMs.
Preliminary study on CAD-based method of characteristics for neutron transport calculation
Chen, Zhen-Ping; Sun, Guang-Yao; Song, Jing; Hao, Li-Juan; Hu, Li-Qin; Wu, Yi-Can
2013-01-01
The method of characteristics (MOC) has been widely used for neutron transport calculation in recent decades. However, the key problem determining whether MOC can be applied in highly heterogeneous geometry is how to combine an effective geometry modeling method with it. Most of the existing MOC codes conventionally describe the geometry model just by lines and arcs with extensive input data. Thus they have difficulty in geometry modeling and ray tracing for complicated geometries. In this study, a new method making use of MCAM, a CAD/image-based automatic modeling program for neutronics and radiation transport developed by the FDS Team in China, was introduced for geometry modeling and ray tracing of particle transport to remove those limitations. The diamond-difference scheme was applied to MOC to reduce the spatial discretization errors of the flat flux approximation. Based on MCAM and MOC, a new MOC code was developed and integrated into the SuperMC system, which is a Super ...
The sodium ion affinities of asparagine, glutamine, histidine and arginine
Wang, Ping; Ohanessian, Gilles; Wesdemiotis, Chrys
2008-01-01
The sodium ion affinities of the amino acids Asn, Gln, His and Arg have been determined by experimental and computational approaches (for Asn, His and Arg). Na+-bound heterodimers with amino acid and peptide ligands (Pep1, Pep2) were produced by electrospray ionization. From the dissociation kinetics of these Pep1-Na+-Pep2 ions to Pep1-Na+ and Pep2-Na+, determined by collisionally activated dissociation, a ladder of relative affinities was constructed and subsequently converted to absolute affinities by anchoring the relative values to known Na+ affinities. The Na+ affinities of Asn, His and Arg were calculated at the MP2(full)/6-311+G(2d,2p)//MP2/6-31G(d) level of ab initio theory. The resulting experimental and computed Na+ affinities are in excellent agreement with one another. These results, combined with those of our previous studies, yield the sodium ion affinities of 18 out of the 20 α-amino acids naturally occurring in peptides and proteins of living systems.
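Converting a ladder of relative affinities to absolute values is simple cumulative arithmetic once one rung is anchored. The sketch below uses made-up step values and an assumed anchor affinity purely to illustrate the bookkeeping; none of the numbers are from the study.

```python
# Hypothetical ladder: each entry is the affinity difference (kJ/mol)
# relative to the previous ligand; the first rung is the anchor itself.
ladder = [("Gly", 0.0), ("Asn", 8.0), ("His", 6.5), ("Arg", 12.0)]
anchor_value = 161.0   # assumed known absolute Na+ affinity of the anchor

absolute, running = {}, anchor_value
for name, step in ladder:
    running += step                 # climb the ladder rung by rung
    absolute[name] = running
print(absolute)
```

Any rung with an independently known absolute affinity can serve as the anchor; using several anchors and averaging the resulting scales is a common consistency check.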
Directory of Open Access Journals (Sweden)
Ingo Gräff
To date, there are no valid statistics regarding the number of full-time staff necessary for nursing care in emergency departments in Europe. Staff requirement calculations were performed using state-of-the-art procedures which take both fluctuating patient volume and individual staff shortfall rates into consideration. In a longitudinal observational study, the average nursing staff engagement time per patient was assessed for 503 patients. For this purpose, a full-time staffing calculation was estimated based on the five priority levels of the Manchester Triage System (MTS), taking into account specific workload fluctuations (50th-95th percentiles). Patients classified to the MTS category red (n = 35) required the most engagement time, with an average of 97.93 min per patient. On weighted average, orange MTS category patients (n = 118) required nursing staff for 85.07 min, patients in the yellow MTS category (n = 181) for 40.95 min, while the two MTS categories with the least acute patients, green (n = 129) and blue (n = 40), required 23.18 min and 14.99 min engagement time per patient, respectively. Individual staff shortfall due to sick days and vacation time was 20.87% of the total working hours. Extrapolating this to the 21,899 emergency patients of 2010, one nurse can care for 67-123 emergency patients per month (50th-95th percentile). The calculated full-time staffing requirement, depending on the percentile, was 14.8 to 27.1. Performance-oriented staff planning offers an objective instrument for calculating the full-time nursing staff required in emergency departments.
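The staffing arithmetic can be illustrated with the figures quoted in the abstract. The minutes available per full-time post per year is an assumption of this sketch, not a number from the study, so the resulting FTE figure is illustrative only.

```python
# Minutes of nursing time per patient by Manchester Triage category,
# and the observed category counts from the 503-patient sample.
engagement = {"red": 97.93, "orange": 85.07, "yellow": 40.95,
              "green": 23.18, "blue": 14.99}
counts = {"red": 35, "orange": 118, "yellow": 181, "green": 129, "blue": 40}

patients_per_year = 21899
shortfall = 0.2087                    # sick days + vacation share of hours
minutes_per_fte_year = 8 * 60 * 220   # assumed shifts per year; illustrative

n = sum(counts.values())
mean_minutes = sum(engagement[k] * counts[k] for k in counts) / n
demand = patients_per_year * mean_minutes / (1 - shortfall)   # gross minutes
fte = demand / minutes_per_fte_year
print(round(mean_minutes, 1), round(fte, 1))
```

The study's percentile-based range (14.8 to 27.1 FTE) comes from applying the same logic to the 50th through 95th percentiles of workload rather than to the mean.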
Affine group formulation of the Standard Model coupled to gravity
Energy Technology Data Exchange (ETDEWEB)
Chou, Ching-Yi, E-mail: l2897107@mail.ncku.edu.tw [Department of Physics, National Cheng Kung University, Taiwan (China); Ita, Eyo, E-mail: ita@usna.edu [Department of Physics, US Naval Academy, Annapolis, MD (United States); Soo, Chopin, E-mail: cpsoo@mail.ncku.edu.tw [Department of Physics, National Cheng Kung University, Taiwan (China)
2014-04-15
In this work we apply the affine group formalism for four dimensional gravity of Lorentzian signature, which is based on Klauder’s affine algebraic program, to the formulation of the Hamiltonian constraint of the interaction of matter and all forces, including gravity with non-vanishing cosmological constant Λ, as an affine Lie algebra. We use the hermitian action of fermions coupled to gravitation and Yang–Mills theory to find the density weight one fermionic super-Hamiltonian constraint. This term, combined with the Yang–Mills and Higgs energy densities, are composed with York’s integrated time functional. The result, when combined with the imaginary part of the Chern–Simons functional Q, forms the affine commutation relation with the volume element V(x). Affine algebraic quantization of gravitation and matter on equal footing implies a fundamental uncertainty relation which is predicated upon a non-vanishing cosmological constant. -- Highlights: •Wheeler–DeWitt equation (WDW) quantized as affine algebra, realizing Klauder’s program. •WDW formulated for interaction of matter and all forces, including gravity, as affine algebra. •WDW features Hermitian generators in spite of fermionic content: Standard Model addressed. •Constructed a family of physical states for the full, coupled theory via affine coherent states. •Fundamental uncertainty relation, predicated on non-vanishing cosmological constant.
Bakhshpour, Monireh; Derazshamshir, Ali; Bereli, Nilay; Elkak, Assem; Denizli, Adil
2016-04-01
Immobilized metal-affinity chromatography (IMAC) has gained significant interest as a widespread separation and purification tool for therapeutic proteins, nucleic acids and other biological molecules. IMAC has enormous potential for proteins with naturally surface-exposed histidine residues and for recombinant proteins with histidine clusters. Cryogels as monolithic materials have recently been proposed as promising chromatographic adsorbents for the separation of biomolecules in downstream processing. In the present study, IMAC cryogels were synthesized and utilized for the adsorption and separation of immunoglobulin G (IgG) from IgG solution and whole human plasma. For this purpose, Cu(II) ions were coupled to poly(hydroxyethyl methacrylate) (PHEMA) using poly(ethylene imine) (PEI) as the chelating ligand. Cryogel formation was optimized by varying the proportion of PEI from 1% to 15% along with different amounts of Cu(II) as the chelating metal. The prepared cryogels were characterized by scanning electron microscopy, Fourier transform infrared spectroscopy, and thermogravimetric analysis. The [PHEMA/PEI]-Cu(II) cryogels were assayed for their capability to bind human IgG from aqueous solutions. The IMAC cryogels were found to have high affinity toward human IgG. The adsorption of human IgG onto the PHEMA/PEI cryogels (10% PEI) was investigated with the Cu(II) concentration varied as 10, 50, 100 and 150 mg/L. The separation of human IgG was achieved in one purification step at pH 7.4. The maximum adsorption capacity, 72.28 mg/g of human IgG, was observed for [PHEMA/PEI]-Cu(II) (10% PEI). The purification efficiency and human IgG purity were investigated by sodium dodecyl sulfate polyacrylamide gel electrophoresis (SDS-PAGE).
Egami, Yoshiyuki; Iwase, Shigeru; Tsukamoto, Shigeru; Ono, Tomoya; Hirose, Kikuji
2015-09-01
We develop a first-principles electron-transport simulator based on the Lippmann-Schwinger (LS) equation within the framework of the real-space finite-difference scheme. In our fully real-space-based LS (grid LS) method, the ratio expression technique for the scattering wave functions and the Green's function elements of the reference system is employed to avoid numerical collapse. Furthermore, we present analytical expressions and/or prominent calculation procedures for the retarded Green's function, which are utilized in the grid LS approach. In order to demonstrate the performance of the grid LS method, we simulate the electron-transport properties of the semiconductor-oxide interfaces sandwiched between semi-infinite jellium electrodes. The results confirm that the leakage current through the (001)Si-SiO_{2} model becomes much larger when the dangling-bond state is induced by a defect in the oxygen layer, while that through the (001)Ge-GeO_{2} model is insensitive to the dangling bond state.
He, Ling; Jia, Qi-jian; Li, Chao; Xu, Hao
2016-01-01
The rapid development of the coastal economy in Hebei Province caused a rapid transition of the coastal land use structure, which has threatened land ecological security. Therefore, calculating the ecosystem service value of land use and exploring the ecological security baseline can provide the basis for regional ecological protection and rehabilitation. Taking Huanghua, a city in the southeast of Hebei Province, as an example, this study explored the joint point, joint path and joint method between ecological security and food security, and then calculated the ecological security baseline of Huanghua City based on the ecosystem service value and the food safety standard. The results showed that per-unit-area ecosystem service values, from maximum to minimum, were in this order: wetland, water, garden, cultivated land, meadow, other land, salt pans, saline and alkaline land, constructive land. The contribution rates of each ecological function value, from high to low, were nutrient recycling, water conservation, entertainment and culture, material production, biodiversity maintenance, gas regulation, climate regulation and environmental purification. The security baseline of grain production was 0.21 kg · m⁻², the security baseline of grain output value was 0.41 yuan · m⁻², the baseline of ecosystem service value was 21.58 yuan · m⁻², and the total ecosystem service value in the research area was 4.244 billion yuan. In 2081 the ecological security will reach the bottom line and the ecological system, in which humans are the subject, will be on the verge of collapse. According to the ecological security status, Huanghua can be divided into 4 zones, i.e., ecological core protection zone, ecological buffer zone, ecological restoration zone and human activity core zone.
Institute of Scientific and Technical Information of China (English)
TANG Yi; FANG Yong-li; YANG Luo; SUN Yu-xin; YU Zheng-hua
2012-01-01
A new accurate calculation method for electric power harmonic parameters is presented. Based on the delay time theorem of the Fourier transform, the frequency of the electric power signal is calculated; then, using interpolation in the frequency domain of the windows, the parameters (amplitude and phase) of each harmonic signal are calculated accurately. The effect of the delay time and of the windows on the harmonic calculation accuracy is analysed. Digital simulation and physical measurement tests show that the proposed method is effective and has advantages over other methods based on multipoint interpolation, especially in calculation time cost; it is therefore well suited to single-chip DSP micro-processors.
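Frequency-domain interpolation with a window of this general kind can be sketched as below. Note this uses a Hann window with Grandke-style two-bin interpolation as a generic stand-in; the paper's delay-time method itself is not reproduced here.

```python
import numpy as np

def harmonic_estimate(x, fs):
    """Estimate frequency and amplitude of the dominant tone using a Hann
    window and two-bin interpolation between the peak and its larger
    neighbour (Grandke's method)."""
    n = len(x)
    X = np.fft.rfft(x * np.hanning(n))
    k = np.argmax(np.abs(X[1:])) + 1           # dominant bin, skipping DC
    if np.abs(X[k + 1]) >= np.abs(X[k - 1]):   # interpolate toward larger side
        a = np.abs(X[k + 1]) / np.abs(X[k])
        delta = (2 * a - 1) / (1 + a)
    else:
        a = np.abs(X[k - 1]) / np.abs(X[k])
        delta = -(2 * a - 1) / (1 + a)
    freq = (k + delta) * fs / n
    # Hann main-lobe shape |W(delta)| (normalised so W(0) = 1) recovers
    # the amplitude lost to the fractional-bin offset
    wd = np.sinc(delta) / (1 - delta ** 2)
    amp = 4 * np.abs(X[k]) / (n * wd)
    return freq, amp

fs, n = 1000.0, 1024
t = np.arange(n) / fs
x = 2.3 * np.sin(2 * np.pi * 50.2 * t + 0.7)   # off-bin 50.2 Hz test tone
f, a = harmonic_estimate(x, fs)
print(round(f, 2), round(a, 2))
```

The phase follows similarly from the argument of the peak bin with a correction for the fractional offset; scanning the remaining spectral peaks gives the higher harmonics.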
Watanabe, Chiduru; Fukuzawa, Kaori; Okiyama, Yoshio; Tsukamoto, Takayuki; Kato, Akifumi; Tanaka, Shigenori; Mochizuki, Yuji; Nakano, Tatsuya
2013-04-01
We develop an inter-fragment interaction energy (IFIE) analysis based on the three- and four-body corrected fragment molecular orbital (FMO3 and FMO4) method to evaluate the interactions of functional group units in the context of structure-based drug design. The novel subdividing fragmentation method for a ligand (in units of their functional groups) and amino acid residues (in units of their main and side chains) enables us to understand the ligand-binding mechanism in more detail without sacrificing chemical accuracy of the total energy and IFIEs by using the FMO4 method. We perform FMO4 calculations with the second order Møller-Plesset perturbation theory for an estrogen receptor (ER) and the 17β-estradiol (EST) complex using the proposed fragmentation method and assess the interaction for each ligand-binding site by the FMO4-IFIE analysis. When the steroidal EST is divided into two functional units including "A ring" and "D ring", respectively, the FMO4-IFIE analysis reveals their binding affinity with surrounding fragments of the amino acid residues; the "A ring" of EST has polarization interaction with the main chain of Thr347 and two hydrogen bonds with the side chains of Glu353 and Arg394; the "D ring" of EST has a hydrogen bond with the side chain of His524. In particular, the CH/π interactions of the "A ring" of EST with the side chains of Leu387 and Phe404 are easily identified in cooperation with the CHPI program. The FMO4-IFIE analysis using our novel subdividing fragmentation method, which provides higher resolution than the conventional IFIE analysis in units of ligand and each amino acid residue in the framework of the two-body approximation, is a useful tool for revealing ligand-binding mechanism and would be applicable to rational drug design such as structure-based drug design and fragment-based drug design.
Comparison of CT number calibration techniques for CBCT-based dose calculation
Energy Technology Data Exchange (ETDEWEB)
Dunlop, Alex [The Royal Marsden NHS Foundation Trust, Joint Department of Physics, Institute of Cancer Research, London (United Kingdom); The Royal Marsden Hospital, Sutton, Surrey, Downs Road (United Kingdom); McQuaid, Dualta; Nill, Simeon; Hansen, Vibeke N.; Oelfke, Uwe [The Royal Marsden NHS Foundation Trust, Joint Department of Physics, Institute of Cancer Research, London (United Kingdom); Murray, Julia; Bhide, Shreerang; Harrington, Kevin [The Royal Marsden Hospital, Sutton, Surrey, Downs Road (United Kingdom); The Institute of Cancer Research, London (United Kingdom); Poludniowski, Gavin [Karolinska University Hospital, Department of Medical Physics, Stockholm (Sweden); Nutting, Christopher [The Institute of Cancer Research, London (United Kingdom); Newbold, Kate [The Royal Marsden Hospital, Sutton, Surrey, Downs Road (United Kingdom)
2015-12-15
The aim of this work was to compare and validate various computed tomography (CT) number calibration techniques with respect to cone beam CT (CBCT) dose calculation accuracy. CBCT dose calculation accuracy was assessed for pelvic, lung, and head and neck (H and N) treatment sites for two approaches: (1) physics-based scatter correction methods (CBCT_r); (2) density override approaches including assigning water density to the entire CBCT (W), assignment of either water or bone density (WB), and assignment of either water or lung density (WL). Methods for CBCT density assignment within a commercially available treatment planning system (RS_auto), where CBCT voxels are binned into six density levels, were assessed and validated. Dose-difference maps and dose-volume statistics were used to compare the CBCT dose distributions with the ground truth of a planning CT acquired the same day as the CBCT. For pelvic cases, all CT number calibration methods resulted in average dose-volume deviations below 1.5%. RS_auto provided larger than average errors for pelvic treatments for patients with large amounts of adipose tissue. For H and N cases, all CT number calibration methods resulted in average dose-volume differences below 1.0%, with CBCT_r (0.5%) and RS_auto (0.6%) performing best. For lung cases, the WL and RS_auto methods generated dose distributions most similar to the ground truth. The RS_auto density override approach is an attractive option for CT number adjustments for a variety of anatomical sites. RS_auto methods were validated, resulting in dose calculations that were consistent with those calculated on diagnostic-quality CT images, for CBCT images acquired of the lung, for patients receiving pelvic RT in cases without excess adipose tissue, and for H and N cases.
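The six-level density binning described for RS_auto can be illustrated generically. The thresholds and density values below are invented for the sketch and are not the vendor's calibration.

```python
import numpy as np

# Illustrative cut points (CBCT numbers) and the mass density (g/cm^3)
# assigned to each of the six resulting bins -- assumptions of this sketch.
thresholds = [-950, -700, -200, 100, 600]
densities = [0.0, 0.26, 0.9, 1.0, 1.1, 1.85]

def override_density(cbct):
    """Map each CBCT voxel value to one of six discrete density levels."""
    idx = np.digitize(cbct, thresholds)
    return np.asarray(densities)[idx]

voxels = np.array([-1000, -800, -400, 0, 300, 1200])
print(override_density(voxels))
```

In practice the cut points would be chosen per anatomical site and validated against same-day planning CT dose distributions, as done in the study.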
Calculation of Phase Equilibria Based on the Levenberg-Marquardt Method
Institute of Scientific and Technical Information of China (English)
Ruijie ZHANG; Lei LI; Zhongwei CHEN; Zhi HE; Wanqi JIE
2005-01-01
The Levenberg-Marquardt method, regarded as the best algorithm for obtaining least-squares solutions of nonlinear equations, is applied to calculate stable phase equilibria. It achieves a good combination of robustness and calculation speed. Its application to the ternary Al-Si-Mg system is described in detail. The calculated phase equilibria agree well with the experimental results. Furthermore, the Levenberg-Marquardt method is not sensitive to the initial values.
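A bare-bones Levenberg-Marquardt iteration is shown below to make the robustness/speed trade-off concrete: the damping parameter interpolates between Gauss-Newton (fast near the solution) and gradient descent (robust far from it). This is a generic sketch on a toy problem, not the phase-equilibrium implementation.

```python
import numpy as np

def levenberg_marquardt(r, J, x0, lam=1e-3, tol=1e-10, max_iter=100):
    """Minimise ||r(x)||^2 given a residual function r and Jacobian J."""
    x = np.asarray(x0, float)
    for _ in range(max_iter):
        res, jac = r(x), J(x)
        # Damped normal equations: (J^T J + lam I) step = -J^T r
        A = jac.T @ jac + lam * np.eye(len(x))
        step = np.linalg.solve(A, -jac.T @ res)
        if np.sum(r(x + step) ** 2) < np.sum(res ** 2):
            x, lam = x + step, lam * 0.5   # accept: trust the model more
        else:
            lam *= 2.0                     # reject: damp harder
        if np.linalg.norm(step) < tol:
            break
    return x

# Toy nonlinear fit: recover a in y = exp(a*t) from exact data with a = 1.3
t = np.linspace(0, 1, 20)
y = np.exp(1.3 * t)
r = lambda x: np.exp(x[0] * t) - y
J = lambda x: (t * np.exp(x[0] * t)).reshape(-1, 1)
print(round(levenberg_marquardt(r, J, [0.0])[0], 3))
```

The acceptance test on each step is what makes the method forgiving of poor initial values, consistent with the insensitivity to starting points noted in the abstract.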
Energy Technology Data Exchange (ETDEWEB)
Mikell, Justin K. [Department of Radiation Physics, University of Texas MD Anderson Cancer Center, Houston, Texas (United States); University of Texas Graduate School of Biomedical Sciences at Houston, Houston, Texas (United States); Klopp, Ann H. [Department of Radiation Oncology, University of Texas MD Anderson Cancer Center, Houston, Texas (United States); Gonzalez, Graciela M.N. [Department of Biostatistics, University of Texas MD Anderson Cancer Center, Houston, Texas (United States); Kisling, Kelly D. [Department of Radiation Physics-Patient Care, University of Texas MD Anderson Cancer Center, Houston, Texas (United States); University of Texas Graduate School of Biomedical Sciences at Houston, Houston, Texas (United States); Price, Michael J. [Department of Physics and Astronomy, Louisiana State University and Agricultural and Mechanical College, Baton Rouge, Louisiana, and Mary Bird Perkins Cancer Center, Baton Rouge, Louisiana (United States); Berner, Paula A. [Department of Radiation Physics, University of Texas MD Anderson Cancer Center, Houston, Texas (United States); Eifel, Patricia J. [Department of Radiation Oncology, University of Texas MD Anderson Cancer Center, Houston, Texas (United States); Mourtada, Firas, E-mail: fmourtad@christianacare.org [Department of Radiation Physics-Patient Care, University of Texas MD Anderson Cancer Center, Houston, Texas (United States); Department of Experimental Diagnostic Imaging, University of Texas MD Anderson Cancer Center, Houston, Texas (United States); Department of Radiation Oncology, Helen F. Graham Cancer Center, Newark, Delaware (United States)
2012-07-01
Purpose: To investigate the dosimetric impact of the heterogeneity dose calculation Acuros (Transpire Inc., Gig Harbor, WA), a grid-based Boltzmann equation solver (GBBS), for brachytherapy in a cohort of cervical cancer patients. Methods and Materials: The impact of heterogeneities was retrospectively assessed in treatment plans for 26 patients who had previously received ¹⁹²Ir intracavitary brachytherapy for cervical cancer with computed tomography (CT)/magnetic resonance-compatible tandems and unshielded colpostats. The GBBS models sources, patient boundaries, applicators, and tissue heterogeneities. Multiple GBBS calculations were performed with and without the solid model applicator, with and without overriding the patient contour to 1 g/cm³ muscle, and with and without overriding contrast materials to muscle or 2.25 g/cm³ bone. The impact of source and boundary modeling, applicator, tissue heterogeneities, and sensitivity of CT-to-material mapping of contrast were derived from the multiple calculations. American Association of Physicists in Medicine Task Group 43 (TG-43) guidelines and the GBBS were compared for the following clinical dosimetric parameters: Manchester points A and B, International Commission on Radiation Units and Measurements (ICRU) report 38 rectal and bladder points, three and nine o'clock, and D2cm³ to the bladder, rectum, and sigmoid. Results: Points A and B, D2cm³ bladder, ICRU bladder, and three and nine o'clock were within 5% of TG-43 for all GBBS calculations. The source and boundary modeling and the applicator account for most of the differences between the GBBS and TG-43 guidelines. The D2cm³ rectum (n = 3), D2cm³ sigmoid (n = 1), and ICRU rectum (n = 6) had differences of >5% from TG-43 for the worst-case incorrect mapping of contrast to bone. Clinical dosimetric parameters were within 5% of TG-43 when rectal and balloon contrast were mapped to bone and radiopaque packing was not overridden
Chasing polys: Interdisciplinary affinity and its connection to physics identity
Scott, Tyler D.
This research is based on two motivations that merge by means of the frameworks of interdisciplinary affinity and physics identity. First, a goal of education is to develop interdisciplinary abilities in students' thinking and work. But an often-ignored factor is students' interests and beliefs about being interdisciplinary. Thus, this work develops and uses a framework called interdisciplinary affinity. It encompasses students' interests in making connections across disciplines and their beliefs about their abilities to make those connections. The second motivation of this research is to better understand how to engage more students with physics. Physics identity describes how students see themselves in relation to physics. By understanding how physics identity is developed, researchers and educators can identify factors that increase interest and engagement in physics classrooms. Therefore, physics identity was used in conjunction with interdisciplinary affinity. Using a mixed methods approach, this research used quantitative data to identify the relationships interdisciplinary affinity has with physics identity and the physics classroom. These connections were explored in more detail using a case study of three students in a high school physics class. Results showed significant and positive relationships between interdisciplinary affinity and physics identity, including the individual interest and recognition components of identity. It also identified characteristics of physics classrooms that had a significant, positive relationship with interdisciplinary affinity. The qualitative case study highlighted the importance of student interest to the relationship between interdisciplinary affinity and physics identity. It also identified interest and mastery orientation as key to understanding the link between interdisciplinary affinity and the physics classroom. These results are a positive sign that by understanding interdisciplinary affinity and physics identity
Institute of Scientific and Technical Information of China (English)
Xuebin Wang; Shuhong Dai; Long Hai
2004-01-01
The capacity of energy absorption by fault bands after rock burst was calculated quantitatively according to shear stress-shear deformation curves, considering the interactions and interplay among microstructures due to the heterogeneity of strain-softening rock materials. The post-peak stiffness of rock specimens subjected to direct shear was derived strictly based on gradient-dependent plasticity, which cannot be obtained from the classical elastoplastic theory. Analytical solutions for the dissipated energy of rock burst were proposed whether the slope of the post-peak shear stress-shear deformation curve is positive or not. The analytical solutions show that shear stress level, confining pressure, shear strength, brittleness, strain rate and heterogeneity of rock materials have important influence on the dissipated energy. A larger value of the dissipated energy means that the capacity of energy dissipation in the form of shear bands is superior and a lower magnitude of rock burst is expected under the condition of the same work done by external shear force. The possibility of rock burst is reduced for a lower softening modulus or a larger thickness of shear bands.
GPAW - massively parallel electronic structure calculations with Python-based software.
Energy Technology Data Exchange (ETDEWEB)
Enkovaara, J.; Romero, N.; Shende, S.; Mortensen, J. (LCF)
2011-01-01
Electronic structure calculations are a widely used tool in materials science and a large consumer of supercomputing resources. Traditionally, software packages for these kinds of simulations have been implemented in compiled languages, where Fortran in its different versions has been the most popular choice. While dynamic, interpreted languages such as Python can increase programmer efficiency, they cannot compete directly with the raw performance of compiled languages. However, by using an interpreted language together with a compiled language, it is possible to retain most of the productivity-enhancing features together with good numerical performance. We have used this approach in implementing the electronic structure simulation software GPAW with a combination of the Python and C programming languages. While the chosen approach works well on standard workstations and in Unix environments, massively parallel supercomputing systems can present challenges in porting, debugging and profiling the software. In this paper we describe some details of the implementation and discuss the advantages and challenges of the combined Python/C approach. We show that despite the challenges it is possible to obtain good numerical performance and good parallel scalability with Python-based software.
Directory of Open Access Journals (Sweden)
Lopes Antonio
2009-01-01
Background: In patients with congenital heart disease undergoing cardiac catheterization for hemodynamic purposes, parameter estimation by the indirect Fick method using a single predicted value of oxygen consumption has been a matter of criticism. Objective: We developed a computer-based routine for rapid estimation of replicate hemodynamic parameters using multiple predicted values of oxygen consumption. Materials and Methods: Using Microsoft® Excel facilities, we constructed a matrix containing 5 models (equations) for prediction of oxygen consumption, along with all additional formulas needed to obtain replicate estimates of hemodynamic parameters. Results: By entering data from 65 patients with ventricular septal defects, aged 1 month to 8 years, it was possible to obtain multiple predictions of oxygen consumption, with clear between-age-group (P < .001) and between-method (P < .001) differences. Using these predictions in the individual patient, it was possible to obtain the upper and lower limits of a likely range for any given parameter, which made estimation more realistic. Conclusion: The organized matrix allows rapid generation of replicate parameter estimates, without the errors that accompany exhaustive manual calculation.
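The replicate-estimate idea can be sketched in a few lines: apply the indirect Fick equation, Q = VO2 / (CaO2 − CvO2), once per predicted oxygen-consumption value and report the resulting range. This is an illustration only, not the authors' Excel matrix; the five VO2 values and the blood oxygen contents below are hypothetical placeholders for the five prediction models.

```python
def fick_cardiac_output(vo2_ml_min, ca_o2_ml_dl, cv_o2_ml_dl):
    """Cardiac output (L/min) from the indirect Fick equation Q = VO2 / (CaO2 - CvO2)."""
    avdo2_ml_l = (ca_o2_ml_dl - cv_o2_ml_dl) * 10.0  # mL O2/dL blood -> mL O2/L blood
    return vo2_ml_min / avdo2_ml_l

def replicate_range(vo2_predictions, ca_o2, cv_o2):
    """Upper and lower limits of the likely range over several VO2 prediction models."""
    outputs = [fick_cardiac_output(v, ca_o2, cv_o2) for v in vo2_predictions]
    return min(outputs), max(outputs)

# Hypothetical predicted VO2 values (mL/min) standing in for the 5 models:
vo2_models = [120.0, 132.0, 140.0, 151.0, 158.0]
low, high = replicate_range(vo2_models, ca_o2=18.0, cv_o2=13.0)
```

Rather than a single point estimate, the clinician sees the spread (here, `low` to `high` L/min) implied by disagreement among the prediction models.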
Fission yield calculation using toy model based on Monte Carlo simulation
Energy Technology Data Exchange (ETDEWEB)
Jubaidah, E-mail: jubaidah@student.itb.ac.id [Nuclear Physics and Biophysics Division, Department of Physics, Bandung Institute of Technology. Jl. Ganesa No. 10 Bandung – West Java, Indonesia 40132 (Indonesia); Physics Department, Faculty of Mathematics and Natural Science – State University of Medan. Jl. Willem Iskandar Pasar V Medan Estate – North Sumatera, Indonesia 20221 (Indonesia); Kurniadi, Rizal, E-mail: rijalk@fi.itb.ac.id [Nuclear Physics and Biophysics Division, Department of Physics, Bandung Institute of Technology. Jl. Ganesa No. 10 Bandung – West Java, Indonesia 40132 (Indonesia)
2015-09-30
The toy model is a new approximation for predicting fission yield distributions. The toy model treats the nucleus as an elastic toy consisting of marbles, where the number of marbles represents the number of nucleons, A. This toy nucleus is able to imitate the properties of a real nucleus. In this research, the toy nucleons are only influenced by a central force, and energy entanglement is neglected. A heavy toy nucleus induced by a toy nucleon splits into two fragments; these two fission fragments are called the fission yield. The fission process in the toy model is illustrated by two Gaussian curves intersecting each other, described by five parameters: the scission point of the two curves (R{sub c}), the means of the left and right curves (μ{sub L} and μ{sub R}), and the deviations of the left and right curves (σ{sub L} and σ{sub R}). The fission yield distribution is analyzed by Monte Carlo simulation. The results show that variation in σ or μ can significantly shift the average frequency of asymmetric fission yields and vary the range of the fission yield probability distribution, whereas variation in the iteration coefficient only changes the frequency of fission yields. Monte Carlo simulation of the fission yield calculation using the toy model successfully reproduces the same tendency as experimental results, where the average of the light fission yield is in the range of 90
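The Monte Carlo sampling step can be illustrated with a deliberately simplified sketch: draw the light-fragment mass from a single Gaussian peak and let mass conservation fix the heavy partner. This is not the authors' code; the full toy model intersects two Gaussians at R{sub c}, and the parameter values below (A = 236, μ = 95, σ = 5) are illustrative only.

```python
import random

def sample_yields(n, a_total=236, mu_light=95.0, sigma_light=5.0):
    """Sample n fission events; each event is a (light, heavy) fragment mass pair."""
    events = []
    for _ in range(n):
        light = random.gauss(mu_light, sigma_light)  # draw from the light-fragment peak
        heavy = a_total - light                      # mass conservation fixes the partner
        events.append((min(light, heavy), max(light, heavy)))
    return events

random.seed(1)                                       # reproducible demonstration
events = sample_yields(10000)
mean_light = sum(light for light, _ in events) / len(events)
```

Histogramming the sampled masses recovers the two-peaked (asymmetric) yield curve, and shifting μ or σ moves and broadens the peaks, which is the sensitivity the abstract describes.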
Structure reconstruction of TiO2-based multi-wall nanotubes: first-principles calculations.
Bandura, A V; Evarestov, R A; Lukyanov, S I
2014-07-28
A new method of theoretical modelling of polyhedral single-walled nanotubes based on the consolidation of walls in the rolled-up multi-walled nanotubes is proposed. Molecular mechanics and ab initio quantum mechanics methods are applied to investigate the merging of walls in nanotubes constructed from the different phases of titania. The combination of two methods allows us to simulate the structures which are difficult to find only by ab initio calculations. For nanotube folding we have used (1) the 3-plane fluorite TiO2 layer; (2) the anatase (101) 6-plane layer; (3) the rutile (110) 6-plane layer; and (4) the 6-plane layer with lepidocrocite morphology. The symmetry of the resulting single-walled nanotubes is significantly lower than the symmetry of initial coaxial cylindrical double- or triple-walled nanotubes. These merged nanotubes acquire higher stability in comparison with the initial multi-walled nanotubes. The wall thickness of the merged nanotubes exceeds 1 nm and approaches the corresponding parameter of the experimental patterns. The present investigation demonstrates that the merged nanotubes can integrate the two different crystalline phases in one and the same wall structure.
An Efficient Algorithm for Calculating Aircraft RCS Based on the Geometrical Characteristics
Institute of Scientific and Technical Information of China (English)
Gao Zhenghong; Wang Mingliang
2008-01-01
Taking into account the influence of scatterer geometry on induced currents, an algorithm termed the sparse-matrix method (SMM) is proposed to calculate the radar cross section (RCS) of aircraft configurations. Based on geometrical characteristics and the method of moments (MOM), the SMM observes that the strong current-coupling zone can be predefined according to the shape of the scatterers. Two geometrical parameters, the surface curvature and the electrical distance between the field and source positions, are deduced to distinguish the dominant current coupling. The strong current coupling is then computed to construct an impedance matrix of sparse nature, which is solved to compute the RCS. The efficiency and feasibility of the SMM are demonstrated by computing the electromagnetic scattering of several shapes, such as a cone-sphere with a gap, a bi-arc column and a stealth aircraft configuration. The numerical results show that: (1) the accuracy of the SMM is satisfactory compared with the MOM, while its computational time is only about 8% of that of the MOM; (2) with the electrical distance considered, additionally accounting for the surface curvature reduces the computation time by a further 9.5%.
Improvement of Power Flow Calculation with Optimization Factor Based on Current Injection Method
Directory of Open Access Journals (Sweden)
Lei Wang
2014-01-01
This paper presents an improvement to power flow calculation based on the current injection method by introducing an optimization factor. In the proposed method, PQ buses are represented by current mismatches while PV buses are represented by power mismatches, which differs from the representation in conventional current injection power flow equations. By using the combined power and current injection mismatch method, the number of equations required can be decreased to only one per PV bus. The optimization factor is used to improve the iteration process and to ensure the effectiveness of the proposed method when the system is ill-conditioned. To verify the effectiveness of the method, IEEE test systems are solved with the conventional current injection method and with the improved method, and the results are compared. The comparisons show that the optimization factor improves the convergence characteristics effectively: when the system is at a high loading level and R/X ratio, the iteration count is one or two fewer than with the conventional current injection method, and when the system overloading is severe, the iteration count is four fewer.
Auxiliary-field-based trial wave functions in quantum Monte Carlo calculations
Chang, Chia-Chen; Rubenstein, Brenda M.; Morales, Miguel A.
2016-12-01
Quantum Monte Carlo (QMC) algorithms have long relied on Jastrow factors to incorporate dynamic correlation into trial wave functions. While Jastrow-type wave functions have been widely employed in real-space algorithms, they have seen limited use in second-quantized QMC methods, particularly in projection methods that involve a stochastic evolution of the wave function in imaginary time. Here we propose a scheme for generating Jastrow-type correlated trial wave functions for auxiliary-field QMC methods. The method is based on decoupling the two-body Jastrow into one-body projectors coupled to auxiliary fields, which then operate on a single determinant to produce a multideterminant trial wave function. We demonstrate that intelligent sampling of the most significant determinants in this expansion can produce compact trial wave functions that reduce errors in the calculated energies. Our technique may be readily generalized to accommodate a wide range of two-body Jastrow factors and applied to a variety of model and chemical systems.
Formation of a 6FDA-based ring polyimide with nanoscale cavity evaluated by DFT calculations
Fukuda, Mitsuhiro; Takao, Yoshimi; Tamai, Yoshinori
2005-04-01
The computer-aided molecular design of a rigid ring molecule has been performed. As a candidate molecule, the polyimide derived from 2,2-bis(3,4-carboxylphenyl) hexafluoropropane dianhydride (6FDA) with m-phenylenediamine (MDA) was used. The optimized structures of the 6FDA-MDA model compounds, including a precursor-type amic acid model, were investigated using density functional theory (DFT) at the B3LYP/6-311G(d,p) level. Using the optimized structures of the model compounds, the probable combinations that form a flat ring polyimide were identified by taking into account the spatial angles between the respective aromatic groups. We selected several combinations with different conformations and numbers of monomer units, and showed that the dimer, trimer and tetramer of not only the 6FDA-based ring imide but also the corresponding ring amic acid can have stable geometries. Each of them contains a cavity of sub-nanometer size and characteristic shape. Among them, the interaction energies with some guest molecules were evaluated for the smallest ring imide, constructed from two units of 6FDA-MDA, using DFT calculations.
Optimization of the Multi-Spectral Euclidean Distance Calculation for FPGA-based Spaceborne Systems
Cristo, Alejandro; Fisher, Kevin; Perez, Rosa M.; Martinez, Pablo; Gualtieri, Anthony J.
2012-01-01
Due to the high quantity of operations that spaceborne processing systems must carry out in space, new methodologies and techniques are being presented as good alternatives in order to free the main processor from work and improve the overall performance. These include the development of ancillary dedicated hardware circuits that carry out the more redundant and computationally expensive operations in a faster way, leaving the main processor free to carry out other tasks while waiting for the result. One of these devices is SpaceCube, a FPGA-based system designed by NASA. The opportunity to use FPGA reconfigurable architectures in space allows not only the optimization of the mission operations with hardware-level solutions, but also the ability to create new and improved versions of the circuits, including error corrections, once the satellite is already in orbit. In this work, we propose the optimization of a common operation in remote sensing: the Multi-Spectral Euclidean Distance calculation. For that, two different hardware architectures have been designed and implemented in a Xilinx Virtex-5 FPGA, the same model of FPGAs used by SpaceCube. Previous results have shown that the communications between the embedded processor and the circuit create a bottleneck that affects the overall performance in a negative way. In order to avoid this, advanced methods including memory sharing, Native Port Interface (NPI) connections and Data Burst Transfers have been used.
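A software reference model of the operation being accelerated helps make the hardware discussion concrete: the Multi-Spectral Euclidean Distance is simply the Euclidean distance between a pixel's spectral signature and a reference signature, one squared difference per band. The sketch below is a behavioral model for checking results, not the FPGA architecture itself; the 4-band values are illustrative.

```python
import math

def ms_euclidean_distance(pixel, reference):
    """Euclidean distance between two equal-length spectral signatures."""
    if len(pixel) != len(reference):
        raise ValueError("band counts differ")
    return math.sqrt(sum((p - r) ** 2 for p, r in zip(pixel, reference)))

# Illustrative 4-band reflectance signatures:
d = ms_euclidean_distance([0.12, 0.30, 0.45, 0.60], [0.10, 0.28, 0.50, 0.55])
```

In hardware, the per-band subtract-square-accumulate steps pipeline naturally, which is why the communication with the embedded processor, rather than the arithmetic, becomes the bottleneck the authors describe.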
Lin, Lin; Yang, Chao; He, Lixin
2012-01-01
We describe how to apply the recently developed pole expansion plus selected inversion (PEpSI) technique to Kohn-Sham density functional theory (DFT) electronic structure calculations that are based on atomic orbital discretization. We give analytic expressions for evaluating the charge density, total energy, Helmholtz free energy and atomic forces without using the eigenvalues and eigenvectors of the Kohn-Sham Hamiltonian. We also show how to update the chemical potential without using Kohn-Sham eigenvalues. The advantage of using PEpSI is that it has a much lower computational complexity than that associated with the matrix diagonalization procedure. We demonstrate the performance gain by comparing the timing of PEpSI with that of diagonalization on insulating and metallic nanotubes. For these quasi-1D systems, the complexity of PEpSI is linear with respect to the number of atoms. This linear scaling can be observed in our computational experiments when the number of atoms in a nanotube is larger than a few hundr...
Joint kinematic calculation based on clinical direct kinematic versus inverse kinematic gait models.
Kainz, H; Modenese, L; Lloyd, D G; Maine, S; Walsh, H P J; Carty, C P
2016-06-14
Most clinical gait laboratories use the conventional gait analysis model, which uses a computational method called direct kinematics (DK) to calculate joint kinematics. In contrast, musculoskeletal modelling approaches use inverse kinematics (IK) to obtain joint angles. IK allows additional analyses (e.g. muscle-tendon length estimates), which may provide valuable information for clinical decision-making in people with movement disorders. The aims of the current study were twofold: (1) to compare joint kinematics obtained by a clinical DK model (Vicon Plug-in-Gait) with those produced by a widely used IK model (available with the OpenSim distribution), and (2) to evaluate the difference in joint kinematics that can be attributed solely to the different computational methods (DK versus IK), anatomical models and marker sets by using MRI-based models. Eight children with cerebral palsy were recruited and presented for gait and MRI data collection sessions. Differences in joint kinematics of up to 13° were found between the Plug-in-Gait and the gait2392 OpenSim model. The majority of these differences (94.4%) were attributed to differences in the anatomical models, which included different anatomical segment frames and joint constraints; the different computational methods (DK versus IK) were responsible for only 2.7% of the differences. We recommend using the same anatomical model for kinematic and musculoskeletal analysis to ensure consistency between the obtained joint angles and musculoskeletal estimates.
Pricing swaptions and coupon bond options in affine term structure models
Schrager, D.F.; Pelsser, A.A.J.
2005-01-01
We propose an approach to find an approximate price of a swaption in Affine Term Structure Models. Our approach is based on the derivation of approximate dynamics in which the volatility of the Forward Swap Rate is itself an affine function of the factors. Hence we remain in the Affine framework and
Random and bias errors in simple regression-based calculations of sea-level acceleration
Howd, P.; Doran, K. J.; Sallenger, A. H.
2012-12-01
We examine the random and bias errors associated with three simple regression-based methods used to calculate the acceleration of sea-level elevation (SL). These methods are: (1) using ordinary least-squares regression (OLSR) to fit a single second-order (in time) equation to an entire elevation time series; (2) using a sliding regression window with OLSR 2nd-order fits to provide time- and window-length-dependent estimates; and (3) using a sliding regression window with OLSR 1st-order fits to provide time- and window-length-dependent estimates of sea-level rate differences (SLRD). A Monte Carlo analysis using synthetic elevation time series with 9 different noise formulations (red, AR(1), and white noise at 3 variance levels) is used to examine the error structure associated with the three analysis methods. We show that, as expected, the single-fit method (1), while providing statistically unbiased estimates of the mean acceleration over an interval, by statistical design does not provide estimates of time-varying acceleration. This technique cannot be expected to detect recent changes in SL acceleration, such as those predicted by some climate models. The two sliding-window techniques show similar qualitative results for the test time series, but differ dramatically in their statistical significance. Estimates of acceleration based on the 2nd-order fits (2) are numerically smaller than the rate differences (3) and, in the presence of near-equal residual noise, are more difficult to detect with statistical significance. We show, using SLRD estimates from tide-gauge data, how statistically significant changes in sea-level acceleration can be detected at different temporal and spatial scales.
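Methods (1) and (2) can be sketched directly: fit h(t) = c0 + c1·t + (a/2)·t² by ordinary least squares, so the mean acceleration is twice the quadratic coefficient, and the sliding-window variant simply repeats the fit on successive sub-spans. The sketch below uses a noise-free synthetic series (the paper's Monte Carlo adds red, AR(1), and white noise); the window length and coefficients are illustrative.

```python
import numpy as np

def mean_acceleration(t, h):
    """Mean SL acceleration from a single 2nd-order OLS fit (method 1)."""
    c2, c1, c0 = np.polyfit(t, h, 2)   # h ~ c2*t^2 + c1*t + c0
    return 2.0 * c2                    # acceleration = twice the quadratic coefficient

def sliding_acceleration(t, h, window):
    """Time- and window-length-dependent acceleration estimates (method 2)."""
    return [mean_acceleration(t[i:i + window], h[i:i + window])
            for i in range(len(t) - window + 1)]

# Noise-free synthetic series h(t) = 2 + 3*t + (a/2)*t**2 with a = 0.01:
t = np.arange(100, dtype=float)
h = 2.0 + 3.0 * t + 0.5 * 0.01 * t ** 2
```

On a noise-free quadratic both methods recover a exactly; the paper's point is how differently they behave once realistic residual noise is added.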
DEFF Research Database (Denmark)
Skjødt, Mette Louise
surface expression of various antibody formats in the generated knockout strain. Functional scFv and scFab fragments were efficiently displayed on yeast whereas impaired chain assembly and heavy chain degradation was observed for display of full-length IgG molecules. To identify the optimal polypeptide...... linker for yeast surface display of scFv and scFab fragments, we compared a series of different Gly-Ser-based linkers in display and antigen binding proficiency. We show that these formats of the model antibody can accommodate linkers of different lengths and that introduction of alanine or glutamate...... fragments by in vivo homologous recombination large combinatorial antibody libraries can easily be generated. We have optimized ordered assembly of three CDR fragments into a gapped vector and observed increased transformation efficiency in a yeast strain carrying a deletion of the SGS1 helicase...
Digital Game-Based Learning: A Supplement for Medication Calculation Drills in Nurse Education
Foss, Brynjar; Lokken, Atle; Leland, Arne; Stordalen, Jorn; Mordt, Petter; Oftedal, Bjorg F.
2014-01-01
Student nurses, globally, appear to struggle with medication calculations. In order to improve these skills among student nurses, the authors developed The Medication Game--an online computer game that aims to provide simple mathematical and medical calculation drills, and help students practise standard medical units and expressions. The aim of…
Directory of Open Access Journals (Sweden)
R. Frandoloso
2013-01-01
The expression of chemokines (CCL-2 and CXCL-8) and cytokines (IL-1α, IL-1β, IL-6, TNF-α, and IL-10) was evaluated by RT-qPCR in colostrum-deprived pigs vaccinated and challenged with Haemophilus parasuis serovar 5. Two vaccines containing native proteins with affinity to porcine transferrin (NPAPTim and NPAPTit) were tested, along with two control groups: one inoculated with PBS instead of antigen (challenge group, CHG) and one neither immunized nor infected (blank group). The use of NPAPTim and NPAPTit resulted in complete protection against H. parasuis (no clinical signs and/or lesions), and both vaccines kept the expression of the proinflammatory molecules at levels similar to the physiological values of the blank group. However, overexpression of all proinflammatory molecules was observed in the CHG group, mainly in the target infection tissues (brain, lungs, and spleen). High expression of CCL-2, CXCL-8, IL-1α, IL-1β, and IL-6 can be considered a characteristic of H. parasuis serovar 5 infection.
Energy Technology Data Exchange (ETDEWEB)
Zhao, Ying; Hammoudeh, Dalia; Yun, Mi-Kyung; Qi, Jianjun; White, Stephen W.; Lee, Richard E. (Tennessee-HSC); (SJCH)
2012-05-29
Dihydropteroate synthase (DHPS) is the validated drug target for sulfonamide antimicrobial therapy. However, due to widespread drug resistance and poor tolerance, the use of sulfonamide antibiotics is now limited. The pterin binding pocket in DHPS is highly conserved and is distinct from the sulfonamide binding site. It therefore represents an attractive alternative target for the design of novel antibacterial agents. We previously carried out the structural characterization of a known pyridazine inhibitor in the Bacillus anthracis DHPS pterin site and identified a number of unfavorable interactions that appear to compromise binding. With this structural information, a series of 4,5-dioxo-1,4,5,6-tetrahydropyrimido[4,5-c]pyridazines were designed to improve binding affinity. Most importantly, the N-methyl ring substitution was removed to improve binding within the pterin pocket, and the length of the side chain carboxylic acid was optimized to fully engage the pyrophosphate binding site. These inhibitors were synthesized and evaluated by an enzyme activity assay, X-ray crystallography, isothermal calorimetry, and surface plasmon resonance to obtain a comprehensive understanding of the binding interactions from structural, kinetic, and thermodynamic perspectives. This study clearly demonstrates that compounds lacking the N-methyl substitution exhibit increased inhibition of DHPS, but the beneficial effects of optimizing the side chain length are less apparent.
Equation of State of Al Based on Quantum Molecular Dynamics Calculations
Minakov, Dmitry V.; Levashov, Pavel R.; Khishchenko, Konstantin V.
2011-06-01
In this work, we present quantum molecular dynamics calculations of the shock Hugoniots of solid and porous samples as well as release isentropes and values of isentropic sound velocity behind the shock front for aluminum. We use the VASP code with an ultrasoft pseudopotential and GGA exchange-correlation functional. Up to 108 particles have been used in calculations. For the Hugoniots of Al we solve the Hugoniot equation numerically. To calculate release isentropes, we use Zel'dovich's approach and integrate an ordinary differential equation for the temperature thus restoring all thermodynamic parameters. Isentropic sound velocity is calculated by differentiation along isentropes. The results of our calculations are in good agreement with experimental data. Thus, quantum molecular dynamics results can be effectively used for verification or calibration of semiempirical equations of state under conditions of lack of experimental information at high energy densities. This work is supported by RFBR, grants 09-08-01129 and 11-08-01225.
Bending Moment Calculations for Piles Based on the Finite Element Method
Directory of Open Access Journals (Sweden)
Yu-xin Jie
2013-01-01
Using the finite element analysis program ABAQUS, a series of calculations on a cantilever beam, a pile, and a sheet-pile wall were performed to investigate methods of computing the bending moment. The analyses demonstrated that shear locking is not significant for a passive pile embedded in soil; therefore, higher-order elements are not always necessary in the computation. The number of grid cells across the pile section is important for the bending moment calculated from stress and less significant for that calculated from displacement. Although computing the bending moment from displacement requires fewer grid cells across the pile section, it sometimes produces scatter in the results. For displacement calculation, a pile row can be suitably represented by an equivalent sheet-pile wall, although the resulting bending moments may differ. Calculated bending moments may differ greatly with different grid partitions and computational methods; therefore, a comparison of results is necessary when performing the analysis.
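The "displacement route" the abstract compares can be illustrated in a few lines: with M = EI·y'', the bending moment follows from the nodal deflections by a central finite difference. The sketch below checks this against a tip-loaded cantilever, whose analytic deflection y(x) = P·x²(3L − x)/(6EI) gives M(x) = P·(L − x); the beam properties are illustrative, and this is a demonstration of the principle, not the ABAQUS workflow.

```python
import numpy as np

def moment_from_displacement(x, y, ei):
    """Bending moment M = EI * y'' at interior nodes, y'' by central differences."""
    h = x[1] - x[0]                                  # uniform node spacing assumed
    ypp = (y[2:] - 2.0 * y[1:-1] + y[:-2]) / h ** 2  # second derivative of deflection
    return ei * ypp

# Cantilever of length L with tip load P (illustrative properties):
L_beam, P, EI = 10.0, 5.0, 2.0e4
x = np.linspace(0.0, L_beam, 101)
y = P / (6.0 * EI) * (3.0 * L_beam * x ** 2 - x ** 3)  # analytic deflection
m = moment_from_displacement(x, y, EI)               # compare with M(x) = P*(L - x)
```

Because differencing amplifies any noise in the nodal displacements, the scatter the abstract reports for coarse or irregular grids is exactly what this formulation predicts.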
Affine Contractions on the Plane
Celik, D.; Ozdemir, Y.; Ureyen, M.
2007-01-01
Contractions play a considerable role in the theory of fractals. However, it is not easy to find contractions which are not similitudes. In this study, it is shown by counter examples that an affine transformation of the plane carrying a given triangle onto another triangle may not be a contraction even if it contracts edges, heights or medians.…
Ray tracing based path-length calculations for polarized light tomographic imaging
Manjappa, Rakesh; Kanhirodan, Rajan
2015-09-01
A ray-tracing-based path-length calculation is investigated for polarized light transport in pixel space. Tomographic imaging using polarized light transport is promising for applications in optical projection tomography of small-animal imaging and of turbid media with low scattering. Polarized light transport through a medium can exhibit complex effects due to interactions such as optical rotation of linearly polarized light, birefringence, diattenuation and interior refraction. Here we investigate the effects of refraction of polarized light in a non-scattering medium. This step is used to obtain an initial absorption estimate, which can serve as a prior in a Monte Carlo (MC) program that simulates the transport of polarized light through a scattering medium, assisting faster convergence of the final estimate. The reflectances for p-polarized (parallel) and s-polarized (perpendicular) light are different, and hence there is a difference in the intensities that reach the detector. The algorithm computes the length of the ray in each pixel along the refracted path, and this is used to build the weight matrix. This weight matrix with corrected ray path lengths, together with the resultant intensity reaching the detector for each ray, is used in the algebraic reconstruction technique (ART). The proposed method is tested with numerical phantoms at various noise levels. The refraction errors due to regions of different refractive index are discussed, the difference in intensities with polarization is considered, and the improvements in reconstruction using the applied correction are presented.
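The weight-matrix entries are per-pixel path lengths of a ray segment. For a straight segment this can be computed by collecting the parameters at which it crosses the grid lines and attributing each piece to the pixel containing its midpoint, in the spirit of Siddon's method. The sketch below handles one straight 2-D segment on a unit-pixel grid; the refraction handling (bending the ray at interfaces before measuring lengths) is omitted, and the geometry is illustrative.

```python
import math

def pixel_path_lengths(p0, p1, nx, ny):
    """Length of the segment p0 -> p1 inside each unit pixel of an nx-by-ny grid."""
    (x0, y0), (x1, y1) = p0, p1
    ts = {0.0, 1.0}                        # segment parameters of all grid-line crossings
    if x1 != x0:
        ts |= {(i - x0) / (x1 - x0) for i in range(nx + 1)}
    if y1 != y0:
        ts |= {(j - y0) / (y1 - y0) for j in range(ny + 1)}
    ts = sorted(t for t in ts if 0.0 <= t <= 1.0)
    seg_len = math.hypot(x1 - x0, y1 - y0)
    weights = {}
    for ta, tb in zip(ts, ts[1:]):
        tm = 0.5 * (ta + tb)               # midpoint identifies which pixel the piece is in
        px = int(x0 + tm * (x1 - x0))
        py = int(y0 + tm * (y1 - y0))
        if 0 <= px < nx and 0 <= py < ny:
            weights[(px, py)] = weights.get((px, py), 0.0) + (tb - ta) * seg_len
    return weights

w = pixel_path_lengths((0.0, 0.0), (2.0, 2.0), 2, 2)  # diagonal across a 2x2 grid
```

A refracted ray would simply be a chain of such segments, one per homogeneous region, with the direction updated by Snell's law at each interface.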
Method for stability analysis based on the Floquet theory and Vidyn calculations
Energy Technology Data Exchange (ETDEWEB)
Ganander, Hans
2005-03-01
This report presents activity 3.7 of the STEM project Aerobig and deals with the aeroelastic stability of the complete wind turbine structure in operation. As wind turbine sizes increase, dynamic couplings become more important for loads and dynamic properties. The steady ambition to increase the cost competitiveness of wind turbine energy through optimisation methods lowers design margins, which in turn makes questions about turbine stability more important. The main objective of the project is to develop a general stability analysis tool, based on the VIDYN methodology for the turbine dynamic equations and the Floquet theory for the stability analysis. The Floquet theory was selected because it is independent of the number of blades and can thus be used for 2- as well as 3-bladed turbines. Although the latter dominate the market, the former have large potential for large offshore turbines. The fact that cyclic and individual blade pitch controls are being developed as a means of reducing fatigue also speaks for general methods such as Floquet. The first step of a general system for stability analysis has been developed: the code VIDSTAB. Together with other methods, such as the snapshot method, the Coleman transformation and the use of Fourier series, eigenfrequencies and modes can be analysed. It is general, with no restrictions on the number of blades or the symmetry of the rotor. The derivatives of the aerodynamic forces are calculated numerically in this first version; later versions would include state-space formulations of these forces, as well as of the controllers of turbine rotational speed, yaw direction and pitch angle.
Majumder, Moumita; Dawes, Richard; Wang, Xiao-Gang; Carrington, Tucker; Li, Jun; Guo, Hua; Manzhos, Sergei
2014-06-01
New potential energy surfaces for methane were constructed, represented as analytic fits to about 100,000 individual high-level ab initio data. Explicitly-correlated multireference data (MRCI-F12(AE)/CVQZ-F12) were computed using Molpro [1] and fit using multiple strategies. Fits with small to negligible errors were obtained using adaptations of the permutation-invariant-polynomials (PIP) approach [2,3] based on neural-networks (PIP-NN) [4,5] and the interpolative moving least squares (IMLS) fitting method [6] (PIP-IMLS). The PESs were used in full-dimensional vibrational calculations with an exact kinetic energy operator by representing the Hamiltonian in a basis of products of contracted bend and stretch functions and using a symmetry adapted Lanczos method to obtain eigenvalues and eigenvectors. Very close agreement with experiment was produced from the purely ab initio PESs. References 1- H.-J. Werner, P. J. Knowles, G. Knizia, 2012.1 ed. 2012, MOLPRO, a package of ab initio programs. see http://www.molpro.net. 2- Z. Xie and J. M. Bowman, J. Chem. Theory Comput 6, 26, 2010. 3- B. J. Braams and J. M. Bowman, Int. Rev. Phys. Chem. 28, 577, 2009. 4- J. Li, B. Jiang and Hua Guo, J. Chem. Phys. 139, 204103 (2013). 5- S Manzhos, X Wang, R Dawes and T Carrington, JPC A 110, 5295 (2006). 6- R. Dawes, X-G Wang, A.W. Jasper and T. Carrington Jr., J. Chem. Phys. 133, 134304 (2010).
Yang, Xihui; Hu, Yichen; Kong, Weijun; Chu, Xianfeng; Yang, Meihua; Zhao, Ming; Ouyang, Zhen
2014-11-01
A rapid, selective, and sensitive ultra-fast liquid chromatography with tandem mass spectrometry method was developed for the determination of ochratoxin A in traditional Chinese medicines, based on vortex-assisted solid-liquid microextraction and aptamer-affinity column clean-up. Through optimization of the sample pretreatment procedures and chromatographic conditions, good linearity (r(2) ≥ 0.9993), low limits of detection (0.5-0.8 μg/kg), and satisfactory recoveries (83.54-94.44%) demonstrated the reliability and applicability of the established method in various traditional Chinese medicines. Moreover, the aptamer-affinity column, prepared in-house, showed excellent feasibility owing to its specific recognition of ochratoxin A in the selected traditional Chinese medicines; the maximum adsorption amount and applicability value were 188.96 ± 10.56 ng and 72.3%, respectively. The matrix effects were effectively eliminated, especially for the m/z 404.2→358.0 transition of ochratoxin A. Application of the developed method to screening the natural contamination levels of ochratoxin A in 25 random traditional Chinese medicines on the market in China indicated that only eight samples were contaminated, at low levels below the legal limit (5.0 μg/kg) set by the European Union. This study provides a preferred choice for rapid and accurate monitoring of ochratoxin A in complex matrices.
Institute of Scientific and Technical Information of China (English)
刘婧
2011-01-01
Three-dimensional affine transformation formulas and the affine matrix representation in a three-dimensional coordinate system are derived from the two-dimensional affine transformation formulas, and a simple three-dimensional-to-two-dimensional coordinate conversion formula is given. The algorithm is implemented with the C++-based Qt class library and can be applied to graphics software that simulates three dimensions with two dimensions, such as three-dimensional transformation simulation in Flash and three-dimensional graphic transformation in GUI programming.
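The abstract does not give its exact formulas, but a generic 3D affine transform in homogeneous coordinates, followed by a simple orthographic 3D-to-2D projection, can be sketched as follows. The 4×4 matrix layout is the standard graphics convention, assumed here rather than taken from the paper.

```python
# Minimal sketch: 3D affine transform (homogeneous coordinates) plus an
# orthographic 3D->2D projection. Conventions are assumptions, not the
# paper's own formulas.
import math

def affine3d(point, matrix):
    """Apply a 4x4 affine matrix to a 3D point (x, y, z)."""
    x, y, z = point
    vec = (x, y, z, 1.0)
    return tuple(sum(matrix[r][c] * vec[c] for c in range(4))
                 for r in range(3))

def rotation_z(theta):
    """4x4 matrix for a rotation by theta about the z axis."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0, 0],
            [s,  c, 0, 0],
            [0,  0, 1, 0],
            [0,  0, 0, 1]]

def project_xy(point):
    """Orthographic projection onto the XY plane (drop z)."""
    return (point[0], point[1])

p = affine3d((1.0, 0.0, 2.0), rotation_z(math.pi / 2))  # 90 deg about z
q = project_xy(p)
```

Rotating (1, 0, 2) by 90° about z gives (0, 1, 2), whose 2D image is (0, 1); a perspective projection would divide by depth instead of simply dropping z.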
12 CFR 702.106 - Standard calculation of risk-based net worth requirement.
2010-01-01
... AFFECTING CREDIT UNIONS PROMPT CORRECTIVE ACTION Net Worth Classification § 702.106 Standard calculation of...) Allowance. Negative one hundred percent (−100%) of the balance of the Allowance for Loan and Lease...
Bourguignon, Laurent; Goutelle, Sylvain; Gérard, Cécile; Guillermet, Anne; Burdin de Saint Martin, Julie; Maire, Pascal; Ducher, Michel
2009-01-01
The use of amikacin is difficult because of its toxicity and its pharmacokinetic variability. This variability is almost ignored in adult standard dosage regimens, since only the weight is used in the dose calculation. Our objective was to test whether the pharmacokinetics of amikacin can be regarded as homogeneous, and whether the method of calculating the dose according to the patient's weight is appropriate. From a cohort of 580 patients, five groups of patients were created by statistical data partitioning. A population pharmacokinetic analysis was performed in each group. The adult population is not homogeneous in terms of pharmacokinetics. The doses required to achieve a maximum concentration of 60 mg/L differ strongly (585 to 1507 mg) between groups. The exclusive use of weight to calculate the dose of amikacin appears inappropriate for 80% of the patients, showing the limits of the formulae used to calculate aminoglycoside doses.
Similarity Calculation Method of Chinese Short Text Based on Semantic Feature Space
Liqiang Pan; Pu Zhang; Anping Xiong
2015-01-01
To improve the accuracy of short-text similarity calculation, this paper proposes using the history of short text messages to construct a semantic feature space, representing each short text as a vector in that space and performing semantic extension on it, and finally computing the similarity between the corresponding vectors in the semantic feature space. This method represents the semantic information of short text messages more thoroughly and thereby improves the accuracy of short-text similarity calculation.
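Once each short text is mapped to a vector in a semantic feature space, similarity is most commonly the cosine of the angle between the vectors. A minimal sketch, with invented vectors (the paper's actual feature construction is not reproduced here):

```python
# Cosine similarity between two semantic-space vectors (vectors invented).
import math

def cosine_similarity(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

text_a = [0.2, 0.8, 0.0, 0.4]   # hypothetical vector for short text A
text_b = [0.1, 0.9, 0.1, 0.3]   # hypothetical vector for short text B
sim = cosine_similarity(text_a, text_b)
```

Identical vectors score 1.0; the semantic extension described in the abstract would act upstream of this step, enriching the vectors before they are compared.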
The Activation Energy Of Ignition Calculation For Materials Based On Plastics
Rantuch Peter; Wachter Igor; Martinka Jozef; Kuracina Marcel
2015-01-01
This article deals with the calculation of the activation energy of ignition for plastics. Two types of polyamide 6 and one type each of polypropylene and polyurethane were selected as samples. The samples were tested under isothermal conditions at several temperatures while times to ignition were observed. From the obtained data, the activation energy relating to the moment of ignition was calculated for each plastic. The values for the individual plastics were different. The highest activation energies (129.5 kJ·mol⁻¹ and 106.2 kJ·mol⁻¹) were achieved by the polyamides 6, while the lowest was determined for the polyurethane sample.
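One common way to extract an activation energy from ignition times at several temperatures (assumed here; not necessarily the authors' exact procedure) is an Arrhenius-type fit of ln(t_ign) against 1/T, whose slope is Ea/R:

```python
# Arrhenius-style fit: ln(t_ign) = ln(A) + Ea/(R*T), slope -> Ea.
# The temperatures/times below are synthetic, generated from Ea = 120 kJ/mol,
# so the fit should recover that value; they are not the paper's data.
import math

R = 8.314  # J mol^-1 K^-1

def activation_energy(temps_K, times_s):
    """Least-squares slope of ln(t) vs 1/T, returned as Ea in kJ/mol."""
    xs = [1.0 / T for T in temps_K]
    ys = [math.log(t) for t in times_s]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope * R / 1000.0

Ea_true = 120e3  # J/mol, used only to generate the synthetic data
temps = [600.0, 620.0, 640.0, 660.0]              # K
times = [math.exp(Ea_true / (R * T)) * 1e-8 for T in temps]  # s
Ea = activation_energy(temps, times)
```

Because the synthetic points lie exactly on an Arrhenius line, the fit returns 120 kJ/mol to machine precision; real ignition data would scatter around the line.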
Effectiveness of a computer based medication calculation education and testing programme for nurses.
Sherriff, Karen; Burston, Sarah; Wallis, Marianne
2012-01-01
The aim of the study was to evaluate the effect of an online medication calculation education and testing programme. The outcome measures were medication calculation proficiency and self-efficacy. This quasi-experimental study involved the administration of questionnaires before and after nurses completed annual medication calculation testing. The study was conducted in two hospitals in south-east Queensland, Australia, which provide a variety of clinical services including obstetrics, paediatrics, ambulatory, mental health, acute and critical care, and community services. Participants were registered nurses (RNs) and enrolled nurses with a medication endorsement (EN(Med)) working as clinicians (n=107). Data pertaining to success rate, number of test attempts, self-efficacy, medication calculation error rates and nurses' satisfaction with the programme were collected. Medication calculation scores at the first test attempt improved following one year of access to the programme. Two of the self-efficacy subscales improved over time, and nurses reported satisfaction with the online programme. Results of this study may facilitate the continuation and expansion of medication calculation and administration education to improve nursing knowledge, inform practice and directly improve patient safety.
Directory of Open Access Journals (Sweden)
R. Sørensen
2006-01-01
Full Text Available The topographic wetness index (TWI, ln(a/tanβ, which combines local upslope contributing area and slope, is commonly used to quantify topographic control on hydrological processes. Methods of computing this index differ primarily in the way the upslope contributing area is calculated. In this study we compared a number of calculation methods for TWI and evaluated them in terms of their correlation with the following measured variables: vascular plant species richness, soil pH, groundwater level, soil moisture, and a constructed wetness degree. The TWI was calculated by varying six parameters affecting the distribution of accumulated area among downslope cells and by varying the way the slope was calculated. All possible combinations of these parameters were calculated for two separate boreal forest sites in northern Sweden. We did not find a calculation method that performed best for all measured variables; rather the best methods seemed to be variable and site specific. However, we were able to identify some general characteristics of the best methods for different groups of measured variables. The results provide guiding principles for choosing the best method for estimating species richness, soil pH, groundwater level, and soil moisture by the TWI derived from digital elevation models.
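The index itself is simple once the upslope contributing area and local slope are known; the methodological differences the study compares lie in how those inputs are derived from the DEM. A minimal sketch of the TWI formula, with invented values and the cell width used as the contour length:

```python
# Topographic wetness index: TWI = ln(a / tan(beta)), where a is the
# specific catchment area (upslope area per unit contour length) and
# beta is the local slope. Input values are illustrative only.
import math

def twi(upslope_area_m2, contour_width_m, slope_rad):
    """ln(a / tan(beta)) with a = upslope area / contour width."""
    a = upslope_area_m2 / contour_width_m
    return math.log(a / math.tan(slope_rad))

# Hypothetical cell: 5000 m^2 drains through a 10 m cell edge, 5 deg slope
w = twi(5000.0, 10.0, math.radians(5.0))
```

For these values TWI ≈ 8.65; flatter, more convergent cells (larger a, smaller β) score higher, matching the index's use as a wetness proxy.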
Directory of Open Access Journals (Sweden)
Huijun Xu
2016-01-01
Full Text Available Many clinics still use monitor unit (MU) calculations for electron treatment planning and/or quality assurance (QA). This work (1) investigates the clinical implementation of a dosimetry system including a modified American Association of Physicists in Medicine Task Group 71 (TG-71)-based electron MU calculation protocol (modified TG-71 electron [mTG-71E]) and an independent commercial calculation program, and (2) provides practice recommendations for clinical usage. Following the recently published TG-71 guidance, an organized mTG-71E databook was developed to facilitate data access and subsequent MU computation according to our clinical need. A recently released commercial secondary calculation program, Mobius3D (version 1.5.1) Electron Quick Calc (EQC) (Mobius Medical System, LP, Houston, TX, USA), with an inherent pencil-beam algorithm and independent beam data, was used to corroborate the calculation results. For various setups, the calculation consistency and accuracy of mTG-71E and EQC were validated by their cross-comparison and by ion chamber measurements in a solid water phantom. Our results show good agreement between mTG-71E and EQC calculations, with an average difference of 2%. Both mTG-71E and EQC calculations match measurements within 3%. In general, these differences increase with decreased cutout size, increased extended source-to-surface distance, and lower energy. It is feasible to use TG-71 and Mobius3D clinically as primary and secondary electron MU calculations, or vice versa. We recommend a practice that only requires patient-specific measurements in the rare cases when mTG-71E and EQC calculations differ by 5% or more.
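A heavily simplified TG-71-style electron MU calculation divides the prescribed dose by the reference output times a product of correction factors, and the primary/secondary cross-check compares the two results against an action level. The factor set and all values below are illustrative assumptions; the real TG-71 formalism includes additional terms.

```python
# Simplified sketch of an electron MU calculation and a primary-vs-secondary
# agreement check. Factor names and values are hypothetical, not TG-71's
# full formalism.

def electron_mu(prescribed_dose_cGy, ref_output_cGy_per_MU, pdd_frac,
                cone_factor, cutout_factor, ssd_factor):
    """MU = dose / (reference output x correction factors)."""
    return prescribed_dose_cGy / (ref_output_cGy_per_MU * pdd_frac *
                                  cone_factor * cutout_factor * ssd_factor)

def percent_diff(a, b):
    """Symmetric percent difference between two MU results."""
    return 200.0 * abs(a - b) / (a + b)

mu_primary = electron_mu(200.0, 1.0, 0.98, 1.0, 0.97, 1.01)
mu_check = 209.0  # hypothetical secondary (EQC-style) result
ok = percent_diff(mu_primary, mu_check) < 5.0  # 5% action level from the text
```

Here the two results agree well within the 5% action level, so under the recommended practice no patient-specific measurement would be triggered.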
DEFF Research Database (Denmark)
Göksu, Ömer; Teodorescu, Remus; Bak-Jensen, Birgitte;
2012-01-01
As more renewable energy sources, especially wind turbines, are installed in the power system, analysis of the power system with renewable energy sources becomes more important. Short-circuit calculation is a well-known fault-analysis method which is widely used for early-stage analysis and design purposes and for tuning of network protection equipment. However, due to the current-controlled power-converter-based grid connection of wind turbines, short-circuit calculation cannot be performed in its current form for networks with power-converter-based wind turbines. In this paper, an iterative approach for short-circuit calculation of networks with power-converter-based wind turbines is developed for both symmetrical and asymmetrical short-circuit grid faults. As a contribution to existing solutions, negative-sequence current injection from the wind turbines is also taken into account.
Calculation of the axion mass based on high-temperature lattice quantum chromodynamics
Borsanyi, S.; Fodor, Z.; Guenther, J.; Kampert, K.-H.; Katz, S. D.; Kawanai, T.; Kovacs, T. G.; Mages, S. W.; Pasztor, A.; Pittler, F.; Redondo, J.; Ringwald, A.; Szabo, K. K.
2016-11-01
Unlike the electroweak sector of the standard model of particle physics, quantum chromodynamics (QCD) is surprisingly symmetric under time reversal. As there is no obvious reason for QCD being so symmetric, this phenomenon poses a theoretical problem, often referred to as the strong CP problem. The most attractive solution for this requires the existence of a new particle, the axion—a promising dark-matter candidate. Here we determine the axion mass using lattice QCD, assuming that these particles are the dominant component of dark matter. The key quantities of the calculation are the equation of state of the Universe and the temperature dependence of the topological susceptibility of QCD, a quantity that is notoriously difficult to calculate, especially in the most relevant high-temperature region (up to several gigaelectronvolts). But by splitting the vacuum into different sectors and re-defining the fermionic determinants, its controlled calculation becomes feasible. Thus, our twofold prediction helps most cosmological calculations to describe the evolution of the early Universe by using the equation of state, and may be decisive for guiding experiments looking for dark-matter axions. In the next couple of years, it should be possible to confirm or rule out post-inflation axions experimentally, depending on whether the axion mass is found to be as predicted here. Alternatively, in a pre-inflation scenario, our calculation determines the universal axionic angle that corresponds to the initial condition of our Universe.
Energy Technology Data Exchange (ETDEWEB)
Abdel-Khalik, Hany S. [North Carolina State Univ., Raleigh, NC (United States); Zhang, Qiong [North Carolina State Univ., Raleigh, NC (United States)
2014-05-20
The development of hybrid Monte Carlo-deterministic (MC-DT) approaches, taking place over the past few decades, has primarily focused on shielding and detection applications where the analysis requires a small number of responses, i.e., at the detector location(s). This work further develops a recently introduced global variance-reduction approach, denoted the SUBSPACE approach, which is designed to allow the use of MC simulation, currently limited to benchmarking calculations, for routine engineering calculations. By way of demonstration, the SUBSPACE approach is applied to assembly-level calculations used to generate the few-group homogenized cross-sections. These models are typically expensive and need to be executed on the order of 10^3 to 10^5 times to properly characterize the few-group cross-sections for downstream core-wide calculations. Applicability to k-eigenvalue core-wide models is also demonstrated in this work. Given the favorable results obtained here, we believe the applicability of the MC method for reactor analysis calculations could be realized in the near future.
GPU-based calculation of scattering characteristics of space target in the visible spectrum
Cao, YunHua; Wu, Zhensen; Bai, Lu; Song, Zhan; Guo, Xing
2014-10-01
Scattering characteristics of a space target in the visible spectrum, which can be used in target detection, target identification, and space docking, are calculated in this paper. An algorithm for the scattering characteristics of a space target is introduced. In the algorithm, the space target is divided into thousands of triangular facets. In order to obtain the scattering characteristics of the target, a calculation is needed for each facet, executed over the spectrum of 400-760 nanometers at intervals of 1 nanometer. Thousands of facets, each with hundreds of spectral bands, lead to a huge amount of computation, making the calculation very time-consuming. Taking into account the high parallelism of the algorithm, Graphics Processing Units (GPUs) are used to accelerate it. The acceleration reaches a 300× speedup on a single Fermi-generation NVIDIA GTX 590 compared with a single-threaded CPU version of the code on an Intel(R) Xeon(R) CPU E5-2620, and a speedup of 412× is reached when a Kepler-generation NVIDIA K20c is used.
X-ray tube output based calculation of patient entrance surface dose: validation of the method
Energy Technology Data Exchange (ETDEWEB)
Harju, O.; Toivonen, M.; Tapiovaara, M.; Parviainen, T. [Radiation and Nuclear Safety Authority, Helsinki (Finland)
2003-06-01
X-ray departments need methods to monitor the doses delivered to patients in order to be able to compare their dose level to established reference levels. For this purpose, patient dose per radiograph is described in terms of the entrance surface dose (ESD) or dose-area product (DAP). The actual measurement is often made by using a DAP meter or thermoluminescent dosimeters (TLD). The third possibility, the calculation of ESD from the examination technique factors, is likely to be a common method for x-ray departments that do not have the other methods at their disposal, or for examinations where the dose may be too low to be measured by the other means (e.g. chest radiography). We have developed a program for the determination of ESD by the calculation method and analysed the accuracy that can be achieved by this indirect method. The program calculates the ESD from the current-time product, x-ray tube voltage, beam filtration and focus-to-skin distance (FSD). Additionally, for calibrating the dose calculation method and thereby improving the accuracy of the calculation, the x-ray tube output should be measured for at least one x-ray tube voltage value in each x-ray unit. The aim of the present work is to point out the restrictions of the method and the details of its practical application. The first experiences from the use of the method are summarised. (orig.)
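The indirect ESD method described above is commonly implemented as: measured tube output at a reference distance, times the current-time product, scaled to the focus-to-skin distance by the inverse-square law and multiplied by a backscatter factor. A sketch with illustrative numbers (the program's exact parametrisation is not given in the abstract):

```python
# ESD from technique factors: output (at 1 m) x mAs x inverse-square
# scaling to the FSD x backscatter factor. All values are illustrative.

def entrance_surface_dose(output_uGy_per_mAs_at_1m, mAs, fsd_m, bsf):
    """ESD in micro-gray, assuming tube output calibrated at 1 m."""
    return output_uGy_per_mAs_at_1m * mAs * (1.0 / fsd_m) ** 2 * bsf

# Hypothetical chest-type exposure: 50 uGy/mAs at 1 m, 2 mAs,
# 1.8 m focus-to-skin distance, backscatter factor 1.35
esd = entrance_surface_dose(50.0, 2.0, 1.8, 1.35)
```

The tube-voltage and filtration dependence enters through the measured output value itself, which is why the abstract recommends measuring the output for at least one kV setting per unit.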
Nonlinear Optimization Method of Ship Floating Condition Calculation in Wave Based on Vector
Institute of Scientific and Technical Information of China (English)
丁宁; 余建星
2014-01-01
Ship floating condition in regular waves is calculated. New equations controlling any ship's floating condition are proposed by use of vector operations. This form is a nonlinear optimization problem which can be solved using the penalty function method with constant coefficients, and the solving process is accelerated by dichotomy. During the solving process, the ship's displacement and buoyant centre are calculated by integration of the ship surface according to the waterline. The ship surface is described using an accumulative chord-length theory in order to determine the displacement, the buoyancy centre and the waterline. The draught forming the waterline at each station can be found by calculating the intersection of the ship surface and the wave surface. The results of an example indicate that this method is exact and efficient. It can calculate the ship floating condition in regular waves as well as simplify the calculation and improve the computational efficiency and the precision of results.
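The dichotomy (bisection) acceleration mentioned above can be illustrated on a toy equilibrium problem: find the draught at which the buoyant displacement balances the ship's weight. The box-shaped hull below is an invented stand-in for the paper's integrated hull surface.

```python
# Dichotomy (bisection) for the equilibrium draught of a toy box hull.
# The real method integrates the actual hull surface up to the wave
# surface; the box hull here is only an illustration.

def draught_for_equilibrium(displacement_fn, target_weight, lo, hi, tol=1e-8):
    """Bisection: displacement_fn must increase monotonically with draught."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if displacement_fn(mid) < target_weight:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

rho, L, B = 1.025, 100.0, 20.0            # t/m^3, length m, beam m
disp = lambda d: rho * L * B * d          # tonnes displaced at draught d
d_eq = draught_for_equilibrium(disp, 10250.0, 0.0, 20.0)
```

For this hull the answer is exactly 5 m (10250 t / (1.025 t/m³ × 100 m × 20 m)); bisection halves the bracket each step, so the tolerance is reached in about 31 iterations.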
Directory of Open Access Journals (Sweden)
Fuda Guo
2016-01-01
Full Text Available The phase stability, mechanical, electronic, and thermodynamic properties of In-Zr compounds have been explored using first-principles calculations based on density functional theory (DFT). The calculated formation enthalpies show that these compounds are all thermodynamically stable. Information on the electronic structure indicates that they possess metallic characteristics and that there is a common hybridization between In-p and Zr-d states near the Fermi level. Elastic properties have been taken into consideration. The calculated results on the ratio of the bulk to shear modulus (B/G) indicate that InZr3 has the strongest deformation resistance. The increase of indium content results in a linear decrease of the bulk modulus and Young's modulus. The calculated theoretical hardness of α-In3Zr is higher than those of the other In-Zr compounds.
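The B/G ratio used above is the Pugh criterion: the bulk-to-shear modulus ratio, with values above roughly 1.75 conventionally read as ductile behaviour. A trivial sketch with invented moduli (the paper's computed values are not reproduced here):

```python
# Pugh ratio B/G from bulk and shear moduli; the threshold ~1.75 is the
# conventional ductile/brittle criterion. Moduli below are hypothetical.

def pugh_ratio(bulk_modulus_GPa, shear_modulus_GPa):
    return bulk_modulus_GPa / shear_modulus_GPa

bg = pugh_ratio(95.0, 40.0)   # hypothetical moduli for an In-Zr phase
ductile = bg > 1.75
```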
A GPU-based calculation using the three-dimensional FDTD method for electromagnetic field analysis.
Nagaoka, Tomoaki; Watanabe, Soichi
2010-01-01
Numerical simulations with numerical human models using the finite-difference time-domain (FDTD) method have recently been performed frequently in a number of fields in biomedical engineering. However, the FDTD calculation runs too slowly. We focus, therefore, on general-purpose programming on the graphics processing unit (GPGPU). The three-dimensional FDTD method was implemented on the GPU using the Compute Unified Device Architecture (CUDA). In this study, we used the NVIDIA Tesla C1060 as a GPGPU board. The performance of the GPU is evaluated in comparison with the performance of a conventional CPU and a vector supercomputer. The results indicate that three-dimensional FDTD calculations using a GPU can significantly reduce the run time compared with a conventional CPU, even for a straightforward GPU implementation of the three-dimensional FDTD method, although the GPU/CPU speed ratio varies with the calculation domain and thread block size.
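The kernel that gets parallelised on the GPU is the Yee update stencil: each field point is updated from its neighbours, independently of all other points at the same step. A 1D sketch in normalised units shows the structure (the study itself is 3D with a human body model; this is only the stencil):

```python
# 1D FDTD (Yee) leapfrog update in normalized units: E and H are updated
# alternately from neighbouring values. Each inner loop is the kind of
# embarrassingly parallel sweep a GPU kernel would perform per cell.
import math

nx, nsteps = 200, 150
ez = [0.0] * nx          # electric field
hy = [0.0] * nx          # magnetic field
courant = 0.5            # stability factor (<= 1 in 1D)

for n in range(nsteps):
    for i in range(nx - 1):
        hy[i] += courant * (ez[i + 1] - ez[i])
    for i in range(1, nx):
        ez[i] += courant * (hy[i] - hy[i - 1])
    # soft Gaussian source injected at the grid centre
    ez[nx // 2] += math.exp(-((n - 30.0) / 10.0) ** 2)
```

On a GPU, each `i` iteration maps to one thread; the dependence of the speed ratio on domain and thread-block size noted in the abstract comes from how well those sweeps fill the hardware.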
Douillard, J M; Henry, M
2003-07-15
A very simple route to calculation of the surface energy of solids is proposed because this value is very difficult to determine experimentally. The first step is the calculation of the attractive part of the electrostatic energy of crystals. The partial charges used in this calculation are obtained by using electronegativity equalization and scales of electronegativity and hardness deduced from physical characteristics of the atom. The lattice energies of the infinite crystal and of semi-infinite layers are then compared. The difference is related to the energy of cohesion and then to the surface energy. Very good results are obtained with ice, if one compares with the surface energy of liquid water, which is generally considered a good approximation of the surface energy of ice.
Energy Technology Data Exchange (ETDEWEB)
Decossas, J.L.; Vareille, J.C.; Moliton, J.P.; Teyssier, J.L. (Limoges Univ., 87 (France). Lab. d' Electronique des Polymeres sous Faisceaux Ioniques)
1983-01-01
A fast neutron dosemeter is generally composed of a radiator in which n-p elastic scattering occurs and a detector which registers protons. A theoretical study, and the calculation (FORTRAN program) of the response of such a dosemeter, is presented in two steps: 1) The proton flux emerging from a thick radiator on which monoenergetic neutrons are normally incident is studied. This is characterised by its energy spectrum, which depends on the neutron energy and on the radiator thickness. 2) Proton detection being achieved with a solid-state nuclear track detector whose performance is known, the number of registered tracks is calculated. The dosemeter sensitivity (tracks·cm⁻²·Sv⁻¹) is deduced. The calculations show that it is possible to optimise the radiator thickness to obtain the smallest variation in sensitivity with neutron energy. The theoretical results are in good agreement with the experimental ones found in the literature.
Spectral linelist of HD16O molecule based on VTT calculations for atmospheric application
Voronin, B. A.
2014-11-01
Three versions of a line list of dipole transitions for the isotopic modification of the water molecule HD16O are presented. The line lists have been created on the basis of the VTT calculations (Voronin, Tennyson, Tolchenov et al., MNRAS, 2010) by adding air- and self-broadening coefficients and temperature exponents for the HD16O-air case. Three cut-off values for the line intensities were used: 1e-30, 1e-32 and 1e-35 cm/molecule. The calculated line lists are available at ftp://ftp.iao.ru/pub/VTT/VTT-296/.
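Applying an intensity cut-off to a line list is a simple filter: keep only transitions at or above the threshold. A sketch with invented lines, using the three cut-off values quoted above:

```python
# Intensity cut-off filtering of a spectroscopic line list.
# Entries are (wavenumber cm^-1, intensity cm/molecule); values invented.
lines = [
    (3657.05, 2.0e-29),
    (3755.93, 7.5e-31),
    (1588.28, 4.0e-33),
    (1466.10, 9.0e-36),
]

def apply_cutoff(linelist, cutoff):
    """Keep lines whose intensity is at least the cut-off value."""
    return [entry for entry in linelist if entry[1] >= cutoff]

strong = apply_cutoff(lines, 1e-30)   # smallest list, strongest lines only
medium = apply_cutoff(lines, 1e-32)
weak = apply_cutoff(lines, 1e-35)     # largest list
```

Lower cut-offs retain more weak lines, trading file size against completeness, which is why three versions of the list are distributed.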
Dynamical mean field theory-based electronic structure calculations for correlated materials.
Biermann, Silke
2014-01-01
We give an introduction to dynamical mean-field approaches to correlated materials. Starting from the concept of electronic correlation, we explain why a theoretical description of correlations in spectroscopic properties needs to go beyond the single-particle picture of band theory. We discuss the main ideas of dynamical mean-field theory and its use within realistic electronic structure calculations, illustrated by examples of transition metals, transition metal oxides, and rare-earth compounds. Finally, we summarise recent progress on the calculation of effective Hubbard interactions and the description of dynamical screening effects in solids.
Torpedo's Search Trajectory Design Based on Acquisition and Hit Probability Calculation
Institute of Scientific and Technical Information of China (English)
LI Wen-zhe; ZHANG Yu-wen; FAN Hui; WANG Yong-hu
2008-01-01
Focusing on the search-trajectory characteristics of a lightweight torpedo against a warship, commonly used torpedo search trajectories are analyzed, an improved search trajectory is designed, a mathematical model is built, and simulation calculations are carried out using the MK46 torpedo as an example. The calculation results show that this method can increase the acquisition probability and hit probability by about 10%-30% in some situations, and that it is feasible for torpedo trajectory design. The research is of great reference value for acoustic homing torpedo trajectory design and torpedo combat-efficiency research.
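Hit probabilities of the kind compared above are often estimated by Monte Carlo simulation. A crude sketch, purely illustrative and not the paper's model: the miss distance is drawn from a 2D Gaussian, and a hit is a miss distance inside a lethal radius.

```python
# Crude Monte Carlo estimate of hit probability. The Gaussian miss-distance
# model and all parameter values are assumptions for illustration only.
import random

def estimate_hit_probability(trials, lethal_radius, miss_sigma, seed=1):
    """Fraction of trials whose 2D Gaussian miss distance is a hit."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        dx = rng.gauss(0.0, miss_sigma)
        dy = rng.gauss(0.0, miss_sigma)
        if dx * dx + dy * dy <= lethal_radius ** 2:
            hits += 1
    return hits / trials

p_hit = estimate_hit_probability(20000, lethal_radius=50.0, miss_sigma=60.0)
```

For this model the exact answer is 1 − exp(−r²/2σ²) ≈ 0.29, so the estimate should land nearby; a trajectory change that tightens the miss distribution raises the probability, which is the effect the paper quantifies.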
An accurate potential energy curve for helium based on ab initio calculations
Janzen, A. R.; Aziz, R. A.
1997-07-01
Korona, Williams, Bukowski, Jeziorski, and Szalewicz [J. Chem. Phys. 106, 1 (1997)] constructed a completely ab initio potential for He2 by fitting their calculations using infinite order symmetry adapted perturbation theory at intermediate range, existing Green's function Monte Carlo calculations at short range and accurate dispersion coefficients at long range to a modified Tang-Toennies potential form. The potential with retardation added to the dipole-dipole dispersion is found to predict accurately a large set of microscopic and macroscopic experimental data. The potential with a significantly larger well depth than other recent potentials is judged to be the most accurate characterization of the helium interaction yet proposed.
Breusegem, Sophia Y; Clegg, Robert M; Loontiens, Frank G
2002-02-01
The binding of Hoechst 33258 and DAPI to five different (A/T)4 sequences in a stable DNA hairpin was studied, exploiting the substantial increase in dye fluorescence upon binding. The two dyes have comparable affinities for the AATT site (e.g. association constant K_a = 5.5 × 10⁸ M⁻¹ for DAPI), and their affinities decrease in the series AATT > TAAT ≈ ATAT > TATA ≈ TTAA. The extreme values of K_a differ by a factor of 200 for Hoechst 33258 but only 30 for DAPI. The binding kinetics of Hoechst 33258 were measured by stopped-flow under pseudo-first-order conditions with an (A/T)4 site in excess. The lower-resolution experiments can be well represented by single-exponential processes, corresponding to a single-step binding mechanism. The calculated association-rate parameters for the five (A/T)4 sites are similar (2.46 × 10⁸ M⁻¹ s⁻¹ to 0.86 × 10⁸ M⁻¹ s⁻¹) and nearly diffusion-controlled, while the dissociation-rate parameters vary from 0.42 s⁻¹ to 96 s⁻¹. Thus the association constants are kinetically controlled and are close to their equilibrium-determined values. However, when obtained with increased signal-to-noise ratio, the kinetic traces for Hoechst 33258 binding at the AATT site reveal two components. The concentration dependencies of the two time constants and amplitudes are consistent with two different, kinetically equivalent two-step models. In the first model, fast bimolecular binding is followed by an isomerization of the initial complex. In the second model, two single-step associations form two complexes that mutually exclude each other. For both models the four reaction-rate parameters are calculated. Finally, specific dissociation kinetics, using poly[d(A-5BrU)], show that the kinetics are even more complex than either two-step model. We correlate our results with the different binding orientations and locations of Hoechst 33258 in the DNA minor groove found in several structural studies.
Institute of Scientific and Technical Information of China (English)
徐晓苏; 吴剑飞; 徐胜保; 王立辉; 李佩娟
2014-01-01
ICCP is the most important matching algorithm used in underwater integrated navigation systems. The traditional ICCP algorithm applies only a rigid transformation (rotation and translation) to the track indicated by the underwater vehicle's inertial navigation system (INS). To overcome this limitation and improve the accuracy of the matching algorithm in the underwater vehicle's terrain-aided navigation system, the error characteristics of the INS are analyzed, an INS error model is established, and an underwater-terrain ICCP matching algorithm based on affine correction is proposed. The track indicated by the INS is first transformed rigidly according to the ICCP algorithm; the affine parameters are then solved with the least-squares method, and the ICCP matching track is modified by the affine transformation. Simulations show that the affine correction overcomes the rigid-transformation limitation of the traditional ICCP algorithm, and that the matching result is better than that of the traditional algorithm, with a matching error of less than 50% of the digital-map grid spacing. Meanwhile, the additional time consumed by the affine correction is very small, only about one thousandth of the matching time of the traditional ICCP algorithm.
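The least-squares affine-correction step described above can be sketched for the 2D case: given matched track points, solve for the six parameters (a, b, c, d, tx, ty) of x' = a·x + b·y + tx, y' = c·x + d·y + ty. The point pairs below are synthetic, generated from a known affine map so the fit can be checked; this is an illustration of the technique, not the paper's implementation.

```python
# Least-squares estimation of 2D affine parameters from matched points,
# via the normal equations (two independent 3-parameter systems).
# Synthetic point pairs are generated from a known affine map.

def fit_affine_2d(src, dst):
    """Return (a, b, c, d, tx, ty) minimising the squared residuals."""
    def solve3(rows, rhs):
        # Gauss-Jordan elimination with partial pivoting on a 3x3 system
        m = [row[:] + [r] for row, r in zip(rows, rhs)]
        for col in range(3):
            piv = max(range(col, 3), key=lambda r: abs(m[r][col]))
            m[col], m[piv] = m[piv], m[col]
            for r in range(3):
                if r != col:
                    f = m[r][col] / m[col][col]
                    for c in range(col, 4):
                        m[r][c] -= f * m[col][c]
        return [m[i][3] / m[i][i] for i in range(3)]

    n = len(src)
    sx = sum(p[0] for p in src); sy = sum(p[1] for p in src)
    sxx = sum(p[0] * p[0] for p in src); syy = sum(p[1] * p[1] for p in src)
    sxy = sum(p[0] * p[1] for p in src)
    normal = [[sxx, sxy, sx], [sxy, syy, sy], [sx, sy, n]]
    rhs_x = [sum(s[0] * d[0] for s, d in zip(src, dst)),
             sum(s[1] * d[0] for s, d in zip(src, dst)),
             sum(d[0] for d in dst)]
    rhs_y = [sum(s[0] * d[1] for s, d in zip(src, dst)),
             sum(s[1] * d[1] for s, d in zip(src, dst)),
             sum(d[1] for d in dst)]
    a, b, tx = solve3(normal, rhs_x)
    c, d, ty = solve3(normal, rhs_y)
    return a, b, c, d, tx, ty

src = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (2.0, 3.0)]
true = (1.1, 0.2, -0.1, 0.9, 5.0, -2.0)   # a, b, c, d, tx, ty
dst = [(true[0] * x + true[1] * y + true[4],
        true[2] * x + true[3] * y + true[5]) for x, y in src]
params = fit_affine_2d(src, dst)
```

Since the synthetic points satisfy the affine map exactly, the least-squares solution recovers the six parameters to machine precision; with noisy matched track points it would return the best-fit correction instead.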
A thermodynamic approach to the affinity optimization of drug candidates.
Freire, Ernesto
2009-11-01
High throughput screening and other techniques commonly used to identify lead candidates for drug development usually yield compounds with binding affinities to their intended targets in the mid-micromolar range. The affinity of these molecules needs to be improved by several orders of magnitude before they become viable drug candidates. Traditionally, this task has been accomplished by establishing structure activity relationships to guide chemical modifications and improve the binding affinity of the compounds. As the binding affinity is a function of two quantities, the binding enthalpy and the binding entropy, it is evident that a more efficient optimization would be accomplished if both quantities were considered and improved simultaneously. Here, an optimization algorithm based upon enthalpic and entropic information generated by Isothermal Titration Calorimetry is presented.
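The thermodynamic relationship underlying this optimization strategy is ΔG = ΔH − TΔS together with ΔG = −RT ln K_a, so improving either the binding enthalpy or entropy raises the association constant. A minimal sketch with hypothetical ITC-style numbers (not the paper's data):

```python
# K_a from binding enthalpy and entropy: dG = dH - T*dS, dG = -RT ln(Ka).
# The dH/dS values below are hypothetical, chosen to give a
# mid-micromolar-range hit (Ka ~ 1e5 M^-1).
import math

R = 8.314  # J mol^-1 K^-1

def binding_constant(dH_J, dS_J_per_K, T=298.15):
    """Association constant K_a (M^-1) from enthalpy and entropy."""
    dG = dH_J - T * dS_J_per_K
    return math.exp(-dG / (R * T))

Ka = binding_constant(dH_J=-20e3, dS_J_per_K=28.5)
```

At 298 K each additional −5.7 kJ/mol of ΔG multiplies K_a by about 10, which is why gaining several orders of magnitude in affinity requires improving ΔH and ΔS together rather than trading one against the other.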
Misini Ignjatović, Majda; Caldararu, Octav; Dong, Geng; Muñoz-Gutierrez, Camila; Adasme-Carreño, Francisco; Ryde, Ulf
2016-09-01
We have estimated the binding affinity of three sets of ligands of the heat-shock protein 90 in the D3R grand challenge blind test competition. We have employed four different methods, based on five different crystal structures: first, we docked the ligands to the proteins with induced-fit docking with the Glide software and calculated binding affinities with three energy functions. Second, the docked structures were minimised in a continuum solvent and binding affinities were calculated with the MM/GBSA method (molecular mechanics combined with generalised Born and solvent-accessible surface area solvation). Third, the docked structures were re-optimised by combined quantum mechanics and molecular mechanics (QM/MM) calculations. Then, interaction energies were calculated with quantum mechanical calculations employing 970-1160 atoms in a continuum solvent, combined with energy corrections for dispersion, zero-point energy and entropy, ligand distortion, ligand solvation, and an increase of the basis set to quadruple-zeta quality. Fourth, relative binding affinities were estimated by free-energy simulations, using the multi-state Bennett acceptance-ratio approach. Unfortunately, the results were varying and rather poor, with only one calculation giving a correlation to the experimental affinities larger than 0.7, and with no consistent difference in the quality of the predictions from the various methods. For one set of ligands, the results could be strongly improved (after experimental data were revealed) if it was recognised that one of the ligands displaced one or two water molecules. For the other two sets, the problem is probably that the ligands bind in different modes than in the crystal structures employed or that the conformation of the ligand-binding site or the whole protein changes.
Electron affinity of chlorine dioxide
Energy Technology Data Exchange (ETDEWEB)
Babcock, L.M.; Pentecost, T.; Koppenol, W.H. (Louisiana State Univ., Baton Rouge (USA))
1989-12-14
The flowing afterglow technique was used to determine the electron affinity of chlorine dioxide. A value of 2.37 ± 0.10 eV was found by bracketing between the electron affinities of HS• and SF4 as a lower limit and that of NO2 as an upper limit. This value is in excellent agreement with 2.32 eV predicted from a simple thermodynamic cycle involving the reduction potential of the ClO2/ClO2− couple and a Gibbs hydration energy identical with that of SO2•−.
Affine density in wavelet analysis
Kutyniok, Gitta
2007-01-01
In wavelet analysis, irregular wavelet frames have recently come to the forefront of current research due to questions concerning the robustness and stability of wavelet algorithms. A major difficulty in the study of these systems is the highly sensitive interplay between geometric properties of a sequence of time-scale indices and frame properties of the associated wavelet systems. This volume provides the first thorough and comprehensive treatment of irregular wavelet frames by introducing and employing a new notion of affine density as a highly effective tool for examining the geometry of sequences of time-scale indices. Many of the results are new and published for the first time. Topics include: qualitative and quantitative density conditions for existence of irregular wavelet frames, non-existence of irregular co-affine frames, the Nyquist phenomenon for wavelet systems, and approximation properties of irregular wavelet frames.
Affine Coherent States in Quantum Cosmology
Malkiewicz, Przemyslaw
2015-01-01
A brief summary of the application of coherent states in the examination of quantum dynamics of cosmological models is given. We discuss quantization maps, phase space probability distributions and semiclassical phase spaces. The implementation of coherent states based on the affine group resolves the hardest singularities, renders self-adjoint Hamiltonians without boundary conditions and provides a completely consistent semi-classical description of the involved quantum dynamics. We consider three examples: the closed Friedmann model, the anisotropic Bianchi Type I model and the deep quantum domain of the Bianchi Type IX model.
A Bolus Calculator Based on Continuous-Discrete Unscented Kalman Filtering for Type 1 Diabetics
DEFF Research Database (Denmark)
Boiroux, Dimitri; Aradóttir, Tinna Björk; Hagdrup, Morten;
2015-01-01
after or 30 minutes after the beginning of the meal). We implement a continuous-discrete unscented Kalman filter to estimate the states and insulin sensitivity. These estimates are used in a bolus calculator. The numerical results demonstrate that administering the meal bolus 15 minutes after mealtime...
BERGSTRA, A; VANDIJK, RB; HILLEGE, HL; LIE, KI; MOOK, GA
1995-01-01
This study was performed because of observed differences between dye-dilution cardiac output and Fick cardiac output calculated from oxygen consumption estimated according to LaFarge and Miettinen, and in order to find a better formula for assumed oxygen consumption. In 250 patients who underwent left a
Baraffe; Alibert, Y; Mera, D; Charbrier, G; Beaulieu, JP
1998-01-01
We have computed stellar evolutionary models for stars in a mass range characteristic of Cepheid variables (3
Applicability of the wide-band limit in DFT-based molecular transport calculations
Verzijl, C.J.O.; Seldenthuis, J.S.; Thijssen, J.M.
2013-01-01
Transport properties of molecular junctions are notoriously expensive to calculate with ab initio methods, primarily due to the semi-infinite electrodes. This has led to the introduction of different approximation schemes for the electrodes. For the most popular metals used in experiments, such as g
Backward Calculation Based on the Advection and Diffusion of Oil Spills on the Sea Surface
Institute of Scientific and Technical Information of China (English)
LIU Hao; YIN Baoshu; LIN Jianguo
2005-01-01
Motivated by the problem of oil pollution caused by ships, this paper presents the concept of backward tracing of oil spills. In the backward calculation of the two-dimensional advection-diffusion equation, two difficulties arise. On the one hand, the advection term is strongly one-sided: in the forward problem it transmits information from upstream to downstream, whereas the reversed direction of calculation requires the advection term to carry information upstream. On the other hand, unlike in the forward calculation, the diffusion term in the backward calculation is prone to accumulate errors and thus renders the whole scheme unstable. We therefore adopt central differences for both the advection term and the diffusion term. Two practical examples, (1) under an unlimited boundary condition and (2) under a limited boundary condition, show that the method achieves fundamentally satisfactory results not only in the open ocean but also in closed or semi-closed bays.
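The central-difference treatment of both terms can be sketched in a one-dimensional analogue (the paper works in 2-D); grid sizes, velocities, and the Gaussian release below are invented for illustration. Backward tracing is emulated by reversing the sign of the velocity and of time, which reverses advection cleanly while the (anti-)diffusion term is the error-accumulating part the abstract warns about:

```python
import numpy as np

def step(c, vel, diff, dx, dt):
    """One explicit step of the 1-D advection-diffusion equation using
    central differences for both the advection and the diffusion term
    (periodic boundaries)."""
    left, right = np.roll(c, 1), np.roll(c, -1)
    advection = -vel * (right - left) / (2.0 * dx)
    diffusion = diff * (right - 2.0 * c + left) / dx**2
    return c + dt * (advection + diffusion)

# Forward drift of a Gaussian oil patch, then backward tracing of the source.
n, dx, dt = 200, 1.0, 0.2
x = np.arange(n) * dx
c0 = np.exp(-((x - 50.0) / 5.0) ** 2)

c = c0.copy()
for _ in range(100):                       # forward: patch drifts downstream
    c = step(c, vel=1.0, diff=0.05, dx=dx, dt=dt)
for _ in range(100):                       # backward: reversed velocity and time
    c = step(c, vel=-1.0, diff=-0.05, dx=dx, dt=dt)
```

With periodic boundaries the central-difference fluxes telescope, so total mass is conserved exactly in both directions; the instability of the backward diffusion term only grows from the (tiny) high-wavenumber content.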
Vibrational and structural study of onopordopicrin based on the FTIR spectrum and DFT calculations.
Chain, Fernando E; Romano, Elida; Leyton, Patricio; Paipa, Carolina; Catalán, César A N; Fortuna, Mario; Brandán, Silvia Antonia
2015-01-01
In the present work, the structural and vibrational properties of the sesquiterpene lactone onopordopicrin (OP) were studied by using infrared spectroscopy and density functional theory (DFT) calculations together with the 6-31G* basis set. The harmonic vibrational wavenumbers for the optimized geometry were calculated at the same level of theory. The complete assignment of the observed bands in the infrared spectrum was performed by combining the DFT calculations with Pulay's scaled quantum mechanical force field (SQMFF) methodology. The comparison between the theoretical and experimental infrared spectra demonstrated good agreement. The results were then used to predict the Raman spectrum. Additionally, the structural properties of OP, such as atomic charges, bond orders, molecular electrostatic potentials, characteristics of electronic delocalization and topological properties of the electronic charge density, were evaluated by natural bond orbital (NBO), atoms in molecules (AIM) and frontier orbital studies. The calculated energy band gap and the chemical potential (μ), electronegativity (χ), global hardness (η), global softness (S) and global electrophilicity index (ω) descriptors predicted for OP lower reactivity, higher stability and a lower electrophilicity index compared with the sesquiterpene lactone cnicin, which contains similar rings.
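The global reactivity descriptors named in the abstract follow from the frontier-orbital energies via the standard finite-difference (Koopmans-type) approximations. A minimal sketch, with purely hypothetical HOMO/LUMO energies for illustration:

```python
def reactivity_descriptors(e_homo, e_lumo):
    """Conceptual-DFT global descriptors from frontier-orbital energies
    (eV), using the standard Koopmans-type approximations."""
    mu = (e_homo + e_lumo) / 2.0       # chemical potential
    chi = -mu                          # electronegativity
    eta = (e_lumo - e_homo) / 2.0      # global hardness
    s = 1.0 / (2.0 * eta)              # global softness
    omega = mu**2 / (2.0 * eta)        # global electrophilicity index
    gap = e_lumo - e_homo              # HOMO-LUMO energy band gap
    return {"mu": mu, "chi": chi, "eta": eta, "S": s, "omega": omega, "gap": gap}

# Hypothetical frontier energies (eV), for illustration only:
d = reactivity_descriptors(e_homo=-6.5, e_lumo=-1.5)
```

A larger gap and hardness indicate higher stability and lower reactivity, which is the comparison the abstract draws between OP and cnicin.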
Lectin affinity chromatography of glycolipids
Energy Technology Data Exchange (ETDEWEB)
Torres, B.V.; Smith, D.F.
1987-05-01
Since glycolipids (GLs) are either insoluble or form mixed micelles in water, lectin affinity chromatography in aqueous systems has not been applied to their separation. The authors overcame this problem by using tetrahydrofuran (THF) in the mobile phase during chromatography. Affinity columns prepared with the GalNAc-specific Helix pomatia agglutinin (HPA) and equilibrated in THF specifically bind the [3H]oligosaccharide derived from Forssman GL, indicating that the immobilized HPA retained its carbohydrate-binding specificity in this solvent. Intact Forssman GL was bound by the HPA column equilibrated in THF and was specifically eluted with 0.1 mg/ml GalNAc in THF. Purification of the Forssman GL was achieved when a crude lipid extract of sheep erythrocyte membranes was applied to the HPA column in THF. Non-specifically bound GLs were eluted from the column using a step gradient of aqueous buffer in THF, while the addition of GalNAc was required to elute the specifically bound GLs. Using this procedure the A-active GLs were purified from a crude lipid extract of type A human erythrocytes in a single chromatographic step. The use of solvents that maintain carbohydrate-binding specificity and lipid solubility will permit the application of affinity chromatography on immobilized carbohydrate-binding proteins to intact GLs.
Institute of Scientific and Technical Information of China (English)
钟伟民; 王月琴; 梁毅; 祁荣宾; 钱锋
2012-01-01
Based on the AP (Affinity Propagation) algorithm, a new algorithm named CPAP (Clustering based on P-changed Affinity Propagation) is proposed. Targeting heterogeneous wireless sensor networks, CPAP changes the conventional setting of the preference p and considers both energy and distance while clustering. In addition, by analysing and comparing the effects of the parameter K, its approximately optimal value is obtained. Simulation results show that, compared with PECBA, the time of first node death is delayed by 28.5% with CPAP, which illustrates that the proposed solution makes better use of energy before the network dies and enhances energy efficiency.
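The preference p that CPAP adjusts is the diagonal of the similarity matrix in plain affinity propagation: it controls how many exemplars (cluster heads) emerge. The following is a sketch of the standard Frey-Dueck message-passing updates, not the authors' CPAP; the sensor positions and preference value are invented for illustration:

```python
import numpy as np

def affinity_propagation(s, preference, damping=0.9, iters=200):
    """Plain affinity propagation. `s` is an n-by-n similarity matrix;
    `preference` fills the diagonal s(k,k) and steers the exemplar count."""
    n = s.shape[0]
    s = s.copy()
    np.fill_diagonal(s, preference)
    r = np.zeros((n, n))                      # responsibilities
    a = np.zeros((n, n))                      # availabilities
    for _ in range(iters):
        # r(i,k) = s(i,k) - max_{k' != k} [a(i,k') + s(i,k')]
        m = a + s
        idx = np.argmax(m, axis=1)
        first = m[np.arange(n), idx]
        m[np.arange(n), idx] = -np.inf
        second = np.max(m, axis=1)
        r_new = s - first[:, None]
        r_new[np.arange(n), idx] = s[np.arange(n), idx] - second
        r = damping * r + (1 - damping) * r_new
        # a(i,k) = min(0, r(k,k) + sum_{i' not in {i,k}} max(0, r(i',k)))
        rp = np.maximum(r, 0)
        np.fill_diagonal(rp, np.diag(r))
        colsum = rp.sum(axis=0)
        a_new = np.minimum(0, colsum[None, :] - rp)
        np.fill_diagonal(a_new, colsum - np.diag(rp))
        a = damping * a + (1 - damping) * a_new
    exemplars = np.flatnonzero(np.diag(a + r) > 0)
    labels = exemplars[np.argmax(s[:, exemplars], axis=1)]
    return exemplars, labels

# Toy "sensor" layout: two well-separated groups of three nodes.
pts = np.array([[0., 0.], [.5, 0.], [0., .5], [10., 10.], [10.5, 10.], [10., 10.5]])
d2 = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1)
ex, labels = affinity_propagation(-d2, preference=-10.0)
```

A more negative preference yields fewer exemplars; CPAP's contribution is to set p non-uniformly from residual energy and distance rather than to a constant.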
Application of perturbation theory to lattice calculations based on method of cyclic characteristics
Assawaroongruengchot, Monchai
computing time when both direct and adjoint solutions are required. A problem that arises for the generalized adjoint problem is that the direct use of the negative external generalized adjoint sources in the adjoint solution algorithm results in negative generalized adjoint functions. A coupled flux biasing/decontamination scheme is applied to make the generalized adjoint functions positive, using the adjoint functions in such a way that the scheme can be used with the multigroup rebalance technique. Next we consider the application of perturbation theory to reactor problems. Since the coolant void reactivity (CVR) is an important factor in reactor safety analysis, we selected this parameter for optimization studies. We consider optimization and adjoint sensitivity techniques for the adjustment of CVR at beginning of burnup cycle (BOC) and keff at end of burnup cycle (EOC) for a 2D Advanced CANDU Reactor (ACR) lattice. The sensitivity coefficients are evaluated using perturbation theory based on the integral transport equations. Three sets of parameters for CVR-BOC and keff-EOC adjustments are studied: (1) Dysprosium density in the central pin with Uranium enrichment in the outer fuel rings, (2) Dysprosium density and Uranium enrichment both in the central pin, and (3) the same parameters as in the first case, but with the objective of obtaining a negative checkerboard CVR at beginning of cycle (CBCVR-BOC). To approximate the sensitivity coefficients at EOC, we perform constant-power burnup/depletion calculations for 600 full power days (FPD) using a slightly perturbed nuclear library and the unperturbed neutron fluxes to estimate the variation of nuclide densities at EOC. Sensitivity analyses of CVR and the eigenvalue are included in the study. In addition, the optimization and adjoint sensitivity techniques are applied to the CBCVR-BOC and keff-EOC adjustment of ACR lattices with Gadolinium in the central pin.
Finally we apply these techniques to the CVR
A Quick and Affine Invariance Matching Method for Oblique Images
Directory of Open Access Journals (Sweden)
XIAO Xiongwu
2015-04-01
This paper proposes a quick, affine-invariant matching method for oblique images. The method calculates an initial affine matrix by making full use of the two estimated camera-axis orientation parameters of an oblique image, recovers the oblique image to a rectified image by applying the inverse affine transform, and then matches the rectified images with the SIFT method. We used the nearest-neighbor distance ratio (NNDR) and normalized cross-correlation (NCC) measure constraints together with a consistency check to obtain coarse matches, then used the RANSAC method to calculate the fundamental matrix and the homography matrix. The matches that were interior points when calculating the homography matrix were retained, and the average of their principal-direction differences was calculated. During the matching process, we obtained initial matching features with the nearest-neighbor (NN) matching strategy, then used epipolar constraints, homography constraints, NCC measure constraints, and a consistency check of the initial matches' principal-direction differences against the calculated average of the interior matches' principal-direction differences to eliminate false matches. Experiments conducted on three pairs of typical oblique images demonstrate that our method takes about the same time as SIFT to match a pair of oblique images, with plenty of corresponding points distributed evenly and an extremely low mismatching rate.
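The rectification step can be sketched as follows. The angle convention below (an in-plane roll rotation followed by 1/cos(tilt) foreshortening) is an illustrative assumption, not the paper's exact derivation; the point coordinates are invented:

```python
import numpy as np

def initial_affine(tilt_deg, roll_deg):
    """Approximate 2x2 affine matrix for an oblique view, assuming a
    simple model: in-plane rotation by the roll angle followed by
    cos(tilt) foreshortening along one axis. Illustrative convention only."""
    t, r = np.deg2rad(tilt_deg), np.deg2rad(roll_deg)
    R = np.array([[np.cos(r), -np.sin(r)], [np.sin(r), np.cos(r)]])
    T = np.array([[1.0, 0.0], [0.0, np.cos(t)]])   # foreshortening
    return T @ R

A = initial_affine(tilt_deg=45.0, roll_deg=30.0)
A_inv = np.linalg.inv(A)           # rectification: the inverse affine transform

pts = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0]])
oblique = pts @ A.T                # simulate the oblique distortion
rectified = oblique @ A_inv.T      # recover the level-image geometry
```

After rectification the perspective/affine distortion between the image pair is largely removed, so a scale- and rotation-invariant detector such as SIFT suffices for the remaining matching.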
Senol, Ali; Dündar, Sefa; Gündüz, Nazan
2015-01-01
The aims of this study are to examine the relationship between prospective classroom teachers' calculation-based estimation skills and their number sense, and to investigate whether their number sense and estimation skills vary with class level and gender. The participants of the study are 125 prospective classroom teachers…
Bekker, H.; Brink, A.A.; Roerdink, J.B.T.M.
2009-01-01
To calculate the Minkowski-sum based similarity measure of two convex polyhedra, many relative orientations have to be considered. These relative orientations are characterized by the fact that some faces and edges of the polyhedra are parallel. For every relative orientation of the polyhedra, the v
The edge-based face element method for 3D-stream function and flux calculations in porous media flow
Zijl, W.; Nawalany, M.
2004-01-01
We present a velocity-oriented discrete analog of the partial differential equations governing porous media flow: the edge-based face element method. Conventional finite element techniques calculate pressures in the nodes of the grid. However, such methods do not satisfy the requirement of flux cont
Directory of Open Access Journals (Sweden)
Song Hua
2015-01-01
Wheel/rail rolling contact can not only lead to rail fatigue damage but also cause rail corrugation. For the wheel/rail rolling contact problem, this paper establishes, with the ANSYS/LS-DYNA explicit analysis software, a finite element model of wheel/rail rolling contact in non-linear steady-state curve negotiation, and proposes an explicit-explicit sequential calculation method to solve this model. The explicit-explicit sequential method uses an explicit solver both for calculating the rail pre-stress and for the wheel/rail rolling contact process. Compared with the widely applied implicit-explicit sequential method, the explicit-explicit sequential method achieves similar precision with faster speed and higher efficiency, making it more suitable for wheel/rail rolling contact problems of non-linear steady-state curving with large models or a high degree of non-linearity.
Tissue decomposition from dual energy CT data for MC based dose calculation in particle therapy
Energy Technology Data Exchange (ETDEWEB)
Hünemohr, Nora, E-mail: n.huenemohr@dkfz.de; Greilich, Steffen [Medical Physics in Radiation Oncology, German Cancer Research Center, 69120 Heidelberg (Germany); Paganetti, Harald; Seco, Joao [Department of Radiation Oncology, Massachusetts General Hospital and Harvard Medical School, Boston, Massachusetts 02114 (United States); Jäkel, Oliver [Medical Physics in Radiation Oncology, German Cancer Research Center, 69120 Heidelberg, Germany and Department of Radiation Oncology and Radiation Therapy, University Hospital of Heidelberg, 69120 Heidelberg (Germany)
2014-06-15
Purpose: The authors describe a novel method of predicting mass density and elemental mass fractions of tissues from dual energy CT (DECT) data for Monte Carlo (MC) based dose planning. Methods: The relative electron density ϱe and effective atomic number Zeff are calculated for 71 tabulated tissue compositions. For MC simulations, the mass density is derived via one linear fit in ϱe that covers the entire range of tissue compositions (except lung tissue). Elemental mass fractions are predicted from ϱe and Zeff in combination. Since particle therapy dose planning and verification are especially sensitive to accurate material assignment, differences to the ground truth are further analyzed for mass density, I-value predictions, and stopping power ratios (SPR) for ions. Dose studies with monoenergetic protons and carbon ions in 12 tissues which showed the largest differences of single energy CT (SECT) to DECT are presented with respect to range uncertainties. The standard approach (SECT) and the new DECT approach are compared to reference Bragg peak positions. Results: Mean deviations to ground truth in mass density predictions could be reduced for soft tissue from (0.5±0.6)% (SECT) to (0.2±0.2)% with the DECT method. Maximum SPR deviations could be reduced significantly for soft tissue from 3.1% (SECT) to 0.7% (DECT) and for bone tissue from 0.8% to 0.1%. Mean I-value deviations could be reduced for soft tissue from (1.1±1.4)% (SECT) to (0.4±0.3)% with the presented method. Predictions of elemental composition were improved for every element. Mean and maximum deviations from ground truth of all elemental mass fractions could be reduced by at least a half with DECT compared to SECT (except soft tissue hydrogen and nitrogen, where the reduction was slightly smaller). The carbon and oxygen mass fraction predictions profit especially from the DECT information. Dose studies showed that most of the 12 selected tissues would
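The single linear fit of mass density against relative electron density can be sketched as below. The (ϱe, mass density) pairs are invented placeholders standing in for the paper's 71 tabulated tissue compositions, and the fitted coefficients are therefore illustrative only:

```python
import numpy as np

# Hypothetical (relative electron density, mass density) pairs standing in
# for tabulated tissue compositions (lung excluded, as in the paper).
rho_e = np.array([0.95, 0.99, 1.00, 1.02, 1.05, 1.10, 1.28, 1.45])
rho_m = np.array([0.95, 0.99, 1.00, 1.03, 1.06, 1.12, 1.33, 1.53])  # g/cm^3

# One linear fit covering the whole tissue range: rho_m ~ a * rho_e + b
a, b = np.polyfit(rho_e, rho_m, 1)

def mass_density(rho_e_measured):
    """Predict mass density from a DECT-derived relative electron density."""
    return a * rho_e_measured + b
```

In the paper the DECT pair (ϱe, Zeff) additionally feeds the elemental mass fraction and I-value predictions; the fit above only illustrates the mass-density leg of the pipeline.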
Takaba, Hiromitsu; Kimura, Shou; Alam, Md. Khorshed
2017-03-01
The durability of organo-lead halide perovskites is an important issue for their practical application in solar cells. In this study, using density functional theory (DFT) and molecular dynamics, we theoretically investigated the crystal structure, electronic structure, and ionic diffusivity of the partially substituted cubic MA0.5X0.5PbI3 (MA = CH3NH3+, X = NH4+, (NH2)2CH+, or Cs+). Our calculation results indicate that partial substitution of MA induces a lattice distortion, preventing MA or X from diffusing between A sites in the perovskite. DFT calculations show that the electronic structures of the investigated partially substituted perovskites are similar to that of MAPbI3, while their band gaps slightly decrease compared to that of MAPbI3. Our results indicate that partial substitution in halide perovskites is an effective technique to suppress the diffusion of intrinsic ions and tune the band gap.
Abrupt fault diagnosis of aero-engine based on affinity propagation clustering
Institute of Scientific and Technical Information of China (English)
李丽敏; 王仲生; 姜洪开
2014-01-01
Aiming at abrupt faults of aero-engines, an abrupt fault diagnosis method based on affinity propagation clustering is proposed. Historical monitoring data of abrupt faults are used to establish a fault database, and affinity propagation clustering finds the exemplars of all abrupt fault data in the database. When diagnosing the fault type of newly collected data, affinity propagation clustering is applied again to find the exemplar of the new data, and the fault type is identified by matching this exemplar against the exemplars obtained from the fault database. The method was applied to abrupt fault diagnosis on an aero-engine rotor test rig. Simulation and experimental results show that the method is feasible for diagnosing abrupt faults and, compared with other methods, requires less time and produces lower error.
The Quasi-affine Maps and Fractals
Institute of Scientific and Technical Information of China (English)
Lunhai LONG; Gang CHEN
1997-01-01
In this paper, we discuss the discretization of affine maps in R2; that is, we consider a class of maps in Z2 which are induced by affine maps and called quasi-affine maps. We investigate the properties and the dynamical behaviour of such maps, and give a construction of complicated fractals by using quasi-affine maps.
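A quasi-affine map in the sense above is obtained by applying an affine map and then taking integer parts, so that Z2 maps to Z2. A minimal sketch (the matrix, translation, and starting point are arbitrary choices for illustration):

```python
import numpy as np

def quasi_affine(A, b, p):
    """The quasi-affine map induced by the planar affine map x -> Ax + b:
    apply the affine map, then take integer parts, sending Z^2 to Z^2."""
    return np.floor(A @ p + b).astype(int)

# A norm-preserving (rotation) matrix; the induced quasi-affine orbit of
# an integer point stays near a circle rather than tracing it exactly.
A = np.array([[0.8, -0.6], [0.6, 0.8]])
b = np.array([0.3, 0.7])

orbit = [np.array([40, 0])]
for _ in range(200):
    orbit.append(quasi_affine(A, b, orbit[-1]))
```

The rounding at each step introduces a bounded perturbation of the exact affine dynamics, and it is exactly this interplay between the affine structure and the integer lattice that produces the fractal sets studied in the paper.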
Affine connections on involutive G-structures
Merkulov, Sergey A.
1995-01-01
This paper is a review of the twistor theory of irreducible G-structures and affine connections. Long ago, Berger presented a very restricted list of possible irreducibly acting holonomies of torsion-free affine connections. His list was complete in the part of metric connections, while the situation with holonomies of non-metric torsion-free affine connections was and remains rather unclear. One of the results discussed in this review asserts that any torsion-free holomorphic affine connecti...
Calculations of the hurricane eye motion based on singularity propagation theory
Directory of Open Access Journals (Sweden)
Vladimir Danilov
2002-02-01
We discuss the possibility of using the calculation of singularities to forecast the dynamics of hurricanes. Our basic model is the shallow-water system. By treating the hurricane eye as a vortex-type singularity and truncating the corresponding sequence of Hugoniot-type conditions, we carried out many numerical experiments. Comparison of our results with the tracks of three actual hurricanes shows that our approach is rather fruitful.
Comparison of lysimeter based and calculated ASCE reference evapotranspiration in a subhumid climate
Nolz, Reinhard; Cepuder, Peter; Eitzinger, Josef
2016-04-01
The standardized form of the well-known FAO Penman-Monteith equation, published by the Environmental and Water Resources Institute of the American Society of Civil Engineers (ASCE-EWRI), is recommended as a standard procedure for calculating reference evapotranspiration (ETref) and subsequently plant water requirements. Applied and validated under different climatic conditions, it has generally achieved good results compared to other methods. However, several studies have documented deviations between measured and calculated reference evapotranspiration depending on environmental and weather conditions. It therefore seems generally advisable to evaluate the model under local environmental conditions. In this study, reference evapotranspiration was determined at a subhumid site in northeastern Austria from 2005 to 2010 using a large weighing lysimeter (ETlys). The measured data were compared with ETref calculations. Daily values differed slightly during the year: ETref was generally overestimated at small values, whereas it was rather underestimated when ET was large, which is supported also by other studies. In our case, advection of sensible heat proved to have an impact, but it could not explain the differences exclusively. There were evidently also other influences, such as seasonally varying surface resistance or albedo. Generally, the ASCE-EWRI equation for daily time steps performed best at average weather conditions. The outcomes should help to correctly interpret ETref data in the region and in similar environments, and improve knowledge of the dynamics of the influencing factors causing the deviations.
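The standardized daily equation for the short (grass) reference can be sketched as follows; the constants Cn = 900 and Cd = 0.34 are the ASCE-EWRI daily short-reference values, while the sample meteorological inputs are invented for illustration:

```python
import math

def asce_etref_daily(t_mean, rn, g, u2, ea, pressure=101.3):
    """Daily ASCE-EWRI standardized reference evapotranspiration (mm/day)
    for the short (grass) reference: Cn = 900, Cd = 0.34.
    t_mean in degC, rn and g in MJ m-2 day-1, u2 in m/s, ea and pressure in kPa."""
    es = 0.6108 * math.exp(17.27 * t_mean / (t_mean + 237.3))  # sat. vapour pressure
    delta = 4098.0 * es / (t_mean + 237.3) ** 2                # slope of es curve
    gamma = 0.000665 * pressure                                # psychrometric constant
    num = 0.408 * delta * (rn - g) + gamma * 900.0 / (t_mean + 273.0) * u2 * (es - ea)
    den = delta + gamma * (1.0 + 0.34 * u2)
    return num / den

# Hypothetical mid-season day: 20 degC, 15 MJ/m2/day net radiation, 2 m/s wind.
et0 = asce_etref_daily(t_mean=20.0, rn=15.0, g=0.0, u2=2.0, ea=1.34)
```

Comparing such daily ETref values against lysimeter measurements (ETlys) is precisely the evaluation performed in the study.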
A User Differential Range Error Calculating Algorithm Based on Analytic Method
Institute of Scientific and Technical Information of China (English)
SHAO Bo; LIU Jiansheng; ZHAO Ruibin; HUANG Zhigang; LI Rui
2011-01-01
To enhance integrity, an analytic method (AM) with less execution time is proposed to calculate the user differential range error (UDRE) used by the user to detect potential risk. An ephemeris and clock correction calculation method is introduced first. It shows that the key step in computing the UDRE is to find the worst user location (WUL) in the service volume. Then, a UDRE algorithm using the AM is described to solve this problem. By using the covariance matrix of the error vector, the search for the WUL is converted into an analytic geometry problem, and the location of the WUL can be obtained directly by mathematical derivation. Experiments were conducted to compare the performance of the proposed AM algorithm with the exhaustive grid search (EGS) method used at the master station. The results show that the correctness of the AM algorithm can be confirmed by the EGS method and that the AM algorithm reduces the calculation time by more than 90%. The computational complexity of the proposed algorithm is better than that of EGS, so the algorithm is more suitable for computing the UDRE at the master station.
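The analytic-versus-grid-search idea can be illustrated in simplified form: if the projected range-error variance along a line of sight l is the quadratic form l'Pl, the worst direction is the top eigenvector of the covariance matrix P, which replaces the exhaustive search. The covariance values below are hypothetical, and the real UDRE computation constrains l to the service volume rather than the whole sphere:

```python
import numpy as np

# Hypothetical 3x3 covariance of the ephemeris/clock error vector (m^2).
P = np.array([[4.0, 1.0, 0.5],
              [1.0, 3.0, 0.2],
              [0.5, 0.2, 2.0]])

# Analytic method: the direction maximizing l' P l over unit vectors l
# is the eigenvector of the largest eigenvalue of P.
w, v = np.linalg.eigh(P)
worst_var_analytic = w[-1]

# Exhaustive grid search over line-of-sight directions, for comparison.
worst_var_grid = 0.0
for az in np.linspace(0.0, 2.0 * np.pi, 180, endpoint=False):
    for el in np.linspace(-np.pi / 2, np.pi / 2, 91):
        l = np.array([np.cos(el) * np.cos(az), np.cos(el) * np.sin(az), np.sin(el)])
        worst_var_grid = max(worst_var_grid, float(l @ P @ l))
```

The grid result can only approach the analytic bound from below, which mirrors the paper's finding that the EGS method confirms the AM while costing far more computation.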
Directory of Open Access Journals (Sweden)
S. Mattedi
2000-12-01
A modified form of the Hicks and Young algorithm was used with the Mattedi-Tavares-Castier lattice equation of state (MTC lattice EOS) to calculate critical points of binary mixtures that exhibit several types of critical behavior. Several qualitative aspects of the critical curves, such as maxima and minima in critical pressure, and minima in critical temperature, could be predicted using the MTC lattice EOS. These results were in agreement with experimental information available in the literature, illustrating the flexibility of the functional form of the MTC lattice EOS. We observed however that the MTC lattice EOS failed to predict maxima in pressure for two of the studied systems: ethane + ethanol and methane + n-hexane. We also observed that the agreement between the calculated and experimental critical properties was at most semi-quantitative in some examples. Despite these limitations, in many ways similar to those of other EOS in common use when applied to critical point calculations, we can conclude that the MTC lattice EOS has the ability to predict several types of critical curves of complex shape.
Ying, Zhang; Zhengqiang, Li; Yan, Wang
2014-03-01
Anthropogenic aerosols released into the atmosphere cause scattering and absorption of incoming solar radiation, thus exerting a direct radiative forcing on the climate system. Anthropogenic Aerosol Optical Depth (AOD) calculations are important in the research of climate change. Accumulation-Mode Fractions (AMFs), an anthropogenic aerosol parameter defined as the fraction of AOD due to particulates with diameters smaller than 1 μm relative to total particulates, can be calculated by an AOD spectral deconvolution algorithm, and the anthropogenic AODs are then obtained using the AMFs. In this study, we present a parameterization method coupled with an AOD spectral deconvolution algorithm to calculate AMFs in Beijing over 2011. All data are derived from the AErosol RObotic NETwork (AERONET) website. The parameterization method improves the accuracy of the AMFs compared with the constant truncation radius method. We find a good correlation using the parameterization method, with a squared correlation coefficient of 0.96 and a mean AMF deviation of 0.028. The parameterization method can also effectively resolve the underestimation of AMF in winter. It is suggested that the variations of Angstrom indexes in the coarse mode have significant impacts on AMF inversions.
Energy Technology Data Exchange (ETDEWEB)
Song, Chan-Ho; Park, Hee-Seong; Ha, Jea-Hyun; Jin, Hyung-Gon; Park, Seung-Kook [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)
2015-05-15
KAERI calculates decommissioning costs and manages decommissioning-experience data through systems such as the decommissioning information management system (DECOMMIS), the Decommissioning Facility Characterization DB System (DEFACS), and the decommissioning work-unit productivity calculation system (DEWOCS). Some countries, such as Japan and the United States, have information on NPP decommissioning experience and publish reports on decommissioning cost analysis. These reports are valuable data for comparison with decommissioning unit costs. In particular, a method to estimate the decommissioning cost of an NPP is needed because Korea has no NPP decommissioning experience; such a method makes a more precise prediction of the decommissioning unit cost possible. Still, there are many differences in how the decommissioning unit cost is calculated domestically and abroad, and comparison of data is typically difficult because detailed reports are not published. Therefore, the field of decommissioning cost estimation has to use a unified framework so that the decommissioning cost can be provided exactly.
Manifolds with integrable affine shape operator
Directory of Open Access Journals (Sweden)
Daniel A. Joaquín
2005-05-01
This work establishes the conditions for the existence of vector fields with the property that their covariant derivative, with respect to the affine normal connection, is the affine shape operator S on hypersurfaces. Some results are obtained from this property and, in particular, for some kinds of affine decomposable hypersurfaces we explicitly obtain the actual vector fields.
Uncertainties in Monte Carlo-based absorbed dose calculations for an experimental benchmark.
Renner, F; Wulff, J; Kapsch, R-P; Zink, K
2015-10-01
There is a need to verify the accuracy of general purpose Monte Carlo codes like EGSnrc, which are commonly employed for investigations of dosimetric problems in radiation therapy. A number of experimental benchmarks have been published to compare calculated values of absorbed dose to experimentally determined values. However, there is a lack of absolute benchmarks, i.e. benchmarks without involved normalization which may cause some quantities to be cancelled. Therefore, at the Physikalisch-Technische Bundesanstalt a benchmark experiment was performed, which aimed at the absolute verification of radiation transport calculations for dosimetry in radiation therapy. A thimble-type ionization chamber in a solid phantom was irradiated by high-energy bremsstrahlung and the mean absorbed dose in the sensitive volume was measured per incident electron of the target. The characteristics of the accelerator and experimental setup were precisely determined and the results of a corresponding Monte Carlo simulation with EGSnrc are presented within this study. For a meaningful comparison, an analysis of the uncertainty of the Monte Carlo simulation is necessary. In this study uncertainties with regard to the simulation geometry, the radiation source, transport options of the Monte Carlo code and specific interaction cross sections are investigated, applying the general methodology of the Guide to the expression of uncertainty in measurement. Besides studying the general influence of changes in transport options of the EGSnrc code, uncertainties are analyzed by estimating the sensitivity coefficients of various input quantities in a first step. Secondly, standard uncertainties are assigned to each quantity which are known from the experiment, e.g. uncertainties for geometric dimensions. Data for more fundamental quantities such as photon cross sections and the I-value of electron stopping powers are taken from literature. The significant uncertainty contributions are identified as
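The GUM-style propagation described above (sensitivity coefficients times standard uncertainties, combined in quadrature) can be sketched generically; the toy model function and its input uncertainties below are invented, not quantities from the benchmark:

```python
import math

def combined_uncertainty(f, x, u, rel_step=1e-6):
    """GUM-style combined standard uncertainty of y = f(x) for
    uncorrelated inputs: u_c^2 = sum_i (df/dx_i)^2 * u_i^2, with the
    sensitivity coefficients df/dx_i estimated by central differences."""
    uc2 = 0.0
    sens = []
    for i in range(len(x)):
        h = rel_step * (abs(x[i]) or 1.0)
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        ci = (f(xp) - f(xm)) / (2.0 * h)   # sensitivity coefficient c_i
        sens.append(ci)
        uc2 += (ci * u[i]) ** 2
    return math.sqrt(uc2), sens

# Toy model y = a * exp(-b), with hypothetical standard uncertainties.
f = lambda p: p[0] * math.exp(-p[1])
uc, c = combined_uncertainty(f, x=[2.0, 0.5], u=[0.01, 0.02])
```

In the study the inputs span geometric dimensions, source parameters, transport options, and cross-section data; the squared terms immediately rank which contributions dominate the combined uncertainty.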
Energy Technology Data Exchange (ETDEWEB)
Chi, Yuan, E-mail: jtext@hust.edu.cn [State Key Laboratory of Advanced Electromagnetic Engineering and Technology, Huazhong University of Science and Technology, Wuhan 430074 (China); Institute of Plasma Physics, Chinese Academy of Sciences, Hefei 230031 (China); Hu, Chundong [Institute of Plasma Physics, Chinese Academy of Sciences, Hefei 230031 (China); Zhuang, Ge [State Key Laboratory of Advanced Electromagnetic Engineering and Technology, Huazhong University of Science and Technology, Wuhan 430074 (China)
2014-02-15
The calorimetric method has been the primary technique applied in several experimental campaigns to determine the angular divergence of the high-current ion source for the neutral beam injection system on the Experimental Advanced Superconducting Tokamak (EAST). Doppler shift spectroscopy has been developed to provide a secondary measurement of the angular divergence, improving the accuracy of the divergence measurement and enabling real-time, non-perturbing measurement. A modified calculation model based on the W7-AS neutral beam injectors is adopted to accommodate the slot-type accelerating grids used in EAST's ion source. Preliminary spectroscopic results are presented and are comparable to the calorimetrically determined values and the theoretical calculation.
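The relation underlying the Doppler shift diagnostic can be illustrated with a short sketch: a beam-emission line observed at angle theta between the beam velocity and the line of sight is shifted by approximately delta_lambda = lambda0 * (v/c) * cos(theta), and the spread of angles within the beam divergence broadens the shifted peak, so the measured linewidth encodes the divergence. The numbers below are illustrative, not EAST parameters:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def doppler_shift_nm(lambda0_nm, beam_speed, theta_rad):
    """Non-relativistic Doppler shift of a beam-emission line observed
    at angle theta between the beam velocity and the line of sight."""
    return lambda0_nm * (beam_speed / C) * math.cos(theta_rad)

# Illustrative numbers only: H-alpha at 656.28 nm, a beam speed of
# ~2.2e6 m/s, observation angle 60 degrees.
shift = doppler_shift_nm(656.28, 2.2e6, math.radians(60.0))
```

A small angular spread d_theta about the viewing angle maps to a spread of shifts, which is the broadening the spectroscopy resolves.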
Robot Navigation Based on Control Synthesis of Piecewise Affine Hybrid Systems
Institute of Scientific and Technical Information of China (English)
王慧芳; 陈阳舟
2008-01-01
Control synthesis and reachability analysis of piecewise affine hybrid systems on simplices were applied to safely steer a robot from a given position to a final position while taking optimality into account. Based on a triangulation of the robot's state space, a dual graph was constructed following the target-attraction principle, and a path planning algorithm was then presented to find the sequence of adjacent triangles traversed by the shortest path. According to the properties of affine systems on simplices, a motion planning algorithm was proposed to determine the translational and rotational velocities of the robot, steering it through the given triangle sequence to the target point. Simulation results demonstrate the effectiveness of the algorithms.
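The path-planning step over the dual graph amounts to a shortest-path search in which nodes are triangles and edges connect triangles that share a facet. A minimal sketch using Dijkstra's algorithm (the edge-weight scheme and data layout are assumptions for illustration, not the authors' exact algorithm):

```python
import heapq

def dual_graph_shortest_path(adjacency, weights, start_tri, goal_tri):
    """Dijkstra over the dual graph of a triangulation: nodes are triangle
    ids, adjacency maps a triangle to its facet-sharing neighbours, and
    weights[(a, b)] is the traversal cost (e.g. centroid distance).
    Returns the sequence of adjacent triangles on the cheapest path.
    Assumes goal_tri is reachable from start_tri."""
    dist = {start_tri: 0.0}
    prev = {}
    heap = [(0.0, start_tri)]
    visited = set()
    while heap:
        d, t = heapq.heappop(heap)
        if t in visited:
            continue
        visited.add(t)
        if t == goal_tri:
            break
        for n in adjacency.get(t, ()):
            nd = d + weights[(t, n)]
            if nd < dist.get(n, float("inf")):
                dist[n] = nd
                prev[n] = t
                heapq.heappush(heap, (nd, n))
    # Walk predecessors back from the goal to recover the triangle sequence.
    path, t = [goal_tri], goal_tri
    while t != start_tri:
        t = prev[t]
        path.append(t)
    return path[::-1]
```

The resulting triangle sequence is what the motion-planning stage then traverses with the affine feedback laws on each simplex.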
Pence, Thomas J; Monroe, Ryan J; Wright, Neil T
2008-08-01
Some recent analyses modeled the response of collagenous tissues, such as epicardium, using a hypothetical network of interconnected spring-like fibers. The fibers in the network were organized such that internal nodes served as the connection points between three such collagen springs. The results for assumed affine and nonaffine deformations are contrasted for a homogeneous deformation imposed at the boundary. Affine deformation provides a stiffer mechanical response than nonaffine deformation. In contrast to nonaffine deformation, affine deformation determines the displacement of internal nodes without imposing a detailed force balance, thereby complicating the simplest intuitive notion of stress, one based on free-body cuts, at the single-node scale. The standard notion of stress may then be recovered via average field theory computations based on large micromesh realizations. An alternative and, by all indications, complementary viewpoint for the determination of stress in these collagen fiber networks is discussed here, one in which stress is defined using elastic energy storage, a notion which is intuitive at the single-node scale. It replaces the average field theory computations by an averaging technique over randomly oriented isolated simple elements. The analytical operations do not require large micromesh realizations, but the tedious nature of the mathematical manipulation is clearly aided by symbolic algebra calculation. For the example case of linear elastic deformation, this results in material stiffnesses that relate the infinitesimal strain and stress. The result that the affine case is stiffer than the nonaffine case is recovered, as would be expected. The energy framework also lends itself to the natural inclusion of changes in mechanical response due to the chemical, electrical, or thermal environment.
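The affine assumption above can be made concrete with a small sketch: each fiber end-to-end vector r is mapped to F r by the macroscopic deformation gradient F, and an energy-based stress estimate follows from averaging the spring energies over randomly oriented fibers instead of solving a large micromesh. A minimal 2D Python illustration (the quadratic spring energy and unit-length fibers are simplifying assumptions, not the paper's constitutive model):

```python
import math
import random

def affine_fiber_stretch(F, r):
    """Affine assumption: a fiber end-to-end vector r deforms to F @ r,
    so its stretch is |F r| / |r|. F is a 2x2 nested tuple, r a 2-vector."""
    fr = (F[0][0] * r[0] + F[0][1] * r[1],
          F[1][0] * r[0] + F[1][1] * r[1])
    return math.hypot(*fr) / math.hypot(*r)

def mean_fiber_energy(F, k=1.0, n=10000, seed=0):
    """Average spring energy (k/2)(stretch - 1)^2 per unit-length fiber over
    randomly oriented fibers -- the orientation average used here in place
    of a large micromesh realization."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        a = rng.uniform(0.0, 2.0 * math.pi)
        lam = affine_fiber_stretch(F, (math.cos(a), math.sin(a)))
        total += 0.5 * k * (lam - 1.0) ** 2
    return total / n
```

Differentiating such an orientation-averaged energy with respect to strain is what yields the material stiffnesses in the linear elastic example.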
Wada, Daichi; Igawa, Hirotaka; Murayama, Hideaki; Kasai, Tokio
2014-03-24
A signal processing method based on group delay calculations is introduced for distributed measurements of long-length fiber Bragg gratings (FBGs) using optical frequency domain reflectometry (OFDR). Bragg wavelength shifts in the interfered OFDR signals are regarded as group delay, so by calculating the group delay the distribution of Bragg wavelength shifts is obtained with high computational efficiency. A weighted averaging process is introduced for noise reduction. The method required only 3.5% of the signal processing time needed by conventional equivalent signal processing based on the short-time Fourier transform, and it also showed high sensitivity to experimental signals in which non-uniform strain distributions existed in a long-length FBG.
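The central idea, treating the wavelength shift as a group delay, can be sketched as a numerical derivative of the unwrapped signal phase, tau_g = -d(phi)/d(omega). A minimal Python illustration (not the authors' full OFDR processing chain, and omitting the weighted averaging step):

```python
import numpy as np

def group_delay(phase, omega):
    """Group delay tau_g = -d(phase)/d(omega), evaluated numerically after
    unwrapping the phase to remove 2*pi jumps."""
    return -np.gradient(np.unwrap(phase), omega)

# Sanity check: a pure delay t0 has phase -omega * t0, so the computed
# group delay should be constant and equal to t0.
omega = np.linspace(1.0, 10.0, 500)
t0 = 2.5
tau = group_delay(-omega * t0, omega)
```

A single gradient pass over the unwrapped phase is far cheaper than a sliding short-time Fourier transform, which is the source of the reported speedup.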
Neff, Michael; Rauhut, Guntram
2014-02-01
Multidimensional potential energy surfaces obtained from explicitly correlated coupled-cluster calculations, with further corrections for high-order correlation contributions, scalar relativistic effects and core-correlation energy contributions, were generated in a fully automated fashion for the double-minimum benchmark systems OH3(+) and NH3. The black-box generation of the potentials is based on normal coordinates, which were used in the underlying multimode expansions of the potentials and of the μ-tensor within the Watson operator. Normal coordinates are not the optimal choice for describing double-minimum potentials, and the question remains whether they can be used for accurate calculations at all. However, their unique definition is an appealing feature, as it removes the remaining errors in truncated potential expansions that arise from different choices of curvilinear coordinate systems. Fully automated calculations are presented which demonstrate that the proposed scheme allows the determination of energy levels and tunneling splittings as a routine application.
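The notion of a tunneling splitting in a double-minimum potential can be illustrated in one dimension: discretizing the Hamiltonian on a grid and diagonalizing yields near-degenerate pairs of levels whose gap is the splitting. A toy sketch with an illustrative quartic double well (hbar = m = 1, arbitrary units; no relation to the actual multidimensional OH3(+)/NH3 surfaces):

```python
import numpy as np

def tunneling_splitting(a=1.0, b=4.0, n=400, L=4.0):
    """Lowest tunneling splitting of the 1D symmetric double well
    V(x) = a*x**4 - b*x**2, from a second-order finite-difference
    discretization of H = -1/2 d^2/dx^2 + V(x) on [-L, L]."""
    x = np.linspace(-L, L, n)
    h = x[1] - x[0]
    V = a * x**4 - b * x**2
    # Kinetic energy: central difference for -1/2 d^2/dx^2 gives
    # diagonal 1/h^2 and off-diagonal -1/(2 h^2).
    T = (np.diag(np.full(n, 1.0))
         - 0.5 * np.diag(np.ones(n - 1), 1)
         - 0.5 * np.diag(np.ones(n - 1), -1)) / h**2
    E = np.linalg.eigvalsh(T + np.diag(V))
    return E[1] - E[0]  # gap of the lowest near-degenerate pair
```

For a deep barrier the two lowest states are symmetric/antisymmetric combinations of the left and right well states, and their gap shrinks exponentially with the barrier action.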
Hu, Chia-Yu; Chen, Pisin
2010-01-01
Radio detection of ultra-high-energy cosmic neutrinos has become a mature field. The Cherenkov signals in radio detection originate from the charge excess of particle showers due to the Askaryan effect. The conventional way of calculating Cherenkov pulses under the Fraunhofer approximation fails when the size of the elongated shower becomes comparable to the detection distance. We present a calculation method for Cherenkov pulses based on the finite-difference time-domain (FDTD) method and attain satisfactory efficiency via GPU acceleration. Our method provides a straightforward way to perform the near-field calculation, which is important for ultra-high-energy particle showers, especially the electromagnetic showers induced by the high-energy leptons produced in neutrino charged-current interactions.
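The FDTD method referred to above advances Maxwell's equations on a staggered grid with a leapfrog update, which handles near-field propagation without any far-field approximation. A minimal 1D vacuum sketch of the Yee scheme (the full Cherenkov calculation is 2D/3D with a moving shower current, which this toy omits):

```python
import numpy as np

def fdtd_1d(nsteps=200, nx=400, src=100):
    """Minimal 1D FDTD (Yee) leapfrog for normalized Ez/Hy fields in
    vacuum at the magic time step c*dt = dx, with fixed (PEC) ends.
    A soft Gaussian source at cell `src` launches a pulse."""
    ez = np.zeros(nx)
    hy = np.zeros(nx - 1)      # H lives on the half-grid between E nodes
    for n in range(nsteps):
        hy += ez[1:] - ez[:-1]            # update H from the curl of E
        ez[1:-1] += hy[1:] - hy[:-1]      # update E from the curl of H
        ez[src] += np.exp(-((n - 30) / 10.0) ** 2)  # soft Gaussian source
    return ez
```

The 2D/3D generalization replaces the source term with the time-dependent shower current, and the field sampled at a virtual antenna gives the near-field Cherenkov pulse directly.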