WorldWideScience

Sample records for general computational spectroscopic

  1. Generalized inverses theory and computations

    CERN Document Server

    Wang, Guorong; Qiao, Sanzheng

    2018-01-01

    This book begins with the fundamentals of generalized inverses, then moves to more advanced topics. It presents a theoretical study of the generalization of Cramer's rule, determinant representations of generalized inverses, the reverse order law for generalized inverses of a matrix product, structures of generalized inverses of structured matrices, parallel computation of generalized inverses, perturbation analysis of generalized inverses, an algorithmic study of computational methods for the full-rank factorization of a generalized inverse, the generalized singular value decomposition, the imbedding method, the finite method, generalized inverses of polynomial matrices, and generalized inverses of linear operators. This book is intended for researchers, postdocs, and graduate students in the area of generalized inverses with an undergraduate-level understanding of linear algebra.
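
    A minimal numerical sketch (not from the book): NumPy's pinv routine computes the Moore-Penrose generalized inverse, and the four Penrose conditions that characterize it can be verified directly.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3))   # a rectangular matrix with full column rank
Ap = np.linalg.pinv(A)            # Moore-Penrose generalized inverse A+

# The four Penrose conditions that uniquely characterize A+:
assert np.allclose(A @ Ap @ A, A)        # (1) A A+ A = A
assert np.allclose(Ap @ A @ Ap, Ap)      # (2) A+ A A+ = A+
assert np.allclose((A @ Ap).T, A @ Ap)   # (3) A A+ is symmetric
assert np.allclose((Ap @ A).T, Ap @ A)   # (4) A+ A is symmetric
```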

  2. Algebraic computing in general relativity

    International Nuclear Information System (INIS)

    D'Inverno, R.A.

    1975-01-01

    The purpose of this paper is to bring to the attention of potential users the existence of algebraic computing systems, and to illustrate their use by reviewing a number of problems for which such a system has been successfully used in General Relativity. In addition, some remarks are included which may be of help in the future design of these systems. (author)

  3. Computer methods in general relativity: algebraic computing

    CERN Document Server

    Araujo, M E; Skea, J E F; Koutras, A; Krasinski, A; Hobill, D; McLenaghan, R G; Christensen, S M

    1993-01-01

    Karlhede & MacCallum [1] gave a procedure for determining the Lie algebra of the isometry group of an arbitrary pseudo-Riemannian manifold, which they intended to implement using the symbolic manipulation package SHEEP but never did. We have recently finished making this procedure explicit by giving an algorithm suitable for implementation on a computer [2]. Specifically, we have written an algorithm for determining the isometry group of a spacetime (in four dimensions), and partially implemented this algorithm using the symbolic manipulation package CLASSI, which is an extension of SHEEP.

  4. Hybrid computing - Generalities and bibliography

    International Nuclear Information System (INIS)

    Neel, Daniele

    1970-01-01

    This note presents the content of a research thesis. It describes the evolution of hybrid computing systems, discusses the benefits and shortcomings of analogue or hybrid systems, discusses the construction of a hybrid system (required properties), comments on different possible uses, addresses the issues of language and programming, and discusses analysis methods and scopes of application. An appendix proposes a bibliography on these issues, notably covering the different scopes of application (simulation, fluid dynamics, biology, chemistry, electronics, energy, errors, space, programming languages, hardware, mechanics, optimisation of equations or processes, and physics). [fr]

  5. General purpose computers in real time

    International Nuclear Information System (INIS)

    Biel, J.R.

    1989-01-01

    I see three main trends in the use of general purpose computers in real time. The first is more processing power. The second is the use of higher speed interconnects between computers (allowing more data to be delivered to the processors). The third is the use of larger programs running in the computers. Although there is still work to be done, all indications are that general purpose computers will be able to meet the online needs of the SSC and LHC machines. 2 figs

  6. General Quantum Interference Principle and Duality Computer

    International Nuclear Information System (INIS)

    Long Guilu

    2006-01-01

    In this article, we propose a general principle of quantum interference for quantum systems, and based on it we propose a new type of computing machine, the duality computer, that may in principle outperform both the classical computer and the quantum computer. According to the general principle of quantum interference, the very essence of quantum interference is the interference of the sub-waves of the quantum system itself. The quantum system considered here can be any quantum system: a single microscopic particle, a composite quantum system such as an atom or a molecule, or a loose collection of a few quantum objects such as two independent photons. In the duality computer, the wave of the duality computer is split into several sub-waves that pass through different routes, where different computing gate operations are performed. These sub-waves are then recombined to interfere and give the computational results. The quantum computer, by contrast, uses only the particle nature of the quantum object. In a duality computer, it may be possible to find a marked item in an unsorted database using only a single query, and all NP-complete problems may have polynomial algorithms. Two proof-of-principle designs of the duality computer are presented: the giant molecule scheme and the nonlinear quantum optics scheme. We also propose a thought experiment to check a related fundamental issue, the measurement efficiency of a partial wave function.

  7. Spectroscopic Signatures and Structural Motifs of Dopamine: a Computational Study

    Science.gov (United States)

    Srivastava, Santosh Kumar; Singh, Vipin Bahadur

    2016-06-01

    Dopamine (DA) is an essential neurotransmitter in the central nervous system and plays an integral role in numerous brain functions including behaviour, cognition, emotion, working memory and associated learning. In the present work, the conformational landscapes of neutral and protonated dopamine have been investigated in the gas phase and in aqueous solution by MP2 and DFT (M06-2X, ωB97X-D, B3LYP and B3LYP-D3) methods. The twenty lowest-energy structures of neutral DA were subjected to geometry optimization, and the gauche conformer, GIa, was found to be the lowest-energy gas-phase structure at each level of theory, in agreement with experimental rotational spectroscopy. All folded gauche conformers (GI), where the lone electron pair of the NH2 group is directed towards the π system of the aromatic ring ('non-up'), are found to be more stable in the gas phase, while in aqueous solution all gauche conformers (GII) where the lone electron pair of the NH2 group is directed away from the π system of the aromatic ring ('up' structures) are stabilized significantly. The nine lowest-energy structures, protonated at the amino group, were optimized at the same MP2/aug-cc-pVDZ level of theory. In the most stable gauche structures, g-1 and g+1, the mainly electrostatic cation-π interaction is further stabilized by significant dispersion forces, as indicated by the substantial differences between the DFT and dispersion-corrected DFT-D3 calculations. In an aqueous environment, the intramolecular cation-π distance in the g-1 and g+1 isomers increases slightly compared to the gas phase, and the magnitude of the cation-π interaction is reduced, because solvation of the cation decreases its interaction energy with the π face of the aromatic system. The IR intensity of the bound N-H+ stretching mode provides a characteristic 'IR spectroscopic signature' that reflects the strength of the cation-π interaction energy. The CC2 lowest-lying S1 (1ππ*) excited state of neutral

  8. Computer use changes generalization of movement learning.

    Science.gov (United States)

    Wei, Kunlin; Yan, Xiang; Kong, Gaiqing; Yin, Cong; Zhang, Fan; Wang, Qining; Kording, Konrad Paul

    2014-01-06

    Over the past few decades, one of the most salient lifestyle changes for us has been the use of computers. For many of us, manual interaction with a computer occupies a large portion of our working time. Through neural plasticity, this extensive movement training should change our representation of movements (e.g., [1-3]), just like search engines affect memory [4]. However, how computer use affects motor learning is largely understudied. Additionally, as virtually all participants in studies of perception and actions are computer users, a legitimate question is whether insights from these studies bear the signature of computer-use experience. We compared non-computer users with age- and education-matched computer users in standard motor learning experiments. We found that people learned equally fast but that non-computer users generalized significantly less across space, a difference negated by two weeks of intensive computer training. Our findings suggest that computer-use experience shaped our basic sensorimotor behaviors, and this influence should be considered whenever computer users are recruited as study participants. Copyright © 2014 Elsevier Ltd. All rights reserved.

  9. Guidelines for computer security in general practice.

    Science.gov (United States)

    Schattner, Peter; Pleteshner, Catherine; Bhend, Heinz; Brouns, Johan

    2007-01-01

    As general practice becomes increasingly computerised, data security becomes increasingly important for both patient health and the efficient operation of the practice. To develop guidelines for computer security in general practice based on a literature review, an analysis of available information on current practice and a series of key stakeholder interviews. While the guideline was produced in the context of Australian general practice, we have developed a template that is also relevant for other countries. Current data on computer security measures was sought from Australian divisions of general practice. Semi-structured interviews were conducted with general practitioners (GPs), the medical software industry, senior managers within government responsible for health IT (information technology) initiatives, technical IT experts, divisions of general practice and a member of a health information consumer group. The respondents were asked to assess both the likelihood and the consequences of potential risks in computer security being breached. The study suggested that the most important computer security issues in general practice were: the need for a nominated IT security coordinator; having written IT policies, including a practice disaster recovery plan; controlling access to different levels of electronic data; doing and testing backups; protecting against viruses and other malicious codes; installing firewalls; undertaking routine maintenance of hardware and software; and securing electronic communication, for example via encryption. This information led to the production of computer security guidelines, including a one-page summary checklist, which were subsequently distributed to all GPs in Australia. This paper maps out a process for developing computer security guidelines for general practice. The specific content will vary in different countries according to their levels of adoption of IT, and cultural, technical and other health service factors. 
Making these guidelines relevant to local contexts should help maximise their uptake.

  10. Stable computation of generalized singular values

    Energy Technology Data Exchange (ETDEWEB)

    Drmac, Z.; Jessup, E.R. [Univ. of Colorado, Boulder, CO (United States)

    1996-12-31

    We study floating-point computation of the generalized singular value decomposition (GSVD) of a general matrix pair (A, B), where A and B are real matrices with the same numbers of columns. The GSVD is a powerful analytical and computational tool. For instance, the GSVD is an implicit way to solve the generalized symmetric eigenvalue problem Kx = λMx, where K = AᵀA and M = BᵀB. Our goal is to develop stable numerical algorithms for the GSVD that are capable of computing the singular value approximations with the high relative accuracy that the perturbation theory says is possible. We assume that the singular values are well-determined by the data, i.e., that small relative perturbations δA and δB (pointwise rounding errors, for example) cause in each singular value σ of (A, B) only a small relative perturbation |δσ|/σ.
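
    A hedged illustration (not the authors' algorithm): the abstract notes that the GSVD of (A, B) implicitly solves Kx = λMx with K = AᵀA and M = BᵀB. SciPy's generalized symmetric eigensolver makes that relation concrete; note that explicitly forming K and M squares the condition number, which is precisely why the paper pursues more stable algorithms.

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(1)
A = rng.standard_normal((6, 4))
B = rng.standard_normal((5, 4))   # same number of columns as A
K, M = A.T @ A, B.T @ B           # symmetric; M positive definite here

lam, X = eigh(K, M)               # solves K x = lambda M x
sigma = np.sqrt(lam)              # each lambda is a squared generalized
                                  # singular value of the pair (A, B)

# Check the defining relation K X = M X diag(lambda).
assert np.allclose(K @ X, M @ X @ np.diag(lam))
```

    Forming K and M is fine for a demonstration; production codes work on A and B directly (e.g. via LAPACK's GSVD routines) to preserve relative accuracy.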

  11. Guidelines for computer security in general practice

    Directory of Open Access Journals (Sweden)

    Peter Schattner

    2007-06-01

    Conclusions This paper maps out a process for developing computer security guidelines for general practice. The specific content will vary in different countries according to their levels of adoption of IT, and cultural, technical and other health service factors. Making these guidelines relevant to local contexts should help maximise their uptake.

  12. Terahertz spectroscopic polarimetry of generalized anisotropic media composed of Archimedean spiral arrays: Experiments and simulations.

    Science.gov (United States)

    Aschaffenburg, Daniel J; Williams, Michael R C; Schmuttenmaer, Charles A

    2016-05-07

    Terahertz time-domain spectroscopic polarimetry has been used to measure the polarization state of all spectral components in a broadband THz pulse upon transmission through generalized anisotropic media consisting of two-dimensional arrays of lithographically defined Archimedean spirals. The technique allows a full determination of the frequency-dependent, complex-valued transmission matrix and eigenpolarizations of the spiral arrays. Measurements were made on a series of spiral array orientations. The frequency-dependent transmission matrix elements as well as the eigenpolarizations were determined, and the eigenpolarizations were found to be elliptically corotating, as expected from their symmetry. Numerical simulations are in quantitative agreement with measured spectra.
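
    A toy sketch with assumed values (not the measured data): the eigenpolarizations of a 2×2 complex transmission (Jones) matrix are just its eigenvectors. For a purely rotatory element, a simple stand-in for a chiral sample, the eigenpolarizations come out circular.

```python
import numpy as np

theta = 0.3                                      # assumed rotation angle (rad)
T = np.array([[np.cos(theta), -np.sin(theta)],   # Jones matrix of pure
              [np.sin(theta),  np.cos(theta)]])  # optical rotation

eigvals, eigvecs = np.linalg.eig(T)
# Eigenvectors are (1, -i)/sqrt(2) and (1, +i)/sqrt(2): circular states.
for k in range(2):
    ex, ey = eigvecs[:, k]
    assert np.isclose(abs(ex), abs(ey))          # equal amplitudes
    assert np.isclose(abs(np.angle(ey / ex)), np.pi / 2)  # 90-deg phase lag
```

    In the experiment the same eigendecomposition is applied per frequency to the measured transmission matrix, which is how the elliptical corotating states are extracted.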

  13. Generalized Bell-inequality experiments and computation

    Energy Technology Data Exchange (ETDEWEB)

    Hoban, Matty J. [Department of Physics and Astronomy, University College London, Gower Street, London WC1E 6BT (United Kingdom); Department of Computer Science, University of Oxford, Wolfson Building, Parks Road, Oxford OX1 3QD (United Kingdom); Wallman, Joel J. [School of Physics, The University of Sydney, Sydney, New South Wales 2006 (Australia); Browne, Dan E. [Department of Physics and Astronomy, University College London, Gower Street, London WC1E 6BT (United Kingdom)

    2011-12-15

    We consider general settings of Bell inequality experiments with many parties, where each party chooses from a finite number of measurement settings each with a finite number of outcomes. We investigate the constraints that Bell inequalities place upon the correlations possible in local hidden variable theories using a geometrical picture of correlations. We show that local hidden variable theories can be characterized in terms of limited computational expressiveness, which allows us to characterize families of Bell inequalities. The limited computational expressiveness for many settings (each with many outcomes) generalizes previous results about the many-party situation each with a choice of two possible measurements (each with two outcomes). Using this computational picture we present generalizations of the Popescu-Rohrlich nonlocal box for many parties and nonbinary inputs and outputs at each site. Finally, we comment on the effect of preprocessing on measurement data in our generalized setting and show that it becomes problematic outside of the binary setting, in that it allows local hidden variable theories to simulate maximally nonlocal correlations such as those of these generalized Popescu-Rohrlich nonlocal boxes.

  14. Generalized Bell-inequality experiments and computation

    International Nuclear Information System (INIS)

    Hoban, Matty J.; Wallman, Joel J.; Browne, Dan E.

    2011-01-01

    We consider general settings of Bell inequality experiments with many parties, where each party chooses from a finite number of measurement settings each with a finite number of outcomes. We investigate the constraints that Bell inequalities place upon the correlations possible in local hidden variable theories using a geometrical picture of correlations. We show that local hidden variable theories can be characterized in terms of limited computational expressiveness, which allows us to characterize families of Bell inequalities. The limited computational expressiveness for many settings (each with many outcomes) generalizes previous results about the many-party situation each with a choice of two possible measurements (each with two outcomes). Using this computational picture we present generalizations of the Popescu-Rohrlich nonlocal box for many parties and nonbinary inputs and outputs at each site. Finally, we comment on the effect of preprocessing on measurement data in our generalized setting and show that it becomes problematic outside of the binary setting, in that it allows local hidden variable theories to simulate maximally nonlocal correlations such as those of these generalized Popescu-Rohrlich nonlocal boxes.
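
    A small self-contained illustration (not code from the paper) of the binary two-party case it generalizes: the Popescu-Rohrlich box outputs bits a, b with a XOR b = x AND y, giving the algebraically maximal CHSH value of 4, while exhaustive search over deterministic local strategies recovers the local-hidden-variable bound of 2.

```python
import itertools

def chsh(E):
    # CHSH combination of correlators E(x, y) = <(-1)^(a+b)>.
    return E(0, 0) + E(0, 1) + E(1, 0) - E(1, 1)

def pr_correlator(x, y):
    # PR box: a XOR b = x*y deterministically (with uniform marginals),
    # so (-1)^(a+b) = +1 when x*y == 0 and -1 when x*y == 1.
    return 1.0 if x * y == 0 else -1.0

def best_classical():
    # Deterministic local strategies: a = f(x), b = g(y).
    best = 0.0
    for f in itertools.product([0, 1], repeat=2):
        for g in itertools.product([0, 1], repeat=2):
            E = lambda x, y: (-1.0) ** (f[x] + g[y])
            best = max(best, chsh(E))
    return best

assert chsh(pr_correlator) == 4.0   # maximally nonlocal box
assert best_classical() == 2.0      # Bell/CHSH bound for LHV theories
```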

  15. Mg co-ordination with potential carcinogenic molecule acrylamide: Spectroscopic, computational and cytotoxicity studies

    Science.gov (United States)

    Singh, Ranjana; Mishra, Vijay K.; Singh, Hemant K.; Sharma, Gunjan; Koch, Biplob; Singh, Bachcha; Singh, Ranjan K.

    2018-03-01

    Acrylamide (acr) is a potentially toxic molecule produced in thermally processed foodstuffs. The acr-Mg complex has been synthesized chemically and characterized by spectroscopic techniques. The binding sites of acr with Mg were identified by experimental and computational methods. Both experimental and theoretical results suggest that Mg coordinates with the oxygen atom of the C=O group of acr. In-vitro cytotoxicity studies revealed a significant decrease in the toxicity of the acr-Mg complex as compared to pure acr. The decrease in toxicity on complexation with Mg may be a useful step for future research on reducing the toxicity of acr.

  16. Numerical computation of generalized importance functions

    International Nuclear Information System (INIS)

    Gomit, J.M.; Nasr, M.; Ngyuen van Chi, G.; Pasquet, J.P.; Planchard, J.

    1981-01-01

    Thus far, an important effort has been devoted to developing and applying generalized perturbation theory in reactor physics analysis. In this work we are interested in the calculation of importance functions by the method of A. Gandini. We have noted that in this method the convergence of the adopted iterative procedure is not rapid; hence, to accelerate this convergence we have used the semi-iterative technique. Two computer codes have been developed for one- and two-dimensional calculations (SPHINX-1D and SPHINX-2D). The advantage of our calculation was confirmed by comparative tests in which the iteration number and the computing time were greatly reduced with respect to the classical calculation (CIAP-1D and CIAP-2D). (orig.) [de]

  17. General-Purpose Software For Computer Graphics

    Science.gov (United States)

    Rogers, Joseph E.

    1992-01-01

    NASA Device Independent Graphics Library (NASADIG) is a general-purpose computer-graphics package for computer-based engineering and management applications which provides the opportunity to translate data into effective graphical displays for presentation. Features include two- and three-dimensional plotting, spline and polynomial interpolation, control of blanking of areas, multiple log and/or linear axes, control of legends and text, control of thicknesses of curves, and multiple text fonts. Included are subroutines for definition of areas and axes of plots; setup and display of text; blanking of areas; setup of style, interpolation, and plotting of lines; control of patterns and of shading of colors; control of legends, blocks of text, and characters; initialization of devices; and setting of mixed alphabets. Written in FORTRAN 77.

  18. 29 CFR 541.400 - General rule for computer employees.

    Science.gov (United States)

    2010-07-01

    ... OUTSIDE SALES EMPLOYEES Computer Employees § 541.400 General rule for computer employees. (a) Computer... computer employees whose primary duty consists of: (1) The application of systems analysis techniques and...

  19. THE NASA AMES POLYCYCLIC AROMATIC HYDROCARBON INFRARED SPECTROSCOPIC DATABASE: THE COMPUTED SPECTRA

    International Nuclear Information System (INIS)

    Bauschlicher, C. W.; Ricca, A.; Boersma, C.; Mattioda, A. L.; Cami, J.; Peeters, E.; Allamandola, L. J.; Sanchez de Armas, F.; Puerta Saborido, G.; Hudgins, D. M.

    2010-01-01

    The astronomical emission features, formerly known as the unidentified infrared bands, are now commonly ascribed to polycyclic aromatic hydrocarbons (PAHs). The laboratory experiments and computational modeling done at the NASA Ames Research Center to create a collection of PAH IR spectra relevant to test and refine the PAH hypothesis have been assembled into a spectroscopic database. This database now contains over 800 PAH spectra spanning 2-2000 μm (5000-5 cm⁻¹). These data are now available on the World Wide Web at www.astrochem.org/pahdb. This paper presents an overview of the computational spectra in the database and the tools developed to analyze and interpret astronomical spectra using the database. A description of the online and offline user tools available on the Web site is also presented.

  20. 47 CFR 32.6124 - General purpose computers expense.

    Science.gov (United States)

    2010-10-01

    ... 47 Telecommunication 2 2010-10-01 2010-10-01 false General purpose computers expense. 32.6124... General purpose computers expense. This account shall include the costs of personnel whose principal job is the physical operation of general purpose computers and the maintenance of operating systems. This...

  1. Understanding reactivity of two newly synthetized imidazole derivatives by spectroscopic characterization and computational study

    Science.gov (United States)

    Hossain, Mossaraf; Thomas, Renjith; Mary, Y. Sheena; Resmi, K. S.; Armaković, Stevan; Armaković, Sanja J.; Nanda, Ashis Kumar; Vijayakumar, G.; Van Alsenoy, C.

    2018-04-01

    Two newly synthetized imidazole derivatives, 1-(4-methoxyphenyl)-4,5-dimethyl-1H-imidazole-2-yl acetate (MPDIA) and 1-(4-bromophenyl)-4,5-dimethyl-1H-imidazole-2-yl acetate (BPDIA), have been prepared by a solvent-free synthesis pathway, and their specific spectroscopic and reactive properties are discussed based on combined experimental and computational approaches. Aside from synthesis, the experimental part of this work included measurements of IR, FT-Raman and NMR spectra. All of the aforementioned spectra were also obtained computationally, within the framework of the density functional theory (DFT) approach. Additionally, DFT calculations have been used to investigate local reactivity properties based on molecular orbital theory, the molecular electrostatic potential (MEP), the average local ionization energy (ALIE), Fukui functions and bond dissociation energies (BDE). Molecular dynamics (MD) simulations have been used to obtain radial distribution functions (RDF), which were used to identify the atoms with pronounced interactions with water molecules. The MEP shows that negative regions are mainly localized over the N28, O29 and O35 atoms (represented in red in the rainbow colour scheme for MPDIA and BPDIA), which are the most reactive sites for electrophilic attack. The first-order hyperpolarizabilities of MPDIA and BPDIA are 20.15 and 6.10 times that of the standard NLO material urea. Potential interaction with antihypertensive protein hydrolase.

  2. 47 CFR 32.2124 - General purpose computers.

    Science.gov (United States)

    2010-10-01

    ... 47 Telecommunication 2 2010-10-01 2010-10-01 false General purpose computers. 32.2124 Section 32... General purpose computers. (a) This account shall include the original cost of computers and peripheral... financial, statistical, or other business analytical reports; preparation of payroll, customer bills, and...

  3. STATEFIT: a computer program to facilitate the interpretation of spectroscopic data

    International Nuclear Information System (INIS)

    Burson, S.B.; Wood, G.T.; Batson, C.H.

    1975-05-01

    STATEFIT is a program written in Fortran IV for the IBM 370/195 computer to facilitate the interpretation of spectroscopic data. A list containing all the experimentally determined γ-ray energies, their relative intensities, the uncertainties in each, and, when known, the total internal conversion coefficients is supplied as input. A preliminary partial decay scheme is assumed to already exist, and a second list specifying those γ rays representing transitions between members of this group of energy levels is also provided as input data. Before execution is allowed to proceed, the two lists are subjected to ten different tests to locate possible user errors, and any that are found are noted. Identification of a fatal error, such as the inadvertent assignment of two different gamma rays between one pair of states, causes termination. The energies of the presumed states are not estimated but are treated as variable parameters. For purposes of the calculation, each state is identified by an arbitrary index number. The best energy values of the states are determined by making a least-squares fit of the level energies to the network of assigned gamma rays. Transition intensities R_T = R_r(1 + α) are calculated, and for radioactive samples the fractional beta feeding to each state is computed. A table of all possible transitions between the calculated states is then computed and listed in monotonically increasing order. The complete list of experimentally observed gamma rays is then searched for additional acceptable assignments. The acceptance criterion for selection of such possible additional assignments is externally adjustable. Space required for 20 excited states, 50 assigned gamma rays and 100 observed gamma rays is 15C60₁₆ (89,184₁₀) bytes. Space required for 120 excited states, 500 assigned gamma rays and 1200 observed gamma rays is 61A58₁₆ (399,960₁₀) bytes.
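
    A schematic sketch of the core fitting idea (a hypothetical four-level scheme, not STATEFIT itself): treat the level energies as free parameters and least-squares fit them to the network of assigned gamma-ray energies E_γ = E_i - E_j, with the ground state pinned at zero.

```python
import numpy as np

# Assumed data: (initial level, final level, measured gamma energy in keV).
gammas = [(1, 0, 121.8), (2, 1, 244.7), (2, 0, 366.4), (3, 2, 411.1)]
n_levels = 4

# Design matrix: each row encodes E_i - E_j; the ground state is fixed
# at 0 keV by dropping its column from the fit.
rows, y = [], []
for i, j, e in gammas:
    row = np.zeros(n_levels - 1)
    if i > 0:
        row[i - 1] += 1.0
    if j > 0:
        row[j - 1] -= 1.0
    rows.append(row)
    y.append(e)

levels, *_ = np.linalg.lstsq(np.array(rows), np.array(y), rcond=None)
E = np.concatenate([[0.0], levels])   # best-fit level energies, E[0] = 0
residuals = [e - (E[i] - E[j]) for i, j, e in gammas]
```

    Because the 366.4 keV crossover transition overdetermines level 2, the fit distributes the small inconsistency across the network, exactly the consistency check such a program exploits.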

  4. Characterization of a spectroscopic detector for application in x-ray computed tomography

    Science.gov (United States)

    Dooraghi, Alex A.; Fix, Brian J.; Smith, Jerel A.; Brown, William D.; Azevedo, Stephen G.; Martz, Harry E.

    2017-09-01

    Recent advances in cadmium telluride (CdTe) energy-discriminating pixelated detectors have enabled the possibility of Multi-Spectral X-ray Computed Tomography (MSXCT) to incorporate spectroscopic information into CT. MultiX ME 100 V2 is a CdTe-based spectroscopic x-ray detector array capable of recording energies from 20 to 160 keV in 1.1 keV energy bin increments. Hardware and software have been designed to perform radiographic and computed tomography tasks with this spectroscopic detector. Energy calibration is examined using the end-point energy of a bremsstrahlung spectrum and radioisotope spectral lines. When measuring the spectrum from Am-241 across 500 detector elements, the standard deviations of the peak-location and FWHM measurements are ±0.4 and ±0.6 keV, respectively. As these values are within the energy bin size (1.1 keV), detector elements are consistent with each other. The count rate is characterized using a nonparalyzable model with a dead time of 64 ± 5 ns. This is consistent with the manufacturer's quoted per-detector-element linear deviation at 2 Mpps (million photons per sec) of 8.9% (typical) and 12% (max). When comparing measured and simulated spectra, a low-energy tail is visible in the measured data due to the spectral response of the detector. If no valid photon detections are expected in the low-energy tail, then a background subtraction may be applied to allow for a possible first-order correction. If photons are expected in the low-energy tail, a detailed model must be implemented. A radiograph of an aluminum step wedge with a maximum height of 20 mm shows an underestimation of attenuation by about 10% at 60 keV. This error is due to partial energy deposition from higher-energy (>60 keV) photons into a lower-energy (~60 keV) bin, reducing the apparent attenuation. A radiograph of a polytetrafluoroethylene (PTFE) cylinder taken using a bremsstrahlung spectrum from an x-ray voltage of 100 kV filtered by 1.3 mm Cu is reconstructed using Abel inversion.

  5. Characterization of a spectroscopic detector for application in x-ray computed tomography

    Energy Technology Data Exchange (ETDEWEB)

    Dooraghi, A. A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Fix, B. J. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Smith, J. A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Brown, W. D. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Azevedo, S. G. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Martz, H. E. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2017-04-21

    Recent advances in cadmium telluride (CdTe) energy-discriminating pixelated detectors have enabled the possibility of Multi-Spectral X-ray Computed Tomography (MSXCT) to incorporate spectroscopic information into CT. MultiX ME 100 V2 is a CdTe-based spectroscopic x-ray detector array capable of recording energies from 20 to 160 keV in 1.1 keV energy bin increments. Hardware and software have been designed to perform radiographic and computed tomography tasks with this spectroscopic detector. Energy calibration is examined using the end-point energy of a bremsstrahlung spectrum and radioisotope spectral lines. When measuring the spectrum from Am-241 across 500 detector elements, the standard deviation of the peak-location and FWHM measurements are ±0.4 and ±0.6 keV, respectively. As these values are within the energy bin size (1.1 keV), detector elements are consistent with each other. The count rate is characterized, using a nonparalyzable model with a dead time of 64 ± 5 ns. This is consistent with the manufacturer’s quoted per detector-element linear-deviation at 2 Mpps (million photons per sec) of 8.9% (typical) and 12% (max). When comparing measured and simulated spectra, a low-energy tail is visible in the measured data due to the spectral response of the detector. If no valid photon detections are expected in the low-energy tail, then a background subtraction may be applied to allow for a possible first-order correction. If photons are expected in the low-energy tail, a detailed model must be implemented. A radiograph of an aluminum step wedge with a maximum height of about 20 mm shows an underestimation of attenuation by about 10% at 60 keV. This error is due to partial energy deposition from higher-energy (> 60 keV) photons into a lower-energy (~60 keV) bin, reducing the apparent attenuation. A radiograph of a PTFE cylinder taken using a bremsstrahlung spectrum from an x-ray voltage of 100 kV filtered by 1.3 mm Cu is reconstructed using Abel inversion
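
    A quick numerical sketch of the nonparalyzable dead-time model used above, with τ = 64 ns as reported in the abstract: the detector is blind for τ after each recorded event, so the measured rate saturates and can be inverted to recover the incident rate.

```python
tau = 64e-9        # dead time per recorded event, seconds (from the abstract)

def measured_rate(n, tau):
    # Nonparalyzable model: m = n / (1 + n * tau).
    return n / (1.0 + n * tau)

def true_rate(m, tau):
    # Inverting the model recovers the incident rate: n = m / (1 - m * tau).
    return m / (1.0 - m * tau)

n = 2.0e6                        # 2 Mpps incident, as in the spec sheet
m = measured_rate(n, tau)
loss = 1.0 - m / n               # fractional count loss at this rate
```

    At 2 Mpps this model predicts a loss of about 11%, which sits between the manufacturer's quoted 8.9% (typical) and 12% (max) linear deviation, consistent with the characterization in the abstract.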

  6. Particle in a Disk: A Spectroscopic and Computational Laboratory Exercise Studying the Polycyclic Aromatic Hydrocarbon Corannulene

    Science.gov (United States)

    Frey, E. Ramsey; Sygula, Andrzej; Hammer, Nathan I.

    2014-01-01

    This laboratory exercise introduces undergraduate chemistry majors to the spectroscopic and theoretical study of the polycyclic aromatic hydrocarbon (PAH), corannulene. Students explore the spectroscopic properties of corannulene using UV-vis and Raman vibrational spectroscopies. They compare their experimental results to simulated vibrational…

  7. Computing generalized Langevin equations and generalized Fokker-Planck equations.

    Science.gov (United States)

    Darve, Eric; Solomon, Jose; Kia, Amirali

    2009-07-07

    The Mori-Zwanzig formalism is an effective tool to derive differential equations describing the evolution of a small number of resolved variables. In this paper we present its application to the derivation of generalized Langevin equations and generalized non-Markovian Fokker-Planck equations. We show how long-time-scale rates and metastable basins can be extracted from these equations. Numerical algorithms are proposed to discretize these equations. An important aspect is the numerical solution of the orthogonal dynamics equation, which is a partial differential equation in a high-dimensional space. We propose efficient numerical methods to solve this orthogonal dynamics equation. In addition, we present a projection formalism of the Mori-Zwanzig type that is applicable to discrete maps. Numerical applications are presented from the field of Hamiltonian systems.
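A minimal sketch of what discretizing a generalized Langevin equation involves (our toy system, not the paper's): with an exponential memory kernel K(t) = γλe^(−λt) and the noise term omitted, the non-Markovian equation dv/dt = −∫₀ᵗ K(t−s) v(s) ds is exactly equivalent to a Markovian system with one auxiliary variable, which gives a built-in consistency check for the direct memory-integral discretization.

```python
# Toy GLE (noise omitted): dv/dt = -∫_0^t K(t-s) v(s) ds, K(t) = gamma*lam*exp(-lam*t).
# For this kernel the equation is equivalent to the Markovian pair
#   dv/dt = z,   dz/dt = -gamma*lam*v - lam*z.
import math

gamma, lam = 1.0, 2.0
dt, steps = 2e-3, 1000   # integrate to t = 2

# Direct integration with an explicit memory integral (O(steps^2) cost).
v_hist = [1.0]
for n in range(steps):
    t_n = n * dt
    # Trapezoidal quadrature of the memory integral over the stored history.
    integral = 0.0
    for k in range(n + 1):
        w = 0.5 if k in (0, n) else 1.0
        integral += w * gamma * lam * math.exp(-lam * (t_n - k * dt)) * v_hist[k]
    integral *= dt
    v_hist.append(v_hist[-1] - dt * integral)

# Equivalent auxiliary-variable (Markovian) integration.
v_aux, z = 1.0, 0.0
for _ in range(steps):
    v_aux, z = v_aux + dt * z, z + dt * (-gamma * lam * v_aux - lam * z)

print(v_hist[-1], v_aux)  # the two formulations should agree closely
```

The analytic solution of the equivalent system is v(t) = e^(−t)(cos t + sin t), which both discretizations reproduce to O(dt).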

  8. Characterization of the Elusive Conformers of Glycine from State-of-the-Art Structural, Thermodynamic, and Spectroscopic Computations: Theory Complements Experiment.

    Science.gov (United States)

    Barone, Vincenzo; Biczysko, Malgorzata; Bloino, Julien; Puzzarini, Cristina

    2013-03-12

    A state-of-the-art computational strategy for the evaluation of accurate molecular structures as well as thermodynamic and spectroscopic properties, along with the direct simulation of infrared (IR) and Raman spectra, is established, validated (on the basis of the experimental data available for the Ip glycine conformer), and then used to provide a reliable and accurate characterization of the elusive IVn/gtt and IIIp/tct glycine conformers. The integrated theoretical model proposed is based on accurate post-Hartree-Fock computations (involving composite schemes) of energies, structures, properties, and harmonic force fields coupled to DFT corrections for the proper inclusion of vibrational effects at an anharmonic level (as provided by a general second-order perturbative approach). It is shown that the approach presented here allows the evaluation of structural, thermodynamic, and spectroscopic properties with an overall accuracy of about, or better than, 0.001 Å, 20 MHz, 1 kJ·mol(-1), and 10 cm(-1) for bond distances, rotational constants, conformational enthalpies, and vibrational frequencies, respectively. The high accuracy of the computational results allows one to support and complement experimental studies, thus providing (i) an unequivocal identification of several conformers concomitantly present in the experimental mixture and (ii) data not available or difficult to experimentally derive.
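The "composite schemes" mentioned above combine a high-level calculation in a small basis with a cheaper basis-set correction. A common additivity form can be sketched in a few lines; the method labels and energies below are illustrative placeholders, not values from the paper.

```python
# Additivity-based composite scheme (illustrative form):
#   E(best) ≈ E(high/small) + [E(low/large) - E(low/small)]
# e.g. a CCSD(T) small-basis energy plus an MP2-level basis-set correction.
def composite_energy(e_high_small: float, e_low_large: float, e_low_small: float) -> float:
    return e_high_small + (e_low_large - e_low_small)

# Hypothetical energies in hartree (placeholders, not the paper's data):
e = composite_energy(e_high_small=-283.512, e_low_large=-283.791, e_low_small=-283.465)
print(f"{e:.3f} Eh")  # -283.838 Eh
```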

  9. Crystallographic computing system JANA2006: General features

    Czech Academy of Sciences Publication Activity Database

    Petříček, Václav; Dušek, Michal; Palatinus, Lukáš

    2014-01-01

    Roč. 229, č. 5 (2014), s. 345-352 ISSN 0044-2968 R&D Projects: GA ČR(CZ) GAP204/11/0809; GA ČR(CZ) GA14-03276S Grant - others:AV ČR(CZ) Praemium Academiae Institutional support: RVO:68378271 Keywords : JANA2006 * aperiodic structures * magnetic structures * crystallographic computing Subject RIV: BM - Solid Matter Physics ; Magnetism Impact factor: 1.310, year: 2014

  10. General aviation design synthesis utilizing interactive computer graphics

    Science.gov (United States)

    Galloway, T. L.; Smith, M. R.

    1976-01-01

    Interactive computer graphics is a fast growing area of computer application, due to such factors as substantial cost reductions in hardware, general availability of software, and expanded data communication networks. In addition to allowing faster and more meaningful input/output, computer graphics permits the use of data in graphic form to carry out parametric studies for configuration selection and for assessing the impact of advanced technologies on general aviation designs. The incorporation of interactive computer graphics into a NASA developed general aviation synthesis program is described, and the potential uses of the synthesis program in preliminary design are demonstrated.

  11. Feed-forward general-purpose computer

    Energy Technology Data Exchange (ETDEWEB)

    Yamada, H; Yoshioka, Y; Nakamura, T; Shigei, Y

    1983-08-01

    The feed-forward machine (FFM) proposed by the authors has a CPU composed of many fixed arithmetic units and registers. Many features of the FFM that are compatible with concurrent operation and reduce the number of store instructions required are reported. In order to evaluate the FFM, the minimum execution time of instructions is discussed using the Petri net model. From this it is predicted that the execution time will be 0.46-0.6 times the real execution time. Furthermore, it is concluded that a program for the FFM will be reduced in size with respect to the corresponding program for von Neumann computers. 12 references.

  12. Computational Aeroacoustics Using the Generalized Lattice Boltzmann Equation, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — The overall objective of the proposed project is to develop a generalized lattice Boltzmann (GLB) approach as a potential computational aeroacoustics (CAA) tool for...

  13. Using CAMAL for algebraic computations in general relativity

    International Nuclear Information System (INIS)

    Fitch, J.P.

    1979-01-01

    CAMAL is a collection of computer algebra systems developed in Cambridge, England for use mainly in theoretical physics. One of these was designed originally for general relativity calculations, although it is often used in other fields. In a recent paper Cohen, Leringe, and Sundblad compared six systems for algebraic computations applied to general relativity available in Stockholm. Here similar information for CAMAL is given and by using the same tests CAMAL is added to the comparison. (author)

  14. Parameter Estimation for a Computable General Equilibrium Model

    DEFF Research Database (Denmark)

    Arndt, Channing; Robinson, Sherman; Tarp, Finn

    2002-01-01

    We introduce a maximum entropy approach to parameter estimation for computable general equilibrium (CGE) models. The approach applies information theory to estimating a system of non-linear simultaneous equations. It has a number of advantages. First, it imposes all general equilibrium constraints...

  15. Parameter Estimation for a Computable General Equilibrium Model

    DEFF Research Database (Denmark)

    Arndt, Channing; Robinson, Sherman; Tarp, Finn

    We introduce a maximum entropy approach to parameter estimation for computable general equilibrium (CGE) models. The approach applies information theory to estimating a system of nonlinear simultaneous equations. It has a number of advantages. First, it imposes all general equilibrium constraints...

  16. New Generation General Purpose Computer (GPC) compact IBM unit

    Science.gov (United States)

    1991-01-01

    New Generation General Purpose Computer (GPC) compact IBM unit replaces a two-unit earlier generation computer. The new IBM unit is documented in table top views alone (S91-26867, S91-26868), with the onboard equipment it supports including the flight deck CRT screen and keypad (S91-26866), and next to the two earlier versions it replaces (S91-26869).

  17. A Comparison of the Capabilities of an Embedded Computer and a General Purpose Computer for Image Processing

    Directory of Open Access Journals (Sweden)

    Herryawan Pujiharsono

    2017-08-01

    The development of computer technology has allowed image processing to be widely applied to assist people in many fields of work. However, not all fields of work can be developed with image processing, because some do not support the use of computers, which has motivated the development of image processing on microcontrollers or dedicated microprocessors. Advances in microcontrollers and microprocessors now make it possible to develop image processing on an embedded computer or single-board computer (SBC). This study aims to test the image-processing capability of an embedded computer and to compare the results with a general purpose computer. Testing was carried out by measuring the execution time of four image-processing operations applied to ten image sizes. The results obtained in this study show that the execution-time optimization of the embedded computer compares well with the general purpose computer, with the average execution time of the embedded computer being 4-5 times the execution time of the general purpose computer; the maximum image size that does not overly burden the CPU is 256x256 pixels for the embedded computer and 400x300 pixels for the general purpose computer.
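The benchmarking method described — timing the same image operations across increasing image sizes — can be sketched as below. The operations and sizes here are toy stand-ins in pure Python (the study's actual operations and platforms are not reproduced).

```python
import time

def threshold(img, t=128):
    # Toy image operation: binarize a 2-D list of pixel values.
    return [[255 if p > t else 0 for p in row] for row in img]

def invert(img):
    # Toy image operation: invert 8-bit pixel values.
    return [[255 - p for p in row] for row in img]

def benchmark(op, size, repeats=3):
    # Average wall-clock time of `op` on a synthetic size x size image.
    img = [[(x * y) % 256 for x in range(size)] for y in range(size)]
    start = time.perf_counter()
    for _ in range(repeats):
        op(img)
    return (time.perf_counter() - start) / repeats

# Execution time per operation at several image sizes, as in the study's protocol.
results = {(op.__name__, s): benchmark(op, s)
           for op in (threshold, invert) for s in (64, 128, 256)}
for key, t in sorted(results.items()):
    print(key, f"{t * 1e3:.2f} ms")
```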

  18. Intermolecular interaction of fosinopril with bovine serum albumin (BSA): The multi-spectroscopic and computational investigation.

    Science.gov (United States)

    Zhou, Kai-Li; Pan, Dong-Qi; Lou, Yan-Yue; Shi, Jie-Hua

    2018-04-16

    The intermolecular interaction of fosinopril, an angiotensin converting enzyme inhibitor, with bovine serum albumin (BSA) has been investigated in physiological buffer (pH 7.4) by multi-spectroscopic methods and a molecular docking technique. The results obtained from fluorescence and UV absorption spectroscopy revealed that the fluorescence quenching of BSA induced by fosinopril was mediated by combined dynamic and static quenching, with static quenching dominant in this system. The binding constant, K_b, was found to lie between 2.69 × 10³ and 9.55 × 10³ M⁻¹ at the experimental temperatures (293, 298, 303, and 308 K), implying low to intermediate binding affinity between fosinopril and BSA. Competitive binding experiments with site markers (phenylbutazone and diazepam) suggested that fosinopril preferentially bound to site I in sub-domain IIA on BSA, as evidenced by molecular docking analysis. The negative signs of the enthalpy change (ΔH⁰) and entropy change (ΔS⁰) indicated that van der Waals forces and hydrogen bonds played important roles in the fosinopril-BSA interaction, and 8-anilino-1-naphthalenesulfonate binding assay experiments offered evidence of the involvement of hydrophobic interactions. Moreover, spectroscopic results (synchronous fluorescence, 3-dimensional fluorescence, and Fourier transform infrared spectroscopy) indicated a slight conformational change in BSA upon fosinopril interaction. Copyright © 2018 John Wiley & Sons, Ltd.
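The quoted binding constants and the signs of ΔH⁰ and ΔS⁰ can be cross-checked with standard thermodynamic relations. In this sketch we assume (since the abstract gives only the K_b range) that the largest K_b occurs at the lowest temperature, as a negative ΔH⁰ implies; the pairing of K with T is therefore our assumption, not the paper's data.

```python
import math

R = 8.314  # gas constant, J mol^-1 K^-1

def delta_g(k_b: float, temp_k: float) -> float:
    """Standard binding free energy, dG0 = -RT ln K, in J/mol."""
    return -R * temp_k * math.log(k_b)

def vant_hoff_dh(k1: float, t1: float, k2: float, t2: float) -> float:
    """dH0 from the two-point van't Hoff relation ln(K2/K1) = -(dH0/R)(1/T2 - 1/T1)."""
    return -R * math.log(k2 / k1) / (1.0 / t2 - 1.0 / t1)

# Assumption: K_b = 9.55e3 at 293 K and 2.69e3 at 308 K (direction inferred
# from the reported negative dH0; the abstract quotes only the range).
dG_293 = delta_g(9.55e3, 293.0)
dH = vant_hoff_dh(9.55e3, 293.0, 2.69e3, 308.0)
print(f"dG(293 K) = {dG_293 / 1e3:.1f} kJ/mol, dH = {dH / 1e3:.1f} kJ/mol")
```

Both quantities come out negative, consistent with the spontaneous, enthalpy-driven binding the abstract describes.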

  19. A general algorithm for computing distance transforms in linear time

    NARCIS (Netherlands)

    Meijster, A.; Roerdink, J.B.T.M.; Hesselink, W.H.; Goutsias, J; Vincent, L; Bloomberg, DS

    2000-01-01

    A new general algorithm for computing distance transforms of digital images is presented. The algorithm consists of two phases. Both phases consist of two scans, a forward and a backward scan. The first phase scans the image column-wise, while the second phase scans the image row-wise. Since the
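The forward/backward scanning idea can be illustrated with the classic two-pass city-block distance transform, a simpler relative of the algorithm above (which also handles exact Euclidean distance); this toy version is ours, not the paper's code.

```python
INF = float("inf")

def distance_transform_l1(image):
    """Two-pass city-block (L1) distance transform.
    image: 2-D list; truthy entries are foreground (distance 0)."""
    rows, cols = len(image), len(image[0])
    d = [[0 if image[y][x] else INF for x in range(cols)] for y in range(rows)]
    # Forward scan: propagate distances from the top-left corner.
    for y in range(rows):
        for x in range(cols):
            if y > 0: d[y][x] = min(d[y][x], d[y - 1][x] + 1)
            if x > 0: d[y][x] = min(d[y][x], d[y][x - 1] + 1)
    # Backward scan: propagate distances from the bottom-right corner.
    for y in reversed(range(rows)):
        for x in reversed(range(cols)):
            if y < rows - 1: d[y][x] = min(d[y][x], d[y + 1][x] + 1)
            if x < cols - 1: d[y][x] = min(d[y][x], d[y][x + 1] + 1)
    return d

grid = [[0, 0, 0, 0],
        [0, 1, 0, 0],
        [0, 0, 0, 0]]
print(distance_transform_l1(grid))  # exact L1 distance to the single foreground pixel
```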

  20. Broken-pair, generalized seniority and interacting boson approximations in a spectroscopic study of Sn nuclei

    International Nuclear Information System (INIS)

    Bonsignori, K.; Allaart, K.; Egmond, A. van

    1983-01-01

    A broken-pair study of Sn nuclei is reported in which the model space includes two-broken-pair states. It is shown that for even Sn nuclei, with a rather simple Gaussian interaction and with single-particle energies derived from data on odd nuclei, the main features of the excitation spectra up to about 3.5 MeV may be reproduced in this way. The idea of the generalized seniority scheme, that the composition of the S-pair operator and that of the D-pair operator may be independent of the total number of pairs, is confirmed by the pair structures which result from energy minimization and diagonalization for each number of pairs separately. A general procedure is described to derive IBA parameters when the valence orbits are nondegenerate. Numerical results for Sn nuclei are given. (U.K.)

  1. General-purpose parallel simulator for quantum computing

    International Nuclear Information System (INIS)

    Niwa, Jumpei; Matsumoto, Keiji; Imai, Hiroshi

    2002-01-01

    With current technologies, it seems very difficult to implement quantum computers with many qubits. It is therefore important to simulate quantum algorithms and circuits on existing computers. However, for a large-size problem, the simulation often requires more computational power than is available from sequential processing; simulation methods for parallel processors are therefore required. We have developed a general-purpose simulator for quantum algorithms/circuits on a parallel computer (Sun Enterprise 4500). It can simulate algorithms/circuits with up to 30 qubits. In order to test the efficiency of our proposed methods, we have simulated Shor's factorization algorithm and Grover's database search, and we have analyzed the robustness of the corresponding quantum circuits in the presence of both decoherence and operational errors. The corresponding results, statistics, and analyses are presented in this paper.
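The memory cost that limits such simulators — 2^n complex amplitudes for n qubits, hence the ~30-qubit ceiling — is easy to see in a minimal state-vector simulator. This toy (pure Python, no parallelism, our own construction) applies a Hadamard gate by pairing amplitudes whose indices differ in the target bit.

```python
import math

def apply_hadamard(state, target):
    """Apply H to the `target` qubit of a state vector of length 2**n."""
    out = state[:]
    h = 1.0 / math.sqrt(2.0)
    bit = 1 << target
    for i in range(len(state)):
        if not i & bit:                      # visit each amplitude pair once
            a, b = state[i], state[i | bit]
            out[i], out[i | bit] = h * (a + b), h * (a - b)
    return out

n = 3
state = [0.0] * (2 ** n)   # memory grows as 2**n -- the simulator's hard limit
state[0] = 1.0             # start in |000>
for q in range(n):         # H on every qubit: uniform superposition
    state = apply_hadamard(state, q)
print([round(abs(a) ** 2, 4) for a in state])  # 8 equal probabilities of 1/8
```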

  2. Synthesis, structure, spectroscopic investigations, and computational studies of optically pure β-ketoamide

    International Nuclear Information System (INIS)

    Mtat, D.; Touati, R.; Guerfel, T.; Walha, K.; Ben Hassine, B.

    2016-01-01

    Chemical preparation, X-ray single-crystal diffraction, and IR and NMR spectroscopic investigations of a novel nonlinear optical organic compound (C₁₇H₂₂NO₂Cl) are described. The compound crystallizes in the orthorhombic system with the non-centrosymmetric space group P2₁2₁2₁. In the crystal structure, molecules are interconnected by N–H…O hydrogen bonds forming infinite chains along the a axis. The Hirshfeld surface and associated fingerprint plots of the compound are presented to explore the nature of intermolecular interactions and their relative contributions in building the solid-state architecture. The molecular HOMO–LUMO compositions and their respective energy gaps are also drawn to explain the activity of the compound. The first hyperpolarizability β_tot of the title compound is determined using DFT calculations. The optical properties are also investigated by UV–Vis absorption spectrum.

  3. Synthesis, structure, spectroscopic investigations, and computational studies of optically pure β-ketoamide

    Energy Technology Data Exchange (ETDEWEB)

    Mtat, D.; Touati, R. [Université de Monastir, Laboratoire de Synthèse Organique Asymétrique et Catalyse Homogène (UR11ES56), Faculté des Sciences (Tunisia); Guerfel, T., E-mail: taha-guerfel@yahoo.fr [Université de Kairouan, Laboratoire d’Electrochimie, Matériaux et Environnement (Tunisia); Walha, K. [Université de Sfax, M.E.S.Lab. Faculté des Sciences de Sfax (Tunisia); Ben Hassine, B. [Université de Monastir, Laboratoire de Synthèse Organique Asymétrique et Catalyse Homogène (UR11ES56), Faculté des Sciences (Tunisia)

    2016-12-15

    Chemical preparation, X-ray single-crystal diffraction, and IR and NMR spectroscopic investigations of a novel nonlinear optical organic compound (C₁₇H₂₂NO₂Cl) are described. The compound crystallizes in the orthorhombic system with the non-centrosymmetric space group P2₁2₁2₁. In the crystal structure, molecules are interconnected by N–H…O hydrogen bonds forming infinite chains along the a axis. The Hirshfeld surface and associated fingerprint plots of the compound are presented to explore the nature of intermolecular interactions and their relative contributions in building the solid-state architecture. The molecular HOMO–LUMO compositions and their respective energy gaps are also drawn to explain the activity of the compound. The first hyperpolarizability β_tot of the title compound is determined using DFT calculations. The optical properties are also investigated by UV–Vis absorption spectrum.

  4. Parallelized computation for computer simulation of electrocardiograms using personal computers with multi-core CPU and general-purpose GPU.

    Science.gov (United States)

    Shen, Wenfeng; Wei, Daming; Xu, Weimin; Zhu, Xin; Yuan, Shizhong

    2010-10-01

    Biological computations like electrocardiological modelling and simulation usually require high-performance computing environments. This paper introduces an implementation of parallel computation for computer simulation of electrocardiograms (ECGs) in a personal computer environment with an Intel Core (TM) 2 Quad Q6600 CPU and a GeForce 8800GT GPU, with software support from OpenMP and CUDA. It was tested in three parallelization settings: (a) a four-core CPU without a general-purpose GPU, (b) a general-purpose GPU plus one core of the CPU, and (c) a four-core CPU plus a general-purpose GPU. To effectively take advantage of a multi-core CPU and a general-purpose GPU, an algorithm based on load-prediction dynamic scheduling was developed and applied to setting (c). In the simulation with 1600 time steps, the speedup of the parallel computation as compared to the serial computation was 3.9 in setting (a), 16.8 in setting (b), and 20.0 in setting (c). This study demonstrates that a current PC with a multi-core CPU and a general-purpose GPU provides a good environment for parallel computations in biological modelling and simulation studies. Copyright 2010 Elsevier Ireland Ltd. All rights reserved.
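Why load-predicted scheduling helps in setting (c) can be illustrated with a simple makespan calculation. Here we use the reported single-device speedups (3.9 and 16.8) as relative throughputs — an assumption for illustration; the paper's actual scheduler and timings are not reproduced.

```python
def makespan(work_split, rates):
    """Finish time when worker i gets work_split[i] units at rates[i] units/s;
    the slowest-finishing worker determines the makespan."""
    return max(w / r for w, r in zip(work_split, rates))

steps = 1600.0       # simulation length (time steps) from the study
rates = (3.9, 16.8)  # assumed relative throughput: 4-core CPU vs GPU

# Naive static split: half the time steps to each device.
t_equal = makespan((steps / 2, steps / 2), rates)

# Load-predicted split: work proportional to predicted throughput,
# so both devices finish at the same moment.
total = sum(rates)
t_predicted = makespan(tuple(steps * r / total for r in rates), rates)
print(f"equal split: {t_equal:.1f}, predicted split: {t_predicted:.1f}")
```

The proportional split finishes in 1600/20.7 ≈ 77.3 work units versus 800/3.9 ≈ 205.1 for the equal split, mirroring why the combined setting approaches the sum of the individual speedups.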

  5. Installation of new Generation General Purpose Computer (GPC) compact unit

    Science.gov (United States)

    1991-01-01

    In the Kennedy Space Center's (KSC's) Orbiter Processing Facility (OPF) high bay 2, Spacecraft Electronics technician Ed Carter (right), wearing clean suit, prepares for (26864) and installs (26865) the new Generation General Purpose Computer (GPC) compact IBM unit in Atlantis', Orbiter Vehicle (OV) 104's, middeck avionics bay as Orbiter Systems Quality Control technician Doug Snider looks on. Both men work for NASA contractor Lockheed Space Operations Company. All three orbiters are being outfitted with the compact IBM unit, which replaces a two-unit earlier generation computer.

  6. Learning general phonological rules from distributional information: a computational model.

    Science.gov (United States)

    Calamaro, Shira; Jarosz, Gaja

    2015-04-01

    Phonological rules create alternations in the phonetic realizations of related words. These rules must be learned by infants in order to identify the phonological inventory, the morphological structure, and the lexicon of a language. Recent work proposes a computational model for the learning of one kind of phonological alternation, allophony (Peperkamp, Le Calvez, Nadal, & Dupoux, 2006). This paper extends the model to account for learning of a broader set of phonological alternations and the formalization of these alternations as general rules. In Experiment 1, we apply the original model to new data in Dutch and demonstrate its limitations in learning nonallophonic rules. In Experiment 2, we extend the model to allow it to learn general rules for alternations that apply to a class of segments. In Experiment 3, the model is further extended to allow for generalization by context; we argue that this generalization must be constrained by linguistic principles. Copyright © 2014 Cognitive Science Society, Inc.
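The distributional idea behind the original allophony model — two sounds that never occur in the same context are allophone candidates — can be sketched with context-set overlap on a toy corpus. The data below are invented for illustration (German ach-/ich-Laut style), not the paper's Dutch corpus, and the real model compares context *distributions* statistically rather than raw sets.

```python
def contexts(corpus, segment):
    """Set of (previous, next) symbol pairs in which `segment` occurs."""
    ctx = set()
    for word in corpus:
        w = "#" + word + "#"          # '#' marks the word boundary
        for i, ch in enumerate(w):
            if ch == segment:
                ctx.add((w[i - 1], w[i + 1]))
    return ctx

# Toy corpus where 'X' and 'C' are in complementary distribution:
# 'X' appears only after 'a', 'C' only after 'i'.
corpus = ["baX", "daXt", "biC", "liCt"]
cx, cc = contexts(corpus, "X"), contexts(corpus, "C")
print("shared contexts:", cx & cc)    # empty set -> allophone candidates
```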

  7. Computer code for general analysis of radon risks (GARR)

    International Nuclear Information System (INIS)

    Ginevan, M.

    1984-09-01

    This document presents a computer model for general analysis of radon risks that allows the user to specify a large number of possible models with a small number of simple commands. The model is written in a version of BASIC which conforms closely to the American National Standards Institute (ANSI) definition of minimal BASIC and thus is readily modified for use on a wide variety of computers and, in particular, microcomputers. Model capabilities include generation of single-year life tables from 5-year abridged data; calculation of multiple-decrement life tables for lung cancer for the general population, smokers, and nonsmokers; and a cohort lung cancer risk calculation that allows specification of the level and duration of radon exposure, the form of the risk model, and the specific population assumed at risk. 36 references, 8 figures, 7 tables.
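The cohort risk calculation described — a survival curve with an exposure-dependent excess hazard layered on baseline mortality — can be sketched as follows. All rates here are invented placeholders; the program's actual risk models and life-table data are not reproduced.

```python
import math

def lifetime_risk(baseline_hazard, excess_hazard, ages):
    """Cumulative probability of dying of the modeled cause over `ages`,
    competing with baseline all-cause mortality (simple multi-decrement logic)."""
    alive = 1.0
    risk = 0.0
    for age in ages:
        h_cause = excess_hazard(age)
        h_total = baseline_hazard(age) + h_cause
        dying = alive * (1.0 - math.exp(-h_total))   # deaths during this year
        risk += dying * (h_cause / h_total)          # share attributable to the cause
        alive -= dying
    return risk

# Placeholder hazards (per year); exposure raises the cause-specific hazard after age 40:
base = lambda a: 0.001 * math.exp(0.08 * a)          # rising all-cause mortality
excess = lambda a: 0.0002 if a >= 40 else 0.0        # toy radon-related excess

r = lifetime_risk(base, excess, range(0, 85))
print(f"lifetime risk: {r:.4f}")
```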

  8. Space shuttle general purpose computers (GPCs) (current and future versions)

    Science.gov (United States)

    1988-01-01

    Current and future versions of general purpose computers (GPCs) for space shuttle orbiters are represented in this frame. The two boxes on the left (AP101B) represent the current GPC configuration, with the input-output processor at far left and the central processing unit (CPU) at its side. The upgraded version combines both elements in a single unit (far right, AP101S).

  9. Spectroscopic characterization, antimicrobial activity, DFT computation and docking studies of sulfonamide Schiff bases

    Science.gov (United States)

    Mondal, Sudipa; Mandal, Santi M.; Mondal, Tapan Kumar; Sinha, Chittaranjan

    2017-01-01

    Schiff bases synthesised from the condensation of 2-(hydroxy)naphthaldehyde and sulfonamides (sulfathiazole (STZ), sulfapyridine (SPY), sulfadiazine (SDZ), sulfamerazine (SMZ) and sulfaguanidine (SGN)) are characterized by different spectroscopic data (FTIR, UV-Vis, mass, NMR), and two of them, (E)-4-(((2-hydroxynaphthalen-1-yl)methylene)amino)-N-(thiazol-2-yl)benzenesulfonamide (1a) and (E)-N-(diaminomethylene)-4-(((2-hydroxynaphthalen-1-yl)methylene)amino)benzenesulfonamide (1e), have been confirmed by single-crystal X-ray structure determination. Antimicrobial activities of the Schiff bases have been evaluated against certified and resistant Gram positive (Staphylococcus aureus, Enterococcus facelis) and Gram negative (Streptococcus pyogenes, Salmonella typhi, Shigella dysenteriae, Shigella flexneri, Klebsiella pneumonia) pathogens. The performance of the Schiff bases against the resistant pathogens is better than that against the standard strains, with MIC values lying in the range 32-128 μg/ml, while the parent sulfonamides are effectively inactive (MIC >512 μg/ml). The DFT-optimized structures of the Schiff bases have been used to accomplish molecular docking studies with the DHPS (dihydropteroate synthase) protein structure (downloaded from the Protein Data Bank) to establish the most preferred mode of interaction. ADMET filtration, cytotoxicity (MTT assay), and haemolysis assays have been examined for evaluation of drug-like character.

  10. Generalized flow and determinism in measurement-based quantum computation

    Energy Technology Data Exchange (ETDEWEB)

    Browne, Daniel E [Department of Materials, University of Oxford, Parks Road, Oxford OX1 3PU (United Kingdom); Kashefi, Elham [Computing Laboratory and Christ Church College, University of Oxford, Parks Road, Oxford OX1 3QD (United Kingdom); Mhalla, Mehdi [Laboratoire d' Informatique de Grenoble, CNRS - Centre national de la recherche scientifique, Universite de Grenoble (France); Perdrix, Simon [Preuves, Programmes et Systemes (PPS), Universite Paris Diderot, Paris (France)

    2007-08-15

    We extend the notion of quantum information flow defined by Danos and Kashefi (2006 Phys. Rev. A 74 052310) for the one-way model (Raussendorf and Briegel 2001 Phys. Rev. Lett. 86 910) and present a necessary and sufficient condition for stepwise uniformly deterministic computation in this model. The generalized flow also applies in the extended model with measurements in the (X, Y), (X, Z) and (Y, Z) planes. We apply both the measurement calculus and the stabiliser formalism to derive our main theorem, which for the first time gives a full characterization of stepwise uniformly deterministic computation in the one-way model. We present several examples to show how our result improves over the traditional notion of flow, such as geometries (entanglement graphs with input and output) with no flow but having generalized flow, and we discuss how they lead to an optimal implementation of the unitaries. More importantly, one can also obtain a better quantum computation depth with the generalized flow than with flow. We believe our characterization result is particularly valuable for the study of the algorithms and complexity in the one-way model.

  11. Generalized flow and determinism in measurement-based quantum computation

    International Nuclear Information System (INIS)

    Browne, Daniel E; Kashefi, Elham; Mhalla, Mehdi; Perdrix, Simon

    2007-01-01

    We extend the notion of quantum information flow defined by Danos and Kashefi (2006 Phys. Rev. A 74 052310) for the one-way model (Raussendorf and Briegel 2001 Phys. Rev. Lett. 86 910) and present a necessary and sufficient condition for stepwise uniformly deterministic computation in this model. The generalized flow also applies in the extended model with measurements in the (X, Y), (X, Z) and (Y, Z) planes. We apply both the measurement calculus and the stabiliser formalism to derive our main theorem, which for the first time gives a full characterization of stepwise uniformly deterministic computation in the one-way model. We present several examples to show how our result improves over the traditional notion of flow, such as geometries (entanglement graphs with input and output) with no flow but having generalized flow, and we discuss how they lead to an optimal implementation of the unitaries. More importantly, one can also obtain a better quantum computation depth with the generalized flow than with flow. We believe our characterization result is particularly valuable for the study of the algorithms and complexity in the one-way model.

  12. Hypersonic Shock Wave Computations Using the Generalized Boltzmann Equation

    Science.gov (United States)

    Agarwal, Ramesh; Chen, Rui; Cheremisin, Felix G.

    2006-11-01

    Hypersonic shock structure in diatomic gases is computed by solving the Generalized Boltzmann Equation (GBE), where the internal and translational degrees of freedom are considered in the framework of quantum and classical mechanics respectively [1]. The computational framework available for the standard Boltzmann equation [2] is extended by including both the rotational and vibrational degrees of freedom in the GBE. There are two main difficulties encountered in computation of high Mach number flows of diatomic gases with internal degrees of freedom: (1) a large velocity domain is needed for accurate numerical description of the distribution function resulting in enormous computational effort in calculation of the collision integral, and (2) about 50 energy levels are needed for accurate representation of the rotational spectrum of the gas. Our methodology addresses these problems, and as a result the efficiency of calculations has increased by several orders of magnitude. The code has been validated by computing the shock structure in Nitrogen for Mach numbers up to 25 including the translational and rotational degrees of freedom. [1] Beylich, A., ``An Interlaced System for Nitrogen Gas,'' Proc. of CECAM Workshop, ENS de Lyon, France, 2000. [2] Cheremisin, F., ``Solution of the Boltzmann Kinetic Equation for High Speed Flows of a Rarefied Gas,'' Proc. of the 24th Int. Symp. on Rarefied Gas Dynamics, Bari, Italy, 2004.

  13. Computer assisted instruction in the general chemistry laboratory

    Science.gov (United States)

    Pate, Jerry C.

    This dissertation examines current applications concerning the use of computer technology to enhance instruction in the general chemistry laboratory. The dissertation critiques widely-used educational software, and explores examples of multimedia presentations such as those used in beginning chemistry laboratory courses at undergraduate and community colleges. The dissertation describes a prototype compact disc (CD) used to (a) introduce the general chemistry laboratory, (b) familiarize students with using chemistry laboratory equipment, (c) introduce laboratory safety practices, and (d) provide approved techniques for maintaining a laboratory notebook. Upon completing the CD portion of the pre-lab, students are linked to individual self-help (WebCT) quizzes covering the information provided on the CD. The CD is designed to improve student understanding of basic concepts, techniques, and procedures used in the general chemistry laboratory.

  14. Insight into the binding mechanism of imipenem to human serum albumin by spectroscopic and computational approaches.

    Science.gov (United States)

    Rehman, Md Tabish; Shamsi, Hira; Khan, Asad U

    2014-06-02

    The mechanism of interaction between imipenem and HSA was investigated by various techniques including fluorescence, UV-vis absorbance, FRET, circular dichroism, urea denaturation, enzyme kinetics, ITC, and molecular docking. We found that imipenem binds to HSA at a high-affinity site located in subdomain IIIA (Sudlow's site II) and a low-affinity site located in subdomains IIA-IIB. Electrostatic interactions played a vital role along with hydrogen bonding and hydrophobic interactions in stabilizing the imipenem-HSA complex at subdomain IIIA, while only electrostatic and hydrophobic interactions were present at subdomains IIA-IIB. The binding and thermodynamic parameters obtained by ITC showed that the binding of imipenem to HSA was a spontaneous process (ΔG⁰ = −32.31 kJ mol⁻¹ for the high-affinity site and ΔG⁰ = −23.02 kJ mol⁻¹ for the low-affinity site) with binding constants in the range of 10⁴-10⁵ M⁻¹. Spectroscopic investigation revealed only one binding site of imipenem on HSA (K_a ~ 10⁴ M⁻¹). FRET analysis showed that the binding distance between imipenem and HSA (Trp-214) was optimal (r = 4.32 nm) for quenching to occur. A decrease in the esterase-like activity of HSA in the presence of imipenem showed that Arg-410 and Tyr-411 of subdomain IIIA (Sudlow's site II) were directly involved in the binding process. CD spectral analysis showed an altered conformation of HSA upon imipenem binding. Moreover, the binding of imipenem to subdomain IIIA (Sudlow's site II) of HSA also affected its folding pathway, as is clear from urea-induced denaturation studies.

  15. The NASA Ames Polycyclic Aromatic Hydrocarbon Infrared Spectroscopic Database : The Computed Spectra

    NARCIS (Netherlands)

    Bauschlicher, C. W.; Boersma, C.; Ricca, A.; Mattioda, A. L.; Cami, J.; Peeters, E.; de Armas, F. Sanchez; Saborido, G. Puerta; Hudgins, D. M.; Allamandola, L. J.

    The astronomical emission features, formerly known as the unidentified infrared bands, are now commonly ascribed to polycyclic aromatic hydrocarbons (PAHs). The laboratory experiments and computational modeling done at the NASA Ames Research Center to create a collection of PAH IR spectra relevant

  16. X-ray computer tomography, ultrasound and vibrational spectroscopic evaluation techniques of polymer gel dosimeters

    International Nuclear Information System (INIS)

    Baldock, Clive

    2004-01-01

    Since Gore et al. published their paper on Fricke gel dosimetry, the predominant method of evaluation of both Fricke and polymer gel dosimeters has been magnetic resonance imaging (MRI). More recently, optical computer tomography (CT) has also become a favoured evaluation method. Other techniques have been explored and developed as potential evaluation techniques in gel dosimetry. This paper reviews these other developments.

  17. Developments of the general computer network of NIPNE-HH

    International Nuclear Information System (INIS)

    Mirica, M.; Constantinescu, S.; Danet, A.

    1997-01-01

    Since 1991 the general computer network of NIPNE-HH has been developed and connected to the RNCN (Romanian National Computer Network) for research and development; it offers the Romanian physics research community an efficient and cost-effective infrastructure to communicate and collaborate with fellow researchers abroad, and to collect and exchange the most up-to-date information in their research area. RNCN is targeted on the following main objectives: - setting up a technical and organizational infrastructure meant to provide national and international electronic services for the Romanian scientific research community; - providing a rapid and competitive tool for the exchange of information in the framework of the Research and Development (R-D) community; - using the scientific and technical databases available in the country and offered by the national networks of other countries through international networks; - providing a support for information and for scientific and technical co-operation. RNCN has two international links: to EBONE via ACONET (64 kbps) and to EuropaNET via Hungarnet (64 kbps). The guiding principle in designing the project of the general computer network of NIPNE-HH, as part of RNCN, was to implement an open system based on OSI standards, taking into account the following criteria: - development of a flexible solution, according to OSI specifications; - a reliable gateway to the existing network already in use, allowing access to the worldwide networks; - use of the TCP/IP transport protocol for each Local Area Network (LAN) and for the connection to RNCN; - integration of different and heterogeneous software and hardware platforms (DOS, Windows, UNIX, VMS, Linux, etc.) through specific interfaces. The major objective achieved in developing the general computer network of NIPNE-HH is linking all the existing and newly installed computer equipment in the LANs of the departments and providing adequate connectivity.

  18. Pharmaceutical industry and trade liberalization using computable general equilibrium model.

    Science.gov (United States)

    Barouni, M; Ghaderi, H; Banouei, Aa

    2012-01-01

    Computable general equilibrium (CGE) models are known as a powerful instrument in economic analysis and have been widely used to evaluate the effects of trade liberalization. The purpose of this study was to assess the impact of trade openness on the pharmaceutical industry using a CGE model. Using such a model, the effects of tariff reductions, as a proxy for trade liberalization, on key variables of Iranian pharmaceutical products were studied. Simulation was performed via two scenarios. The first scenario was the effect of a decrease in tariffs on pharmaceutical products by 10, 30, 50, and 100 percent on key drug variables, and the second was the effect of a decrease in tariffs in all sectors except pharmaceutical products on vital economic variables of pharmaceutical products. The required data were obtained and the model parameters were calibrated according to the social accounting matrix of Iran in 2006. The simulation results demonstrated that the first scenario increased import, export, drug supply to markets and household consumption, while import, export, supply of product to market, and household consumption of pharmaceutical products would on average decrease in the second scenario. Ultimately, societal welfare would improve in all scenarios. We present and synthesize a CGE model that can be used to analyze trade liberalization policy issues in developing countries (like Iran), and thus provide information that policymakers can use to improve pharmaceutical economics.

  19. Generalized fish life-cycle population model and computer program

    International Nuclear Information System (INIS)

    DeAngelis, D.L.; Van Winkle, W.; Christensen, S.W.; Blum, S.R.; Kirk, B.L.; Rust, B.W.; Ross, C.

    1978-03-01

    A generalized fish life-cycle population model and computer program have been prepared to evaluate the long-term effect of changes in mortality in age class 0. The general question concerns what happens to a fishery when density-independent sources of mortality are introduced that act on age class 0, particularly entrainment and impingement at power plants. This paper discusses the model formulation and computer program, including sample results. The population model consists of a system of difference equations involving age-dependent fecundity and survival. The fecundity for each age class is assumed to be a function of both the fraction of females sexually mature and the weight of females as they enter each age class. Natural mortality for age classes 1 and older is assumed to be independent of population size. Fishing mortality is assumed to vary with the number and weight of fish available to the fishery. Age class 0 is divided into six life stages. The probability of survival for age class 0 is estimated considering both density-independent mortality (natural and power plant) and density-dependent mortality for each life stage. Two types of density-dependent mortality are included: cannibalism of each life stage by older age classes and intra-life-stage competition.
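The age-structured difference equations described above can be sketched compactly as a Leslie-type projection. This is a minimal illustration with made-up fecundity and survival values; the report's actual model additionally includes density dependence, fishing mortality, and the six life stages within age class 0.

```python
# Minimal sketch of an age-structured (Leslie-type) population model.
# All parameter values are hypothetical, not taken from the report.

def project_population(ages, fecundity, survival, years):
    """Project age-class abundances forward with difference equations.

    ages:      abundances per age class (index 0 = age class 0)
    fecundity: offspring produced per individual in each age class
    survival:  probability of surviving from age class i to i+1
    """
    n = list(ages)
    history = [list(n)]
    for _ in range(years):
        recruits = sum(f * a for f, a in zip(fecundity, n))
        # age every class by one; survival[i] moves class i into i+1
        n = [recruits] + [survival[i] * n[i] for i in range(len(n) - 1)]
        history.append(list(n))
    return history

hist = project_population(
    ages=[1000.0, 200.0, 50.0],
    fecundity=[0.0, 2.0, 5.0],
    survival=[0.1, 0.5],
    years=3,
)
```

With these toy numbers the first projected year yields 650 recruits (2.0 x 200 + 5.0 x 50) while 10% of age class 0 and 50% of age class 1 survive into the next classes.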

  20. Spectroscopic and computational study of a nonheme iron nitrosyl center in a biosynthetic model of nitric oxide reductase.

    Science.gov (United States)

    Chakraborty, Saumen; Reed, Julian; Ross, Matthew; Nilges, Mark J; Petrik, Igor D; Ghosh, Soumya; Hammes-Schiffer, Sharon; Sage, J Timothy; Zhang, Yong; Schulz, Charles E; Lu, Yi

    2014-02-24

    A major barrier to understanding the mechanism of nitric oxide reductases (NORs) is the lack of a selective probe of NO binding to the nonheme FeB center. By replacing the heme in a biosynthetic model of NORs, which structurally and functionally mimics NORs, with isostructural ZnPP, the electronic structure and functional properties of the FeB nitrosyl complex were probed. This approach allowed observation of the first S=3/2 nonheme {FeNO}(7) complex in a protein-based model system of NOR. Detailed spectroscopic and computational studies show that the electronic state of the {FeNO}(7) complex is best described as a high-spin ferrous iron (S=2) antiferromagnetically coupled to an NO radical (S=1/2) [Fe(2+)-NO(.)]. The radical nature of the FeB-bound NO would facilitate N-N bond formation by radical coupling with the heme-bound NO. This finding, therefore, supports the proposed trans mechanism of NO reduction by NORs. Copyright © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  1. Spectroscopic and computational studies of ionic clusters as models of solvation and atmospheric reactions

    Science.gov (United States)

    Kuwata, Keith T.

    Ionic clusters are useful as model systems for the study of fundamental processes in solution and in the atmosphere. Their structure and reactivity can be studied in detail using vibrational predissociation spectroscopy, in conjunction with high-level ab initio calculations. This thesis presents the applications of infrared spectroscopy and computation to a variety of gas-phase cluster systems. A crucial component of the process of stratospheric ozone depletion is the action of polar stratospheric clouds (PSCs) to convert the reservoir species HCl and chlorine nitrate (ClONO2) to photochemically labile compounds. Quantum chemistry was used to explore one possible mechanism by which this activation is effected: Cl- + ClONO2 → Cl2 + NO3- (1). Correlated ab initio calculations predicted that the direct reaction of chloride ion with ClONO2 is facile, which was confirmed in an experimental kinetics study. In the reaction a weakly bound intermediate Cl2-NO3- is formed, with ~70% of the charge localized on the nitrate moiety. This enables the Cl2-NO3- cluster to be well solvated even in bulk solution, allowing (1) to be facile on PSCs. Quantum chemistry was also applied to the hydration of nitrosonium ion (NO+), an important process in the ionosphere. The calculations, in conjunction with an infrared spectroscopy experiment, revealed the structure of the gas-phase clusters NO+(H2O)n. The large degree of covalent interaction between NO+ and the lone pairs of the H2O ligands is contrasted with the weak electrostatic bonding between iodide ion and H2O. Finally, the competition between ion solvation and solvent self-association is explored for the gas-phase clusters Cl-(H2O)n and Cl-(NH3)n. For the case of water, vibrational predissociation spectroscopy reveals less hydrogen bonding among H2O ligands than predicted by ab initio calculations. Nevertheless, for n ≥ 5, cluster structure is dominated by water-water interactions, with Cl- only partially solvated by the

  2. Interaction of promethazine and adiphenine to human hemoglobin: A comparative spectroscopic and computational analysis

    Science.gov (United States)

    Maurya, Neha; ud din Parray, Mehraj; Maurya, Jitendra Kumar; Kumar, Amit; Patel, Rajan

    2018-06-01

    The binding nature of the amphiphilic drugs promethazine hydrochloride (PMT) and adiphenine hydrochloride (ADP) with human hemoglobin (Hb) was unraveled by fluorescence, absorbance, time-resolved fluorescence, fluorescence resonance energy transfer (FRET) and circular dichroism (CD) spectral techniques, in combination with molecular docking and molecular dynamics simulation methods. The steady-state fluorescence spectra indicated that both PMT and ADP quench the fluorescence of Hb through a static quenching mechanism, which was further confirmed by time-resolved fluorescence spectra. The UV-Vis spectroscopy suggested ground-state complex formation. The activation energy (Ea) was higher for the Hb-ADP than for the Hb-PMT interaction system. The FRET results indicate a high probability of energy transfer from the β-Trp37 residue of Hb to PMT (r = 2.02 nm) and ADP (r = 2.33 nm). The thermodynamic data reveal that the binding of PMT with Hb is exothermic in nature, involving hydrogen bonding and van der Waals interactions, whereas in the case of ADP hydrophobic forces play the major role and the binding process is endothermic in nature. The CD results show that both PMT and ADP induced secondary structural changes in Hb and unfolded the protein with loss of a large helical content, the effect being more pronounced with ADP. Additionally, we also utilized computational approaches for deeper insight into the binding of these drugs with Hb, and the results match our experimental results well.

  3. A computational and spectroscopic study of the gas-phase conformers of adrenaline

    Science.gov (United States)

    Çarçabal, P.; Snoek, L. C.; van Mourik, T.

    The conformational landscapes of the neurotransmitter l-adrenaline (l-epinephrine) and its diastereoisomer pseudo-adrenaline, isolated in the gas phase and un-protonated, have been investigated by using a combination of mass-selected ultraviolet and infrared holeburn spectroscopy, following laser desorption of the sample into a pulsed supersonic argon jet, and DFT and ab initio computation (at the B3LYP/6-31+G*, MP2/6-31+G* and MP2/aug-cc-pVDZ levels of theory). For both adrenaline and its diastereoisomer pseudo-adrenaline, one dominant molecular conformation, very similar to that seen in noradrenaline, has been observed. It could be assigned to an extended side-chain structure (AG1a) stabilized by an OH → N intramolecular hydrogen bond. An intramolecular hydrogen bond is also formed between the neighbouring hydroxyl groups on the catechol ring. The presence of further conformers for both diastereoisomers could not be excluded, but overlapping electronic spectra and low ion signals prevented further assignments.

  4. Computable general equilibrium model fiscal year 2013 capability development report

    Energy Technology Data Exchange (ETDEWEB)

    Edwards, Brian Keith [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Rivera, Michael Kelly [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Boero, Riccardo [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-05-17

    This report documents progress made on continued development of the National Infrastructure Simulation and Analysis Center (NISAC) Computable General Equilibrium Model (NCGEM), developed in fiscal year 2012. In fiscal year 2013, NISAC refined the treatment of the labor market and performed tests with the model to examine the properties of the solutions it computes. To examine these, developers conducted a series of 20 simulations for 20 U.S. States. Each of these simulations compared an economic baseline simulation with an alternative simulation that assumed a 20-percent reduction in overall factor productivity in the manufacturing industries of each State. Differences in the simulation results between the baseline and alternative simulations capture the economic impact of the reduction in factor productivity. While not every State is affected in precisely the same way, the reduction in manufacturing industry productivity negatively affects the manufacturing industries in each State to an extent proportional to the reduction in overall factor productivity. Moreover, overall economic activity decreases when manufacturing sector productivity is reduced. Developers ran two additional simulations: (1) a version of the model for the State of Michigan, with manufacturing divided into two sub-industries (automobile and other vehicle manufacturing as one sub-industry and the rest of manufacturing as the other sub-industry); and (2) a version of the model for the United States, divided into 30 industries. NISAC conducted these simulations to illustrate the flexibility of industry definitions in NCGEM and to examine the simulation properties of the model in more detail.

  5. Numerical computation of gravitational field for general axisymmetric objects

    Science.gov (United States)

    Fukushima, Toshio

    2016-10-01

    We developed a numerical method to compute the gravitational field of a general axisymmetric object. The method (I) numerically evaluates a double integral of the ring potential by the split quadrature method using the double exponential rules, and (II) derives the acceleration vector by numerically differentiating the numerically integrated potential by Ridder's algorithm. Numerical comparison with the analytical solutions for a finite uniform spheroid and an infinitely extended object of the Miyamoto-Nagai density distribution confirmed the 13- and 11-digit accuracy of the potential and the acceleration vector computed by the method, respectively. By using the method, we present the gravitational potential contour map and/or the rotation curve of various axisymmetric objects: (I) finite uniform objects covering rhombic spindles and circular toroids, (II) infinitely extended spheroids including Sérsic and Navarro-Frenk-White spheroids, and (III) other axisymmetric objects such as an X/peanut-shaped object like NGC 128, a power-law disc with a central hole like the protoplanetary disc of TW Hya, and a tear-drop-shaped toroid like an axisymmetric equilibrium solution of plasma charge distribution in an International Thermonuclear Experimental Reactor-like tokamak. The method is directly applicable to the electrostatic field and will be easily extended for the magnetostatic field. The FORTRAN 90 programs of the new method and some test results are electronically available.
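The building block of such a method, the potential of a circular ring, can be sketched as follows. This is an illustration only: it evaluates the complete elliptic integral by the arithmetic-geometric mean (AGM) rather than the paper's split double-exponential quadrature, and the on-axis check uses unit values in G = 1 units.

```python
# Hedged sketch: the ring potential underlying an axisymmetric-body
# integration, with K evaluated by the arithmetic-geometric mean.
import math

def ellipk(m):
    """Complete elliptic integral K(m), parameter m = k^2, via AGM."""
    a, b = 1.0, math.sqrt(1.0 - m)
    while abs(a - b) > 1e-15:
        a, b = 0.5 * (a + b), math.sqrt(a * b)
    return math.pi / (2.0 * a)

def ring_potential(mass, a, R, z, G=1.0):
    """Gravitational potential at cylindrical (R, z) of a uniform ring
    of radius a lying in the plane z = 0."""
    d2 = (R + a) ** 2 + z ** 2
    m = 4.0 * R * a / d2
    return -G * mass * 2.0 * ellipk(m) / (math.pi * math.sqrt(d2))

# On-axis check against the closed form -G*M/sqrt(a^2 + z^2):
phi = ring_potential(1.0, 1.0, R=0.0, z=2.0)
exact = -1.0 / math.sqrt(5.0)
```

A full axisymmetric body would then be handled by summing (integrating) such rings over the density distribution, which is the double integral the paper evaluates by quadrature.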

  6. Direct computation of scattering matrices for general quantum graphs

    International Nuclear Information System (INIS)

    Caudrelier, V.; Ragoucy, E.

    2010-01-01

    We present a direct and simple method for the computation of the total scattering matrix of an arbitrary finite noncompact connected quantum graph given its metric structure and local scattering data at each vertex. The method is inspired by the formalism of Reflection-Transmission algebras and quantum field theory on graphs though the results hold independently of this formalism. It yields a simple and direct algebraic derivation of the formula for the total scattering and has a number of advantages compared to existing recursive methods. The case of loops (or tadpoles) is easily incorporated in our method. This provides an extension of recent similar results obtained in a completely different way in the context of abstract graph theory. It also allows us to discuss briefly the inverse scattering problem in the presence of loops using an explicit example to show that the solution is not unique in general. On top of being conceptually very easy, the computational advantage of the method is illustrated on two examples of 'three-dimensional' graphs (tetrahedron and cube) for which other methods are rather heavy or even impractical.

  7. Computational Investigation on the Spectroscopic Properties of Thiophene Based Europium β-Diketonate Complexes.

    Science.gov (United States)

    Greco, Claudio; Moro, Giorgio; Bertini, Luca; Biczysko, Malgorzata; Barone, Vincenzo; Cosentino, Ugo

    2014-02-11

    differences between calculated and experimental results are observed for compound 4, for which difficulties in the experimental determination of the triplet state energy were encountered: our results show that the negligible photoluminescence quantum yield of this compound is due to the fact that the energy of the most stable triplet state is significantly lower than that of the resonance level of the Europium ion, and thus the energy transfer process is prevented. These results confirm the reliability of the adopted computational approach in calculating the energy of the lowest triplet state of these systems, a key parameter in the design of new ligands for lanthanide complexes presenting large photoluminescence quantum yields.

  8. Computational and Spectroscopic Investigations of the Molecular Scale Structure and Dynamics of Geologically Important Fluids and Mineral-Fluid Interfaces

    International Nuclear Information System (INIS)

    Kirkpatrick, R. James; Kalinichev, Andrey G.

    2008-01-01

    significantly larger systems. These calculations have allowed us, for the first time, to study the effects of metal cations with different charges and charge density on NOM aggregation in aqueous solutions. Other computational work has looked at the longer-time-scale dynamical behavior of aqueous species at mineral-water interfaces investigated simultaneously by NMR spectroscopy. Our experimental NMR studies have focused on understanding the structure and dynamics of water and dissolved species at mineral-water interfaces and in two-dimensional nano-confinement within clay interlayers. A combined NMR and MD study of H2O, Na+, and Cl- interactions with the surface of quartz has direct implications for the interpretation of sum frequency vibrational spectroscopic experiments for this phase and will be an important reference for future studies. We also used NMR to examine the behavior of K+ and H2O in the interlayer and at the surfaces of the clay minerals hectorite and illite-rich illite-smectite. This is the first time that K+ dynamics have been characterized spectroscopically in geochemical systems. Preliminary experiments were also performed to evaluate the potential of 75As NMR as a probe of arsenic geochemical behavior. The 75As NMR study used advanced signal enhancement methods, introduced a new data acquisition approach to minimize the time investment in ultra-wide-line NMR experiments, and provides the first evidence of a strong relationship between the chemical shift and structural parameters for this experimentally challenging nucleus. We have also initiated a series of inelastic and quasi-elastic neutron scattering measurements of water dynamics in the interlayers of clays and layered double hydroxides. The objective of these experiments is to probe the correlations of water molecular motions in confined spaces over the scale of times and distances most directly comparable to our MD simulations, and on a time scale different from that probed by NMR. This work is being done

  9. Catalytic surface radical in dye-decolorizing peroxidase: a computational, spectroscopic and site-directed mutagenesis study

    Science.gov (United States)

    Linde, Dolores; Pogni, Rebecca; Cañellas, Marina; Lucas, Fátima; Guallar, Victor; Baratto, Maria Camilla; Sinicropi, Adalgisa; Sáez-Jiménez, Verónica; Coscolín, Cristina; Romero, Antonio; Medrano, Francisco Javier; Ruiz-Dueñas, Francisco J.; Martínez, Angel T.

    2014-01-01

    Dye-decolorizing peroxidase (DyP) of Auricularia auricula-judae has been expressed in Escherichia coli as a representative of a new DyP family, and subjected to mutagenic, spectroscopic, crystallographic and computational studies. The crystal structure of DyP shows a buried haem cofactor, and surface tryptophan and tyrosine residues potentially involved in long-range electron transfer from bulky dyes. Simulations using PELE (Protein Energy Landscape Exploration) software provided several binding-energy optima for the anthraquinone-type RB19 (Reactive Blue 19) near the above aromatic residues and the haem access-channel. Subsequent QM/MM (quantum mechanics/molecular mechanics) calculations showed a higher tendency of Trp-377 than other exposed haem-neighbouring residues to harbour a catalytic protein radical, and identified the electron-transfer pathway. The existence of such a radical in H2O2-activated DyP was shown by low-temperature EPR, being identified as a mixed tryptophanyl/tyrosyl radical in multifrequency experiments. The signal was dominated by the Trp-377 neutral radical contribution, which disappeared in the W377S variant, and included a tyrosyl contribution assigned to Tyr-337 after analysing the W377S spectra. Kinetics of substrate oxidation by DyP suggests the existence of high- and low-turnover sites. The high-turnover site for oxidation of RB19 (kcat > 200 s−1) and other DyP substrates was assigned to Trp-377 since it was absent from the W377S variant. The low-turnover site(s) (RB19 kcat ~20 s−1) could correspond to the haem access-channel, since activity was decreased when the haem channel was occluded by the G169L mutation. If a tyrosine residue is also involved, it will be different from Tyr-337 since all activities are largely unaffected in the Y337S variant. PMID:25495127

  10. Isolation, characterization, spectroscopic properties and quantum chemical computations of an important phytoalexin resveratrol as antioxidant component from Vitis labrusca L. and their chemical compositions

    Science.gov (United States)

    Güder, Aytaç; Korkmaz, Halil; Gökce, Halil; Alpaslan, Yelda Bingöl; Alpaslan, Gökhan

    2014-12-01

    In this study, isolation and characterization of trans-resveratrol (RES) as an antioxidant compound were carried out from VLE, VLG and VLS. Furthermore, antioxidant activities were evaluated by using six different methods. Finally, total phenolic, flavonoid, ascorbic acid, anthocyanin, lycopene, β-carotene and vitamin E contents were determined. In addition, the FT-IR, 13C and 1H NMR chemical shifts and UV-Vis spectra of trans-resveratrol were experimentally recorded. Quantum chemical computations of the molecular geometry, vibrational frequencies, UV-Vis spectroscopic parameters, HOMO-LUMO energies, molecular electrostatic potential (MEP), natural bond orbitals (NBO) and nonlinear optics (NLO) properties of the title molecule have been performed for the first time by using the DFT/B3PW91 method with the 6-311++G(d,p) basis set in the ground state. The obtained results show that the calculated spectroscopic data are in good agreement with the experimental data.

  11. Cloud Computing Security in Openstack Architecture: General Overview

    Directory of Open Access Journals (Sweden)

    Gleb Igorevich Shakulo

    2015-10-01

    Full Text Available The subject of this article is cloud computing security. The article begins with an analysis of the advantages and disadvantages of cloud computing and of its growth factors, both positive and negative; among the latter, security is deemed one of the most prominent. The author then takes the architecture of the OpenStack project as an example for study, describing its essential components and their interconnection. In conclusion, the author raises a series of questions as possible areas of further research to resolve security concerns, thus making cloud computing a more secure technology.

  12. GPUs: An Emerging Platform for General-Purpose Computation

    Science.gov (United States)

    2007-08-01

    programming; real-time cinematic quality graphics. PeakStream (26): license required (limited-time no-cost evaluation program). Commercially...folding.stanford.edu (accessed 30 March 2007). 2. Fan, Z.; Qiu, F.; Kaufman, A.; Yoakum-Stover, S. GPU Cluster for High Performance Computing. ACM/IEEE...(accessed 30 March 2007). 8. Goodnight, N.; Wang, R.; Humphreys, G. Computation on Programmable Graphics Hardware. IEEE Computer Graphics and

  13. Control rod computer code IAMCOS: general theory and numerical methods

    International Nuclear Information System (INIS)

    West, G.

    1982-11-01

    IAMCOS is a computer code for the description of the mechanical and thermal behavior of cylindrical control rods for fast breeders. This code version was applied, tested and modified from 1979 to 1981. This report describes the basic model (02 version), theoretical definitions and computation methods. [fr]

  14. The NASA Ames PAH IR Spectroscopic Database: Computational Version 3.00 with Updated Content and the Introduction of Multiple Scaling Factors

    Science.gov (United States)

    Bauschlicher, Charles W., Jr.; Ricca, A.; Boersma, C.; Allamandola, L. J.

    2018-02-01

    Version 3.00 of the library of computed spectra in the NASA Ames PAH IR Spectroscopic Database (PAHdb) is described. Version 3.00 introduces the use of multiple scale factors, instead of the single scaling factor used previously, to align the theoretical harmonic frequencies with the experimental fundamentals. The use of multiple scale factors permits the use of a variety of basis sets; this allows new PAH species to be included in the database, such as those containing oxygen, and yields an improved treatment of strained species and those containing nitrogen. In addition, the computed spectra of 2439 new PAH species have been added. The impact of these changes on the analysis of an astronomical spectrum through database-fitting is considered and compared with a fit using Version 2.00 of the library of computed spectra. Finally, astronomical constraints are defined for the PAH spectral libraries in PAHdb.
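The shift from one global scale factor to region-dependent factors can be illustrated with a small sketch. The band boundaries and factor values below are hypothetical, chosen only to show the mechanism, and are not PAHdb's actual factors.

```python
# Illustrative sketch (not PAHdb's actual factors): scaling harmonic
# frequencies with a different factor per frequency region, instead of
# one global factor. Frequencies are in cm^-1.

def scale_frequencies(freqs, bands):
    """Scale each harmonic frequency by the factor of its region.

    bands: list of (upper_limit_cm1, factor), sorted by upper limit.
    """
    scaled = []
    for f in freqs:
        for upper, factor in bands:
            if f <= upper:
                scaled.append(f * factor)
                break
        else:
            scaled.append(f)  # no band matched: leave unscaled
    return scaled

# Hypothetical bands: C-H stretch region (~3000 cm^-1) scaled more
# strongly than low-frequency skeletal modes.
bands = [(2000.0, 0.986), (4000.0, 0.960)]
out = scale_frequencies([1600.0, 3050.0], bands)
```

The point of multiple factors, as the abstract notes, is that different basis sets and mode types need different corrections to align harmonic frequencies with experimental fundamentals.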

  15. Cloud Computing Security in Openstack Architecture: General Overview

    OpenAIRE

    Gleb Igorevich Shakulo

    2015-01-01

    The subject of this article is cloud computing security. The article begins with an analysis of the advantages and disadvantages of cloud computing and of its growth factors, both positive and negative; among the latter, security is deemed one of the most prominent. The author then takes the architecture of the OpenStack project as an example for study, describing its essential components and their interconnection. In conclusion, the author raises a series of questions as possible areas of further research to resolve security c...

  16. Editorial: Computational Creativity, Concept Invention, and General Intelligence

    Science.gov (United States)

    Besold, Tarek R.; Kühnberger, Kai-Uwe; Veale, Tony

    2015-12-01

    Over the last decade, computational creativity as a field of scientific investigation and computational systems engineering has seen growing popularity. Still, the levels of development diverge between projects aiming at systems for artistic production or performance and endeavours addressing creative problem-solving or models of creative cognitive capacities. While the former have already seen several great successes, the latter still remain in their infancy. This volume collects reports on work trying to close the accrued gap.

  17. Motion of Br2 molecules in clathrate cages. A computational study of the dynamic effects on its spectroscopic behavior.

    Science.gov (United States)

    Bernal-Uruchurtu, M I; Janda, Kenneth C; Hernández-Lamoneda, R

    2015-01-22

    This work examines the spectroscopic behavior of bromine molecules trapped in clathrate cages by combining different methodologies. We developed a semiempirical quantum mechanical model to incorporate, through molecular dynamics trajectories, the effect that the movement of bromine molecules in clathrate cages has on their absorption spectra. A simple electrostatic model simulating the cage environment around bromine predicts a blue shift in the spectra, in good agreement with the experimental evidence.

  18. Scintillation camera-computer systems: General principles of quality control

    International Nuclear Information System (INIS)

    Ganatra, R.D.

    1992-01-01

    Scintillation camera-computer systems are designed to allow the collection, digital analysis and display of the image data from a scintillation camera. The components of the computer in such a system are essentially the same as those of a computer used in any other application, i.e. a central processing unit (CPU), memory and magnetic storage. Additional hardware items necessary for nuclear medicine applications are an analogue-to-digital converter (ADC), which converts the analogue signals from the camera to digital numbers, and an image display. It is possible that the transfer of data from camera to computer degrades the information to some extent. The computer can generate the image for display, but it also provides the capability of manipulating the primary data to improve the display of the image. The first function, conversion from analogue to digital mode, is not within the control of the operator, but the second type of manipulation is. Such manipulations should be done carefully, without sacrificing the integrity of the incoming information.
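The analogue-to-digital conversion step can be sketched as an ideal quantizer. The voltage range and bit depth below are illustrative, not those of any particular camera system.

```python
# Minimal sketch of an ideal N-bit ADC: map an analogue voltage in
# [vmin, vmax] onto one of 2**bits integer codes. Values outside the
# range are clamped, as a real converter would saturate.

def adc(value, vmin=0.0, vmax=1.0, bits=8):
    """Quantize an analogue value into a digital code (0 .. 2**bits-1)."""
    levels = 2 ** bits
    clamped = min(max(value, vmin), vmax)
    # scale to the code range and round to the nearest code
    return int((clamped - vmin) / (vmax - vmin) * (levels - 1) + 0.5)

codes = [adc(v) for v in (0.0, 0.5, 1.0, 1.2)]
```

The quantization step, (vmax - vmin) / (2**bits - 1), bounds the rounding error per sample, which is one source of the information degradation the abstract mentions.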

  19. A hyperpower iterative method for computing the generalized Drazin ...

    Indian Academy of Sciences (India)

    Shwetabh Srivastava

    [6, 7]. A number of direct and iterative methods for computation of the Drazin inverse were developed in [8-12]. Its extension to Banach algebras is known as the generalized Drazin inverse and was established in [13]. Let J denote the complex Banach algebra with the unit 1. The generalized Drazin inverse of an element ...

  20. SYNCOM: A general syntax conversion language and computer program

    International Nuclear Information System (INIS)

    Bindon, D.C.

    1972-09-01

    The problems of syntax conversion are discussed and the reasons given for the choice of the Interpretive method. A full description is given of the SYNCON language and computer program together with brief details of some programs written in the language. (author)

  1. Computational Complexity of Some Problems on Generalized Cellular Automata

    Directory of Open Access Journals (Sweden)

    P. G. Klyucharev

    2012-03-01

    Full Text Available We prove that the preimage problem of a generalized cellular automaton is NP-hard. The results of this work are important for supporting the security of ciphers based on cellular automata.
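The hardness result can be made concrete with a brute-force illustration of the preimage problem: for a configuration of n cells, all 2^n candidate predecessors may need checking. Elementary rule 90 on a cyclic lattice stands in here for the generalized automata of the paper.

```python
# Brute-force preimage search for a one-dimensional elementary CA on a
# cyclic lattice. The exponential search space (2**n candidates) is what
# NP-hardness of the general preimage problem reflects.
from itertools import product

def step(config, rule=90):
    """One synchronous update of an elementary CA (cyclic boundary)."""
    n = len(config)
    table = [(rule >> i) & 1 for i in range(8)]
    return tuple(
        table[(config[(i - 1) % n] << 2) | (config[i] << 1)
              | config[(i + 1) % n]]
        for i in range(n)
    )

def preimages(target, rule=90):
    """All configurations that map to `target` (exhaustive search)."""
    n = len(target)
    return [c for c in product((0, 1), repeat=n) if step(c, rule) == target]

pre = preimages((0, 0, 0, 0), rule=90)
```

For rule 90 (each cell becomes the XOR of its neighbours) the all-zero configuration of four cells has exactly four predecessors; cryptographic constructions rely on such preimages being hard to find for larger, generalized automata.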

  2. Proper generalized decompositions an introduction to computer implementation with Matlab

    CERN Document Server

    Cueto, Elías; Alfaro, Icíar

    2016-01-01

    This book is intended to help researchers overcome the entrance barrier to Proper Generalized Decomposition (PGD), by providing a valuable tool to begin the programming task. Detailed Matlab codes are included for every chapter in the book, in which the theory previously described is translated into practice. Examples include parametric problems, non-linear model order reduction and real-time simulation, among others. Proper Generalized Decomposition (PGD) is a method for numerical simulation in many fields of applied science and engineering. As a generalization of Proper Orthogonal Decomposition or Principal Component Analysis to an arbitrary number of dimensions, PGD is able to provide the analyst with very accurate solutions for problems defined in high-dimensional spaces, parametric problems and even real-time simulation.

  3. Automatic computation and solution of generalized harmonic balance equations

    Science.gov (United States)

    Peyton Jones, J. C.; Yaser, K. S. A.; Stevenson, J.

    2018-02-01

    Generalized methods are presented for generating and solving the harmonic balance equations for a broad class of nonlinear differential or difference equations and for a general set of harmonics chosen by the user. In particular, a new algorithm for automatically generating the Jacobian of the balance equations enables efficient solution of these equations using continuation methods. Efficient numeric validation techniques are also presented, and the combined algorithm is applied to the analysis of dc, fundamental, second and third harmonic response of a nonlinear automotive damper.
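A one-harmonic sketch of the balance procedure, using the Duffing oscillator rather than the paper's automotive damper model; parameter values are illustrative. Substituting x = A cos(wt) into x'' + x + eps*x^3 = F cos(wt) and keeping only the fundamental gives a scalar residual, and its derivative plays the role of the Jacobian that the paper generates automatically.

```python
# One-harmonic harmonic balance for the Duffing equation
# x'' + x + eps*x**3 = F*cos(w*t), assuming x = A*cos(w*t).
# cos^3 contributes (3/4)*A**3 at the fundamental (its third-harmonic
# part is discarded by a one-harmonic balance). Parameters are made up.

def residual(A, eps, w, F):
    """Coefficient of cos(w*t) that must vanish at a balanced solution."""
    return (1.0 - w * w) * A + 0.75 * eps * A ** 3 - F

def jacobian(A, eps, w):
    """Derivative of the residual with respect to A (scalar Jacobian)."""
    return (1.0 - w * w) + 2.25 * eps * A ** 2

def solve_amplitude(eps=0.05, w=0.5, F=0.3, A=1.0, tol=1e-12):
    """Newton iteration on the balance equation for the amplitude A."""
    for _ in range(50):
        r = residual(A, eps, w, F)
        if abs(r) < tol:
            break
        A -= r / jacobian(A, eps, w)
    return A

A = solve_amplitude()
```

The paper's generalized framework does the same thing for arbitrary harmonic sets and broad classes of nonlinearities, with the Jacobian of the balance equations built automatically for continuation.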

  4. Factors Affecting Preservice Teachers' Computer Use for General Purposes: Implications for Computer Training Courses

    Science.gov (United States)

    Zogheib, Salah

    2014-01-01

    As the majority of educational research has focused on preservice teachers' computer use for "educational purposes," the question remains: Do preservice teachers use computer technology for daily life activities and encounters? And do preservice teachers' personality traits and motivational beliefs related to computer training provided…

  5. A hyperpower iterative method for computing the generalized Drazin ...

    Indian Academy of Sciences (India)

    A quadratically convergent Newton-type iterative scheme is proposed for approximating the generalized Drazin inverse bd of the Banach algebra element b. Further, its extension into the form of the hyperpower iterative method of arbitrary order p ≥ 2 is presented. Convergence criteria along with the estimation of error ...
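The p = 2 member of the hyperpower family is the Newton-Schulz iteration X_{k+1} = X_k (2I - A X_k). As a minimal sketch (not the paper's Banach-algebra setting), the iteration is applied here to an invertible matrix, for which the generalized Drazin inverse reduces to the ordinary inverse; the starting matrix is chosen small enough that the iteration converges.

```python
# Hyperpower iteration, order p = 2 (Newton-Schulz): X <- X(2I - AX).
# Matrices are plain lists of lists; A and X0 are illustrative values.

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def matsub(A, B):
    return [[A[i][j] - B[i][j] for j in range(len(A[0]))]
            for i in range(len(A))]

def hyperpower_p2(A, X, steps=30):
    n = len(A)
    two_eye = [[2.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    for _ in range(steps):
        X = matmul(X, matsub(two_eye, matmul(A, X)))
    return X

A = [[2.0, 0.0], [0.0, 4.0]]
X0 = [[0.1, 0.0], [0.0, 0.1]]  # small scaled start so ||I - A X0|| < 1
Xinv = hyperpower_p2(A, X0)    # converges to inv(A) = [[0.5,0],[0,0.25]]
```

Higher-order members of the family replace (2I - AX) by the partial geometric series I + (I - AX) + ... + (I - AX)^(p-1), trading more multiplications per step for order-p convergence.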

  6. DEEP: a general computational framework for predicting enhancers

    KAUST Repository

    Kleftogiannis, Dimitrios A.

    2014-11-05

    Transcription regulation in multicellular eukaryotes is orchestrated by a number of DNA functional elements located at gene regulatory regions. Some regulatory regions (e.g. enhancers) are located far away from the gene they affect. Identification of distal regulatory elements is a challenge for the bioinformatics research. Although existing methodologies increased the number of computationally predicted enhancers, performance inconsistency of computational models across different cell-lines, class imbalance within the learning sets and ad hoc rules for selecting enhancer candidates for supervised learning, are some key questions that require further examination. In this study we developed DEEP, a novel ensemble prediction framework. DEEP integrates three components with diverse characteristics that streamline the analysis of enhancer's properties in a great variety of cellular conditions. In our method we train many individual classification models that we combine to classify DNA regions as enhancers or non-enhancers. DEEP uses features derived from histone modification marks or attributes coming from sequence characteristics. Experimental results indicate that DEEP performs better than four state-of-the-art methods on the ENCODE data. We report the first computational enhancer prediction results on FANTOM5 data where DEEP achieves 90.2% accuracy and 90% geometric mean (GM) of specificity and sensitivity across 36 different tissues. We further present results derived using in vivo-derived enhancer data from VISTA database. DEEP-VISTA, when tested on an independent test set, achieved GM of 80.1% and accuracy of 89.64%. DEEP framework is publicly available at http://cbrc.kaust.edu.sa/deep/.

  7. DEEP: a general computational framework for predicting enhancers

    KAUST Repository

    Kleftogiannis, Dimitrios A.; Kalnis, Panos; Bajic, Vladimir B.

    2014-01-01

    Transcription regulation in multicellular eukaryotes is orchestrated by a number of DNA functional elements located at gene regulatory regions. Some regulatory regions (e.g. enhancers) are located far away from the gene they affect. Identification of distal regulatory elements is a challenge for the bioinformatics research. Although existing methodologies increased the number of computationally predicted enhancers, performance inconsistency of computational models across different cell-lines, class imbalance within the learning sets and ad hoc rules for selecting enhancer candidates for supervised learning, are some key questions that require further examination. In this study we developed DEEP, a novel ensemble prediction framework. DEEP integrates three components with diverse characteristics that streamline the analysis of enhancer's properties in a great variety of cellular conditions. In our method we train many individual classification models that we combine to classify DNA regions as enhancers or non-enhancers. DEEP uses features derived from histone modification marks or attributes coming from sequence characteristics. Experimental results indicate that DEEP performs better than four state-of-the-art methods on the ENCODE data. We report the first computational enhancer prediction results on FANTOM5 data where DEEP achieves 90.2% accuracy and 90% geometric mean (GM) of specificity and sensitivity across 36 different tissues. We further present results derived using in vivo-derived enhancer data from VISTA database. DEEP-VISTA, when tested on an independent test set, achieved GM of 80.1% and accuracy of 89.64%. DEEP framework is publicly available at http://cbrc.kaust.edu.sa/deep/.

  8. GIANT: a computer code for General Interactive ANalysis of Trajectories

    International Nuclear Information System (INIS)

    Jaeger, J.; Lee, M.; Servranckx, R.; Shoaee, H.

    1985-04-01

    Many model-driven diagnostic and correction procedures have been developed at SLAC for the on-line computer controlled operation of SPEAR, PEP, the LINAC, and the Electron Damping Ring. In order to facilitate future applications and enhancements, these procedures are being collected into a single program, GIANT. The program allows interactive diagnosis as well as performance optimization of any beam transport line or circular machine. The test systems for GIANT are those of the SLC project. The organization of this program and some of the recent applications of the procedures will be described in this paper.

  9. Spectroscopic profiling and computational study of the binding of tschimgine: A natural monoterpene derivative, with calf thymus DNA

    Science.gov (United States)

    Khajeh, Masoumeh Ashrafi; Dehghan, Gholamreza; Dastmalchi, Siavoush; Shaghaghi, Masoomeh; Iranshahi, Mehrdad

    2018-03-01

    DNA is a major target for a number of anticancer substances. Interaction studies between small molecules and DNA are essential for rational drug design to influence main biological processes and also for introducing new probes for the assay of DNA. Tschimgine (TMG) is a monoterpene derivative with anticancer properties. In the present study we tried to elucidate the interaction of TMG with calf thymus DNA (CT-DNA) using different spectroscopic methods. UV-visible absorption spectrophotometry, fluorescence and circular dichroism (CD) spectroscopies as well as a molecular docking study revealed the formation of a complex between TMG and CT-DNA. The binding constant (Kb) between TMG and DNA was 2.27 × 10⁴ M⁻¹, which is comparable to that of groove binding agents. The fluorescence spectroscopic data revealed that the quenching of TMG fluorescence by CT-DNA proceeds through a static quenching mechanism. Thermodynamic parameters (ΔH, …) were determined for the binding of TMG with CT-DNA. Competitive binding assays with methylene blue (MB) and Hoechst 33258 using fluorescence spectroscopy showed that TMG possibly binds to the minor groove of CT-DNA. These observations were further confirmed by CD spectral analysis, viscosity measurements and molecular docking.
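Quenching constants of the kind reported in this record are typically extracted from a Stern-Volmer plot, F0/F = 1 + Ksv[Q]. The sketch below uses synthetic numbers chosen for illustration only; they are not the TMG/CT-DNA measurements:

```python
# Extracting a Stern-Volmer quenching constant from titration data:
#   F0 / F = 1 + Ksv * [Q]
# Synthetic data, for illustration only.
import numpy as np

Q = np.array([0.0, 1e-5, 2e-5, 3e-5, 4e-5])   # quencher concentration (M)
Ksv_true = 2.0e4                               # M^-1, assumed value
F = 100.0 / (1.0 + Ksv_true * Q)               # quenched fluorescence intensities

ratio = F[0] / F                               # F0 / F
Ksv, intercept = np.polyfit(Q, ratio, 1)       # slope = Ksv, intercept ~ 1
```

A linear plot with unit intercept is consistent with a single quenching mechanism; distinguishing static from dynamic quenching additionally requires temperature dependence or lifetime measurements, as done in the study.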

  10. GRG computer algebra system in gravitation and general relativity theory

    International Nuclear Information System (INIS)

    Zhitnikov, V.V.; Obukhova, I.G.

    1985-01-01

    The main concepts and capabilities of the GRG specialized computer algebra system, intended for performing calculations in gravitation theory, are described. The GRG system is written in the STANDARD LISP language. The program consists of two parts: the first for setting initial data, the second for specifying the sequence of calculations. The system can function in three formalisms: coordinate, tetradic (with a Lorentz basis), and spinor. The major capabilities of the GRG system are the following: calculation of the connection and curvature from a specified metric, tetrad and torsion; determination of the Petrov type of the metric; calculation of the Bianchi identities; operations with an electromagnetic field; tetradic rotations; and coordinate transformations.

  11. General-Purpose Computation with Neural Networks: A Survey of Complexity Theoretic Results

    Czech Academy of Sciences Publication Activity Database

    Šíma, Jiří; Orponen, P.

    2003-01-01

    Roč. 15, č. 12 (2003), s. 2727-2778 ISSN 0899-7667 R&D Projects: GA AV ČR IAB2030007; GA ČR GA201/02/1456 Institutional research plan: AV0Z1030915 Keywords: computational power * computational complexity * perceptrons * radial basis functions * spiking neurons * feedforward networks * recurrent networks * probabilistic computation * analog computation Subject RIV: BA - General Mathematics Impact factor: 2.747, year: 2003

  12. Computing Generalized Matrix Inverse on Spiking Neural Substrate

    Science.gov (United States)

    Shukla, Rohit; Khoram, Soroosh; Jorgensen, Erik; Li, Jing; Lipasti, Mikko; Wright, Stephen

    2018-01-01

    Emerging neural hardware substrates, such as IBM's TrueNorth Neurosynaptic System, can provide an appealing platform for deploying numerical algorithms. For example, a recurrent Hopfield neural network can be used to find the Moore-Penrose generalized inverse of a matrix, thus enabling a broad class of linear optimizations to be solved efficiently, at low energy cost. However, deploying numerical algorithms on hardware platforms that severely limit the range and precision of representation for numeric quantities can be quite challenging. This paper discusses these challenges and proposes a rigorous mathematical framework for reasoning about range and precision on such substrates. The paper derives techniques for normalizing inputs and properly quantizing synaptic weights originating from arbitrary systems of linear equations, so that solvers for those systems can be implemented in a provably correct manner on hardware-constrained neural substrates. The analytical model is empirically validated on the IBM TrueNorth platform, and results show that the guarantees provided by the framework for range and precision hold under experimental conditions. Experiments with optical flow demonstrate the energy benefits of deploying a reduced-precision and energy-efficient generalized matrix inverse engine on the IBM TrueNorth platform, reflecting 10× to 100× improvement over FPGA and ARM core baselines. PMID:29593483
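The paper's contribution is the range/precision analysis for TrueNorth's constrained arithmetic, which is not modeled here. As a hedged plain-floating-point sketch, the underlying recurrent (Hopfield-style) idea is a gradient flow whose fixed point is the Moore-Penrose inverse:

```python
# Hopfield-style gradient flow for the Moore-Penrose inverse:
# integrating dX/dt = -A^T (A X - I) minimizes ||A X - I||_F^2 and,
# for a full-column-rank A, converges to pinv(A).
import numpy as np

A = np.array([[2.0, 0.0], [1.0, 1.0], [0.0, 3.0]])   # full column rank
m, n = A.shape
X = np.zeros((n, m))
eta = 1.0 / np.linalg.norm(A.T @ A, 2)               # stable step size
for _ in range(5000):                                 # explicit Euler steps
    X -= eta * A.T @ (A @ X - np.eye(m))
```

On a spiking substrate, the same dynamics must be realized with quantized synaptic weights and bounded neuron state, which is exactly what the paper's framework reasons about.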

  13. Computing Generalized Matrix Inverse on Spiking Neural Substrate

    Directory of Open Access Journals (Sweden)

    Rohit Shukla

    2018-03-01

    Full Text Available Emerging neural hardware substrates, such as IBM's TrueNorth Neurosynaptic System, can provide an appealing platform for deploying numerical algorithms. For example, a recurrent Hopfield neural network can be used to find the Moore-Penrose generalized inverse of a matrix, thus enabling a broad class of linear optimizations to be solved efficiently, at low energy cost. However, deploying numerical algorithms on hardware platforms that severely limit the range and precision of representation for numeric quantities can be quite challenging. This paper discusses these challenges and proposes a rigorous mathematical framework for reasoning about range and precision on such substrates. The paper derives techniques for normalizing inputs and properly quantizing synaptic weights originating from arbitrary systems of linear equations, so that solvers for those systems can be implemented in a provably correct manner on hardware-constrained neural substrates. The analytical model is empirically validated on the IBM TrueNorth platform, and results show that the guarantees provided by the framework for range and precision hold under experimental conditions. Experiments with optical flow demonstrate the energy benefits of deploying a reduced-precision and energy-efficient generalized matrix inverse engine on the IBM TrueNorth platform, reflecting 10× to 100× improvement over FPGA and ARM core baselines.

  14. Generalized added masses computation for fluid structure interaction

    International Nuclear Information System (INIS)

    Lazzeri, L.; Cecconi, S.; Scala, M.

    1983-01-01

    The aim of this paper is to describe a method for simulating the dynamic effect of a fluid between two structures by means of an added mass and an added stiffness. The method is based on potential theory, which assumes the fluid is inviscid and incompressible (the case of compressibility is discussed); a solution of the corresponding field equation is given as a superposition of elementary solutions (i.e. solutions applicable to elementary boundary conditions). Consequently, the pressure and displacements of the fluid on the boundary are given as functions of the series coefficients; the "work lost" (i.e. the work done by the pressures on the difference between actual and estimated displacements) is minimized, and in this way the expansion coefficients are related to the displacements on the boundaries. Virtual work procedures are then used to compute the added masses. The particular case of a free surface (with gravity effects) is discussed, and it is shown how the effect can be modelled by means of an added stiffness term. Some examples relative to vibrations in reservoirs are given and discussed. (orig.)

  15. Generalized Portable SHMEM Library for High Performance Computing

    Energy Technology Data Exchange (ETDEWEB)

    Parzyszek, Krzysztof [Iowa State Univ., Ames, IA (United States)

    2003-01-01

    This dissertation describes the efforts to design and implement the Generalized Portable SHMEM library, GPSHMEM, as well as supplementary tools. There are two major components of the GPSHMEM project: the GPSHMEM library itself and the Fortran 77 source-to-source translator. The rest of this thesis is divided into two parts. Part I introduces the shared memory model and the distributed shared memory model. It explains the motivation behind GPSHMEM and presents its functionality and performance results. Part II is entirely devoted to the Fortran 77 translator, called fgpp. The need for such a tool is demonstrated, functionality goals are stated, and the design issues are presented along with the development of the solutions.

  16. Computation of Optimal Monotonicity Preserving General Linear Methods

    KAUST Repository

    Ketcheson, David I.

    2009-07-01

    Monotonicity preserving numerical methods for ordinary differential equations prevent the growth of propagated errors and preserve convex boundedness properties of the solution. We formulate the problem of finding optimal monotonicity preserving general linear methods for linear autonomous equations, and propose an efficient algorithm for its solution. This algorithm reliably finds optimal methods even among classes involving very high order accuracy and that use many steps and/or stages. The optimality of some recently proposed methods is verified, and many more efficient methods are found. We use similar algorithms to find optimal strong stability preserving linear multistep methods of both explicit and implicit type, including methods for hyperbolic PDEs that use downwind-biased operators.
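The paper searches for optimal monotonicity (strong stability) preserving methods; that optimization is not reproduced here. As a hedged illustration of what the property buys, the classic three-stage, third-order SSP Runge-Kutta method applied to upwind-discretized advection keeps the total variation of the solution from growing:

```python
# SSP(3,3) Runge-Kutta (Shu-Osher form) on u_t = -u_x with periodic
# upwind differencing: total variation must not increase.
import numpy as np

def upwind_rhs(u, dx):
    return -(u - np.roll(u, 1)) / dx          # first-order upwind, periodic

def ssprk3_step(u, dt, dx):
    u1 = u + dt * upwind_rhs(u, dx)
    u2 = 0.75 * u + 0.25 * (u1 + dt * upwind_rhs(u1, dx))
    return u / 3.0 + 2.0 / 3.0 * (u2 + dt * upwind_rhs(u2, dx))

def total_variation(u):
    return np.abs(np.diff(np.append(u, u[0]))).sum()

n = 100
x = np.linspace(0.0, 1.0, n, endpoint=False)
u = np.where((x > 0.3) & (x < 0.6), 1.0, 0.0)  # discontinuous step profile
dx = 1.0 / n
dt = 0.5 * dx                                   # within the SSP step-size limit
tv0 = total_variation(u)
for _ in range(200):
    u = ssprk3_step(u, dt, dx)
tv_final = total_variation(u)                   # <= tv0 by the SSP property
```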

  17. Self-assembled via axial coordination magnesium porphyrin-imidazole appended fullerene dyad: spectroscopic, electrochemical, computational, and photochemical studies.

    Science.gov (United States)

    D'Souza, Francis; El-Khouly, Mohamed E; Gadde, Suresh; McCarty, Amy L; Karr, Paul A; Zandler, Melvin E; Araki, Yasuyuki; Ito, Osamu

    2005-05-26

    Spectroscopic, redox, and electron transfer reactions of a self-assembled donor-acceptor dyad formed by axial coordination of magnesium meso-tetraphenylporphyrin (MgTPP) and fulleropyrrolidine appended with an imidazole coordinating ligand (C₆₀Im) were investigated. Spectroscopic studies revealed the formation of a 1:1 C₆₀Im:MgTPP supramolecular complex; the anticipated 1:2 complex could not be observed because of the large amounts of the axial coordinating ligand needed. The formation constant, K₁, for the 1:1 complex was found to be (1.5 ± 0.3) × 10⁴ M⁻¹, suggesting fairly stable complex formation. The geometric and electronic structures of the dyads were probed by ab initio B3LYP/3-21G(*) methods. The majority of the highest occupied frontier molecular orbital (HOMO) was found to be located on the MgTPP entity, while the lowest unoccupied molecular orbital (LUMO) was on the fullerene entity, suggesting that the charge-separated state of the supramolecular complex is C₆₀Im•−:MgTPP•+. Redox titrations involving MgTPP and C₆₀Im allowed accurate determination of the oxidation and reduction potentials of the donor and acceptor entities in the supramolecular complex. These studies revealed more difficult oxidation, by about 100 mV, for MgTPP in the pentacoordinated C₆₀Im:MgTPP complex compared to pristine MgTPP in o-dichlorobenzene. A total of six one-electron redox processes, corresponding to the oxidation and reduction of the magnesium porphyrin ring and the reduction of the fullerene entity, was observed within the accessible potential window of the solvent. The excited state events were monitored by both steady-state and time-resolved emission as well as transient absorption techniques. In o-dichlorobenzene, upon coordination of C₆₀Im to MgTPP, the main quenching pathway involved electron transfer from the singlet excited MgTPP to the C₆₀Im moiety. The rate of forward electron transfer, k(CS), was calculated from the picosecond time-resolved emission

  18. Spectroscopic and Computational Investigations of Ligand Binding to IspH: Discovery of Non-diphosphate Inhibitors

    Energy Technology Data Exchange (ETDEWEB)

    O' Dowd, Bing [Department of Chemistry, University of Illinois, 600 South Mathews Avenue Urbana IL 61801 USA; Williams, Sarah [Department of Chemistry and Biochemistry, University of California at San Diego, La Jolla CA 92093 USA; Wang, Hongxin [Department of Chemistry, University of California, 1 Shields Avenue Davis CA 95616 USA; Lawrence Berkeley National Laboratory, 1 Cyclotron Road Berkeley CA 94720 USA; No, Joo Hwan [Center for Biophysics and Computational Biology, Urbana, IL (United States); Rao, Guodong [Department of Chemistry, University of Illinois, 600 South Mathews Avenue Urbana IL 61801 USA; Wang, Weixue [Center for Biophysics and Computational Biology, Urbana, IL (United States); McCammon, J. Andrew [Department of Chemistry and Biochemistry, University of California at San Diego, La Jolla CA 92093 USA; Howard Hughes Medical Institute, University of California at San Diego, La Jolla CA 92093 USA; National Biomedical Computation Resource, University of California at San Diego, La Jolla CA 92093 USA; Cramer, Stephen P. [Department of Chemistry, University of California, 1 Shields Avenue Davis CA 95616 USA; Lawrence Berkeley National Laboratory, 1 Cyclotron Road Berkeley CA 94720 USA; Oldfield, Eric [Department of Chemistry, University of Illinois, 600 South Mathews Avenue Urbana IL 61801 USA

    2017-04-07

    Isoprenoid biosynthesis is an important area for anti-infective drug development. One isoprenoid target described is (E)-1-hydroxy-2-methyl-but-2-enyl 4-diphosphate (HMBPP) reductase (IspH), which forms isopentenyl diphosphate and dimethylallyl diphosphate from HMBPP in a 2H⁺/2e⁻ reduction. IspH contains a [4Fe-4S] cluster, and in this work, we first investigated how small molecules bind to the cluster by using HYSCORE and NRVS spectroscopies. The results of these, as well as other structural and spectroscopic investigations, led to the conclusion that, in most cases, ligands bind to IspH [4Fe-4S] clusters by η¹ coordination, forming tetrahedral geometries at the unique fourth Fe, the ligand side chains preventing further ligand (e.g., H₂O, O₂) binding. Based on these ideas, we used in silico methods to find drug-like inhibitors that might occupy the HMBPP substrate binding pocket and bind to Fe, leading to the discovery of a barbituric acid analogue with a Ki value of ≈500 nM against Pseudomonas aeruginosa IspH.

  19. General design methodology applied to the research domain of physical programming for computer illiterate

    CSIR Research Space (South Africa)

    Smith, Andrew C

    2011-09-01

    Full Text Available The authors discuss the application of the 'general design methodology' in the context of a physical computing project. The aim of the project was to design and develop physical objects that could serve as metaphors for computer programming elements...

  20. Correlated ab initio calculations of spectroscopic parameters of SnO within the framework of the higher-order generalized Douglas-Kroll transformation.

    Science.gov (United States)

    Wolf, Alexander; Reiher, Markus; Hess, Bernd Artur

    2004-05-08

    The first molecular calculations with the generalized Douglas-Kroll method up to fifth order in the external potential (DKH5) are presented. We study the spectroscopic parameters and electron affinity of the tin oxide molecule SnO and its anion SnO⁻, applying nonrelativistic as well as relativistic calculations with higher orders of the DK approximation. In order to guarantee highly accurate results close to the basis set limit, an all-electron basis for Sn of at least quintuple-zeta quality has been constructed and optimized. All-electron CCSD(T) calculations of the potential energy curves of both SnO and SnO⁻ reproduce the experimental values very well. Relative energies and valence properties are already well described with the established standard second-order approximation DKH2, and the higher-order corrections DKH3-DKH5 hardly affect these quantities. However, an accurate description of total energies and inner-shell properties requires superior relativistic schemes up to DKH5. (c) 2004 American Institute of Physics.

  1. Molecular interaction of 2,4-diacetylphloroglucinol (DAPG) with human serum albumin (HSA): The spectroscopic, calorimetric and computational investigation

    Science.gov (United States)

    Pragna Lakshmi, T.; Mondal, Moumita; Ramadas, Krishna; Natarajan, Sakthivel

    2017-08-01

    Drug molecule interaction with human serum albumin (HSA) affects the distribution and elimination of the drug. The compound 2,4-diacetylphloroglucinol (DAPG) is known for its antimicrobial, antiviral, antihelminthic and anticancer properties; however, its interaction with HSA has not yet been reported. In this study, the interaction between HSA and DAPG was investigated through steady-state fluorescence, time-resolved fluorescence (TRF), circular dichroism (CD), Fourier transform infrared (FT-IR) spectroscopy, isothermal titration calorimetry (ITC), molecular docking and molecular dynamics simulation (MDS). Fluorescence spectroscopy results showed strong quenching of the intrinsic fluorescence of HSA due to interaction with DAPG, through a dynamic quenching mechanism. The compound bound to HSA with reversible and moderate affinity, which explains its easy diffusion from the circulatory system to target tissue. The thermodynamic parameters from the fluorescence spectroscopic data clearly revealed the contribution of hydrophobic forces, but the role of hydrogen bonds was not negligible according to the ITC studies. The interaction was exothermic and spontaneous in nature. Binding with DAPG reduced the helical content of the protein, suggesting the unfolding of HSA. Site marker fluorescence experiments revealed a change in the binding constant of DAPG in the presence of the site I marker (warfarin) but not the site II marker (ibuprofen), which confirmed that DAPG binds to site I. ITC experiments also supported this, as the site I marker could not bind to the HSA-DAPG complex while the site II marker was accommodated in the complex. In silico studies further showed the lowest binding energy and greater stability of DAPG in site I compared to site II. Thus the data presented in this study confirm the binding of DAPG to site I of HSA, which may help in further understanding the pharmacokinetic properties of DAPG.

  2. Computational Magnetohydrodynamics of General Materials in Generalized Coordinates and Applications to Laser-Target Interactions

    Science.gov (United States)

    MacGillivray, Jeff T.; Peterkin, Robert E., Jr.

    2003-10-01

    We have developed a multiblock arbitrary coordinate Hydromagnetics (MACH) code for computing the time-evolution of materials of arbitrary phase (solid, liquid, gas, and plasma) in response to forces that arise from material and magnetic pressures. MACH is a single-fluid, time-dependent, arbitrary Lagrangian-Eulerian (ALE) magnetohydrodynamic (MHD) simulation environment. The 2 1/2 -dimensional MACH2 and the parallel 3-D MACH3 are widely used in the MHD community to perform accurate simulation of the time evolution of electrically conducting materials in a wide variety of laboratory situations. In this presentation, we discuss simulations of the interaction of an intense laser beam with a solid target in an ambient gas. Of particular interest to us is a laser-supported detonation wave (blast wave) that originates near the surface of the target when the laser intensity is sufficiently large to vaporize target material within the focal spot of the beam. Because the MACH3 simulations are fully three-dimensional, we are able to simulate non-normal laser incidence. A magnetic field is also produced from plasma energy near the edge of the focal spot.

  3. A Computer-Based Laboratory Project for the Study of Stimulus Generalization and Peak Shift

    Science.gov (United States)

    Derenne, Adam; Loshek, Eevett

    2009-01-01

    This paper describes materials designed for classroom projects on stimulus generalization and peak shift. A computer program (originally written in QuickBASIC) is used for data collection and a Microsoft Excel file with macros organizes the raw data on a spreadsheet and creates generalization gradients. The program is designed for use with human…

  4. Cluster-state quantum computing enhanced by high-fidelity generalized measurements.

    Science.gov (United States)

    Biggerstaff, D N; Kaltenbaek, R; Hamel, D R; Weihs, G; Rudolph, T; Resch, K J

    2009-12-11

    We introduce and implement a technique to extend the quantum computational power of cluster states by replacing some projective measurements with generalized quantum measurements (POVMs). As an experimental demonstration we fully realize an arbitrary three-qubit cluster computation by implementing a tunable linear-optical POVM, as well as fast active feedforward, on a two-qubit photonic cluster state. Over 206 different computations, the average output fidelity is 0.9832 ± 0.0002; furthermore the error contribution from our POVM device and feedforward is only of O(10⁻³), less than some recent thresholds for fault-tolerant cluster computing.
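The experiment's linear-optical device and feedforward are not modeled here; as a hedged numerical illustration of what "generalized measurement" means, the sketch below builds a standard three-outcome qubit POVM (the symmetric "trine", a textbook example, not necessarily the POVM used in the paper) and checks its defining properties:

```python
# A three-outcome qubit POVM: E_k = (2/3)|psi_k><psi_k| with |psi_k>
# spaced 120 degrees apart on the Bloch sphere equator's real slice.
# The elements are positive and sum to the identity, even though the
# outcomes are not mutually orthogonal (impossible for a projective
# measurement with 3 outcomes on a qubit).
import numpy as np

kets = [np.array([np.cos(k * np.pi / 3), np.sin(k * np.pi / 3)])
        for k in range(3)]
E = [(2.0 / 3.0) * np.outer(v, v) for v in kets]

completeness = sum(E)                       # should equal the 2x2 identity
rho = np.array([[1.0, 0.0], [0.0, 0.0]])    # input state |0><0|
probs = [np.trace(Ek @ rho).real for Ek in E]   # Born-rule outcome probabilities
```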

  5. Modulation of Donor-Acceptor Distance in a Series of Carbazole Push-Pull Dyes; A Spectroscopic and Computational Study

    Directory of Open Access Journals (Sweden)

    Joshua J. Sutton

    2018-02-01

    Full Text Available A series of eight carbazole-cyanoacrylate based donor-acceptor dyes was studied. Within the series, the influence of modifying the thiophene bridge linking donor and acceptor, and of changing the nature of the acceptor from acid to ester, was explored. In this joint experimental and computational study we have used electronic absorbance and emission spectroscopies, Raman spectroscopy and computational modeling (density functional theory). From these studies it was found that extending the bridge length allowed the lowest energy transition to be systematically red shifted by 0.12 eV, allowing for limited tuning of the absorption of dyes using this structural motif. Using the aforementioned techniques we demonstrate that this transition is charge transfer in nature. Furthermore, the extent of charge transfer between donor and acceptor decreases with increasing bridge length, and the bridge plays a smaller role in electronically mixing with the acceptor as it is extended.

  6. Comparison of progressive addition lenses for general purpose and for computer vision: an office field study.

    Science.gov (United States)

    Jaschinski, Wolfgang; König, Mirjam; Mekontso, Tiofil M; Ohlendorf, Arne; Welscher, Monique

    2015-05-01

    Two types of progressive addition lenses (PALs) were compared in an office field study: 1. General purpose PALs with continuous clear vision between infinity and near reading distances and 2. Computer vision PALs with a wider zone of clear vision at the monitor and in near vision but no clear distance vision. Twenty-three presbyopic participants wore each type of lens for two weeks in a double-masked four-week quasi-experimental procedure that included an adaptation phase (Weeks 1 and 2) and a test phase (Weeks 3 and 4). Questionnaires on visual and musculoskeletal conditions as well as preferences regarding the type of lenses were administered. After eight more weeks of free use of the spectacles, the preferences were assessed again. The ergonomic conditions were analysed from photographs. Head inclination when looking at the monitor was significantly lower by 2.3 degrees with the computer vision PALs than with the general purpose PALs. Vision at the monitor was judged significantly better with computer PALs, while distance vision was judged better with general purpose PALs; however, the reported advantage of computer vision PALs differed in extent between participants. Accordingly, 61 per cent of the participants preferred the computer vision PALs, when asked without information about lens design. After full information about lens characteristics and additional eight weeks of free spectacle use, 44 per cent preferred the computer vision PALs. On average, computer vision PALs were rated significantly better with respect to vision at the monitor during the experimental part of the study. In the final forced-choice ratings, approximately half of the participants preferred either the computer vision PAL or the general purpose PAL. Individual factors seem to play a role in this preference and in the rated advantage of computer vision PALs. © 2015 The Authors. Clinical and Experimental Optometry © 2015 Optometry Australia.

  7. Hanford general employee training: Computer-based training instructor's manual

    Energy Technology Data Exchange (ETDEWEB)

    1990-10-01

    The Computer-Based Training portion of the Hanford General Employee Training course is designed to be used in a classroom setting with a live instructor. Future references to "this course" refer only to the computer-based portion of the whole. This course covers the basic Safety, Security, and Quality issues that pertain to all employees of Westinghouse Hanford Company. The topics that are covered were taken from the recommendations and requirements for General Employee Training as set forth by the Institute of Nuclear Power Operations (INPO) in INPO 87-004, Guidelines for General Employee Training, applicable US Department of Energy orders, and Westinghouse Hanford Company procedures and policy. Besides presenting fundamental concepts, this course also contains information on resources that are available to assist students. It does this using Interactive Videodisk technology, which combines computer-generated text and graphics with audio and video provided by a videodisk player.

  8. Exploring gender differences on general and specific computer self-efficacy in mobile learning adoption

    OpenAIRE

    Bao, Yukun; Xiong, Tao; Hu, Zhongyi; Kibelloh, Mboni

    2014-01-01

    Reasons for contradictory findings regarding the moderating effect of gender on computer self-efficacy in the adoption of e-learning/mobile learning are limited. Recognizing the multilevel nature of computer self-efficacy (CSE), this study attempts to explore gender differences in the adoption of mobile learning by extending the Technology Acceptance Model (TAM) with general and specific CSE. Data collected from 137 university students were tested against the research model using the structur...

  9. Some Matrix Iterations for Computing Generalized Inverses and Balancing Chemical Equations

    OpenAIRE

    Soleimani, Farahnaz; Stanimirović, Predrag; Soleymani, Fazlollah

    2015-01-01

    An application of iterative methods for computing the Moore–Penrose inverse in balancing chemical equations is considered. With the aim of illustrating the proposed algorithms, an improved high-order hyper-power matrix iterative method for computing generalized inverses is introduced and applied. The improvements of the hyper-power iterative scheme are based on its proper factorization, as well as on the possibility of accelerating the iterations in the initial phase of the convergence. Although the ...
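The connection between the Moore-Penrose inverse and equation balancing can be sketched on a textbook reaction (this illustrates the general idea only, not the paper's specific iterations): the balanced coefficients span the null space of the element-by-species matrix, which the pseudoinverse exposes through the projector I − A⁺A.

```python
# Balancing CH4 + O2 -> CO2 + H2O via the Moore-Penrose inverse.
# Rows of A count one element (C, H, O) per species; reactants get
# positive signs, products negative, so a balanced reaction satisfies
# A v = 0.  The projector I - pinv(A) A maps any vector into null(A).
import numpy as np

#            CH4  O2  CO2  H2O
A = np.array([[1,  0, -1,   0],    # carbon
              [4,  0,  0,  -2],    # hydrogen
              [0,  2, -2,  -1]],   # oxygen
             dtype=float)

P = np.eye(4) - np.linalg.pinv(A) @ A   # projector onto null(A)
v = P @ np.ones(4)                      # any vector with a null-space part
coeffs = v / v.min()                    # scale to smallest coefficient = 1
```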

  10. Differential Control of Heme Reactivity in Alpha and Beta Subunits of Hemoglobin: A Combined Raman Spectroscopic and Computational Study

    Science.gov (United States)

    2015-01-01

    The use of hybrid hemoglobin (Hb), with mesoheme substituted for protoheme, allows separate monitoring of the α or β hemes along the allosteric pathway. Using resonance Raman (rR) spectroscopy in silica gel, which greatly slows protein motions, we have observed that the Fe–histidine stretching frequency, νFeHis, which is a monitor of heme reactivity, evolves between frequencies characteristic of the R and T states, for both α and β chains, prior to the quaternary R–T and T–R shifts. Computation of νFeHis, using QM/MM and the conformational search program PELE, produced remarkable agreement with experiment. Analysis of the PELE structures showed that the νFeHis shifts resulted from heme distortion and, in the α chain, Fe–His bond tilting. These results support the tertiary two-state model of ligand binding (Henry et al., Biophys. Chem. 2002, 98, 149). Experimentally, the νFeHis evolution is faster for β than for α chains, and pump–probe rR spectroscopy in solution reveals an inflection in the νFeHis time course at 3 μs for β but not for α hemes, an interval previously shown to be the first step in the R–T transition. In the α chain νFeHis dropped sharply at 20 μs, the final step in the R–T transition. The time courses are fully consistent with recent computational mapping of the R–T transition via conjugate peak refinement by Karplus and co-workers (Fischer et al., Proc. Natl. Acad. Sci. U. S. A. 2011, 108, 5608). The effector molecule IHP was found to lower νFeHis selectively for α chains within the R state, and a binding site in the α1α2 cleft is suggested. PMID:24991732

  11. STICK: Spike Time Interval Computational Kernel, a Framework for General Purpose Computation Using Neurons, Precise Timing, Delays, and Synchrony.

    Science.gov (United States)

    Lagorce, Xavier; Benosman, Ryad

    2015-11-01

    There has been significant research over the past two decades in developing new platforms for spiking neural computation. Current neural computers are primarily developed to mimic biology. They use neural networks, which can be trained to perform specific tasks to mainly solve pattern recognition problems. These machines can do more than simulate biology; they allow us to rethink our current paradigm of computation. The ultimate goal is to develop brain-inspired general purpose computation architectures that can breach the current bottleneck introduced by the von Neumann architecture. This work proposes a new framework for such a machine. We show that the use of neuron-like units with precise timing representation, synaptic diversity, and temporal delays allows us to set a complete, scalable compact computation framework. The framework provides both linear and nonlinear operations, allowing us to represent and solve any function. We show usability in solving real use cases from simple differential equations to sets of nonlinear differential equations leading to chaotic attractors.
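    The core idea, representing values in the precise timing between spikes rather than in firing rates, can be illustrated with a toy interval code. The sketch below is not the authors' STICK kernel; the constants and the negation rule are hypothetical, chosen only to show how an operation on a value can be carried out purely by re-timing spikes.

```python
# Toy interval code (not the authors' STICK kernel): a value x in [0, 1]
# is represented by the time between two spikes. T_MIN and T_COD are
# hypothetical constants chosen for illustration.
T_MIN = 10.0   # ms, interval that encodes x = 0
T_COD = 100.0  # ms, additional interval per unit of x

def encode(x):
    """Return a spike pair (t0, t1) whose interval encodes x."""
    assert 0.0 <= x <= 1.0
    return (0.0, T_MIN + x * T_COD)

def decode(spikes):
    """Recover x from an interval-coded spike pair."""
    t0, t1 = spikes
    return ((t1 - t0) - T_MIN) / T_COD

def negate(spikes):
    """Compute 1 - x by re-timing the second spike, without ever
    converting the interval back into a number."""
    t0, t1 = spikes
    interval = t1 - t0
    return (t0, t0 + T_MIN + T_COD - (interval - T_MIN))
```

    Chaining such re-timing rules, with synaptic delays playing the role of the constants, is what allows a purely spike-based machine to compose elementary operations into arbitrary functions.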

  12. Computer-based, Jeopardy™-like game in general chemistry for engineering majors

    Science.gov (United States)

    Ling, S. S.; Saffre, F.; Kadadha, M.; Gater, D. L.; Isakovic, A. F.

    2013-03-01

    We report on the design of a Jeopardy™-like computer game for enhancing the learning of general chemistry by engineering majors. While we examine several parameters of student achievement and attitude, our primary concern is the motivation of students, which tends to be low in traditionally run chemistry lectures. The effect of game-playing is tested by comparing a paper-based game quiz, which constitutes the control group, with a computer-based game quiz, constituting the treatment group. Computer-based game quizzes are Java™-based applications that students run once a week in the second part of the last lecture of the week. Overall effectiveness of the semester-long program is measured through pretest-posttest conceptual testing of general chemistry. The objective of this research is to determine to what extent this ``gamification'' of the course delivery and course evaluation processes may be beneficial to the undergraduates' learning of science in general, and chemistry in particular. We present data addressing gender-specific differences in performance, as well as background (pre-college) level of general science and chemistry preparation. We outline a plan to extend this approach to general physics courses and to modern-science-driven electives, and we offer live, in-lecture examples of our computer gaming experience. We acknowledge support from Khalifa University, Abu Dhabi.

  13. Spectroscopic data

    CERN Document Server

    Melzer, J

    1976-01-01

    During the preparation of this compilation, many people contributed; the compilers wish to thank all of them. In particular they appreciate the efforts of V. Gilbertson, the manuscript typist, and those of K. C. Bregand, J. A. Kiley, and W. H. McPherson, who gave editorial assistance. They would like to thank Dr. J. R. Schwartz for his cooperation and encouragement. In addition, they extend their gratitude to Dr. L. Wilson of the Air Force Weapons Laboratory, who gave the initial impetus to this project. (The record continues with the book's table of contents: I. Introduction; II. Organization of the Spectroscopic Table; Methods of Production and Experimental Technique; Band Systems; ...)

  14. Computational and variable-temperature infrared spectroscopic studies on carbon monoxide adsorption on zeolite Ca-A.

    Science.gov (United States)

    Pulido, Angeles; Nachtigall, Petr; Rodríguez Delgado, Montserrat; Otero Areán, Carlos

    2009-05-11

    Carbon monoxide adsorption on LTA (Linde type 5A) zeolite Ca-A is studied by using a combination of variable-temperature infrared spectroscopy and computational methods involving periodic density functional calculations and the correlation between stretching frequency and bond length of adsorbed CO species (nu(CO)/r(CO) correlation). Based on the agreement between calculated and experimental results, the main adsorption species can be identified as bridged Ca(2+)...CO...Ca(2+) complexes formed on dual-cation sites constituted by a pair of nearby Ca(2+) cations. Two types of such species can be formed: One of them has the two Ca(2+) ions located on six-membered rings of the zeolite framework and is characterized by a C-O stretching frequency in the range of 2174-2179 cm(-1) and an adsorption enthalpy of -31 to -33 kJ mol(-1), whereas the other bridged CO species is formed between a Ca(2+) ion located on an eight-membered ring and another one on a nearby six-membered ring and is characterized by nu(CO) in the range 2183-2188 cm(-1) and an adsorption enthalpy of -46 to -50 kJ mol(-1). Ca(2+)...CO monocarbonyl complexes are also identified, and at a relatively high CO equilibrium pressure, dicarbonyl species can also be formed.

  15. Diagnostic spectroscopic and computer-aided evaluation of malignancy from UV/VIS spectra of clear pleural effusions

    Science.gov (United States)

    Jevtić, Dubravka R.; Avramov Ivić, Milka L.; Reljin, Irini S.; Reljin, Branimir D.; Plavec, Goran I.; Petrović, Slobodan D.; Mijin, Dušan Ž.

    2014-06-01

    An automated, computer-aided method for differentiation and classification of malignant (M) from benign (B) cases by analyzing the UV/VIS spectra of pleural effusions is described. It was shown that with two independent objective features, the maximum of the Katz fractal dimension (KFDmax) and the area under the normalized UV/VIS absorbance curve (Area), highly reliable M-B classification is possible. In the Area-KFDmax space, M and B samples are linearly separable, permitting the use of a linear support vector machine as a classification tool. By analyzing 104 samples of UV/VIS spectra of pleural effusions (88 M and 16 B) collected from patients at the Clinic for Lung Diseases and Tuberculosis, Military Medical Academy in Belgrade, accuracies of 95.45% for M cases and 100% for B cases were obtained using the proposed method. It was shown that by applying some modifications, which are suggested in the paper, an accuracy of 100% for M cases can be reached.
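    For readers unfamiliar with the two features, a minimal sketch of their computation on a sampled absorbance curve follows. This uses a standard textbook form of the Katz fractal dimension and a plain normalized-area estimate, not necessarily the exact preprocessing used in the study.

```python
import math

def katz_fd(signal):
    """Katz fractal dimension of a 1-D waveform (a standard formulation,
    not necessarily the authors' exact implementation)."""
    n = len(signal) - 1                      # number of steps
    # total curve length, with unit spacing between samples
    L = sum(math.hypot(1.0, signal[i + 1] - signal[i]) for i in range(n))
    # planar extent: maximum distance from the first sample
    d = max(math.hypot(i, signal[i] - signal[0]) for i in range(1, n + 1))
    return math.log10(n) / (math.log10(n) + math.log10(d / L))

def normalized_area(signal):
    """Mean of the peak-normalized curve, a simple area-under-curve feature."""
    peak = max(signal)
    return sum(s / peak for s in signal) / len(signal)
```

    For a straight-line signal the Katz dimension is exactly 1, and it grows as the waveform becomes more irregular, which is what makes KFDmax a useful shape feature for the spectra.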

  16. The Panchromatic High-Resolution Spectroscopic Survey of Local Group Star Clusters. I. General data reduction procedures for the VLT/X-shooter UVB and VIS arm

    NARCIS (Netherlands)

    Schönebeck, Frederik; Puzia, Thomas H.; Pasquali, Anna; Grebel, Eva K.; Kissler-Patig, Markus; Kuntschner, Harald; Lyubenova, Mariya; Perina, Sibilla

    2014-01-01

    Aims: Our dataset contains spectroscopic observations of 29 globular clusters in the Magellanic Clouds and the Milky Way performed with VLT/X-shooter over eight full nights. To derive robust results instrument and pipeline systematics have to be well understood and properly modeled. We aim at a consistent data reduction procedure with an accurate understanding of the measurement accuracy limitations...

  17. Proposal of computation chart for general use for diffusion prediction of discharged warm water

    International Nuclear Information System (INIS)

    Wada, Akira; Kadoyu, Masatake

    1976-01-01

    The authors have developed a unique simulation analysis method using numerical models for the prediction of discharged warm water diffusion. At the present stage, the method is adopted for precise analysis in order to predict the diffusion of discharged warm water at each survey point, but there is a strong demand for simpler and easier prediction methods. To meet this demand, this report presents a computation chart for general use that simply predicts the diffusion range of discharged warm water, obtained by classifying the semi-infinite sea region into several flow patterns according to sea conditions and conducting a systematic simulation analysis with a numerical model of each pattern. (1) Establishment of the computation conditions: a semi-infinite sea region facing the open sea along a rectilinear coastline was selected from the many sea regions surrounding Japan as the area to be investigated, and from the viewpoint of flow and diffusion characteristics the region was classified into three patterns. 51 cases in total, with various parameters, were defined, and the simulation analysis was performed. (2) Drawing up the general-use chart: 28 sheets of the computation chart for general use were drawn, which are available for computing the approximate temperature rise caused by discharged warm water diffusion. The example of Anegasaki Thermal Power Station is given. (Kako, I.)

  18. TH-AB-209-05: Validating Hemoglobin Saturation and Dissolved Oxygen in Tumors Using Photoacoustic Computed Tomographic Spectroscopic Imaging

    Energy Technology Data Exchange (ETDEWEB)

    Burnett, J; Sick, J; Liu, B [Purdue University, West Lafayette, IN (United States); Cao, N [University of Washington Medical Center, Seattle, WA (United States); Nakshatri, H; Mendonca, M [Indiana University - Purdue University Indianapolis, Indianapolis, IN (United States); Stantz, K [Purdue University, West Lafayette, IN (United States); Indiana University - Purdue University Indianapolis, Indianapolis, IN (United States)

    2016-06-15

    Purpose: Photoacoustic computed tomographic spectroscopy (PCT-S) provides intra-tumor measurements of oxygenation with high spatial resolution (0.2mm) and temporal fidelity (1–2 minutes) without the need for exogenous agents or ionizing radiation, thus providing a unique in vivo assay to measure SaO{sub 2} and investigate acute and chronic forms of hypoxia. The goal of this study is to validate in vivo SaO{sub 2} levels within the tail artery of mice and the relationship between SaO{sub 2} and pO{sub 2} within subcutaneous breast tumors using PCT-S imaging, pulse oximetry and an OxyLite probe. Methods: A closed circuit phantom was fabricated to control blood oxygenation levels, where SaO{sub 2} was measured using a co-oximeter and pO{sub 2} using an OxyLite probe. Next, SaO{sub 2} levels within the tail arteries of mice (n=3) were measured using PCT-S and pulse oximetry while breathing high-to-low oxygen levels (6-cycles). Finally, PCT-S was used to measure SaO{sub 2} levels in MCF-7, MCF-7-VEGF165, and MDA-MB-231 xenograft breast tumors and compared to OxyLite pO{sub 2} values. Results: SaO{sub 2} and pO{sub 2} data obtained from the calibration phantom were fit to Hill’s equation: SaO{sub 2} levels between 88 and 52% demonstrated a linear relationship (r{sup 2}=0.96) and a 3.2% uncertainty between PCT-S values relative to pulse oximetry. Scatter plots of localized PCT-S measured SaO{sub 2} and OxyLite pO{sub 2} levels in MCF-7/MCF-7-VEGF165 and MDA-MB-231 breast tumors were fit to Hill’s equation: P{sub 50}=17.2 and 20.7mmHg, and n=1.76 and 1.63. These results are consistent with the sigmoidal form of Hill’s equation, where the lower P{sub 50} value is indicative of an acidic tumor microenvironment. Conclusion: The results demonstrate photoacoustic imaging can be used to measure SaO{sub 2} cycling and intra-tumor oxygenation, and provides a powerful in vivo assay to investigate the role of hypoxia in radiation, anti-angiogenic, and immunotherapies.
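    The calibration step, fitting measured saturation against pO2 with Hill's equation S = p^n / (P50^n + p^n), can be sketched by log-linearization: log(S/(1-S)) = n log p - n log P50. The data below are synthetic, generated from the P50 and n values reported for the MDA-MB-231 tumors purely to exercise the fit; this is not the study's data or fitting code.

```python
import numpy as np

def fit_hill(p, s):
    """Fit S = p**n / (P50**n + p**n) by log-linearization:
    log(S/(1-S)) = n*log(p) - n*log(P50)."""
    y = np.log(s / (1.0 - s))
    slope, intercept = np.polyfit(np.log(p), y, 1)
    return np.exp(-intercept / slope), slope      # (P50, Hill coefficient n)

# synthetic calibration data (hypothetical, built from the reported
# MDA-MB-231 parameters only to illustrate the procedure)
true_p50, true_n = 20.7, 1.63
p = np.linspace(5.0, 80.0, 25)                    # pO2 grid, mmHg
s = p**true_n / (true_p50**true_n + p**true_n)    # SaO2 as a fraction
p50, n = fit_hill(p, s)
```

    On noiseless data the linearized fit recovers the generating parameters essentially exactly; with real measurements, a nonlinear least-squares fit of the sigmoid itself is the more robust choice.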

  19. Report on the operation and utilization of general purpose use computer system 2001

    Energy Technology Data Exchange (ETDEWEB)

    Watanabe, Kunihiko; Watanabe, Reiko; Tsugawa, Kazuko; Tsuda, Kenzo; Yamamoto, Takashi; Nakamura, Osamu; Kamimura, Tetsuo [National Inst. for Fusion Science, Toki, Gifu (Japan)

    2001-09-01

    The General Purpose Use Computer System of the National Institute for Fusion Science was replaced in January, 2001. The System has been almost fully used since the first three months of operation. Reported here are the process of the introduction of the new system and the state of the operation and utilization of the System between January and March, 2001, especially the detailed utilization in March. (author)

  20. A Computer Algebra Approach to Solving Chemical Equilibria in General Chemistry

    Science.gov (United States)

    Kalainoff, Melinda; Lachance, Russ; Riegner, Dawn; Biaglow, Andrew

    2012-01-01

    In this article, we report on a semester-long study of the incorporation, into our general chemistry course, of advanced algebraic and computer algebra techniques for solving chemical equilibrium problems. The method presented here is an alternative to the commonly used concentration table method for describing chemical equilibria in general…
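    As an illustration of the computer algebra approach (using SymPy here; the article does not prescribe a particular CAS), a weak-acid equilibrium can be solved exactly instead of via the concentration (ICE) table with its usual "x is small" approximation. The Ka and concentration values are generic textbook numbers, not taken from the article.

```python
import sympy as sp

# Solve Ka = x**2 / (C - x) exactly for the equilibrium [H+] = x.
# Values are generic (acetic-acid-like), chosen only for illustration.
x = sp.symbols('x', positive=True)
Ka = sp.Rational(18, 1000000)     # Ka = 1.8e-5
C = sp.Rational(1, 10)            # C = 0.1 M

roots = sp.solve(sp.Eq(Ka, x**2 / (C - x)), x)
h_conc = max(float(r) for r in roots)   # the physically meaningful root
```

    The same call scales to coupled equilibria with several reactions and unknowns, which is where the computer algebra approach pays off over manual concentration-table manipulation.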

  1. 77 FR 13388 - Treasury Inspector General for Tax Administration; Privacy Act of 1974: Computer Matching Program

    Science.gov (United States)

    2012-03-06

    ... DEPARTMENT OF THE TREASURY Treasury Inspector General for Tax Administration; Privacy Act of 1974...: Notice. SUMMARY: Pursuant to 5 U.S.C. 552a, the Privacy Act of 1974, as amended, notice is hereby given... Administration. Beginning and Completion Dates: This program of computer matches is expected to commence on March...

  2. Regional disaster impact analysis: comparing Input-Output and Computable General Equilibrium models

    NARCIS (Netherlands)

    Koks, E.E.; Carrera, L.; Jonkeren, O.; Aerts, J.C.J.H.; Husby, T.G.; Thissen, M.; Standardi, G.; Mysiak, J.

    2016-01-01

    A variety of models have been applied to assess the economic losses of disasters, of which the most common ones are input-output (IO) and computable general equilibrium (CGE) models. In addition, an increasing number of scholars have developed hybrid approaches: one that combines both or either of

  3. Performing an Environmental Tax Reform in a regional Economy. A Computable General Equilibrium

    NARCIS (Netherlands)

    Andre, F.J.; Cardenete, M.A.; Velazquez, E.

    2003-01-01

    We use a Computable General Equilibrium model to simulate the effects of an Environmental Tax Reform in a regional economy (Andalusia, Spain).The reform involves imposing a tax on CO2 or SO2 emissions and reducing either the Income Tax or the payroll tax of employers to Social Security, and

  4. Implementation of generalized measurements with minimal disturbance on a quantum computer

    International Nuclear Information System (INIS)

    Decker, T.; Grassl, M.

    2006-01-01

    We consider the problem of efficiently implementing a generalized measurement on a quantum computer. Using methods from representation theory, we exploit symmetries of the states we want to identify or, respectively, of the measurement operators. In order to allow the information to be extracted sequentially, the disturbance of the quantum state due to the measurement should be minimal. (Abstract Copyright [2006], Wiley Periodicals, Inc.)

  5. Computer-assisted semen analysis parameters as predictors for fertility of men from the general population

    DEFF Research Database (Denmark)

    Larsen, L; Scheike, Thomas Harder; Jensen, Tina Kold

    2000-01-01

    The predictive value of sperm motility parameters obtained by computer-assisted semen analysis (CASA) was evaluated for the fertility of men from the general population. In a prospective study of couples stopping use of contraception in order to try to conceive, CASA was performed on semen samples...

  6. SACRD: a data base for fast reactor safety computer codes, general description

    International Nuclear Information System (INIS)

    Greene, N.M.; Forsberg, V.M.; Raiford, G.B.; Arwood, J.W.; Simpson, D.B.; Flanagan, G.F.

    1979-01-01

    SACRD is a data base of material properties and other handbook data needed in computer codes used for fast reactor safety studies. Data are available in the thermodynamics, heat transfer, fluid mechanics, structural mechanics, aerosol transport, meteorology, neutronics, and dosimetry areas. Tabular, graphical and parameterized data are provided in many cases. A general description of the SACRD system is presented in the report

  7. The Panchromatic High-Resolution Spectroscopic Survey of Local Group Star Clusters. I. General data reduction procedures for the VLT/X-shooter UVB and VIS arm

    Science.gov (United States)

    Schönebeck, Frederik; Puzia, Thomas H.; Pasquali, Anna; Grebel, Eva K.; Kissler-Patig, Markus; Kuntschner, Harald; Lyubenova, Mariya; Perina, Sibilla

    2014-12-01

    Aims: Our dataset contains spectroscopic observations of 29 globular clusters in the Magellanic Clouds and the Milky Way performed with VLT/X-shooter over eight full nights. To derive robust results instrument and pipeline systematics have to be well understood and properly modeled. We aim at a consistent data reduction procedure with an accurate understanding of the measurement accuracy limitations. Here we present detailed data reduction procedures for the VLT/X-shooter UVB and VIS arm. These are not restricted to our particular dataset, but are generally applicable to different kinds of X-shooter data without major limitation on the astronomical object of interest. Methods: ESO's X-shooter pipeline (v1.5.0) performs well and reliably for the wavelength calibration and the associated rectification procedure, yet we find several weaknesses in the reduction cascade that are addressed with additional calibration steps, such as bad pixel interpolation, flat fielding, and slit illumination corrections. Furthermore, the instrumental PSF is analytically modeled and used to reconstruct flux losses at slit transit. This also forms the basis for an optimal extraction of point sources out of the two-dimensional pipeline product. Regular observations of spectrophotometric standard stars obtained from the X-shooter archive allow us to detect instrumental variability, which needs to be understood if a reliable absolute flux calibration is desired. Results: A cascade of additional custom calibration steps is presented that allows for an absolute flux calibration uncertainty of ≲10% under virtually every observational setup, provided that the signal-to-noise ratio is sufficiently high. The optimal extraction increases the signal-to-noise ratio typically by a factor of 1.5, while simultaneously correcting for resulting flux losses. The wavelength calibration is found to be accurate to an uncertainty level of Δλ ≃ 0.02 Å. Conclusions: We find that most of the X
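    The optimal extraction mentioned in the abstract can be sketched as Horne-style profile weighting: each spatial pixel is weighted by P/sigma^2, which for a known normalized spatial profile P maximizes the signal-to-noise of the summed spectrum. This is a generic illustration on synthetic data, not the pipeline's analytic PSF model.

```python
import numpy as np

def optimal_extract(frame, profile, var):
    """Optimally extract a 1-D spectrum from a 2-D frame
    (axis 0 = spatial, axis 1 = wavelength), given the normalized
    spatial profile and the per-pixel variance."""
    P = profile[:, None]
    w = P / var                                   # Horne weights P/sigma^2
    return (w * frame).sum(axis=0) / (w * P).sum(axis=0)

# synthetic frame: Gaussian spatial profile, constant flux of 200 counts
y = np.arange(15)
profile = np.exp(-0.5 * ((y - 7.0) / 1.8) ** 2)
profile /= profile.sum()
flux = np.full(50, 200.0)                         # hypothetical input flux
frame = profile[:, None] * flux[None, :]
spec = optimal_extract(frame, profile, var=np.ones_like(frame))
```

    With flat variance and noiseless data the extraction returns the input flux exactly; the S/N advantage over a plain aperture sum appears once realistic pixel noise and a well-determined profile are present.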

  8. Spectroscopic study

    International Nuclear Information System (INIS)

    Flores, M.; Rodriguez, R.; Arroyo, R.

    1999-01-01

    This work focuses on the spectroscopic properties of a polymer material consisting of polyacrylic acid (Paa) doped with different concentrations of europium ions (Eu 3+ ). Nuclear Magnetic Resonance (NMR) studies of 1 H and 13 C and Fourier Transform Infrared (FT-IR) spectroscopy show that the ions remain chemically bound to the polymer: the signal intensities change, as they also do when the material is irradiated at λ = 394 nm. According to the results obtained experimentally with this type of material, it is possible to bind such cations chemically to the polymer and to vary their concentration, since they are distributed homogeneously inside the matrix while maintaining their optical properties. These materials can be obtained quickly and easily in solid or liquid phase, and they offer good conditions for quantitative analysis. (Author)

  9. Gender, general theory of crime and computer crime: an empirical test.

    Science.gov (United States)

    Moon, Byongook; McCluskey, John D; McCluskey, Cynthia P; Lee, Sangwon

    2013-04-01

    Regarding the gender gap in computer crime, studies consistently indicate that boys are more likely than girls to engage in various types of computer crime; however, few studies have examined the extent to which traditional criminology theories account for gender differences in computer crime and the applicability of these theories in explaining computer crime across gender. Using a panel of 2,751 Korean youths, the current study tests the applicability of the general theory of crime in explaining the gender gap in computer crime and assesses the theory's utility in explaining computer crime across gender. Analyses show that self-control theory performs well in predicting illegal use of others' resident registration number (RRN) online for both boys and girls, as predicted by the theory. However, low self-control, a dominant criminogenic factor in the theory, fails to mediate the relationship between gender and computer crime and is inadequate in explaining illegal downloading of software in both boy and girl models. Theoretical implication of the findings and the directions for future research are discussed.

  10. Periodicity computation of generalized mathematical biology problems involving delay differential equations.

    Science.gov (United States)

    Jasim Mohammed, M; Ibrahim, Rabha W; Ahmad, M Z

    2017-03-01

    In this paper, we consider a low initial population model. Our aim is to study the periodicity computation of this model by using neutral differential equations, which are recognized in various studies including biology. We generalize the neutral Rayleigh equation for the third-order by exploiting the model of fractional calculus, in particular the Riemann-Liouville differential operator. We establish the existence and uniqueness of a periodic computational outcome. The technique depends on the continuation theorem of the coincidence degree theory. Besides, an example is presented to demonstrate the finding.

  11. Some Matrix Iterations for Computing Generalized Inverses and Balancing Chemical Equations

    Directory of Open Access Journals (Sweden)

    Farahnaz Soleimani

    2015-11-01

    An application of iterative methods for computing the Moore–Penrose inverse in balancing chemical equations is considered. With the aim to illustrate proposed algorithms, an improved high order hyper-power matrix iterative method for computing generalized inverses is introduced and applied. The improvements of the hyper-power iterative scheme are based on its proper factorization, as well as on the possibility to accelerate the iterations in the initial phase of the convergence. Although the effectiveness of our approach is confirmed on the basis of the theoretical point of view, some numerical comparisons in balancing chemical equations, as well as on randomly-generated matrices are furnished.
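    A minimal NumPy sketch of the underlying idea follows: the balancing coefficients form a null-space vector of the element-by-species matrix, and the Moore–Penrose inverse yields a projector onto that null space. numpy's pinv stands in for the hyper-power iteration the paper develops, and the reaction is a deliberately trivial example.

```python
import numpy as np

# Balance a chemical equation via the Moore-Penrose inverse.
# Columns: H2, O2, H2O (products entered with negative sign); rows: H, O.
A = np.array([[2.0, 0.0, -2.0],
              [0.0, 2.0, -1.0]])

# I - pinv(A) @ A projects any vector onto the null space of A,
# which is where the balancing coefficients live (A @ x = 0).
x = (np.eye(3) - np.linalg.pinv(A) @ A) @ np.ones(3)
coeffs = x / x.min()          # scale to smallest-integer form
# coeffs -> [2., 1., 2.]  i.e.  2 H2 + O2 -> 2 H2O
```

    For larger reactions the null space can be multi-dimensional, in which case an integer basis of the null space is needed rather than a single projected vector.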

  12. Permitted decompilation of a computer program in order to protect the general interest

    Directory of Open Access Journals (Sweden)

    Radovanović Sanja M.

    2015-01-01

    A computer program is an intellectual creation protected by copyright. However, unlike other items with equivalent legal protection, a computer program has a strong technical functionality, which is, in today's society, an indispensable factor in everyday business activities, exchange of information, entertainment, and other similar purposes. Precisely because of this feature, a computer program can rarely be seen in isolation from its hardware and software environment. In other words, the functionality of a computer program reaches its full scope only in interaction with other computer programs or devices. Bearing in mind that this intellectual creation is at the focus of technological, and thus social, development, legislators are trying to provide a legal framework in which these interactions take place unhindered. Indeed, considering that every aspect of the use of a computer program is the exclusive right of the author, relying on his or her consent for acts that would provide the necessary connectivity of the various components could put further technological development at risk. Therefore, lawmakers provide that, in certain cases and under certain conditions, the author's exclusive right may be restricted or excluded. This paper analyzes this normative contribution to achieving interactions that are technically and technologically necessary and therefore justified in terms of the general interest.

  13. Computer local construction of a general solution for the Chew-Low equations

    International Nuclear Information System (INIS)

    Gerdt, V.P.

    1980-01-01

    The general solution of the dynamic form of the Chew-Low equations in the vicinity of the rest point is considered. A method for calculating the coefficients of the series that make up this solution is suggested. The results of the calculations, coefficients of power series and expansions, carried out by means of the SCHOONSCHIP and SYMBAL systems, are given. It is noted that the suggested procedure for solving the Chew-Low equations, based on using an electronic computer as an instrument for analytical calculations, permits obtaining detailed information on the local structure of the general solution.

  14. A sub-cubic time algorithm for computing the quartet distance between two general trees

    DEFF Research Database (Denmark)

    Nielsen, Jesper; Kristensen, Anders Kabell; Mailund, Thomas

    2011-01-01

    Background: When inferring phylogenetic trees, different algorithms may give different trees. To study such effects a measure for the distance between two trees is useful. Quartet distance is one such measure, and is the number of quartet topologies that differ between two trees. Results: We have derived a new algorithm for computing the quartet distance between a pair of general trees, i.e. trees where inner nodes can have any degree ≥ 3. The time and space complexity of our algorithm is sub-cubic in the number of leaves and does not depend on the degree of the inner nodes. This makes it the fastest algorithm so far for computing the quartet distance between general trees independent of the degree of the inner nodes. Conclusions: We have implemented our algorithm and two of the best competitors. Our new algorithm is significantly faster than the competition and seems to run in close...
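    A brute-force O(n^4) reference implementation helps make the quantity concrete (the paper's contribution is precisely avoiding this cost). Trees are plain adjacency dicts with unit edge lengths, and the topology of a quartet is recovered from the four-point condition: in a tree, the leaf pairing on the same side of the separating edge has the strictly smallest path-length sum. The example trees are made up for illustration.

```python
from itertools import combinations

def leaf_distances(tree, leaves):
    """All-pairs path lengths between leaves via BFS from each leaf.
    `tree` maps every node to its list of neighbours."""
    out = {}
    for src in leaves:
        seen = {src: 0}
        frontier = [src]
        while frontier:
            nxt = []
            for u in frontier:
                for v in tree[u]:
                    if v not in seen:
                        seen[v] = seen[u] + 1
                        nxt.append(v)
            frontier = nxt
        out[src] = seen
    return out

def quartet_topology(d, a, b, c, e):
    """Leaf pairing with the minimal induced path-length sum."""
    pairings = [((a, b), (c, e)), ((a, c), (b, e)), ((a, e), (b, c))]
    return min(pairings, key=lambda p: d[p[0][0]][p[0][1]] + d[p[1][0]][p[1][1]])

def quartet_distance(t1, t2, leaves):
    """Number of 4-leaf subsets whose induced topology differs."""
    d1, d2 = leaf_distances(t1, leaves), leaf_distances(t2, leaves)
    return sum(1 for q in combinations(sorted(leaves), 4)
               if quartet_topology(d1, *q) != quartet_topology(d2, *q))

# two small binary trees on the same five leaves (internal nodes 1, 2, 3)
T1 = {"a": [1], "b": [1], "c": [2], "d": [3], "e": [3],
      1: ["a", "b", 2], 2: ["c", 1, 3], 3: ["d", "e", 2]}
T2 = {"a": [1], "c": [1], "b": [2], "d": [3], "e": [3],
      1: ["a", "c", 2], 2: ["b", 1, 3], 3: ["d", "e", 2]}
```

    Already at a few hundred leaves the n^4 loop is hopeless, which is what motivates the sub-cubic algorithm of the paper.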

  15. General features of the retinal connectome determine the computation of motion anticipation

    Science.gov (United States)

    Johnston, Jamie; Lagnado, Leon

    2015-01-01

    Motion anticipation allows the visual system to compensate for the slow speed of phototransduction so that a moving object can be accurately located. This correction is already present in the signal that ganglion cells send from the retina but the biophysical mechanisms underlying this computation are not known. Here we demonstrate that motion anticipation is computed autonomously within the dendritic tree of each ganglion cell and relies on feedforward inhibition. The passive and non-linear interaction of excitatory and inhibitory synapses enables the somatic voltage to encode the actual position of a moving object instead of its delayed representation. General rather than specific features of the retinal connectome govern this computation: an excess of inhibitory inputs over excitatory, with both being randomly distributed, allows tracking of all directions of motion, while the average distance between inputs determines the object velocities that can be compensated for. DOI: http://dx.doi.org/10.7554/eLife.06250.001 PMID:25786068

  16. Why Enforcing its UNCAC Commitments Would be Good for Russia: A Computable General Equilibrium Model

    Directory of Open Access Journals (Sweden)

    Michael P. BARRY

    2010-05-01

    Russia has ratified the UN Convention Against Corruption but has not successfully enforced it. This paper uses updated GTAP data to reconstruct a computable general equilibrium (CGE) model to quantify the macroeconomic effects of corruption in Russia. Corruption is found to cost the Russian economy billions of dollars a year. A conclusion of the paper is that implementing and enforcing the UNCAC would be of significant economic benefit to Russia and its people.
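    The computational core of any CGE exercise is the same regardless of scale: find prices at which excess demands vanish. A toy pure-exchange version (two agents, two goods, Cobb-Douglas preferences, all numbers hypothetical) can be solved by bisection; a real model such as the one in this paper adds many sectors, taxes, and a calibrated social accounting matrix, but the fixed-point logic is identical.

```python
# Toy pure-exchange general-equilibrium model: two Cobb-Douglas agents,
# two goods, good y as numeraire (price 1). All numbers are hypothetical.
agents = [
    {"alpha": 0.3, "endow": (10.0, 2.0)},   # (units of x, units of y)
    {"alpha": 0.7, "endow": (2.0, 8.0)},
]

def excess_demand_x(p):
    """Aggregate excess demand for good x at relative price p."""
    z = 0.0
    for a in agents:
        income = p * a["endow"][0] + a["endow"][1]
        z += a["alpha"] * income / p - a["endow"][0]   # CD demand - endowment
    return z

# excess demand for x is strictly decreasing in p here, so bisection
# converges to the market-clearing price
lo, hi = 1e-6, 100.0
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if excess_demand_x(mid) > 0:
        lo = mid
    else:
        hi = mid
p_star = 0.5 * (lo + hi)
```

    By Walras' law, clearing the market for x clears the market for y as well, so one price pins down the whole equilibrium in this two-good economy.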

  17. Uniform physical theory of diffraction equivalent edge currents for implementation in general computer codes

    DEFF Research Database (Denmark)

    Johansen, Peter Meincke

    1996-01-01

    New uniform closed-form expressions for physical theory of diffraction equivalent edge currents are derived for truncated incremental wedge strips. In contrast to previously reported expressions, the new expressions are well-behaved for all directions of incidence and observation and take a finite value for zero strip length. Consequently, the new equivalent edge currents are, to the knowledge of the author, the first that are well-suited for implementation in general computer codes...

  18. Discharge Fee Policy Analysis: A Computable General Equilibrium (CGE) Model of Water Resources and Water Environments

    OpenAIRE

    Guohua Fang; Ting Wang; Xinyi Si; Xin Wen; Yu Liu

    2016-01-01

    To alleviate increasingly serious water pollution and shortages in developing countries, various kinds of policies have been implemented by local governments. It is vital to quantify and evaluate the performance and potential economic impacts of these policies. This study develops a Computable General Equilibrium (CGE) model to simulate the regional economic and environmental effects of discharge fees. Firstly, water resources and water environment factors are separated from the input and out...

  19. Can Migrants Save Greece From Ageing? A Computable General Equilibrium Approach Using G-AMOS.

    OpenAIRE

    Nikos Pappas

    2008-01-01

    The population of Greece is projected to age in the course of the next three decades. This paper combines demographic projections with a multi-period economic Computable General Equilibrium (CGE) modelling framework to assess the macroeconomic impact of these future demographic trends. The simulation strategy adopted in Lisenkova et. al. (2008) is also employed here. The size and age composition of the population in the future depends on current and future values of demographic parameters suc...

  20. A Comprehensive Toolset for General-Purpose Private Computing and Outsourcing

    Science.gov (United States)

    2016-12-08

contexts businesses are also hesitant to make their proprietary available to the cloud [1]. While in general sensitive data can be protected by the...project and scientific advances made towards each of the research thrusts throughout the project duration. 1 Project Objectives Cloud computing enables

  1. FISPRO: a simplified computer program for general fission product formation and decay calculations

    International Nuclear Information System (INIS)

    Jiacoletti, R.J.; Bailey, P.G.

    1979-08-01

    This report describes a computer program that solves a general form of the fission product formation and decay equations over given time steps for arbitrary decay chains composed of up to three nuclides. All fission product data and operational history data are input through user-defined input files. The program is very useful in the calculation of fission product activities of specific nuclides for various reactor operational histories and accident consequence calculations
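The abstract does not give FISPRO's internal equations, but the general formation-and-decay problem it describes has, for a two-step chain A → B → C (C stable), the classic analytic Bateman solution. A minimal sketch under that assumption, with hypothetical decay constants:

```python
import math

def bateman_three(n0, lam_a, lam_b, t):
    """Atom counts of A, B, C at time t for the decay chain A -> B -> C
    (C stable), starting from n0 atoms of pure A. Assumes lam_a != lam_b."""
    na = n0 * math.exp(-lam_a * t)
    # Bateman solution for the intermediate nuclide B
    nb = n0 * lam_a / (lam_b - lam_a) * (math.exp(-lam_a * t) - math.exp(-lam_b * t))
    nc = n0 - na - nb  # C is stable, so the total atom number is conserved
    return na, nb, nc
```

Activities follow as lambda times the atom count; a real code like FISPRO would additionally read fission yields and operating-history data from input files, which are not sketched here.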

  2. What Representations and Computations Underpin the Contribution of the Hippocampus to Generalization and Inference?

    Directory of Open Access Journals (Sweden)

    Dharshan eKumaran

    2012-06-01

Full Text Available Empirical research and theoretical accounts have traditionally emphasized the function of the hippocampus in episodic memory. Here we draw attention to the importance of the hippocampus to generalization, and focus on the neural representations and computations that might underpin its role in tasks such as the paired associate inference paradigm. We make a principal distinction between two different mechanisms by which the hippocampus may support generalization: an encoding-based mechanism that creates overlapping representations that capture higher-order relationships between different items (e.g. TCM) – and a retrieval-based model (REMERGE) that effectively computes these relationships at the point of retrieval, through a recurrent mechanism that allows the dynamic interaction of multiple pattern-separated episodic codes. We also discuss what we refer to as transfer effects – a more abstract example of generalization that has also been linked to the function of the hippocampus. We consider how this phenomenon poses inherent challenges for models such as TCM and REMERGE, and outline the potential applicability of a separate class of models – hierarchical Bayesian models (HBMs) – in this context. Our hope is that this article will provide a basic framework within which to consider the theoretical mechanisms underlying the role of the hippocampus in generalization, and at a minimum serve as a stimulus for future work addressing issues that go to the heart of the function of the hippocampus.

  3. Accurate Computation of Periodic Regions' Centers in the General M-Set with Integer Index Number

    Directory of Open Access Journals (Sweden)

    Wang Xingyuan

    2010-01-01

Full Text Available This paper presents two methods for accurately computing the periodic regions' centers. One method applies to general M-sets with integer index number, the other to general M-sets with negative integer index number. Both methods improve the precision of computation by transforming the polynomial equations which determine the periodic regions' centers. We primarily discuss the general M-sets with negative integer index, and analyze the relationship between the number of periodic regions' centers on the principal symmetric axis and in the principal symmetric interior. We can get the centers' coordinates with at least 48 significant digits after the decimal point in both real and imaginary parts by applying Newton's method to the transformed polynomial equation which determines the periodic regions' centers. In this paper, we list some centers' coordinates of the general M-sets' k-periodic regions (k = 3, 4, 5, 6) for the index numbers α = −25, −24, …, −1, all of which have high numerical accuracy.
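The paper treats general M-sets with negative index; the same Newton idea is easiest to see in the standard quadratic case (index 2), where the centers of period-k regions are roots of the k-fold iterate of 0 under z → z² + c. A sketch (illustrating the technique, not the paper's transformed equations or its 48-digit arithmetic), tracking the derivative with respect to c by recurrence:

```python
def iterate_origin(c, k):
    """k-fold iterate of 0 under z -> z^2 + c, and its derivative w.r.t. c."""
    p, dp = 0j, 0j
    for _ in range(k):
        p, dp = p * p + c, 2.0 * p * dp + 1.0
    return p, dp

def component_center(k, c0, tol=1e-13, max_iter=100):
    """Newton's method for a root of p_k(c) = 0, i.e. a period-k region center."""
    c = complex(c0)
    for _ in range(max_iter):
        p, dp = iterate_origin(c, k)
        step = p / dp
        c -= step
        if abs(step) < tol:
            break
    return c
```

For example, `component_center(2, -1.2)` converges to the period-2 disk center c = −1.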

  4. An efficient and general numerical method to compute steady uniform vortices

    Science.gov (United States)

    Luzzatto-Fegiz, Paolo; Williamson, Charles H. K.

    2011-07-01

    Steady uniform vortices are widely used to represent high Reynolds number flows, yet their efficient computation still presents some challenges. Existing Newton iteration methods become inefficient as the vortices develop fine-scale features; in addition, these methods cannot, in general, find solutions with specified Casimir invariants. On the other hand, available relaxation approaches are computationally inexpensive, but can fail to converge to a solution. In this paper, we overcome these limitations by introducing a new discretization, based on an inverse-velocity map, which radically increases the efficiency of Newton iteration methods. In addition, we introduce a procedure to prescribe Casimirs and remove the degeneracies in the steady vorticity equation, thus ensuring convergence for general vortex configurations. We illustrate our methodology by considering several unbounded flows involving one or two vortices. Our method enables the computation, for the first time, of steady vortices that do not exhibit any geometric symmetry. In addition, we discover that, as the limiting vortex state for each flow is approached, each family of solutions traces a clockwise spiral in a bifurcation plot consisting of a velocity-impulse diagram. By the recently introduced "IVI diagram" stability approach [Phys. Rev. Lett. 104 (2010) 044504], each turn of this spiral is associated with a loss of stability for the steady flows. Such spiral structure is suggested to be a universal feature of steady, uniform-vorticity flows.

  5. A general digital computer procedure for synthesizing linear automatic control systems

    International Nuclear Information System (INIS)

    Cummins, J.D.

    1961-10-01

    The fundamental concepts required for synthesizing a linear automatic control system are considered. A generalized procedure for synthesizing automatic control systems is demonstrated. This procedure has been programmed for the Ferranti Mercury and the IBM 7090 computers. Details of the programmes are given. The procedure uses the linearized set of equations which describe the plant to be controlled as the starting point. Subsequent computations determine the transfer functions between any desired variables. The programmes also compute the root and phase loci for any linear (and some non-linear) configurations in the complex plane, the open loop and closed loop frequency responses of a system, the residues of a function of the complex variable 's' and the time response corresponding to these residues. With these general programmes available the design of 'one point' automatic control systems becomes a routine scientific procedure. Also dynamic assessments of plant may be carried out. Certain classes of multipoint automatic control problems may also be solved with these procedures. Autonomous systems, invariant systems and orthogonal systems may also be studied. (author)
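The report's root-locus computations ran on the Ferranti Mercury and IBM 7090; the core step is a few lines on modern tools. A minimal sketch (not the report's actual algorithm): sample the closed-loop poles of 1 + K·G(s) = 0 over a list of gains with numpy:

```python
import numpy as np

def sampled_root_locus(num, den, gains):
    """Closed-loop poles of 1 + K*G(s) = 0 for each gain K, where
    G(s) = num(s)/den(s) with coefficients in descending powers of s."""
    num = np.asarray(num, dtype=float)
    den = np.asarray(den, dtype=float)
    # characteristic polynomial at gain K: den(s) + K * num(s)
    return [np.roots(np.polyadd(den, K * num)) for K in gains]
```

For G(s) = 1/(s(s + 2)), the gain K = 0.75 gives real poles at −1.5 and −0.5; sweeping K traces the locus.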

  6. General surface reconstruction for cone-beam multislice spiral computed tomography

    International Nuclear Information System (INIS)

    Chen Laigao; Liang Yun; Heuscher, Dominic J.

    2003-01-01

A new family of cone-beam reconstruction algorithms, the General Surface Reconstruction (GSR), is proposed and formulated in this paper for multislice spiral computed tomography (CT) reconstructions. It provides a general framework to allow the reconstruction of planar or nonplanar surfaces on a set of rebinned short-scan parallel beam projection data. An iterative surface formation method is proposed as an example to show the possibility of forming nonplanar reconstruction surfaces to minimize the adverse effect between the collected cone-beam projection data and the reconstruction surfaces. The improvement in accuracy of the nonplanar surfaces over planar surfaces in the two-dimensional approximate cone-beam reconstructions is mathematically proved and demonstrated using numerical simulations. The proposed GSR algorithm is evaluated by computer simulation of the cone-beam spiral scanning geometry and various mathematical phantoms. The results demonstrate that the GSR algorithm generates much better image quality than conventional multislice reconstruction algorithms. For a table speed up to 100 mm per rotation, GSR demonstrates good image quality for both the low-contrast ball phantom and the thorax phantom. All other performance parameters are comparable to those of the single-slice 180 deg. LI (linear interpolation) algorithm, which is considered the 'gold standard'. GSR also achieves high computing efficiency and good temporal resolution, making it an attractive alternative for the reconstruction of next-generation multislice spiral CT data.

  7. Cellular computational generalized neuron network for frequency situational intelligence in a multi-machine power system.

    Science.gov (United States)

    Wei, Yawei; Venayagamoorthy, Ganesh Kumar

    2017-09-01

To prevent a large interconnected power system from a cascading failure, brownout or even blackout, grid operators require access to faster than real-time information to make appropriate just-in-time control decisions. However, owing to communication and computational limitations, the currently used supervisory control and data acquisition (SCADA) system can only deliver delayed information. The deployment of synchrophasor measurement devices makes it possible to capture and visualize, in near-real-time, grid operational data with extra granularity. In this paper, a cellular computational network (CCN) approach for frequency situational intelligence (FSI) in a power system is presented. The distributed and scalable computing unit of the CCN framework makes it particularly flexible for customization for a particular set of prediction requirements. Two soft-computing algorithms have been implemented in the CCN framework: a cellular generalized neuron network (CCGNN) and a cellular multi-layer perceptron network (CCMLPN), to provide multi-timescale frequency predictions, ranging from 16.67 ms to 2 s. The CCGNN and CCMLPN systems were then implemented on two power systems of different scales, one of which includes a large photovoltaic plant. A real-time power system simulator and weather station within the Real-Time Power and Intelligent Systems (RTPIS) laboratory at Clemson, SC, were then used to derive typical FSI results. Copyright © 2017 Elsevier Ltd. All rights reserved.

  8. Chest computed tomography in children under general anesthesia - cases of an atelectasis

    International Nuclear Information System (INIS)

    Laskowska, K.; Lasek, W.; Drewa, S.; Karolkiewicz, M.; Pogorzala, M.; Wysocki, M.

    2003-01-01

Computed tomography is a routine examination in children with diagnosed or suspected cancer. Although the procedure is painless, it requires the patient to remain still for some time; thus, general anesthesia is provided in selected cases. The aim of this paper was to evaluate the incidence of atelectasis in children referred for CT examination under general anesthesia. The material consisted of 11 children aged 2-61 months with neoplastic disease diagnosed or suspected. All of them had a regular chest CT exam under general anesthesia, with lung parenchyma, mediastinum and chest wall analyzed. In 4 of 11 children (36%) atelectasis was seen, located in the supradiaphragmatic and paravertebral segments of the lungs. None of the children had clinical symptoms of atelectasis. In two of them control chest radiograms did not show any changes. In some patients general anesthesia may reduce lung pneumatization, which can hide metastases in the lungs. In summary, in infants and young children sedation instead of general anesthesia should be considered for chest CT, which could improve the quality of the imaging and the safety of the examination. (author)

  9. Solubility of magnetite in high temperature water and an approach to generalized solubility computations

    International Nuclear Information System (INIS)

    Dinov, K.; Ishigure, K.; Matsuura, C.; Hiroishi, D.

    1993-01-01

Magnetite solubility in pure water was measured at 423 K in a fully teflon-covered autoclave system. A fairly good agreement was found to exist between the experimental data and calculation results obtained from the thermodynamical model, based on the assumption of Fe3O4 dissolution and Fe2O3 deposition reactions. A generalized thermodynamical approach to the solubility computations under complex conditions on the basis of minimization of the total system Gibbs free energy was proposed. The forms of the chemical equilibria were obtained for various systems initially defined and successfully justified by the subsequent computations. A [Fe3+]T-[Fe2+]T phase diagram was introduced as a tool for systematic understanding of the magnetite dissolution phenomena in pure water and under oxidizing and reducing conditions. (orig.)

  10. Optimal Background Attenuation for Fielded Spectroscopic Detection Systems

    International Nuclear Information System (INIS)

    Robinson, Sean M.; Ashbaker, Eric D.; Schweppe, John E.; Siciliano, Edward R.

    2007-01-01

Radiation detectors are often placed in positions difficult to shield from the effects of terrestrial background gamma radiation. This is particularly true in the case of Radiation Portal Monitor (RPM) systems, as their wide viewing angle and outdoor installations make them susceptible to radiation from the surrounding area. Reducing this source of background can improve gross-count detection capabilities in the current generation of non-spectroscopic RPMs as well as source identification capabilities in the next generation of spectroscopic RPMs. To provide guidance for designing such systems, the problem of shielding a general spectroscopic-capable RPM system from terrestrial gamma radiation is considered. This analysis is carried out by template matching algorithms, to determine and isolate a set of non-threat isotopes typically present in the commerce stream. Various model detector and shielding scenarios are calculated using the Monte Carlo N-Particle (MCNP) computer code. Amounts of nominal-density shielding needed to increase the probability of detection for an ensemble of illicit sources are given. Common shielding solutions such as steel plating are evaluated based on the probability of detection for 3 particular illicit sources of interest, and the benefits are weighed against the incremental cost of shielding. Previous work has provided optimal shielding scenarios for RPMs based on gross-counting measurements, and those same solutions (shielding the internal detector cavity, direct shielding of the ground between the detectors, and the addition of collimators) are examined with respect to their utility in improving spectroscopic detection.

  11. Analysis of random point images with the use of symbolic computation codes and generalized Catalan numbers

    Science.gov (United States)

    Reznik, A. L.; Tuzikov, A. V.; Solov'ev, A. A.; Torgov, A. V.

    2016-11-01

    Original codes and combinatorial-geometrical computational schemes are presented, which are developed and applied for finding exact analytical formulas that describe the probability of errorless readout of random point images recorded by a scanning aperture with a limited number of threshold levels. Combinatorial problems encountered in the course of the study and associated with the new generalization of Catalan numbers are formulated and solved. An attempt is made to find the explicit analytical form of these numbers, which is, on the one hand, a necessary stage of solving the basic research problem and, on the other hand, an independent self-consistent problem.
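The paper's new generalization of Catalan numbers is not specified in the abstract; for orientation, the ordinary Catalan numbers it extends satisfy the convolution recurrence C_0 = 1, C_{n+1} = Σ_{i=0}^{n} C_i C_{n−i}. A sketch of that baseline recurrence only:

```python
def catalan(n_max):
    """Ordinary Catalan numbers C_0..C_{n_max} via the convolution
    recurrence C_0 = 1, C_{n+1} = sum_{i=0}^{n} C_i * C_{n-i}."""
    c = [1]
    for n in range(n_max):
        c.append(sum(c[i] * c[n - i] for i in range(n + 1)))
    return c
```

For example, `catalan(5)` returns `[1, 1, 2, 5, 14, 42]`.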

  12. Stabilization of emission of CO2: A computable general equilibrium assessment

    International Nuclear Information System (INIS)

    Glomsroed, S.; Vennemo, H.; Johnsen, T.

    1992-01-01

A multisector computable general equilibrium model is used to study economic development perspectives in Norway if CO2 emissions were stabilized. The effects discussed include impacts on main macroeconomic indicators and economic growth, sectoral allocation of production, and effects on the market for energy. The impact on emissions of pollutants other than CO2 is assessed, along with the related impact on noneconomic welfare. The results indicate that CO2 emissions might be stabilized in Norway without dramatically reducing economic growth. Sectoral allocation effects are much larger. A substantial reduction in emissions to air other than CO2 is found, yielding considerable gains in noneconomic welfare. 25 refs., 6 tabs., 2 figs

  13. Description of a general method to compute the fluid-structure interaction

    International Nuclear Information System (INIS)

    Jeanpierre, F.; Gibert, R.J.; Hoffmann, A.; Livolant, M.

    1979-01-01

The vibrational characteristics of a structure in air may be considerably modified when the structure is immersed in a dense fluid. Such fluid-structure interaction effects are important for the seismic or flow-induced vibrational studies of various nuclear equipment, for example the PWR internals, fast reactor vessels, heat exchangers and fuel elements. In some simple situations, the fluid effects can be simulated by added masses, but in general they are much more complicated. A general formulation to calculate precisely the vibrational behaviour of structures containing dense fluids is presented in this paper. This formulation can be easily introduced in finite element computer codes, the fluid being described by special fluid elements. Its use is in principle limited to the linear range: small movements of structures, small pressure fluctuations. (orig.)

  14. Scalar and configuration traces of operators in large spectroscopic spaces

    International Nuclear Information System (INIS)

    Chang, B.D.; Wong, S.S.M.

    1978-01-01

In statistical spectroscopic calculations, the primary input is the trace of products of powers of the Hamiltonian and excitation operators. The lack of a systematic approach to trace evaluation has in the past been one of the major difficulties in the application of statistical spectroscopic methods. A general method with a simple derivation is described here to evaluate the scalar and configuration traces for operators expressed either in the m-scheme or the fully coupled JT scheme. It is shown to be an effective method by actually programming it on a computer. Implications for future applications of statistical spectroscopy in the areas of level density, strength functions and perturbation theory are also briefly discussed. (Auth.)
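In a full matrix representation, the scalar traces mentioned here are simply the moments ⟨H^p⟩ = Tr(H^p)/d; the point of the paper's method is to obtain them without constructing H, but a direct sketch clarifies what quantity is being computed (illustrative only):

```python
import numpy as np

def scalar_moments(h, p_max):
    """Scalar moments <H^p> = Tr(H^p)/d of a Hamiltonian matrix h,
    the primary inputs of statistical spectroscopic calculations."""
    d = h.shape[0]
    hp = np.eye(d)
    moments = []
    for _ in range(p_max):
        hp = hp @ h       # accumulate H^p
        moments.append(np.trace(hp) / d)
    return moments
```

For a small diagonal test Hamiltonian diag(1, 2, 3), the first two moments are 2 and 14/3.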

  15. A statistical mechanical approach for the computation of the climatic response to general forcings

    Directory of Open Access Journals (Sweden)

    V. Lucarini

    2011-01-01

Full Text Available The climate belongs to the class of non-equilibrium forced and dissipative systems, for which most results of quasi-equilibrium statistical mechanics, including the fluctuation-dissipation theorem, do not apply. In this paper we show for the first time how the Ruelle linear response theory, developed for studying rigorously the impact of perturbations on general observables of non-equilibrium statistical mechanical systems, can be applied with great success to analyze the climatic response to general forcings. The crucial value of the Ruelle theory lies in the fact that it allows one to compute the response of the system in terms of expectation values of explicit and computable functions of the phase space averaged over the invariant measure of the unperturbed state. We choose as test bed a classical version of the Lorenz 96 model, which, in spite of its simplicity, has a well-recognized prototypical value as it is a spatially extended one-dimensional model and presents the basic ingredients, such as dissipation, advection and the presence of an external forcing, of the actual atmosphere. We recapitulate the main aspects of the general response theory and propose some new general results. We then analyze the frequency dependence of the response of both local and global observables to perturbations having localized as well as global spatial patterns. We derive analytically several properties of the corresponding susceptibilities, such as asymptotic behavior, validity of Kramers-Kronig relations, and sum rules, whose main ingredient is the causality principle. We show that all the coefficients of the leading asymptotic expansions as well as the integral constraints can be written as linear functions of parameters that describe the unperturbed properties of the system, such as its average energy. Some newly obtained empirical closure equations for such parameters allow such properties to be defined as an explicit function of the unperturbed forcing.
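The Lorenz 96 test bed used in this study is easy to reproduce; a minimal sketch of its tendency and a fourth-order Runge-Kutta step (the response-theory machinery itself is well beyond a few lines):

```python
def l96_tendency(x, forcing):
    """Lorenz 96 tendency dx_i/dt = (x_{i+1} - x_{i-2}) x_{i-1} - x_i + F,
    with cyclic indexing over the n sites."""
    n = len(x)
    return [(x[(i + 1) % n] - x[(i - 2) % n]) * x[(i - 1) % n] - x[i] + forcing
            for i in range(n)]

def rk4_step(x, forcing, dt):
    """One classical fourth-order Runge-Kutta step of the Lorenz 96 model."""
    add = lambda a, b, s: [ai + s * bi for ai, bi in zip(a, b)]
    k1 = l96_tendency(x, forcing)
    k2 = l96_tendency(add(x, k1, dt / 2), forcing)
    k3 = l96_tendency(add(x, k2, dt / 2), forcing)
    k4 = l96_tendency(add(x, k3, dt), forcing)
    return [xi + dt / 6 * (a + 2 * b + 2 * c + d)
            for xi, a, b, c, d in zip(x, k1, k2, k3, k4)]
```

The uniform state x_i = F is a fixed point of the dynamics, which gives a quick sanity check on the implementation.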

  16. A computer program for two-particle generalized coefficients of fractional parentage

    Science.gov (United States)

    Deveikis, A.; Juodagalvis, A.

    2008-10-01

    We present a FORTRAN90 program GCFP for the calculation of the generalized coefficients of fractional parentage (generalized CFPs or GCFP). The approach is based on the observation that the multi-shell CFPs can be expressed in terms of single-shell CFPs, while the latter can be readily calculated employing a simple enumeration scheme of antisymmetric A-particle states and an efficient method of construction of the idempotent matrix eigenvectors. The program provides fast calculation of GCFPs for a given particle number and produces results possessing numerical uncertainties below the desired tolerance. A single j-shell is defined by four quantum numbers, (e,l,j,t). A supplemental C++ program parGCFP allows calculation to be done in batches and/or in parallel. Program summaryProgram title:GCFP, parGCFP Catalogue identifier: AEBI_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEBI_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 17 199 No. of bytes in distributed program, including test data, etc.: 88 658 Distribution format: tar.gz Programming language: FORTRAN 77/90 ( GCFP), C++ ( parGCFP) Computer: Any computer with suitable compilers. The program GCFP requires a FORTRAN 77/90 compiler. The auxiliary program parGCFP requires GNU-C++ compatible compiler, while its parallel version additionally requires MPI-1 standard libraries Operating system: Linux (Ubuntu, Scientific) (all programs), also checked on Windows XP ( GCFP, serial version of parGCFP) RAM: The memory demand depends on the computation and output mode. If this mode is not 4, the program GCFP demands the following amounts of memory on a computer with Linux operating system. It requires around 2 MB of RAM for the A=12 system at E⩽2. 
Computation of the A=50 particle system requires around 60 MB of

  17. General approach to the computation of local transport coefficients with finite Larmor effects in the collision contribution

    International Nuclear Information System (INIS)

    Ghendrih, P.

    1986-10-01

We expand the distribution functions in a basis of Hermite functions and obtain a general scheme to compute the local transport coefficients. The magnetic field dependence due to finite Larmor radius effects during the collision process is taken into account.
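The Hermite-function expansion mentioned here can be sketched numerically: project a distribution onto the orthonormal Hermite functions ψ_n by quadrature. This illustrates the basis only, not the transport-coefficient calculation itself:

```python
import numpy as np
from math import factorial, pi, sqrt
from numpy.polynomial.hermite import hermval

def hermite_function(n, x):
    """Orthonormal Hermite function
    psi_n(x) = H_n(x) exp(-x^2/2) / sqrt(2^n n! sqrt(pi))."""
    coeffs = [0.0] * n + [1.0]   # select the physicists' polynomial H_n
    norm = sqrt(2.0 ** n * factorial(n) * sqrt(pi))
    return hermval(x, coeffs) * np.exp(-x ** 2 / 2.0) / norm

def expansion_coefficients(f, n_terms, x):
    """Quadrature approximation of c_n = integral of f(x) psi_n(x) dx
    on a uniform grid x."""
    dx = x[1] - x[0]
    return [float(np.sum(f(x) * hermite_function(n, x)) * dx)
            for n in range(n_terms)]
```

Projecting ψ_0 itself recovers coefficients (1, 0, 0, …), confirming the orthonormality of the basis on a sufficiently wide grid.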

  18. Research and Teaching: Computational Methods in General Chemistry--Perceptions of Programming, Prior Experience, and Student Outcomes

    Science.gov (United States)

    Wheeler, Lindsay B.; Chiu, Jennie L.; Grisham, Charles M.

    2016-01-01

    This article explores how integrating computational tools into a general chemistry laboratory course can influence student perceptions of programming and investigates relationships among student perceptions, prior experience, and student outcomes.

  19. Tourism Contribution to Poverty Alleviation in Kenya: A Dynamic Computable General Equilibrium Analysis

    Science.gov (United States)

    Njoya, Eric Tchouamou; Seetaram, Neelu

    2017-01-01

The aim of this article is to investigate the claim that tourism development can be the engine for poverty reduction in Kenya using a dynamic, microsimulation computable general equilibrium model. The article improves on the common practice in the literature by using the more comprehensive Foster-Greer-Thorbecke (FGT) index to measure poverty instead of headcount ratios only. Simulation results from previous studies confirm that expansion of the tourism industry will benefit different sectors unevenly and will only marginally improve the poverty headcount. This is mainly due to the contraction of the agricultural sector caused by the appreciation of the real exchange rate. This article demonstrates that the effect on poverty gap and poverty severity is, nevertheless, significant for both rural and urban areas, with higher impact in the urban areas. Tourism expansion enables poorer households to move closer to the poverty line. It is concluded that the tourism industry is pro-poor. PMID:29595836
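The FGT index family used in this study has a compact closed form, P_α = (1/N) Σ_{y_i < z} ((z − y_i)/z)^α, where z is the poverty line. A sketch with hypothetical income data (the study's actual microsimulation data are not reproduced here):

```python
def fgt_index(incomes, poverty_line, alpha):
    """Foster-Greer-Thorbecke poverty index P_alpha.
    alpha=0: headcount ratio, alpha=1: poverty gap, alpha=2: poverty severity."""
    z = float(poverty_line)
    gaps = [((z - y) / z) ** alpha for y in incomes if y < z]
    return sum(gaps) / len(incomes)
```

For incomes [5, 15, 25] and z = 20, the headcount ratio is 2/3 while the poverty gap is 1/3, showing how the gap and severity variants weight the depth of poverty rather than only counting the poor.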

  20. Social incidence and economic costs of carbon limits; A computable general equilibrium analysis for Switzerland

    Energy Technology Data Exchange (ETDEWEB)

    Stephan, G.; Van Nieuwkoop, R.; Wiedmer, T. (Institute for Applied Microeconomics, Univ. of Bern (Switzerland))

    1992-01-01

Both distributional and allocational effects of limiting carbon dioxide emissions in a small and open economy are discussed. The analysis starts from the assumption that Switzerland attempts to stabilize its greenhouse gas emissions over the next 25 years, and evaluates the costs and benefits of the respective reduction programme. From a methodological viewpoint, it is illustrated how a computable general equilibrium approach can be adopted for identifying the economic effects of cutting greenhouse gas emissions at the national level. From a political-economy point of view, it considers the social incidence of greenhouse policy. It shows in particular that public acceptance can be increased and the economic costs of greenhouse policies reduced if carbon taxes are accompanied by revenue redistribution. 8 tabs., 1 app., 17 refs.

  1. Zero-rating food in South Africa: A computable general equilibrium analysis

    Directory of Open Access Journals (Sweden)

    M Kearney

    2004-04-01

Full Text Available Zero-rating food is considered to alleviate poverty among poor households, who spend the largest proportion of their income on food. However, this results in a loss of revenue for government. A Computable General Equilibrium (CGE) model is used to analyze the combined effects of zero-rating food and using alternative revenue sources to compensate for the loss in revenue. To avoid excessively high increases in the statutory VAT rates on business and financial services, increasing direct taxes or increasing VAT to 16 per cent is investigated. Increasing direct taxes is the most successful option, creating a more progressive tax structure while still generating a positive impact on GDP. The results indicate that zero-rating food combined with a proportional percentage increase in direct taxes can improve the welfare of poor households.

  2. Hurricane Sandy Economic Impacts Assessment: A Computable General Equilibrium Approach and Validation

    Energy Technology Data Exchange (ETDEWEB)

    Boero, Riccardo [Los Alamos National Laboratory; Edwards, Brian Keith [Los Alamos National Laboratory

    2017-08-07

    Economists use computable general equilibrium (CGE) models to assess how economies react and self-organize after changes in policies, technology, and other exogenous shocks. CGE models are equation-based, empirically calibrated, and inspired by Neoclassical economic theory. The focus of this work was to validate the National Infrastructure Simulation and Analysis Center (NISAC) CGE model and apply it to the problem of assessing the economic impacts of severe events. We used the 2012 Hurricane Sandy event as our validation case. In particular, this work first introduces the model and then describes the validation approach and the empirical data available for studying the event of focus. Shocks to the model are then formalized and applied. Finally, model results and limitations are presented and discussed, pointing out both the model degree of accuracy and the assessed total damage caused by Hurricane Sandy.

  3. SIMULACIÓN DE UN MODELO DE EQUILIBRIO GENERAL COMPUTABLE PARA VENEZUELA

    Directory of Open Access Journals (Sweden)

    Luis Enrique Pedauga

    2012-01-01

Full Text Available This article presents the results of simulating a computable general equilibrium model built for Venezuela. The use of the model is illustrated through the calibration and simulation of an open economy with three institutional agents (households, firms and government) and three productive sectors (oil, manufacturing and the rest). Different policy rules are considered. For each case, the calibration process and the simulation results are shown, using information from the series of social accounting matrices for Venezuela between 1997 and 2005. Simulations of the economy up to 2009 are provided, both for the relevant parameters and for some sensitivity exercises.

  4. Simulation of the preliminary General Electric SP-100 space reactor concept using the ATHENA computer code

    International Nuclear Information System (INIS)

    Fletcher, C.D.

    1986-01-01

The capability to perform thermal-hydraulic analyses of a space reactor using the ATHENA computer code is demonstrated. The fast reactor, liquid-lithium coolant loops, and lithium-filled heat pipes of the preliminary General Electric SP-100 design were modeled with ATHENA. Two demonstration transient calculations were performed simulating accident conditions. Calculated results are available for display using the Nuclear Plant Analyzer color graphics analysis tool in addition to traditional plots. ATHENA-calculated results appear reasonable, both for steady-state full-power conditions and for the two transients. This analysis represents the first known transient thermal-hydraulic simulation using an integral space reactor system model incorporating heat pipes. 6 refs., 17 figs., 1 tab

  5. Assessing economic impacts of China's water pollution mitigation measures through a dynamic computable general equilibrium analysis

    International Nuclear Information System (INIS)

    Qin Changbo; Jia Yangwen; Wang Hao; Bressers, Hans T A; Su, Z

    2011-01-01

    In this letter, we apply an extended environmental dynamic computable general equilibrium model to assess the economic consequences of implementing a total emission control policy. On the basis of emission levels in 2007, we simulate different emission reduction scenarios, ranging from 20 to 50% emission reduction, up to the year 2020. The results indicate that a modest total emission reduction target in 2020 can be achieved at low macroeconomic cost. As the stringency of policy targets increases, the macroeconomic cost will increase at a rate faster than linear. Implementation of a tradable emission permit system can counterbalance the economic costs affecting the gross domestic product and welfare. We also find that a stringent environmental policy can lead to an important shift in production, consumption and trade patterns from dirty sectors to relatively clean sectors.

  6. Tourism Contribution to Poverty Alleviation in Kenya: A Dynamic Computable General Equilibrium Analysis.

    Science.gov (United States)

    Njoya, Eric Tchouamou; Seetaram, Neelu

    2018-04-01

    The aim of this article is to investigate the claim that tourism development can be the engine for poverty reduction in Kenya using a dynamic, microsimulation computable general equilibrium model. The article improves on the common practice in the literature by using the more comprehensive Foster-Greer-Thorbecke (FGT) index to measure poverty instead of headcount ratios only. Simulation results from previous studies confirm that expansion of the tourism industry will benefit different sectors unevenly and will only marginally improve the poverty headcount. This is mainly due to the contraction of the agricultural sector caused by the appreciation of the real exchange rate. This article demonstrates that the effect on poverty gap and poverty severity is, nevertheless, significant for both rural and urban areas, with higher impact in the urban areas. Tourism expansion enables poorer households to move closer to the poverty line. It is concluded that the tourism industry is pro-poor.

  7. Hydrogen bonding between the QB site ubisemiquinone and Ser-L223 in the bacterial reaction centre – a combined spectroscopic and computational perspective

    OpenAIRE

    Martin, Erik; Baldansuren, Amgalanbaatar; Lin, Tzu-Jen; Samoilova, Rimma I.; Wraight, Colin A.; Dikanov, Sergei A.; O’Malley, Patrick J.

    2012-01-01

    In the QB site of the Rba. sphaeroides photosynthetic reaction centre the donation of a hydrogen bond from the hydroxyl group of Ser-L223 to the ubisemiquinone formed after the first flash is debatable. In this study we use a combination of spectroscopy and quantum mechanics/molecular mechanics (QM/MM) calculations to comprehensively explore this topic. We show that ENDOR, ESEEM and HYSCORE spectroscopic differences between the mutant L223SA and the wild type sample (WT) are negligible, indic...

  8. Towards a general theory of neural computation based on prediction by single neurons.

    Directory of Open Access Journals (Sweden)

    Christopher D Fiorillo

    Full Text Available Although there has been tremendous progress in understanding the mechanics of the nervous system, there has not been a general theory of its computational function. Here I present a theory that relates the established biophysical properties of single generic neurons to principles of Bayesian probability theory, reinforcement learning and efficient coding. I suggest that this theory addresses the general computational problem facing the nervous system. Each neuron is proposed to mirror the function of the whole system in learning to predict aspects of the world related to future reward. According to the model, a typical neuron receives current information about the state of the world from a subset of its excitatory synaptic inputs, and prior information from its other inputs. Prior information would be contributed by synaptic inputs representing distinct regions of space, and by different types of non-synaptic, voltage-regulated channels representing distinct periods of the past. The neuron's membrane voltage is proposed to signal the difference between current and prior information ("prediction error" or "surprise"). A neuron would apply a Hebbian plasticity rule to select those excitatory inputs that are the most closely correlated with reward but are the least predictable, since unpredictable inputs provide the neuron with the most "new" information about future reward. To minimize the error in its predictions and to respond only when excitation is "new and surprising," the neuron selects amongst its prior information sources through an anti-Hebbian rule. The unique inputs of a mature neuron would therefore result from learning about spatial and temporal patterns in its local environment, and by extension, the external world. Thus the theory describes how the structure of the mature nervous system could reflect the structure of the external world, and how the complexity and intelligence of the system might develop from a population of
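
    The prediction-error scheme described in this abstract can be illustrated numerically. The toy model below is our own hedged sketch, not the author's implementation: the weights, learning rate eta, and the reward signal are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy neuron: membrane voltage signals the difference between current
# (excitatory) information and prior information -- a prediction error.
n_inputs = 4
w_excit = rng.uniform(0.1, 0.5, n_inputs)  # current-information weights
w_prior = rng.uniform(0.1, 0.5, n_inputs)  # prior-information weights
eta = 0.05                                 # learning rate (hypothetical)

for step in range(5000):
    x = rng.random(n_inputs)               # presynaptic activity
    error = (w_excit - w_prior) @ x        # "surprise" signal
    reward = x[0]                          # pretend input 0 carries reward
    # Hebbian rule on excitatory inputs: strengthen reward-correlated,
    # still-unpredicted inputs.
    w_excit += eta * error * reward * x
    # Anti-Hebbian rule on prior inputs: learn to cancel predictable excitation.
    w_prior += eta * error * x

# After learning, the prior cancels the excitation and surprise vanishes.
print(abs(error) < 0.01)
```

    The anti-Hebbian update drives the prior weights toward the excitatory ones, so the prediction error decays toward zero, which is the qualitative behavior the theory requires.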

  9. Development of a computational environment for the General Curvilinear Ocean Model

    International Nuclear Information System (INIS)

    Thomas, Mary P; Castillo, Jose E

    2009-01-01

    The General Curvilinear Ocean Model (GCOM) differs significantly from the traditional approach, where the use of Cartesian coordinates forces the model to simulate terrain as a series of steps. GCOM utilizes a full three-dimensional curvilinear transformation, which has been shown to have greater accuracy than similar models and to achieve results more efficiently. The GCOM model has been validated for several types of water bodies, different coastlines and bottom shapes, including the Alarcon Seamount, Southern California Coastal Region, the Valencia Lake in Venezuela, and more recently the Monterey Bay. In this paper, enhancements to the GCOM model and an overview of the computational environment (GCOM-CE) are presented. Model improvements include migration from F77 to F90, a move towards a component design, and initial steps towards parallelization of the model. Through the use of the component design, new models are being incorporated, including biogeochemical, pollution, and sediment transport models. The computational environment is designed to allow various client interactions via secure Web applications (portal, Web services, and Web 2.0 gadgets). Features include building jobs; managing and interacting with long-running jobs; managing input and output files; quick visualization of results; and publishing of Web services to be used by other systems such as larger climate models. The CE is based mainly on Python tools, including a grid-enabled Pylons Web application framework for Web services, pyWSRF (python-Web Services-Resource Framework), pyGlobus-based web services, SciPy, and Google code tools.

  10. TRIO-EF a general thermal hydraulics computer code applied to the Avlis process

    International Nuclear Information System (INIS)

    Magnaud, J.P.; Claveau, M.; Coulon, N.; Yala, P.; Guilbaud, D.; Mejane, A.

    1993-01-01

    TRIO-EF is a general-purpose Fluid Mechanics 3D Finite Element Code. The system capabilities cover areas such as steady state or transient, laminar or turbulent, isothermal or temperature dependent fluid flows; it is applicable to the study of coupled thermo-fluid problems involving heat conduction and possibly radiative heat transfer. It has been used to study the thermal behaviour of the AVLIS process separation module. In this process, a linear electron beam impinges on the free surface of a uranium ingot, generating a two-dimensional curtain emission of vapour from a water-cooled crucible. The energy transferred to the metal causes its partial melting, forming a pool where strong convective motion increases heat transfer towards the crucible. In the upper part of the Separation Module, the internal structures are devoted to two main functions: vapour containment and reflux, and irradiation and physical separation. They are subjected to very high temperature levels and heat transfer occurs mainly by radiation. Moreover, special attention has to be paid to electron backscattering. These two major points have been simulated numerically with TRIO-EF and the paper presents and comments on the results of such a computation for each of them. After a brief overview of the computer code, two examples of the TRIO-EF capabilities are given: a crucible thermal hydraulics model, and a thermal analysis of the internal structures.

  11. Generalized state spaces and nonlocality in fault-tolerant quantum-computing schemes

    International Nuclear Information System (INIS)

    Ratanje, N.; Virmani, S.

    2011-01-01

    We develop connections between generalized notions of entanglement and quantum computational devices where the measurements available are restricted, either because they are noisy and/or because by design they are only along Pauli directions. By considering restricted measurements one can (by considering the dual positive operators) construct single-particle-state spaces that are different to the usual quantum-state space. This leads to a modified notion of entanglement that can be very different to the quantum version (for example, Bell states can become separable). We use this approach to develop alternative methods of classical simulation that have strong connections to the study of nonlocal correlations: we construct noisy quantum computers that admit operations outside the Clifford set and can generate some forms of multiparty quantum entanglement, but are otherwise classical in that they can be efficiently simulated classically and cannot generate nonlocal statistics. Although the approach provides new regimes of noisy quantum evolution that can be efficiently simulated classically, it does not appear to lead to significant reductions of existing upper bounds to fault tolerance thresholds for common noise models.

  12. Computable General Equilibrium Model Fiscal Year 2013 Capability Development Report - April 2014

    Energy Technology Data Exchange (ETDEWEB)

    Edwards, Brian Keith [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). National Infrastructure Simulation and Analysis Center (NISAC); Rivera, Michael K. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). National Infrastructure Simulation and Analysis Center (NISAC); Boero, Riccardo [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). National Infrastructure Simulation and Analysis Center (NISAC)

    2014-04-01

    This report documents progress made on continued development of the National Infrastructure Simulation and Analysis Center (NISAC) Computable General Equilibrium Model (NCGEM), developed in fiscal year 2012. In fiscal year 2013, NISAC refined the treatment of the labor market and performed tests with the model to examine the properties of the solutions it computes. To examine these, developers conducted a series of 20 simulations for 20 U.S. States. Each of these simulations compared an economic baseline simulation with an alternative simulation that assumed a 20-percent reduction in overall factor productivity in the manufacturing industries of each State. Differences in the simulation results between the baseline and alternative simulations capture the economic impact of the reduction in factor productivity. While not every State is affected in precisely the same way, the reduction in manufacturing industry productivity negatively affects the manufacturing industries in each State to an extent proportional to the reduction in overall factor productivity. Moreover, overall economic activity decreases when manufacturing sector productivity is reduced. Developers ran two additional simulations: (1) a version of the model for the State of Michigan, with manufacturing divided into two sub-industries (automobile and other vehicle manufacturing as one sub-industry and the rest of manufacturing as the other sub-industry); and (2) a version of the model for the United States, divided into 30 industries. NISAC conducted these simulations to illustrate the flexibility of industry definitions in NCGEM and to examine the simulation properties of the model in more detail.

  13. Computing OpenSURF on OpenCL and General Purpose GPU

    Directory of Open Access Journals (Sweden)

    Wanglong Yan

    2013-10-01

    Full Text Available Speeded-Up Robust Feature (SURF) algorithm is widely used for image feature detecting and matching in the computer vision area. Open Computing Language (OpenCL) is a framework for writing programs that execute across heterogeneous platforms consisting of CPUs, GPUs, and other processors. This paper introduces how to implement an open-sourced SURF program, namely OpenSURF, on a general purpose GPU with OpenCL, and discusses the optimizations in terms of the thread architectures and memory models in detail. Our final OpenCL implementation of OpenSURF is on average 37% and 64% faster than the OpenCV SURF v2.4.5 CUDA implementation on NVidia's GTX660 and GTX460SE GPUs, respectively. Our OpenCL program achieved real-time performance (>25 Frames Per Second) for almost all the input images with different sizes from 320*240 to 1024*768 on NVidia's GTX660 GPU, NVidia's GTX460SE GPU and AMD's Radeon HD 6850 GPU. Our OpenCL approach on NVidia's GTX660 GPU is more than 22.8 times faster than its original CPU version on Intel's Dual-Core E5400 2.7G on average.

  14. Energy, economy and equity interactions in a CGE [Computable General Equilibrium] model for Pakistan

    International Nuclear Information System (INIS)

    Naqvi, Farzana

    1997-01-01

    In the last three decades, Computable General Equilibrium modelling has emerged as an established field of applied economics. This book presents a CGE model developed for Pakistan with the hope that it will lay down a foundation for application of general equilibrium modelling to policy formation in Pakistan. As the country is being driven swiftly to become an open market economy, it becomes vital to find out which policy measures can foster the objectives of economic planning, such as social equity, with the minimum loss of the efficiency gains from open market resource allocations. It is not possible to build a model for practical use that can do justice to all sectors of the economy in modelling their peculiar features. The CGE model developed in this book focuses on the energy sector. Energy is considered one of the basic needs and an essential input to economic growth. Hence, energy policy has multiple criteria to meet. In this book, a case study has been carried out to analyse energy pricing policy in Pakistan using this CGE model of energy, economy and equity interactions. Hence, the book also demonstrates how researchers can model the fine details of one sector given the core structure of a CGE model. (UK)

  15. Introductory Molecular Orbital Theory: An Honors General Chemistry Computational Lab as Implemented Using Three-Dimensional Modeling Software

    Science.gov (United States)

    Ruddick, Kristie R.; Parrill, Abby L.; Petersen, Richard L.

    2012-01-01

    In this study, a computational molecular orbital theory experiment was implemented in a first-semester honors general chemistry course. Students used the GAMESS (General Atomic and Molecular Electronic Structure System) quantum mechanical software (as implemented in ChemBio3D) to optimize the geometry for various small molecules. Extended Huckel…

  16. Generalization of Spatial Channel Theory to Three-Dimensional x-y-z Transport Computations

    International Nuclear Information System (INIS)

    Abu-Shumays, I. K.; Hunter, M. A.; Martz, R. L.; Risner, J. M.

    2002-01-01

    Spatial channel theory, initially introduced in 1977 by M. L. Williams and colleagues at ORNL, is a powerful tool for shield design optimization. It focuses on the so-called "contributon" flux and current of particles (a fraction of the total of neutrons, photons, etc.) which contribute directly or through their progeny to a pre-specified response, such as a detector reading, dose rate, reaction rate, etc., at certain locations of interest. Particles that do not contribute directly or indirectly to the pre-specified response, such as particles that are absorbed or leak out, are ignored. Contributon fluxes and currents are computed based on combined forward and adjoint transport solutions. The initial concepts were considerably improved by Abu-Shumays, Selva, and Shure by introducing stream functions and response flow functions. Plots of such functions provide both qualitative and quantitative information on dominant particle flow paths and identify locations within a shield configuration that are important in contributing to the response of interest. Previous work was restricted to two-dimensional (2-D) x-y rectangular and r-z cylindrical geometries. This paper generalizes previous work to three-dimensional x-y-z geometry, since it is now practical to solve realistic 3-D problems with multidimensional transport programs. As in previous work, new analytic expressions are provided for folding spherical harmonics representations of forward and adjoint transport flux solutions. As a result, the main integrals involved in spatial channel theory are computed exactly and more efficiently than by numerical quadrature. The analogy with incompressible fluid flow is also applied to obtain visual qualitative and quantitative measures of important streaming paths that could prove vital for shield design optimization. Illustrative examples are provided. The connection between the current paper and the excellent work completed by M. L. Williams in 1991 is also discussed
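
    The central quantity of spatial channel theory can be illustrated with a toy computation. The sketch below is illustrative only: the flux profiles are made-up exponentials, not solutions of a transport equation.

```python
import numpy as np

# The "contributon" flux is the product of the forward flux (particles from
# the source) and the adjoint flux (the importance of a particle for the
# detector response). Toy 1-D slab with exponential attenuation profiles.
x = np.linspace(0.0, 10.0, 101)
phi_fwd = np.exp(-0.3 * x)           # forward flux, source at x = 0
phi_adj = np.exp(-0.3 * (10.0 - x))  # adjoint flux, detector at x = 10
contributon = phi_fwd * phi_adj

# For these profiles the product is constant along the slab: the contributon
# density is conserved along the source-to-detector channel, which is why
# plots of it reveal the dominant streaming paths.
print(float(contributon.max() - contributon.min()) < 1e-12)
```

    In a real shield calculation the forward and adjoint fluxes come from separate transport solutions, and the spatial variation of their product identifies the channels that dominate the response.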

  17. VHS-tape system for general purpose computer. For next generation mass storage system

    International Nuclear Information System (INIS)

    Ukai, K.; Takano, M.; Shinohara, M.; Niki, K.; Suzuki, Y.; Hamada, T.; Ogawa, M.

    1994-07-01

    Mass storage is one of the key technologies for next-generation computer systems. A huge amount of data is produced in the field of particle and nuclear physics: raw experimental data, analysis data, Monte Carlo simulation data, etc. We searched for a storage device for these data from the point of view of capacity, price, size, transfer speed, etc., and selected the VHS tape (12.7 mm tape, helical scan) from among many storage devices. Characteristics of the VHS tape are as follows: capacity of 14.5 GB, size of 460 cm³, price of 1,000 yen (S-VHS tape for video use), and a transfer speed of 1.996 MB/sec in sustained mode. Last year, we succeeded in operating the VHS-tape system on a workstation as an I/O device with a read/write speed of 1.5 MB/sec. We have tested a VHS-tape system by connecting it to the channel of the general purpose computer (Fujitsu M-780/10S) in our institute, and obtained read and write speeds of 1.07 MB/sec and 1.72 MB/sec, respectively, with FORTRAN test programs. Read speeds of an open-reel tape and a 3480-type cassette tape measured with the same test programs are 1.13 MB/sec and 2.54 MB/sec, respectively; write speeds are 1.09 MB/sec and 2.54 MB/sec, respectively. Starting the VHS tape for read/write operations takes about 60 seconds. (author)
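
    A quick back-of-the-envelope check of the figures quoted in this record (the arithmetic is ours; GB is taken as 1024 MB):

```python
# Time to fill one 14.5 GB VHS tape at the sustained rate of 1.996 MB/sec.
capacity_mb = 14.5 * 1024            # 14.5 GB expressed in MB
rate_mb_s = 1.996                    # sustained transfer rate from the record
fill_time_h = capacity_mb / rate_mb_s / 3600.0
print(round(fill_time_h, 1))         # about 2 hours per full tape
```

    Writing a full tape at the sustained rate thus takes roughly two hours, which is consistent with a video-cassette medium whose play time is of that order.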

  18. Utilizing General Purpose Graphics Processing Units to Improve Performance of Computer Modelling and Visualization

    Science.gov (United States)

    Monk, J.; Zhu, Y.; Koons, P. O.; Segee, B. E.

    2009-12-01

    With the introduction of the G8X series of cards, nVidia released an architecture called CUDA; virtually all subsequent video cards have had CUDA support. With this new architecture nVidia provided extensions for C/C++ that create an Application Programming Interface (API) allowing code to be executed on the GPU. Since then the concept of GPGPU (general-purpose graphics processing unit) computing has been growing: the GPU is very good at algebra and at running things in parallel, so that power can be put to use for other applications. This is highly appealing in the area of geodynamic modeling, as multiple parallel solutions of the same differential equations at different points in space lead to a large speedup in simulation speed. Another benefit of CUDA is a programmatic method of transferring large amounts of data between the computer's main memory and the dedicated GPU memory located on the video card. In addition to being able to compute and render on the video card, the CUDA framework allows for a large speedup in situations, such as a tiled display wall, where the rendered pixels are to be displayed in a different location than where they are rendered. A CUDA extension for VirtualGL was developed, allowing faster readback at high resolutions. This paper examines several aspects of rendering OpenGL graphics on large displays using VirtualGL and VNC, and demonstrates how performance can be significantly improved when rendering on a tiled monitor wall. We present a CUDA-enhanced version of VirtualGL as well as the advantages of having multiple VNC servers, and discuss restrictions caused by readback and blitting rates and how they are affected by different sizes of virtual displays being rendered.

  19. A Dataset and a Technique for Generalized Nuclear Segmentation for Computational Pathology.

    Science.gov (United States)

    Kumar, Neeraj; Verma, Ruchika; Sharma, Sanuj; Bhargava, Surabhi; Vahadane, Abhishek; Sethi, Amit

    2017-07-01

    Nuclear segmentation in digital microscopic tissue images can enable extraction of high-quality features for nuclear morphometrics and other analysis in computational pathology. Conventional image processing techniques, such as Otsu thresholding and watershed segmentation, do not work effectively on challenging cases, such as chromatin-sparse and crowded nuclei. In contrast, machine learning-based segmentation can generalize across various nuclear appearances. However, training machine learning algorithms requires data sets of images, in which a vast number of nuclei have been annotated. Publicly accessible and annotated data sets, along with widely agreed upon metrics to compare techniques, have catalyzed tremendous innovation and progress on other image classification problems, particularly in object recognition. Inspired by their success, we introduce a large publicly accessible data set of hematoxylin and eosin (H&E)-stained tissue images with more than 21000 painstakingly annotated nuclear boundaries, whose quality was validated by a medical doctor. Because our data set is taken from multiple hospitals and includes a diversity of nuclear appearances from several patients, disease states, and organs, techniques trained on it are likely to generalize well and work right out-of-the-box on other H&E-stained images. We also propose a new metric to evaluate nuclear segmentation results that penalizes object- and pixel-level errors in a unified manner, unlike previous metrics that penalize only one type of error. We also propose a segmentation technique based on deep learning that lays a special emphasis on identifying the nuclear boundaries, including those between the touching or overlapping nuclei, and works well on a diverse set of test images.

  20. The mineral sector and economic development in Ghana: A computable general equilibrium analysis

    Science.gov (United States)

    Addy, Samuel N.

    A computable general equilibrium (CGE) model is formulated for conducting mineral policy analysis in the context of national economic development for Ghana. The model, called GHANAMIN, places strong emphasis on production, trade, and investment. It can be used to examine both micro and macro economic impacts of policies associated with mineral investment, taxation, and terms of trade changes, as well as mineral sector performance impacts due to technological change or the discovery of new deposits. Its economywide structure enables the study of broader development policy with a focus on individual or multiple sectors, simultaneously. After going through a period of contraction for about two decades, mining in Ghana has rebounded significantly and is currently the main foreign exchange earner. Gold alone contributed 44.7 percent of 1994 total export earnings. GHANAMIN is used to investigate the economywide impacts of mineral tax policies, world market mineral prices changes, mining investment, and increased mineral exports. It is also used for identifying key sectors for economic development. Various simulations were undertaken with the following results: Recently implemented mineral tax policies are welfare increasing, but have an accompanying decrease in the output of other export sectors. World mineral price rises stimulate an increase in real GDP; however, this increase is less than real GDP decreases associated with price declines. Investment in the non-gold mining sector increases real GDP more than investment in gold mining, because of the former's stronger linkages to the rest of the economy. Increased mineral exports are very beneficial to the overall economy. Foreign direct investment (FDI) in mining increases welfare more so than domestic capital, which is very limited. Mining investment and the increased mineral exports since 1986 have contributed significantly to the country's economic recovery, with gold mining accounting for 95 percent of the

  1. Development of a global computable general equilibrium model coupled with detailed energy end-use technology

    International Nuclear Information System (INIS)

    Fujimori, Shinichiro; Masui, Toshihiko; Matsuoka, Yuzuru

    2014-01-01

    Highlights: • Detailed energy end-use technology information is considered within a CGE model. • Aggregated macro results of the detailed model are similar to the traditional model. • The detailed model shows unique characteristics in the household sector. - Abstract: A global computable general equilibrium (CGE) model integrating detailed energy end-use technologies is developed in this paper. The paper (1) presents how energy end-use technologies are treated within the model and (2) analyzes the characteristics of the model’s behavior. Energy service demand and end-use technologies are explicitly considered, and the share of technologies is determined by a discrete probabilistic function, namely a Logit function, to meet the energy service demand. Coupling with detailed technology information enables the CGE model to have a more realistic representation of energy consumption. The model proposed in this paper is compared with the aggregated traditional model under the same assumptions, in scenarios with and without mitigation roughly consistent with the two-degree climate mitigation target. Although the results of aggregated energy supply and greenhouse gas emissions are similar, there are three main differences between the aggregated and the detailed technology models. First, GDP losses in mitigation scenarios are lower in the detailed technology model (2.8% in 2050) as compared with the aggregated model (3.2%). Second, price elasticity and autonomous energy efficiency improvement are heterogeneous across regions and sectors in the detailed technology model, whereas the traditional aggregated model generally utilizes a single value for each of these variables. Third, the magnitude of emissions reduction and factors (energy intensity and carbon factor reduction) related to climate mitigation also varies among sectors in the detailed technology model. The household sector in the detailed technology model has a relatively higher reduction for both energy
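
    The discrete probabilistic (Logit) share function named in this abstract has a standard closed form. The sketch below uses illustrative technology costs and a hypothetical spread parameter lam, not values from the paper.

```python
import math

def logit_shares(costs, lam):
    """Logit market shares: cheaper technologies win a larger, but never
    total, share of the energy service demand."""
    weights = [math.exp(-lam * c) for c in costs]
    total = sum(weights)
    return [w / total for w in weights]

# Three hypothetical end-use technologies competing on unit cost.
shares = logit_shares([1.0, 1.2, 1.5], lam=4.0)
print([round(s, 3) for s in shares])  # shares sum to one; cheapest dominates
```

    Because every technology retains a nonzero share, the Logit form avoids the winner-takes-all behavior of a pure least-cost choice, which is what makes it attractive for representing heterogeneous end-use markets.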

  2. Computable general equilibrium modelling in the context of trade and environmental policy

    Energy Technology Data Exchange (ETDEWEB)

    Koesler, Simon Tobias

    2014-10-14

    This thesis is dedicated to the evaluation of environmental policies in the context of climate change. Its objectives are twofold. Its first part is devoted to the development of potent instruments for quantitative impact analysis of environmental policy. In this context, the main contributions include the development of a new computable general equilibrium (CGE) model which makes use of the new comprehensive and coherent World Input-Output Dataset (WIOD) and which features a detailed representation of bilateral and bisectoral trade flows. Moreover it features an investigation of input substitutability to provide modellers with adequate estimates for key elasticities as well as a discussion and amelioration of the standard base year calibration procedure of most CGE models. Building on these tools, the second part applies the improved modelling framework and studies the economic implications of environmental policy. This includes an analysis of so called rebound effects, which are triggered by energy efficiency improvements and reduce their net benefit, an investigation of how firms restructure their production processes in the presence of carbon pricing mechanisms, and an analysis of a regional maritime emission trading scheme as one of the possible options to reduce emissions of international shipping in the EU context.

  3. Energy from sugarcane bagasse under electricity rationing in Brazil: a computable general equilibrium model

    International Nuclear Information System (INIS)

    Scaramucci, Jose A.; Perin, Clovis; Pulino, Petronio; Bordoni, Orlando F.J.G.; Cunha, Marcelo P. da; Cortez, Luis A.B.

    2006-01-01

    In the midst of the institutional reforms of the Brazilian electric sector initiated in the 1990s, a serious electricity shortage crisis developed in 2001. As an alternative to blackouts, the government instituted an emergency plan aimed at reducing electricity consumption. From June 2001 to February 2002, Brazilians were compelled to curtail electricity use by 20%. Since the late 1990s, but especially after the electricity crisis, energy policy in Brazil has been directed towards increasing thermoelectricity supply and promoting further gains in energy conservation. Two main issues are addressed here. Firstly, we estimate the economic impacts of constraining the supply of electric energy in Brazil. Secondly, we investigate the possible penetration of electricity generated from sugarcane bagasse. A computable general equilibrium (CGE) model is used. The traditional sector of electricity and the remainder of the economy are characterized by a stylized top-down representation as nested CES (constant elasticity of substitution) production functions. The electricity production from sugarcane bagasse is described through a bottom-up activity analysis, with a detailed representation of the required inputs based on engineering studies. The model constructed is used to study the effects of the electricity shortage in the preexisting sector through prices, production and income changes. It is shown that installing capacity to generate electricity surpluses by the sugarcane agroindustrial system could ease the economic impacts of an electric energy shortage crisis on the gross domestic product (GDP).
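
    The nested CES production functions mentioned in this abstract follow a standard form. This single-nest sketch uses illustrative share parameters and elasticity, not calibrated values from the study.

```python
def ces(inputs, shares, sigma):
    """CES aggregate with elasticity of substitution sigma (sigma != 1;
    sigma -> 1 is the Cobb-Douglas limit)."""
    rho = (sigma - 1.0) / sigma
    return sum(a * x ** rho for a, x in zip(shares, inputs)) ** (1.0 / rho)

# Toy electricity "bundle": traditional generation plus bagasse cogeneration.
output = ces(inputs=[100.0, 20.0], shares=[0.8, 0.2], sigma=2.0)
print(round(output, 2))
```

    In a full CGE model such nests are stacked (e.g. an energy bundle inside a capital-energy bundle inside value added), with the elasticity at each level controlling how easily the inputs substitute for one another.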

  4. Computable general equilibrium models for sustainability impact assessment: Status quo and prospects

    International Nuclear Information System (INIS)

    Boehringer, Christoph; Loeschel, Andreas

    2006-01-01

    Sustainability Impact Assessment (SIA) of economic, environmental, and social effects triggered by governmental policies has become a central requirement for policy design. The three dimensions of SIA are inherently intertwined and subject to trade-offs. Quantification of trade-offs for policy decision support requires numerical models in order to assess systematically the interference of complex interacting forces that affect economic performance, environmental quality, and social conditions. This paper investigates the use of computable general equilibrium (CGE) models for measuring the impacts of policy interference on policy-relevant economic, environmental, and social (institutional) indicators. We find that operational CGE models used for energy-economy-environment (E3) analyses have a good coverage of central economic indicators. Environmental indicators such as energy-related emissions with direct links to economic activities are widely covered, whereas indicators with complex natural science background such as water stress or biodiversity loss are hardly represented. Social indicators stand out for very weak coverage, mainly because they are vaguely defined or incommensurable. Our analysis identifies prospects for future modeling in the field of integrated assessment that link standard E3-CGE models to theme-specific complementary models with environmental and social focus. (author)

  5. Discharge Fee Policy Analysis: A Computable General Equilibrium (CGE) Model of Water Resources and Water Environments

    Directory of Open Access Journals (Sweden)

    Guohua Fang

    2016-09-01

    To alleviate increasingly serious water pollution and shortages in developing countries, various kinds of policies have been implemented by local governments. It is vital to quantify and evaluate the performance and potential economic impacts of these policies. This study develops a Computable General Equilibrium (CGE) model to simulate the regional economic and environmental effects of discharge fees. Firstly, water resources and water environment factors are separated from the input and output sources of the National Economic Production Department. Secondly, an extended Social Accounting Matrix (SAM) of Jiangsu province is developed to simulate various scenarios. By changing the values of the discharge fees (increased by 50%, 100% and 150%), three scenarios are simulated to examine their influence on the overall economy and on each industry. The simulation results show that an increased fee will have a negative impact on Gross Domestic Product (GDP). However, waste water may be effectively controlled. This study also demonstrates that, along with the economic costs, the increase of the discharge fee will lead to the upgrading of industrial structures from a situation of heavy pollution to one of light pollution, which is beneficial to the sustainable development of the economy and the protection of the environment.

  6. Computable general equilibrium modelling in the context of trade and environmental policy

    International Nuclear Information System (INIS)

    Koesler, Simon Tobias

    2014-01-01

    This thesis is dedicated to the evaluation of environmental policies in the context of climate change. Its objectives are twofold. Its first part is devoted to the development of potent instruments for quantitative impact analysis of environmental policy. In this context, the main contributions include the development of a new computable general equilibrium (CGE) model which makes use of the new comprehensive and coherent World Input-Output Dataset (WIOD) and which features a detailed representation of bilateral and bisectoral trade flows. Moreover it features an investigation of input substitutability to provide modellers with adequate estimates for key elasticities as well as a discussion and amelioration of the standard base year calibration procedure of most CGE models. Building on these tools, the second part applies the improved modelling framework and studies the economic implications of environmental policy. This includes an analysis of so called rebound effects, which are triggered by energy efficiency improvements and reduce their net benefit, an investigation of how firms restructure their production processes in the presence of carbon pricing mechanisms, and an analysis of a regional maritime emission trading scheme as one of the possible options to reduce emissions of international shipping in the EU context.

  7. Essays on environmental policy analysis: Computable general equilibrium approaches applied to Sweden

    International Nuclear Information System (INIS)

    Hill, M.

    2001-01-01

    This thesis consists of three essays within the field of applied environmental economics, with the common basic aim of analyzing effects of Swedish environmental policy. Starting out from Swedish environmental goals, the thesis assesses a range of policy-related questions. The objective is to quantify policy outcomes by constructing and applying numerical models especially designed for environmental policy analysis. Static and dynamic multi-sectoral computable general equilibrium models are developed in order to analyze the following issues: (i) the costs and benefits of a domestic carbon dioxide (CO2) tax reform, with special attention to how these costs and benefits depend on the structure of the tax system and on policy-induced changes in 'secondary' pollutants; (ii) the effects of allowing for emission permit trading through time when the long-term domestic environmental goal is specified in CO2 stock terms; and (iii) the effects on long-term projected economic growth and welfare that are due to damages from emission flow and accumulation of 'local' pollutants (nitrogen oxides and sulfur dioxide), as well as the outcome of environmental policy when costs and benefits are considered in an integrated environmental-economic framework.

  8. Transition towards a low carbon economy: A computable general equilibrium analysis for Poland

    International Nuclear Information System (INIS)

    Böhringer, Christoph; Rutherford, Thomas F.

    2013-01-01

    In the transition to sustainable economic structures the European Union assumes a leading role with its climate and energy package, which sets ambitious greenhouse gas emission reduction targets by 2020. Among EU Member States, Poland, with its heavy energy system reliance on coal, is particularly worried about the pending trade-offs between emission regulation and economic growth. In our computable general equilibrium analysis of the EU climate and energy package we show that economic adjustment costs for Poland hinge crucially on restrictions to where-flexibility of emission abatement, revenue recycling, and technological options in the power system. We conclude that more comprehensive flexibility provisions at the EU level and a diligent policy implementation at the national level could achieve the transition towards a low carbon economy at little cost, thereby broadening societal support. - Highlights: ► Economic impact assessment of the EU climate and energy package for Poland. ► Sensitivity analysis on where-flexibility, revenue recycling and technology choice. ► Application of a hybrid bottom-up, top-down CGE model.

  9. Spectroscopic classification of transients

    DEFF Research Database (Denmark)

    Stritzinger, M. D.; Fraser, M.; Hummelmose, N. N.

    2017-01-01

    We report the spectroscopic classification of several transients based on observations taken with the Nordic Optical Telescope (NOT) equipped with ALFOSC, over the nights 23-25 August 2017.

  10. Online retrieval of patient information by asynchronous communication between general purpose computer and stand-alone personal computer

    International Nuclear Information System (INIS)

    Tsutsumi, Reiko; Takahashi, Kazuei; Sato, Toshiko; Komatani, Akio; Yamaguchi, Koichi

    1988-01-01

    Asynchronous communication was established between a host computer (FACOM M-340) and a personal computer (OLIBETTIE S-2250) to retrieve the patient information required for RIA test registration. The retrieval system consists of a keyboard input of six numeric codes (the patient's ID) and a real-time reply containing six parameters for the patient: name, sex, date of birth (including area), department, and out- or inpatient status. By linking this program to the RIA registration program for an individual patient, the operator can then input the name of the requested RIA test. This simple retrieval program created a useful data network between different types of host and stand-alone personal computers, and enabled accurate and labor-saving registration for RIA tests. (author)

  11. CO2, energy and economy interactions: A multisectoral, dynamic, computable general equilibrium model for Korea

    Science.gov (United States)

    Kang, Yoonyoung

    While vast resources have been invested in the development of computational models for cost-benefit analysis for the "whole world" or for the largest economies (e.g. United States, Japan, Germany), the remainder have been thrown together into one model for the "rest of the world." This study presents a multi-sectoral, dynamic, computable general equilibrium (CGE) model for Korea and evaluates the impacts of controlling CO2 emissions. This CGE economy-energy-environment model analyzes and quantifies the interactions between CO2, energy and the economy. The study examines the interactions and influences of key environmental policy components: applied economic instruments, emission targets, and environmental tax revenue recycling methods. The most cost-effective economic instrument is the carbon tax. The economic effects discussed include impacts on the main macroeconomic variables (in particular, economic growth), sectoral production, and the energy market. The study also considers several aspects of various CO2 control policies through basic variables in the economy: capital stock and net foreign debt. The results indicate that emissions might be stabilized in Korea at the expense of economic growth and with dramatic sectoral allocation effects. Carbon dioxide emissions stabilization could be achieved at the cost of a 600 trillion won loss over a 20 year period (1990-2010). The average annual real GDP would decrease by 2.10% over the simulation period, compared to the 5.87% increase in the Business-as-Usual scenario. This model satisfies an immediate need for a policy simulation model for Korea and provides a basic framework for similar economies. It is critical to keep the central economic question at the forefront of any discussion regarding environmental protection: how much will reform cost, and what does the economy stand to gain and lose? Without such a model, policy makers might resort to hesitation or even blind speculation.
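
    Losses in such simulations come from differencing a policy path against the Business-as-Usual path. A minimal sketch of that accounting, reading the abstract's 2.10% as an average annual decline under the control policy and using a hypothetical base-year GDP index (both are assumptions for illustration, not model output):

```python
# Business-as-usual vs. policy growth paths over 1990-2010; the cumulative
# output gap is the sum of annual differences between the two paths.
base_gdp = 100.0                    # hypothetical GDP index, 1990 = 100
g_bau, g_policy = 0.0587, -0.0210   # annual growth rates read from the abstract
years = 20

bau = [base_gdp * (1 + g_bau) ** t for t in range(years + 1)]
policy = [base_gdp * (1 + g_policy) ** t for t in range(years + 1)]
gap = sum(b - p for b, p in zip(bau, policy))
print(f"cumulative output gap: {gap:.1f} index points")
```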

  12. Ideal observer estimation and generalized ROC analysis for computer-aided diagnosis

    International Nuclear Information System (INIS)

    Edwards, Darrin C.

    2004-01-01

    The research presented in this dissertation represents an innovative application of computer-aided diagnosis and signal detection theory to the specific task of early detection of breast cancer in the context of screening mammography. A number of automated schemes have been developed in our laboratory to detect masses and clustered microcalcifications in digitized mammograms, on the one hand, and to classify known lesions as malignant or benign, on the other. The development of fully automated classification schemes is difficult, because the output of a detection scheme will contain false-positive detections in addition to detected malignant and benign lesions, resulting in a three-class classification task. Researchers have so far been unable to extend successful tools for analyzing two-class classification tasks, such as receiver operating characteristic (ROC) analysis, to three-class classification tasks. The goals of our research were to use Bayesian artificial neural networks to estimate ideal observer decision variables to both detect and classify clustered microcalcifications and mass lesions in mammograms, and to derive substantial theoretical results indicating potential avenues of approach toward the three-class classification task. Specifically, we have shown that an ideal observer in an N-class classification task achieves an optimal ROC hypersurface, just as the two-class ideal observer achieves an optimal ROC curve; and that an obvious generalization of a well-known two-class performance metric, the area under the ROC curve, is not useful as a performance metric in classification tasks with more than two classes. This work is significant for three reasons. First, it involves the explicit estimation of feature-based (as opposed to image-based) ideal observer decision variables in the tasks of detecting and classifying mammographic lesions. Second, it directly addresses the three-class classification task of distinguishing malignant lesions, benign
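
    The two-class performance metric discussed above, the area under the ROC curve, is equal to the probability that a randomly chosen positive case scores higher than a randomly chosen negative one (the normalized Mann-Whitney U statistic). A small illustration with made-up decision-variable scores:

```python
def auc(pos_scores, neg_scores):
    """Two-class AUC as the Mann-Whitney probability that a positive
    case outscores a negative case; ties count as half a win."""
    pairs = [(p, n) for p in pos_scores for n in neg_scores]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0 for p, n in pairs)
    return wins / len(pairs)

# Hypothetical classifier outputs for diseased vs. non-diseased cases.
print(auc([0.9, 0.8, 0.6], [0.7, 0.4, 0.2]))  # 8 of 9 pairs correctly ordered
```

    In the three-class setting no single scalar plays this role, which is exactly the difficulty the dissertation addresses with ROC hypersurfaces.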

  13. A parallelization study of the general purpose Monte Carlo code MCNP4 on a distributed memory highly parallel computer

    International Nuclear Information System (INIS)

    Yamazaki, Takao; Fujisaki, Masahide; Okuda, Motoi; Takano, Makoto; Masukawa, Fumihiro; Naito, Yoshitaka

    1993-01-01

    The general purpose Monte Carlo code MCNP4 has been implemented on the Fujitsu AP1000 distributed-memory highly parallel computer. Parallelization techniques developed and studied are reported. A shielding analysis function of the MCNP4 code is parallelized in this study. A technique that dynamically maps each history to a processor and maps the control process to a dedicated processor was applied. The efficiency of the parallelized code is up to 80% for a typical practical problem with 512 processors. These results demonstrate the advantages of a highly parallel computer over conventional computers in the field of shielding analysis by the Monte Carlo method. (orig.)
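
    Parallel efficiency as reported here is speedup divided by processor count. A trivial sketch of that bookkeeping, with hypothetical timings chosen so the numbers reproduce the 80% figure quoted for 512 processors:

```python
def parallel_efficiency(t_serial, t_parallel, n_proc):
    """Speedup and efficiency of a parallel run relative to one processor."""
    speedup = t_serial / t_parallel
    return speedup, speedup / n_proc

# Made-up timings: a Monte Carlo run taking 4096 s on one processor
# and 10 s on 512 processors.
speedup, eff = parallel_efficiency(4096.0, 10.0, 512)
print(f"speedup {speedup:.1f}x, efficiency {eff:.0%}")
```

    Monte Carlo particle transport parallelizes well because histories are independent; the residual inefficiency comes from the control process, load imbalance and communication.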

  14. Quantum computational studies, spectroscopic (FT-IR, FT-Raman and UV-Vis) profiling, natural hybrid orbital and molecular docking analysis on 2,4 Dibromoaniline

    Science.gov (United States)

    Abraham, Christina Susan; Prasana, Johanan Christian; Muthu, S.; Rizwana B, Fathima; Raja, M.

    2018-05-01

    This study investigates the molecular structure, vibrational assignments, bonding and anti-bonding nature, and the nonlinear optical, electronic and thermodynamic properties of the molecule. The research is conducted at two levels: the first employs the spectroscopic characterization techniques FT-IR, FT-Raman and UV-Vis; at the second, the experimentally obtained data are analyzed through theoretical methods using Density Functional Theory, which rests on solving the Schrodinger equation for many-body systems. A comparison between the two levels is drawn and discussed. The bioactivity of the title molecule, indicated theoretically by the electrophilicity index, motivates further property analyses. Using molecular docking techniques, the target molecule is found to fit well with a centromere-associated protein inhibitor. The larger basis set 6-311++G(d,p) is used to obtain results closer to the experimental data. The results for the organic amine 2,4-Dibromoaniline are analyzed and discussed.

  15. Computer templates in chronic disease management: ethnographic case study in general practice.

    Science.gov (United States)

    Swinglehurst, Deborah; Greenhalgh, Trisha; Roberts, Celia

    2012-01-01

    To investigate how electronic templates shape, enable and constrain consultations about chronic diseases. Ethnographic case study, combining field notes, video-recording and screen capture with a microanalysis of talk, body language and data entry - an approach called linguistic ethnography. Two general practices in England. Ethnographic observation of administrative areas and 36 nurse-led consultations was carried out. Twenty-four consultations were directly observed and 12 consultations were video-recorded alongside computer screen capture. Consultations were transcribed using conversation analysis conventions, with notes on body language and the electronic record. The analysis involved repeated rounds of viewing video, annotating field notes, transcription and microanalysis to identify themes. The data were interpreted using discourse analysis, with attention to sociotechnical theory. Consultations centred explicitly or implicitly on evidence-based protocols inscribed in templates. Templates did not simply identify tasks for completion, but contributed to defining what chronic diseases were, how care was being delivered and what it meant to be a patient or professional in this context. Patients' stories morphed into data bytes; the particular became generalised; the complex was made discrete, simple and manageable; and uncertainty became categorised and contained. Many consultations resembled bureaucratic encounters, primarily oriented to completing data fields. We identified a tension, sharpened by the template, between different framings of the patient - as 'individual' or as 'one of a population'. Some clinicians overcame this tension, responding creatively to prompts within a dialogue constructed around the patient's narrative. Despite their widespread implementation, little previous research has examined how templates are actually used in practice. Templates do not simply document the tasks of chronic disease management but profoundly change the nature of this work.

  16. A general-purpose development environment for intelligent computer-aided training systems

    Science.gov (United States)

    Savely, Robert T.

    1990-01-01

    Space station training will be a major task, requiring the creation of large numbers of simulation-based training systems for crew, flight controllers, and ground-based support personnel. Given the long duration of space station missions and the large number of activities supported by the space station, the extension of space shuttle training methods to space station training may prove to be impractical. The application of artificial intelligence technology to simulation training can provide the ability to deliver individualized training to large numbers of personnel in a distributed workstation environment. The principal objective of this project is the creation of a software development environment which can be used to build intelligent training systems for procedural tasks associated with the operation of the space station. Current NASA Johnson Space Center projects and joint projects with other NASA operational centers will result in specific training systems for existing space shuttle crew, ground support personnel, and flight controller tasks. Concurrently with the creation of these systems, a general-purpose development environment for intelligent computer-aided training systems will be built. Such an environment would permit the rapid production, delivery, and evolution of training systems for space station crew, flight controllers, and other support personnel. The widespread use of such systems will serve to preserve task and training expertise, support the training of many personnel in a distributed manner, and ensure the uniformity and verifiability of training experiences. As a result, significant reductions in training costs can be realized while safety and the probability of mission success can be enhanced.

  17. Modeling the economic costs of disasters and recovery: analysis using a dynamic computable general equilibrium model

    Science.gov (United States)

    Xie, W.; Li, N.; Wu, J.-D.; Hao, X.-L.

    2014-04-01

    Disaster damages have negative effects on the economy, whereas reconstruction investment has positive effects. The aim of this study is to model the economic costs of disasters and recovery, including the positive effects of reconstruction activities. A computable general equilibrium (CGE) model is a promising approach because it can incorporate these two kinds of shocks into a unified framework and avoid the double-counting problem. To factor both shocks into the CGE model, direct loss is set as the amount of capital stock reduced on the supply side of the economy; a portion of investment restores the capital stock in each period; an investment-driven dynamic model is formulated according to available reconstruction data; and the rest of the country's saving is set as an endogenous variable to balance fixed investment. The 2008 Wenchuan Earthquake is selected as a case study to illustrate the model, and three scenarios are constructed: S0 (no disaster occurs), S1 (disaster occurs with reconstruction investment) and S2 (disaster occurs without reconstruction investment). S0 is taken as business as usual, and the differences between S1 and S0 and between S2 and S0 can be interpreted as economic losses including and excluding reconstruction, respectively. Output from S1 is found to be closer to real data than that from S2. Economic loss under S2 is roughly 1.5 times that under S1. The gap in the economic aggregate between S1 and S0 falls to 3% at the end of government-led reconstruction activity, a level that would take another four years to reach under S2.

  18. Econometrically calibrated computable general equilibrium models: Applications to the analysis of energy and climate politics

    Science.gov (United States)

    Schu, Kathryn L.

    Economy-energy-environment models are the mainstay of economic assessments of policies to reduce carbon dioxide (CO2) emissions, yet their empirical basis is often criticized as being weak. This thesis addresses these limitations by constructing econometrically calibrated models in two policy areas. The first is a 35-sector computable general equilibrium (CGE) model of the U.S. economy which analyzes the uncertain impacts of CO2 emission abatement. Econometric modeling of sectors' nested constant elasticity of substitution (CES) cost functions based on a 45-year price-quantity dataset yields estimates of capital-labor-energy-material input substitution elasticities and biases of technical change that are incorporated into the CGE model. I use the estimated standard errors and variance-covariance matrices to construct the joint distribution of the parameters of the economy's supply side, which I sample to perform Monte Carlo baseline and counterfactual runs of the model. The resulting probabilistic abatement cost estimates highlight the importance of the uncertainty in baseline emissions growth. The second model is an equilibrium simulation of the market for new vehicles which I use to assess the response of vehicle prices, sales and mileage to CO2 taxes and increased corporate average fuel economy (CAFE) standards. I specify an econometric model of a representative consumer's vehicle preferences using a nested CES expenditure function which incorporates mileage and other characteristics in addition to prices, and develop a novel calibration algorithm to link this structure to vehicle model supplies by manufacturers engaged in Bertrand competition. CO2 taxes' effects on gasoline prices reduce vehicle sales and manufacturers' profits if vehicles' mileage is fixed, but these losses shrink once mileage can be adjusted. Accelerated CAFE standards induce manufacturers to pay fines for noncompliance rather than incur the higher costs of radical mileage improvements

  19. Towards passive brain-computer interfaces: applying brain-computer interface technology to human-machine systems in general.

    Science.gov (United States)

    Zander, Thorsten O; Kothe, Christian

    2011-04-01

    Cognitive monitoring is an approach utilizing real-time brain signal decoding (RBSD) for gaining information on the ongoing cognitive user state. In recent decades this approach has brought valuable insight into the cognition of an interacting human. Automated RBSD can be used to set up a brain-computer interface (BCI) providing a novel input modality for technical systems solely based on brain activity. In BCIs the user usually sends voluntary and directed commands to control the connected computer system or to communicate through it. In this paper we propose an extension of this approach by fusing BCI technology with cognitive monitoring, providing valuable information about the user's intentions, situational interpretations and emotional states to the technical system. We call this approach passive BCI. In the following we give an overview of studies which utilize passive BCI, as well as other novel types of applications resulting from BCI technology. We especially focus on applications for healthy users, and the specific requirements and demands of this user group. Since the presented approach of combining cognitive monitoring with BCI technology is very similar to the concept of BCIs itself we propose a unifying categorization of BCI-based applications, including the novel approach of passive BCI.

  20. Estimation of the transboundary economic impacts of the Grand Ethiopia Renaissance Dam: A Computable General Equilibrium Analysis

    NARCIS (Netherlands)

    Kahsay, T.N.; Kuik, O.J.; Brouwer, R.; van der Zaag, P.

    2015-01-01

    Employing a multi-region multi-sector computable general equilibrium (CGE) modeling framework, this study estimates the direct and indirect economic impacts of the Grand Ethiopian Renaissance Dam (GERD) on the Eastern Nile economies. The study contributes to the existing literature by evaluating the

  1. A general method for computing the total solar radiation force on complex spacecraft structures

    Science.gov (United States)

    Chan, F. K.

    1981-01-01

    The method circumvents many of the existing difficulties in computational logic presently encountered in the direct analytical or numerical evaluation of the appropriate surface integral. It may be applied to complex spacecraft structures for computing the total force arising from either specular or diffuse reflection or even from non-Lambertian reflection and re-radiation.

  2. Computer templates in chronic disease management: ethnographic case study in general practice

    Science.gov (United States)

    Swinglehurst, Deborah; Greenhalgh, Trisha; Roberts, Celia

    2012-01-01

    Objective To investigate how electronic templates shape, enable and constrain consultations about chronic diseases. Design Ethnographic case study, combining field notes, video-recording and screen capture with a microanalysis of talk, body language and data entry—an approach called linguistic ethnography. Setting Two general practices in England. Participants and methods Ethnographic observation of administrative areas and 36 nurse-led consultations was carried out. Twenty-four consultations were directly observed and 12 consultations were video-recorded alongside computer screen capture. Consultations were transcribed using conversation analysis conventions, with notes on body language and the electronic record. The analysis involved repeated rounds of viewing video, annotating field notes, transcription and microanalysis to identify themes. The data were interpreted using discourse analysis, with attention to sociotechnical theory. Results Consultations centred explicitly or implicitly on evidence-based protocols inscribed in templates. Templates did not simply identify tasks for completion, but contributed to defining what chronic diseases were, how care was being delivered and what it meant to be a patient or professional in this context. Patients’ stories morphed into data bytes; the particular became generalised; the complex was made discrete, simple and manageable; and uncertainty became categorised and contained. Many consultations resembled bureaucratic encounters, primarily oriented to completing data fields. We identified a tension, sharpened by the template, between different framings of the patient—as ‘individual’ or as ‘one of a population’. Some clinicians overcame this tension, responding creatively to prompts within a dialogue constructed around the patient's narrative. Conclusions Despite their widespread implementation, little previous research has examined how templates are actually used in practice. Templates do not simply document the

  3. A General Cross-Layer Cloud Scheduling Framework for Multiple IoT Computer Tasks.

    Science.gov (United States)

    Wu, Guanlin; Bao, Weidong; Zhu, Xiaomin; Zhang, Xiongtao

    2018-05-23

    The diversity of IoT services and applications brings enormous challenges to improving the scheduling performance of multiple computer tasks in cross-layer cloud computing systems. Unfortunately, the commonly-employed frameworks fail to adapt to the new patterns of the cross-layer cloud. To solve this issue, we design a new computer task scheduling framework for multiple IoT services in cross-layer cloud computing systems. Specifically, we first analyze the features of the cross-layer cloud and of computer tasks. Then, we design the scheduling framework based on this analysis and present detailed models to illustrate the procedures for using the framework. With the proposed framework, the IoT services deployed in cross-layer cloud computing systems can dynamically select suitable algorithms and use resources more effectively to finish computer tasks with different objectives. Finally, the algorithms are given based on the framework, and extensive experiments are given to validate its effectiveness, as well as its superiority.

  4. Probing structural homogeneity of La{sub 1-x}Gd{sub x}PO{sub 4} monazite-type solid solutions by combined spectroscopic and computational studies

    Energy Technology Data Exchange (ETDEWEB)

    Huittinen, N., E-mail: n.huittinen@hzdr.de [Helmholtz-Zentrum Dresden - Rossendorf, Institute of Resource Ecology, Bautzner Landstraße 400, 01328 Dresden (Germany); Arinicheva, Y. [Forschungszentrum Jülich GmbH, Institute of Energy and Climate Research, Nuclear Waste Management and Reactor Safety (IEK-6), 52425 Jülich (Germany); Kowalski, P.M.; Vinograd, V.L. [Forschungszentrum Jülich GmbH, Institute of Energy and Climate Research, Nuclear Waste Management and Reactor Safety (IEK-6), 52425 Jülich (Germany); JARA High-Performance Computing, Schinkelstraße 2, 52062 Aachen (Germany); Neumeier, S. [Forschungszentrum Jülich GmbH, Institute of Energy and Climate Research, Nuclear Waste Management and Reactor Safety (IEK-6), 52425 Jülich (Germany); Bosbach, D. [Forschungszentrum Jülich GmbH, Institute of Energy and Climate Research, Nuclear Waste Management and Reactor Safety (IEK-6), 52425 Jülich (Germany); JARA High-Performance Computing, Schinkelstraße 2, 52062 Aachen (Germany)

    2017-04-01

    Here we study the homogeneity of Eu{sup 3+}-doped La{sub 1-x}Gd{sub x}PO{sub 4} (x = 0, 0.11, 0.33, 0.55, 0.75, 0.92, 1) monazite-type solid solutions by a combination of Raman and time-resolved laser fluorescence spectroscopies (TRLFS) with complementary quasi-random structure-based atomistic modeling studies. For the intermediate La{sub 0.45}Gd{sub 0.55}PO{sub 4} composition we detected a significant broadening of the Raman bands corresponding to the lattice vibrations of the LnO{sub 9} polyhedron, indicating much stronger distortion of the lanthanide cation site than the PO{sub 4} tetrahedron. A distortion of the crystal lattice around the dopant site was also confirmed in our TRLFS measurements of Eu{sup 3+} doped samples, where both the half width (FWHM) of the excitation peaks and the {sup 7}F{sub 2}/{sup 7}F{sub 1} ratio derived from the emission spectra increase for intermediate solid-solution compositions. The observed variation in FWHM correlates well with the simulated distribution of Eu···O bond distances within the investigated monazites. The combined results imply that homogenous Eu{sup 3+}-doped La{sub 1-x}Gd{sub x}PO{sub 4} monazite-type solid solutions are formed over the entire composition range, which is of importance in the context of using these ceramics for immobilization of radionuclides. - Highlights: •Homogenous Eu{sup 3+}-doped La{sub 1-x}Gd{sub x}PO{sub 4} monazite-type solid solutions have been synthesized. •Solid solution formation is accompanied by slight distortion of the LnO{sub 9} polyhedron. •Raman and laser spectroscopic trends are observed within the monazite series. •Results are explained with atomistic simulations of Eu-O bond distance distribution.

  5. Supramolecular architecture of 5-bromo-7-methoxy-1-methyl-1H-benzoimidazole.3H2O: Synthesis, spectroscopic investigations, DFT computation, MD simulations and docking studies

    Science.gov (United States)

    Murthy, P. Krishna; Smitha, M.; Sheena Mary, Y.; Armaković, Stevan; Armaković, Sanja J.; Rao, R. Sreenivasa; Suchetan, P. A.; Giri, L.; Pavithran, Rani; Van Alsenoy, C.

    2017-12-01

    The crystal and molecular structure of the newly synthesized compound 5-bromo-7-methoxy-1-methyl-1H-benzoimidazole (BMMBI) has been authenticated by single-crystal X-ray diffraction and by FT-IR, FT-Raman, 1H NMR, 13C NMR and UV-Visible spectroscopic techniques; experimental results are compiled alongside theoretical ones obtained by the DFT/B3LYP/6-311++G(d,p) method at the ground state in the gas phase. The nature and type of the intermolecular interactions, and their crucial role in the supramolecular architecture, have been investigated using the graphical tools of 3D Hirshfeld surfaces and 2D fingerprint plots. The title compound is stabilized by strong intermolecular N⋯H-O and O⋯H-O hydrogen bonds, which appear as dark red spots on the dnorm-mapped surfaces, and by weak Br⋯Br contacts, which appear as a red spot on the dnorm-mapped surface. Detailed fundamental vibrational assignments of the wavenumbers were aided by potential energy distribution (PED) analysis using the GAR2PED program and show good agreement with experimental values. In addition, frontier orbital analysis, global reactivity descriptors, natural bond orbital and Mulliken charge analyses were performed with the same basis set at the ground state in the gas phase. Potential reactive sites of the title compound have been identified by ALIE, Fukui functions and MEP mapped to the electron density surfaces. The stability of BMMBI with respect to autoxidation and its pronounced interaction with water (hydrolysis) have been investigated using bond dissociation energies (BDE) and radial distribution functions (RDF), respectively, after MD simulations. To identify the molecule's most important reactive spots we have used a combination of DFT calculations and MD simulations. The reactivity study encompassed calculation of a set of quantities such as the HOMO-LUMO gap, MEP and ALIE surfaces, Fukui functions, bond dissociation energies and radial distribution functions. To confirm the potential

  6. General, Interactive Computer Program for the Solution of the Schrodinger Equation

    Science.gov (United States)

    Griffin, Donald C.; McGhie, James B.

    1973-01-01

    Discusses an interactive computer algorithm which allows beginning students to solve one- and three-dimensional quantum problems. Included is an example of the Thomas-Fermi-Dirac central field approximation. (CC)
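The core of such a program can be sketched in a few lines of modern Python (a hedged illustration, not the authors' 1973 code): discretize the one-dimensional Hamiltonian with finite differences on an assumed grid, in atomic units, and diagonalize.

```python
import numpy as np

def bound_states(v, x):
    """Eigenvalues of -(1/2) psi'' + V psi = E psi (atomic units),
    using a second-order finite-difference Laplacian on the grid x."""
    n, h = len(x), x[1] - x[0]
    diag = 1.0 / h**2 + v                  # -(1/2)(-2/h^2) = +1/h^2 on the diagonal
    off = np.full(n - 1, -0.5 / h**2)      # off-diagonal kinetic coupling
    hmat = np.diag(diag) + np.diag(off, 1) + np.diag(off, -1)
    return np.linalg.eigvalsh(hmat)

# Harmonic oscillator V = x^2/2: exact levels are n + 1/2
x = np.linspace(-10, 10, 1000)
energies = bound_states(0.5 * x**2, x)
print(energies[:3])  # close to [0.5, 1.5, 2.5]
```

The same diagonalization idea extends to the radial equation of a central-field problem such as the Thomas-Fermi-Dirac example mentioned above, with the potential evaluated on a radial grid.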

  7. Computing Confidence Bounds for Power and Sample Size of the General Linear Univariate Model

    OpenAIRE

    Taylor, Douglas J.; Muller, Keith E.

    1995-01-01

    The power of a test, the probability of rejecting the null hypothesis in favor of an alternative, may be computed using estimates of one or more distributional parameters. Statisticians frequently fix mean values and calculate power or sample size using a variance estimate from an existing study. Hence computed power becomes a random variable for a fixed sample size. Likewise, the sample size necessary to achieve a fixed power varies randomly. Standard statistical practice requires reporting ...
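The idea can be sketched as follows (a minimal illustration with hypothetical numbers, assuming SciPy; not the authors' exact method): the power of the GLM F-test depends on a noncentrality parameter, and a chi-square confidence interval for the estimated variance translates directly into confidence bounds for power.

```python
import numpy as np
from scipy import stats

def f_test_power(ncp, df1, df2, alpha=0.05):
    """Power of the GLM F-test for a given noncentrality parameter."""
    fcrit = stats.f.ppf(1 - alpha, df1, df2)
    return 1 - stats.ncf.cdf(fcrit, df1, df2, ncp)

# Hypothetical study: hypothesis sum of squares 12.0 (df1 = 2, error df2 = 27)
# and a variance estimate s2 = 4.0 on df_v = 20 degrees of freedom.
ss_hyp, df1, df2 = 12.0, 2, 27
s2, df_v, gamma = 4.0, 20, 0.95

point = f_test_power(ss_hyp / s2, df1, df2)
# df_v * s2 / sigma^2 ~ chi2(df_v), so a CI for sigma^2 maps to power bounds.
var_lo = df_v * s2 / stats.chi2.ppf(0.5 + gamma / 2, df_v)
var_hi = df_v * s2 / stats.chi2.ppf(0.5 - gamma / 2, df_v)
lo, hi = sorted(f_test_power(ss_hyp / v, df1, df2) for v in (var_lo, var_hi))
print(point, (lo, hi))  # point estimate of power and its confidence bounds
```

Reporting the interval (lo, hi) alongside the point estimate makes the randomness of computed power explicit, which is the practice the abstract argues for.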

  8. Development of a THz spectroscopic imaging system

    International Nuclear Information System (INIS)

    Usami, M; Iwamoto, T; Fukasawa, R; Tani, M; Watanabe, M; Sakai, K

    2002-01-01

    We have developed a real-time THz imaging system based on the two-dimensional (2D) electro-optic (EO) sampling technique. Employing the 2D EO-sampling technique, we can obtain THz images using a CCD camera at a video rate of up to 30 frames per second. A spatial resolution of 1.4 mm was achieved. This resolution was reasonably close to the theoretical limit determined by diffraction. We observed not only static objects but also moving ones. To acquire spectroscopic information, time-domain images were collected. By processing these images on a computer, we can obtain spectroscopic images. Spectroscopy for silicon wafers was demonstrated
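The step from a stack of time-domain images to spectroscopic images is a Fourier transform along the time axis; a minimal NumPy sketch with synthetic data (an illustration of the principle, not the instrument's software):

```python
import numpy as np

def spectral_images(stack, dt):
    """stack: (nt, ny, nx) time-domain images sampled every dt seconds.
    Returns (frequencies, amplitude spectra) of shape (nt//2+1, ny, nx)."""
    spectra = np.abs(np.fft.rfft(stack, axis=0))   # FFT pixel-by-pixel
    freqs = np.fft.rfftfreq(stack.shape[0], dt)
    return freqs, spectra

# Synthetic example: a 1 THz oscillation at every pixel, sampled every 50 fs.
nt, ny, nx, dt = 128, 4, 4, 50e-15
t = np.arange(nt) * dt
stack = np.sin(2 * np.pi * 1e12 * t)[:, None, None] * np.ones((nt, ny, nx))
freqs, spec = spectral_images(stack, dt)
print(freqs[spec[:, 0, 0].argmax()] / 1e12)  # spectral peak near 1 THz
```

Each frequency slice of the returned array is then one spectroscopic image, e.g. for mapping absorption features of a silicon wafer.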

  9. Spectroscopic investigation and computational analysis of charge transfer hydrogen bonded reaction between 3-aminoquinoline with chloranilic acid in 1:1 stoichiometric ratio

    Science.gov (United States)

    Al-Ahmary, Khairia M.; Alenezi, Maha S.; Habeeb, Moustafa M.

    2015-10-01

    The charge transfer hydrogen-bonded reaction between the electron donor (proton acceptor) 3-aminoquinoline and the electron acceptor (proton donor) chloranilic acid (H2CA) has been investigated experimentally and theoretically. The experimental work applied UV-vis spectroscopy to identify the charge transfer band of the formed complex and its molecular composition, and to estimate its formation constants in different solvents including acetonitrile (AN), methanol (MeOH), ethanol (EtOH) and chloroform (CHL). New absorption bands in the range 500-550 nm, attributed to the formed complex, were recorded. The molecular composition of the HBCT complex was found to be 1:1 (donor:acceptor) in all studied solvents on the basis of the continuous variation and photometric titration methods. In addition, the formation constants calculated from the Benesi-Hildebrand equation showed high values, especially in chloroform, indicating the formation of a stable HBCT complex. Infrared spectroscopy applied to the solid complex proved the occurrence of charge and proton transfer, and 1H and 13C NMR spectroscopies reconfirmed them. The computational analysis used GAMESS calculations, as packaged in the ChemBio3D Ultra 12 program, for energy minimization and estimation of the stabilization energy of the produced complex. Geometrical parameters (bond lengths and bond angles) of the formed HBCT complex were also computed and analyzed. Furthermore, Mulliken atomic charges, the molecular potential energy surface, HOMO and LUMO molecular orbitals, and assignments of the electronic spectra of the formed complex were presented.
    Full agreement between the experimental and computational analyses was found, especially concerning the existence of charge and proton transfer and the assignment of the HOMO and LUMO molecular orbitals in the formed complex as

  10. Impact of a carbon tax on the Chilean economy: A computable general equilibrium analysis

    International Nuclear Information System (INIS)

    García Benavente, José Miguel

    2016-01-01

    In 2009, the government of Chile announced its official commitment to reduce national greenhouse gas emissions by 20% below a business-as-usual projection by 2020. Because an effective way to reduce emissions is to implement a national carbon tax, the goal of this article is to quantify the tax value that would achieve the emission reduction target and to assess its impact on the economy. The approach is to compare the economy before and after implementation of the carbon tax using a static computable general equilibrium model of the Chilean economy. The model disaggregates the economy into 23 industries and 23 commodities, and it uses four consumer agents (households, government, investment, and the rest of the world). By setting specific production and consumption functions, the model can assess the variation in commodity prices, industrial production, and agent consumption, allowing a cross-sectoral analysis of the impact of the carbon tax. The benchmark of the economy, upon which the analysis is based, came from a social accounting matrix specially constructed for this model for the year 2010. The carbon tax was modeled as an ad valorem tax under two scenarios: a tax on emissions from fossil fuels burned only by producers, and a tax on emissions from fossil fuels burned by both producers and households. The abatement cost curve shows that it is more cost-effective to tax only producers rather than both producers and households: relative to the emission level observed in 2010, a 20% emission reduction causes a loss in GDP of 2% and 2.3%, respectively. Under the two scenarios, the tax value that could lead to that emission reduction is around 26 US dollars per ton of CO2-equivalent. The most affected productive sectors are oil refining, transport, and electricity, which contract by between 7% and 9%. Analyzing the electricity
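The tax-wedge mechanism at the heart of such a model can be illustrated with a toy one-commodity partial-equilibrium sketch (not the paper's 23-sector CGE model; the functional forms and elasticities here are assumptions): an ad valorem tax drives a wedge between the price consumers pay and the price producers receive, lowering equilibrium output and hence emissions.

```python
from scipy.optimize import brentq

def equilibrium_output(t, eps_d=0.5, eta_s=1.0):
    """Clear demand(p) = supply(p/(1+t)) for the consumer price p and
    return equilibrium output, with an ad valorem tax t on producers."""
    demand = lambda p: p ** (-eps_d)      # isoelastic demand (scale 1)
    supply = lambda p: p ** eta_s         # isoelastic supply (scale 1)
    p_star = brentq(lambda p: demand(p) - supply(p / (1 + t)), 1e-6, 1e6)
    return demand(p_star)

q0 = equilibrium_output(0.0)              # no-tax benchmark
q_tax = equilibrium_output(0.26)          # hypothetical 26% ad valorem wedge
print(q0, q_tax, 1 - q_tax / q0)          # the tax reduces equilibrium output
```

A full CGE model replaces these two toy curves with nested production and consumption functions over all sectors and solves all markets simultaneously, but the wedge logic is the same.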

  11. Computational design of new molecular scaffolds for medicinal chemistry, part II: generalization of analog series-based scaffolds

    Science.gov (United States)

    Dimova, Dilyana; Stumpfe, Dagmar; Bajorath, Jürgen

    2018-01-01

    Aim: Extending and generalizing the computational concept of analog series-based (ASB) scaffolds. Materials & methods: Methodological modifications were introduced to further increase the coverage of analog series (ASs) and compounds by ASB scaffolds. From bioactive compounds, ASs were systematically extracted and second-generation ASB scaffolds isolated. Results: More than 20,000 second-generation ASB scaffolds with single or multiple substitution sites were extracted from active compounds, achieving more than 90% coverage of ASs. Conclusion: Generalization of the ASB scaffold approach has yielded a large knowledge base of scaffold-capturing compound series and target information. PMID:29379641

  12. Coping with Computer Viruses: General Discussion and Review of Symantec Anti-Virus for the Macintosh.

    Science.gov (United States)

    Primich, Tracy

    1992-01-01

    Discusses computer viruses that attack the Macintosh and describes Symantec AntiVirus for Macintosh (SAM), a commercial program designed to detect and eliminate viruses; sample screen displays are included. SAM, as well as two public domain virus protection programs, is recommended for use in library settings. (four references) (MES)

  13. Computer-Based Instruction for Improving Student Nurses' General Numeracy: Is It Effective? Two Randomised Trials

    Science.gov (United States)

    Ainsworth, Hannah; Gilchrist, Mollie; Grant, Celia; Hewitt, Catherine; Ford, Sue; Petrie, Moira; Torgerson, Carole J.; Torgerson, David J.

    2012-01-01

    In response to concern over the numeracy skills deficit displayed by student nurses, an online computer programme, "Authentic World[R]", which aims to simulate a real-life clinical environment and improve the medication dosage calculation skills of users, was developed (Founded in 2004 Authentic World Ltd is a spin out company of…

  14. Computer Series, 82. The Application of Expert Systems in the General Chemistry Laboratory.

    Science.gov (United States)

    Settle, Frank A., Jr.

    1987-01-01

    Describes the construction of expert computer systems using artificial intelligence technology and commercially available software, known as an expert system shell. Provides two applications; a simple one, the identification of seven white substances, and a more complicated one involving the qualitative analysis of six metal ions. (TW)

  15. Interaction of water, alkyl hydroperoxide, and allylic alcohol with a single-site homogeneous Ti-Si epoxidation catalyst: A spectroscopic and computational study.

    Science.gov (United States)

    Urakawa, Atsushi; Bürgi, Thomas; Skrabal, Peter; Bangerter, Felix; Baiker, Alfons

    2005-02-17

    Tetrakis(trimethylsiloxy)titanium (TTMST, Ti(OSiMe3)4) possesses an isolated Ti center and is a highly active homogeneous catalyst in epoxidation of various olefins. The structure of TTMST resembles that of the active sites in some heterogeneous Ti-Si epoxidation catalysts, especially silylated titania-silica mixed oxides. Water cleaves the Ti-O-Si bond and deactivates the catalyst. An alkyl hydroperoxide, TBHP (tert-butyl hydroperoxide), does not cleave the Ti-O-Si bond, but interacts via weak hydrogen-bonding as supported by NMR, DOSY, IR, and computational studies. ATR-IR spectroscopy combined with computational investigations shows that more than one, that is, up to four, TBHP can undergo hydrogen-bonding with TTMST, leading to the activation of the O-O bond of TBHP. The greater the number of TBHP molecules that form hydrogen bonds to TTMST, the more electrophilic the O-O bond becomes, and the more active the complex is for epoxidation. An allylic alcohol, 2-cyclohexen-1-ol, does not interact strongly with TTMST, but the interaction is prominent when it interacts with the TTMST-TBHP complex. On the basis of the experimental and theoretical findings, a hydrogen-bond-assisted epoxidation mechanism of TTMST is suggested.

  16. Low Overhead Real-Time Computing With General Purpose Operating Systems

    National Research Council Canada - National Science Library

    Raymond, Michael

    2004-01-01

    .... In larger systems and more recently, general-purpose operating systems such as SGI IRIX and Linux are used for new projects because they already have multiprocessor and device driver support as well as a large user base...

  17. On Computation of Generalized Derivatives of the Normal-Cone Mapping and Their Applications

    Czech Academy of Sciences Publication Activity Database

    Gfrerer, H.; Outrata, Jiří

    2016-01-01

    Roč. 41, č. 4 (2016), s. 1535-1556 ISSN 0364-765X R&D Projects: GA ČR GAP402/12/1309 Institutional support: RVO:67985556 Keywords : parameterized generalized equation * graphical derivative * regular coderivative * mathematical program with equilibrium constraints Subject RIV: BA - General Mathematics Impact factor: 1.157, year: 2016 http://library.utia.cas.cz/separaty/2016/MTR/outrata-0463357.pdf

  18. Synthesis, X-ray diffraction method, spectroscopic characterization (FT-IR, 1H and 13C NMR), antimicrobial activity, Hirshfeld surface analysis and DFT computations of novel sulfonamide derivatives

    Science.gov (United States)

    Demircioğlu, Zeynep; Özdemir, Fethi Ahmet; Dayan, Osman; Şerbetçi, Zafer; Özdemir, Namık

    2018-06-01

    The synthesized compounds N-(2-aminophenyl)benzenesulfonamide 1 and (Z)-N-(2-((2-nitrobenzylidene)amino)phenyl)benzenesulfonamide 2 were characterized by antimicrobial activity, FT-IR, and 1H and 13C NMR. Two new Schiff base ligands containing an aromatic sulfonamide fragment, (Z)-N-(2-((3-nitrobenzylidene)amino)phenyl)benzenesulfonamide 3 and (Z)-N-(2-((4-nitrobenzylidene)amino)phenyl)benzenesulfonamide 4, were synthesized and investigated by spectroscopic techniques including 1H and 13C NMR, FT-IR, single crystal X-ray diffraction, Hirshfeld surface analysis and theoretical methods, and by antimicrobial activity. The molecular geometry obtained from the X-ray structure determination was optimized with the Density Functional Theory (DFT/B3LYP) method and the 6-311++G(d,p) basis set in the ground state. From the optimized geometries of molecules 3 and 4, the geometric parameters, vibrational wavenumbers and chemical shifts were computed. The optimized geometries reproduced the X-ray data well, showing that DFT/B3LYP with the 6-311++G(d,p) basis set was a successful choice. After the successful optimization, frontier molecular orbitals, chemical activity, non-linear optical properties (NLO), the molecular electrostatic potential (MEP), Mulliken population analysis, natural population analysis (NPA) and natural bond orbital (NBO) analysis, which cannot be obtained experimentally, were calculated and investigated.

  19. General rigid motion correction for computed tomography imaging based on locally linear embedding

    Science.gov (United States)

    Chen, Mianyi; He, Peng; Feng, Peng; Liu, Baodong; Yang, Qingsong; Wei, Biao; Wang, Ge

    2018-02-01

    Patient motion can degrade the quality of computed tomography images, which are typically acquired in cone-beam geometry. Rigid patient motion is characterized by six geometric parameters and is more challenging to correct in cone-beam than in fan-beam geometry. We extend our previous rigid patient motion correction method, based on the principle of locally linear embedding (LLE), from fan-beam to cone-beam geometry and accelerate the computational procedure with the graphics processing unit (GPU)-based All Scale Tomographic Reconstruction Antwerp (ASTRA) toolbox. The major merit of our method is that we need neither fiducial markers nor motion-tracking devices. The numerical and experimental studies show that the LLE-based patient motion correction is capable of calibrating the six parameters of the patient motion simultaneously, reducing patient motion artifacts significantly.
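The LLE principle can be illustrated with scikit-learn (a generic sketch, not the authors' GPU implementation): data that vary smoothly with a single hidden parameter, such as projections acquired along a motion trajectory, should embed onto a one-dimensional manifold ordered by that parameter.

```python
import numpy as np
from sklearn.manifold import LocallyLinearEmbedding

# Invented stand-in data: a smooth, non-self-intersecting curve in 10-D
# parameterized by "theta", playing the role of a hidden motion parameter.
rng = np.random.default_rng(0)
theta = np.sort(rng.uniform(0.0, np.pi, 200))
X = np.column_stack([np.cos(theta + k) for k in range(10)])

lle = LocallyLinearEmbedding(n_neighbors=10, n_components=1,
                             eigen_solver="dense")
embedding = lle.fit_transform(X).ravel()
# The 1-D embedding should track theta (up to sign and scale)
corr = abs(np.corrcoef(embedding, theta)[0, 1])
print(corr)
```

In the paper the embedded data are actual projections, and the six rigid-motion parameters are read off the recovered low-dimensional coordinates rather than a single synthetic parameter as here.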

  20. A general class of preconditioners for statistical iterative reconstruction of emission computed tomography

    International Nuclear Information System (INIS)

    Chinn, G.; Huang, S.C.

    1997-01-01

    A major drawback of statistical iterative image reconstruction for emission computed tomography is its high computational cost. The ill-posed nature of tomography leads to slow convergence for standard gradient-based iterative approaches such as steepest descent or the conjugate gradient algorithm. In this paper new theory and methods for a class of preconditioners are developed for accelerating the convergence rate of iterative reconstruction. To demonstrate the potential of this class of preconditioners, a preconditioned conjugate gradient (PCG) iterative algorithm for weighted least squares (WLS) reconstruction was formulated for emission tomography. Using simulated positron emission tomography (PET) data of the Hoffman brain phantom, it was shown that the PCG can reduce the number of iterations of the standard conjugate gradient algorithm by a factor of 2-8, depending on the convergence criterion
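The PCG iteration itself is standard and compact; a self-contained NumPy sketch on a toy weighted-least-squares problem (with an assumed Jacobi preconditioner rather than the paper's proposed class):

```python
import numpy as np

def pcg(apply_A, b, precond, n_iter=100, tol=1e-10):
    """Preconditioned conjugate gradient for SPD A x = b;
    precond(r) applies an approximate inverse of A to the residual."""
    x = np.zeros_like(b)
    r = b - apply_A(x)
    z = precond(r)
    p = z.copy()
    rz = r @ z
    for _ in range(n_iter):
        Ap = apply_A(p)
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
        z = precond(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# Toy WLS problem: minimize (y - G x)^T W (y - G x) via the normal equations
rng = np.random.default_rng(1)
G = rng.normal(size=(120, 40))                 # toy system matrix
w = rng.uniform(0.5, 2.0, size=120)            # per-measurement weights
x_true = rng.normal(size=40)
y = G @ x_true
A = (G.T * w) @ G                              # G^T W G
b = G.T @ (w * y)

jacobi = np.diag(A)                            # diagonal (Jacobi) preconditioner
x_hat = pcg(lambda v: A @ v, b, lambda r: r / jacobi)
print(np.linalg.norm(x_hat - x_true))          # small reconstruction error
```

In the emission tomography setting, `apply_A` would be the projector/backprojector pair and the preconditioner a member of the class analyzed in the paper; the skeleton of the iteration is unchanged.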

  1. [Realistic possibilities of utilization of a personal computer in the office of a general practitioner].

    Science.gov (United States)

    Masopust, V

    1991-04-01

    In May 1990 work was completed on the programme "Computer system of the health community doctor Mic DOKI", which resolves more than 70 basic tasks pertaining to the keeping of health documentation by health community doctors. It automates the entire administrative work in the health community, makes it possible to evaluate the activity of doctors and nurses, will facilitate the work of the control organs of future health insurance companies, and will contribute to investigations of the health status of the population. Despite some problems ensuing from the country's current economic situation, the validity of contemporary health regulations and the minimal training of our health personnel in the use of personal computers, computerization of the health community system can be considered an asset to the ongoing reform of the health services.

  2. Molecular modeling of the human eukaryotic translation initiation factor 5A (eIF5A) based on spectroscopic and computational analyses

    International Nuclear Information System (INIS)

    Costa-Neto, Claudio M.; Parreiras-e-Silva, Lucas T.; Ruller, Roberto; Oliveira, Eduardo B.; Miranda, Antonio; Oliveira, Laerte; Ward, Richard J.

    2006-01-01

    The eukaryotic translation initiation factor 5A (eIF5A) is a protein ubiquitously present in archaea and eukarya, which undergoes a unique two-step post-translational modification called hypusination. Several studies have shown that hypusination is essential for a variety of functional roles for eIF5A, including cell proliferation and synthesis of proteins involved in cell cycle control. Up to now neither a totally selective inhibitor of hypusination nor an inhibitor capable of directly binding to eIF5A has been reported in the literature. The discovery of such an inhibitor might be achieved by computer-aided drug design based on the 3D structure of the human eIF5A. In this study, we present a molecular model for the human eIF5A protein based on the crystal structure of the eIF5A from Leishmania braziliensis, and compare the modeled conformation of the loop bearing the hypusination site with circular dichroism data obtained with a synthetic peptide of this loop. Furthermore, analysis of amino acid variability between different human eIF5A isoforms revealed peculiar structural characteristics that are of functional relevance

  3. Using Computational Chemistry Activities to Promote Learning and Retention in a Secondary School General Chemistry Setting

    Science.gov (United States)

    Ochterski, Joseph W.

    2014-01-01

    This article describes the results of using state-of-the-art, research-quality software as a learning tool in a general chemistry secondary school classroom setting. I present three activities designed to introduce fundamental chemical concepts regarding molecular shape and atomic orbitals to students with little background in chemistry, such as…

  4. Spectroscopic surveys of LAMOST

    International Nuclear Information System (INIS)

    Zhao Yongheng

    2015-01-01

    The Large Sky Area Multi-Object Fiber Spectroscopic Telescope (LAMOST), a new type of reflecting Schmidt telescope, has been designed and produced in China. It marks a breakthrough for large scale spectroscopic survey observation in that both large aperture and wide field of view have been achieved. LAMOST has the highest spectrum acquisition rate, and from October 2011 to June 2014 it has obtained 4.13 million spectra of celestial objects, of which 3.78 million are spectra of stars, with the stellar parameters of 2.20 million stars included. (author)

  5. A technique for integrating remote minicomputers into a general computer's file system

    CERN Document Server

    Russell, R D

    1976-01-01

    This paper describes a simple technique for interfacing remote minicomputers used for real-time data acquisition into the file system of a central computer. Developed as part of the ORION system at CERN, this 'File Manager' subsystem enables a program in the minicomputer to access and manipulate files of any type as if they resided on a storage device attached to the minicomputer. Yet, completely transparent to the program, the files are accessed from disks on the central system via high-speed data links, with response times comparable to local storage devices. (6 refs).

  6. The role in thanatogenesis of generalized brain edema in ischemic cerebral infarction (computer-morphometric research

    Directory of Open Access Journals (Sweden)

    E. A. Dyadyk

    2012-12-01

    This work presents the results of a computer-morphometric study of perivascular and pericellular free (oedematous) spaces in the brain cortex at death from ischemic cerebral infarction and from causes not directly connected with cerebral pathology. It was revealed that the mean area of perivascular spaces (the vasogenic edema index) in brain infarction exceeds that in extracerebral pathology 13-fold, and the mean area of pericellular spaces (the cytotoxic edema index) almost 12-fold; the latter also differs substantially in its degree of variation, which is 2.5 times higher than that of the perivascular spaces.

  7. A general method for generating bathymetric data for hydrodynamic computer models

    Science.gov (United States)

    Burau, J.R.; Cheng, R.T.

    1989-01-01

    To generate water depth data from randomly distributed bathymetric data for numerical hydrodynamic models, raw input data from field surveys, water depth data digitized from nautical charts, or a combination of the two are sorted to give an ordered data set on which a search algorithm is used to isolate data for interpolation. Water depths at locations required by hydrodynamic models are interpolated from the bathymetric database using the linear or cubic shape functions of the finite-element method. The analysis covers the bathymetric database organization and preprocessing, the search algorithm used to find the bounding points for interpolation, the mathematics of the interpolation formulae, and the features of the automatic generation of water depths at hydrodynamic model grid points. This report documents two computer programs which are used to: (1) organize the input bathymetric data; and (2) interpolate depths for hydrodynamic models. An example of computer program operation is drawn from a realistic application to the San Francisco Bay estuarine system. (Author's abstract)
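Interpolation with linear finite-element shape functions reduces to barycentric coordinates within each triangle of soundings; a minimal sketch with made-up depths (the report's programs add the database sorting and search machinery around this step):

```python
def interp_in_triangle(p, verts, depths):
    """Depth at point p inside a triangle of soundings, using the linear
    finite-element shape functions (barycentric coordinates)."""
    (x1, y1), (x2, y2), (x3, y3) = verts
    det = (y2 - y3) * (x1 - x3) + (x3 - x2) * (y1 - y3)
    l1 = ((y2 - y3) * (p[0] - x3) + (x3 - x2) * (p[1] - y3)) / det
    l2 = ((y3 - y1) * (p[0] - x3) + (x1 - x3) * (p[1] - y3)) / det
    l3 = 1.0 - l1 - l2
    return l1 * depths[0] + l2 * depths[1] + l3 * depths[2]

# Hypothetical depths at three bounding sounding points; query the centroid.
verts = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
depths = [10.0, 14.0, 12.0]
print(interp_in_triangle((1/3, 1/3), verts, depths))  # mean of the three: 12.0
```

Because the shape functions are linear, any depth field that is itself planar over the triangle is reproduced exactly, which is the property that makes the scheme consistent for finite-element hydrodynamic grids.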

  8. Importance sampling with imperfect cloning for the computation of generalized Lyapunov exponents

    Science.gov (United States)

    Anteneodo, Celia; Camargo, Sabrina; Vallejos, Raúl O.

    2017-12-01

    We revisit the numerical calculation of generalized Lyapunov exponents, L (q ) , in deterministic dynamical systems. The standard method consists of adding noise to the dynamics in order to use importance sampling algorithms. Then L (q ) is obtained by taking the limit noise-amplitude → 0 after the calculation. We focus on a particular method that involves periodic cloning and pruning of a set of trajectories. However, instead of considering a noisy dynamics, we implement an imperfect (noisy) cloning. This alternative method is compared with the standard one and, when possible, with analytical results. As a workbench we use the asymmetric tent map, the standard map, and a system of coupled symplectic maps. The general conclusion of this study is that the imperfect-cloning method performs as well as the standard one, with the advantage of preserving the deterministic dynamics.
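The quantity being estimated can be made concrete with the brute-force (direct sampling) estimator that cloning and importance-sampling methods are designed to improve on; for the asymmetric tent map the analytic value of L(q) is available as a check (a sketch, with assumed parameter values):

```python
import numpy as np

def tent_L_q(a, q, n_steps=15, n_traj=200_000, seed=0):
    """Brute-force estimate of L(q) = (1/n) ln <|dx_n/dx_0|^q> for the
    asymmetric tent map f(x) = x/a (x < a), (1-x)/(1-a) (x >= a)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(0.0, 1.0, n_traj)
    log_stretch = np.zeros(n_traj)
    for _ in range(n_steps):
        left = x < a
        log_stretch += np.where(left, -np.log(a), -np.log(1.0 - a))
        x = np.where(left, x / a, (1.0 - x) / (1.0 - a))
    m = q * log_stretch                   # log |Lambda_n|^q per trajectory
    # log-mean-exp for numerical stability
    return (m.max() + np.log(np.mean(np.exp(m - m.max())))) / n_steps

a, q = 0.3, 2.0
est = tent_L_q(a, q)
exact = np.log(a**(1 - q) + (1 - a)**(1 - q))   # analytic result for this map
print(est, exact)
```

For larger n_steps or q the average is dominated by exponentially rare trajectories and the direct estimator fails, which is exactly the regime where the cloning/importance-sampling schemes discussed in the abstract become necessary.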

  9. Analyzing the Effects of Technological Change: A Computable General Equilibrium Approach

    Science.gov (United States)

    1988-09-01

    present important simplifying assumptions about the nature of consumer preferences and production possibility sets. If a general equilibrium model ... important assumptions are in such areas as consumer preferences, the actions of the government, and the financial structure of the model. Each of these is ... back in the future. 4.3.2 Consumer demand: Consumer preferences are a second important modeling assumption affecting the results of the study. The PILOT

  10. Implications of the Biofuels Boom for the Global Livestock Industry: A Computable General Equilibrium Analysis

    OpenAIRE

    Taheripour, Farzad; Hertel, Thomas W.; Tyner, Wallace E.

    2009-01-01

    In this paper, we offer a general equilibrium analysis of the impacts of US and EU biofuel mandates for the global livestock sector. Our simulation boosts biofuel production in the US and EU from 2006 levels to mandated 2015 levels. We show that mandates will encourage crop production in both biofuel and non biofuel producing regions, while reducing livestock and livestock production in most regions of the world. The non-ruminant industry curtails its production more than other livestock indu...

  11. Test-retest reliability of computer-based video analysis of general movements in healthy term-born infants.

    Science.gov (United States)

    Valle, Susanne Collier; Støen, Ragnhild; Sæther, Rannei; Jensenius, Alexander Refsum; Adde, Lars

    2015-10-01

    A computer-based video analysis has recently been presented for quantitative assessment of general movements (GMs). This method's test-retest reliability, however, has not yet been evaluated. The aim of the current study was to evaluate the test-retest reliability of computer-based video analysis of GMs, and to explore the association between computer-based video analysis and the temporal organization of fidgety movements (FMs). Test-retest reliability study. 75 healthy, term-born infants were recorded twice the same day during the FMs period using a standardized video set-up. The computer-based movement variables "quantity of motion mean" (Qmean), "quantity of motion standard deviation" (QSD) and "centroid of motion standard deviation" (CSD) were analyzed, reflecting the amount of motion and the variability of the spatial center of motion of the infant. In addition, the association between the variable CSD and the temporal organization of FMs was explored. Intraclass correlation coefficients (ICC 1.1 and ICC 3.1) were calculated to assess test-retest reliability. The ICC values for the variables CSD, Qmean and QSD were 0.80, 0.80 and 0.86 for ICC (1.1), respectively, and 0.80, 0.86 and 0.90 for ICC (3.1), respectively. CSD values were significantly lower in the recordings with continual FMs than in the recordings with intermittent FMs. The study demonstrates test-retest reliability of computer-based video analysis of GMs, and a significant association between the computer-based video analysis and the temporal organization of FMs. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
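The ICC computations behind such a study follow the Shrout-Fleiss ANOVA formulas; a generic NumPy sketch on synthetic test-retest data (not the study's dataset):

```python
import numpy as np

def icc_1_1_and_3_1(Y):
    """Shrout-Fleiss ICC(1,1) and ICC(3,1) for an (n subjects x k sessions)
    score matrix, computed from the ANOVA mean squares."""
    n, k = Y.shape
    grand = Y.mean()
    row_means = Y.mean(axis=1)
    col_means = Y.mean(axis=0)
    bms = k * ((row_means - grand) ** 2).sum() / (n - 1)          # between subjects
    wms = ((Y - row_means[:, None]) ** 2).sum() / (n * (k - 1))   # within subjects
    ems = (((Y - row_means[:, None] - col_means[None, :] + grand) ** 2).sum()
           / ((n - 1) * (k - 1)))                                 # two-way residual
    icc11 = (bms - wms) / (bms + (k - 1) * wms)
    icc31 = (bms - ems) / (bms + (k - 1) * ems)
    return icc11, icc31

# Synthetic test-retest data: stable subject effects plus measurement noise;
# the true reliability is var_subject / (var_subject + var_error) = 4/5 = 0.8.
rng = np.random.default_rng(42)
subject = rng.normal(0, 2.0, size=(75, 1))
Y = subject + rng.normal(0, 1.0, size=(75, 2))   # two recordings per infant
icc11, icc31 = icc_1_1_and_3_1(Y)
print(round(icc11, 2), round(icc31, 2))          # both near 0.8
```

ICC(1,1) treats sessions as random one-way replicates, while ICC(3,1) removes a fixed session effect; with no systematic session shift, as simulated here, the two agree.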

  12. General-purpose computer networks and resource sharing in ERDA. Volume 3. Remote resource-sharing experience and findings

    Energy Technology Data Exchange (ETDEWEB)

    1977-07-15

    The investigation focused on heterogeneous networks in which a variety of dissimilar computers and operating systems were interconnected nationwide. Homogeneous networks, such as MFE net and SACNET, were not considered since they could not be used for general purpose resource sharing. Issues of privacy and security are of concern in any network activity. However, consideration of privacy and security of sensitive data arise to a much lesser degree in unclassified scientific research than in areas involving personal or proprietary information. Therefore, the existing mechanisms at individual sites for protecting sensitive data were relied on, and no new protection mechanisms to prevent infringement of privacy and security were attempted. Further development of ERDA networking will need to incorporate additional mechanisms to prevent infringement of privacy. The investigation itself furnishes an excellent example of computational resource sharing through a heterogeneous network. More than twenty persons, representing seven ERDA computing sites, made extensive use of both ERDA and non-ERDA computers in coordinating, compiling, and formatting the data which constitute the bulk of this report. Volume 3 analyzes the benefits and barriers encountered in actual resource sharing experience, and provides case histories of typical applications.

  13. ACUTRI a computer code for assessing doses to the general public due to acute tritium releases

    CERN Document Server

    Yokoyama, S; Noguchi, H; Ryufuku, S; Sasaki, T

    2002-01-01

    Tritium, which is used as a fuel of a D-T burning fusion reactor, is the most important radionuclide for the safety assessment of a nuclear fusion experimental reactor such as ITER. Thus, a computer code, ACUTRI, which calculates the radiological impact of tritium released accidentally to the atmosphere, has been developed, intended for use in licensing discussions for a fusion experimental reactor and in environmental safety evaluation in Japan. ACUTRI calculates an individual tritium dose based on transfer models specific to tritium in the environment and on ICRP dose models. In this calculation it is also possible to analyze the meteorology statistically, in the same way as in a conventional dose assessment method according to the meteorological guide of the Nuclear Safety Commission of Japan. A Gaussian plume model is used for calculating the atmospheric dispersion of tritium gas (HT) and/or tritiated water (HTO). The environmental pathway model in ACUTRI considers the following internal exposures: i...
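The Gaussian plume concentration with ground reflection follows the standard textbook formula; a hedged sketch with hypothetical release parameters (ACUTRI's actual dispersion and dose models are more elaborate):

```python
import numpy as np

def plume_concentration(Q, u, y, z, sigma_y, sigma_z, H=0.0):
    """Gaussian plume air concentration (Bq/m^3) at crosswind distance y and
    height z, with ground reflection: release rate Q (Bq/s), wind speed u (m/s),
    effective release height H (m), dispersion parameters sigma_y, sigma_z (m)."""
    return (Q / (2 * np.pi * u * sigma_y * sigma_z)
            * np.exp(-y**2 / (2 * sigma_y**2))
            * (np.exp(-(z - H)**2 / (2 * sigma_z**2))
               + np.exp(-(z + H)**2 / (2 * sigma_z**2))))

# Hypothetical numbers: ground-level HTO release of 1e10 Bq/s in a 2 m/s wind,
# evaluated on the plume centerline at ground level.
c = plume_concentration(Q=1e10, u=2.0, y=0.0, z=0.0, sigma_y=30.0, sigma_z=15.0)
print(c)
```

The dispersion parameters sigma_y and sigma_z would in practice be taken from stability-class correlations at the downwind distance of interest, and the resulting concentration is then folded with tritium-specific transfer and dose models.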

  14. General method and thermodynamic tables for computation of equilibrium composition and temperature of chemical reactions

    Science.gov (United States)

    Huff, Vearl N; Gordon, Sanford; Morrell, Virginia E

    1951-01-01

    A rapidly convergent successive approximation process is described that simultaneously determines both composition and temperature resulting from a chemical reaction. This method is suitable for use with any set of reactants over the complete range of mixture ratios as long as the products of reaction are ideal gases. An approximate treatment of limited amounts of liquids and solids is also included. This method is particularly suited to problems having a large number of products of reaction and to problems that require determination of such properties as specific heat or velocity of sound of a dissociating mixture. The method presented is applicable to a wide variety of problems that include (1) combustion at constant pressure or volume; and (2) isentropic expansion to an assigned pressure, temperature, or Mach number. Tables of thermodynamic functions needed with this method are included for 42 substances for convenience in numerical computations.
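The flavor of such a successive-approximation scheme can be shown on the simplest possible case (a sketch with a hypothetical equilibrium constant, far simpler than the report's simultaneous composition-and-temperature method): a single dissociation A ⇌ 2B at fixed total pressure, iterated with Newton's method.

```python
def extent_of_dissociation(Kp, P, tol=1e-12):
    """Solve A <=> 2B at total pressure P for the extent eps, where
    Kp = 4 eps^2 P / (1 - eps^2), by Newton successive approximation."""
    eps = 0.5                           # initial guess
    for _ in range(100):
        f = 4 * eps**2 * P - Kp * (1 - eps**2)
        df = 8 * eps * P + 2 * Kp * eps
        step = f / df
        eps -= step
        if abs(step) < tol:
            break
    return eps

Kp, P = 0.15, 1.0                       # hypothetical equilibrium constant, 1 atm
eps = extent_of_dissociation(Kp, P)
x_B = 2 * eps / (1 + eps)               # equilibrium mole fractions
x_A = (1 - eps) / (1 + eps)
print(eps, x_B**2 / x_A * P)            # second value reproduces Kp
```

The report's method generalizes this idea to many simultaneous reactions, solving for all species mole fractions and the temperature at once, with the tabulated thermodynamic functions supplying the equilibrium constants.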

  15. A contribution to the physically and geometrically nonlinear computer analysis of general reinforced concrete shells

    International Nuclear Information System (INIS)

    Zahlten, W.

    1990-02-01

    Starting from a Kirchhoff-Love type shell theory of finite rotations, a layered shell element for reinforced concrete is developed. The plastic-fracturing theory of Bazant/Kim is used to describe the uncracked concrete. Tension cracking is controlled by a principal tensile stress criterion. An elasto-plastic law with kinematic hardening models the reinforcing steel. The tension stiffening concept of Gilbert/Warner allows an averaged consideration of the concrete between cracks. The element matrices, derived in tensor notation, are obtained by discretization of the displacement field. The nonlinear structural response is computed by incremental-iterative path-tracing algorithms. The range of applicability of the model is finally demonstrated by several examples with time-invariant and time-dependent loading. (orig.) [de

  16. Shell model and spectroscopic factors

    International Nuclear Information System (INIS)

    Poves, P.

    2007-01-01

    In these lectures, I introduce the notion of spectroscopic factor in the shell model context. A brief review is given of the present status of the large scale applications of the Interacting Shell Model. The spectroscopic factors and the spectroscopic strength are discussed for nuclei in the vicinity of magic closures and for deformed nuclei. (author)

  17. ALAM/CLAM and some applications of computer algebra systems to problems in general relativity

    International Nuclear Information System (INIS)

    Russell-Clark, R.A.

    1973-01-01

    This paper is divided into three parts. Part A presents a historical survey of the development of the system, a brief description of its features and, finally, a critical assessment. ALAM and CLAM have been used in many problems in General Relativity; the vast majority of these belong to a set of standard calculations termed ''metric applications''. However, four large non-standard applications have been attempted successfully and these are described in Part B. CAMAL is the only other system which has been used extensively for work in relativity. CAMAL has played an important role in two research projects and details of these are given in Part C.

  18. Simulations of axisymmetric, Newtonian star clusters - prelude to 2 + 1 general relativistic computations

    International Nuclear Information System (INIS)

    Shapiro, S.L.; Teukolsky, S.A.

    1987-01-01

    The dynamical behavior of nonspherical systems in general relativity is analyzed, allowing for rotation and the emission of gravitational waves. An axisymmetric code for solving the Vlasov equation in the Newtonian limit, based on a mean-field particle simulation scheme, is constructed and tested by reproducing the known evolution of homogeneous spheroids with and without rotation, including the Lin-Mestel-Shu instability. Results for the collapse of homogeneous, nonequilibrium spheroids are described, and stability studies of homogeneous, equilibrium spheroids are summarized. Finally, the code is used to follow the evolution of inhomogeneous, centrally condensed spheroids, and the results are compared with those for homogeneous collapse. 22 references

  19. Theory and computation of general force balance in non-axisymmetric tokamak equilibria

    Science.gov (United States)

    Park, Jong-Kyu; Logan, Nikolas; Wang, Zhirui; Kim, Kimin; Boozer, Allen; Liu, Yueqiang; Menard, Jonathan

    2014-10-01

    Non-axisymmetric equilibria in tokamaks can be effectively described by linearized force balance. In addition to the conventional isotropic pressure force, three components can contribute strongly to the force balance: rotation, anisotropic tensor pressure, and externally applied forces, i.e. ∇p + ρ(v·∇)v + ∇·Π + f = j × B, especially in, but not limited to, high-β and rotating plasmas. Within the assumption of nested flux surfaces, Maxwell's equations and energy minimization lead to the modified generalized Newcomb equation for radial displacements, with simple algebraic relations for the perpendicular and parallel displacements, including an inhomogeneous term if any of the forces do not depend explicitly on the displacements. The general perturbed equilibrium code (GPEC) solves this force balance consistently with the energy and torque given by external perturbations. Local and global behaviors of the solutions will be discussed when ∇·Π is computed by the semi-analytic code PENT, and will be compared with MARS-K. Any first-principles transport code calculating ∇·Π or f, e.g. POCA, can also be incorporated without requiring iterations. This work was supported by DOE Contract DE-AC02-09CH11466.

  20. A computational account of the development of the generalization of shape information.

    Science.gov (United States)

    Doumas, Leonidas A A; Hummel, John E

    2010-05-01

    Abecassis, Sera, Yonas, and Schwade (2001) showed that young children represent shapes more metrically, and perhaps more holistically, than do older children and adults. How does a child transition from representing objects and events as undifferentiated wholes to representing them explicitly in terms of their attributes? According to RBC (Recognition-by-Components theory; Biederman, 1987), objects are represented as collections of categorical geometric parts ("geons") in particular categorical spatial relations. We propose that the transition from holistic to more categorical visual shape processing is a function of the development of geon-like representations via a process of progressive intersection discovery. We present an account of this transition in terms of DORA (Doumas, Hummel, & Sandhofer, 2008), a model of the discovery of relational concepts. We demonstrate that DORA can learn representations of single geons by comparing objects composed of multiple geons. In addition, as DORA is learning it follows the same performance trajectory as children, originally generalizing shape more metrically/holistically and eventually generalizing categorically. Copyright © 2010 Cognitive Science Society, Inc.

  1. Predicting oropharyngeal tumor volume throughout the course of radiation therapy from pretreatment computed tomography data using general linear models.

    Science.gov (United States)

    Yock, Adam D; Rao, Arvind; Dong, Lei; Beadle, Beth M; Garden, Adam S; Kudchadker, Rajat J; Court, Laurence E

    2014-05-01

    The purpose of this work was to develop and evaluate the accuracy of several predictive models of variation in tumor volume throughout the course of radiation therapy. Nineteen patients with oropharyngeal cancers were imaged daily with CT-on-rails for image-guided alignment per an institutional protocol. The daily volumes of 35 tumors in these 19 patients were determined and used to generate (1) a linear model in which tumor volume changed at a constant rate, (2) a general linear model that utilized the power fit relationship between the daily and initial tumor volumes, and (3) a functional general linear model that identified and exploited the primary modes of variation between time series describing the changing tumor volumes. Primary and nodal tumor volumes were examined separately. The accuracy of these models in predicting daily tumor volumes was compared with that of static and linear reference models using leave-one-out cross-validation. In predicting the daily volume of primary tumors, the general linear model and the functional general linear model were more accurate than the static reference model by 9.9% (range: −11.6% to 23.8%) and 14.6% (range: −7.3% to 27.5%), respectively, and were more accurate than the linear reference model by 14.2% (range: −6.8% to 40.3%) and 13.1% (range: −1.5% to 52.5%), respectively. In predicting the daily volume of nodal tumors, only the 14.4% (range: −11.1% to 20.5%) improvement in accuracy of the functional general linear model compared to the static reference model was statistically significant. A general linear model and a functional general linear model trained on data from a small population of patients can predict the primary tumor volume throughout the course of radiation therapy with greater accuracy than standard reference models. These more accurate models may increase the prognostic value of information about the tumor garnered from pretreatment computed tomography images and facilitate improved treatment management.
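
    As a hedged sketch of the power-fit idea (illustrative only; the paper's actual models were built per treatment day from the 35 tumor time series), a least-squares fit of the relationship V_t = c·V0^b is linear in log space:

```python
import math

def fit_power_model(initial_volumes, day_volumes):
    """Least-squares fit of the power relationship V_t = c * V0**b,
    which is linear in log space: log V_t = log c + b * log V0."""
    xs = [math.log(v) for v in initial_volumes]
    ys = [math.log(v) for v in day_volumes]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    c = math.exp(my - b * mx)
    return c, b

def predict_volume(c, b, v0):
    """Predict a daily volume from the initial (pretreatment) volume."""
    return c * v0 ** b
```

    In a leave-one-out evaluation, one would refit `c` and `b` with each patient's tumors held out and compare the predictions against that patient's measured daily volumes.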

  2. RosettaAntibodyDesign (RAbD): A general framework for computational antibody design.

    Science.gov (United States)

    Adolf-Bryfogle, Jared; Kalyuzhniy, Oleks; Kubitz, Michael; Weitzner, Brian D; Hu, Xiaozhen; Adachi, Yumiko; Schief, William R; Dunbrack, Roland L

    2018-04-01

    A structural-bioinformatics-based computational methodology and framework have been developed for the design of antibodies to targets of interest. RosettaAntibodyDesign (RAbD) samples the diverse sequence, structure, and binding space of an antibody to an antigen in highly customizable protocols for the design of antibodies in a broad range of applications. The program samples antibody sequences and structures by grafting structures from a widely accepted set of the canonical clusters of CDRs (North et al., J. Mol. Biol., 406:228-256, 2011). It then performs sequence design according to amino acid sequence profiles of each cluster, and samples CDR backbones using a flexible-backbone design protocol incorporating cluster-based CDR constraints. Starting from an existing experimental or computationally modeled antigen-antibody structure, RAbD can be used to redesign a single CDR or multiple CDRs with loops of different length, conformation, and sequence. We rigorously benchmarked RAbD on a set of 60 diverse antibody-antigen complexes, using two design strategies-optimizing total Rosetta energy and optimizing interface energy alone. We utilized two novel metrics for measuring success in computational protein design. The design risk ratio (DRR) is equal to the frequency of recovery of native CDR lengths and clusters divided by the frequency of sampling of those features during the Monte Carlo design procedure. Ratios greater than 1.0 indicate that the design process is picking out the native more frequently than expected from their sampled rate. We achieved DRRs for the non-H3 CDRs of between 2.4 and 4.0. The antigen risk ratio (ARR) is the ratio of frequencies of the native amino acid types, CDR lengths, and clusters in the output decoys for simulations performed in the presence and absence of the antigen. For CDRs, we achieved cluster ARRs as high as 2.5 for L1 and 1.5 for H2. For sequence design simulations without CDR grafting, the overall recovery for the native
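
    The two success metrics reduce to simple frequency ratios. A minimal sketch (the counts below are hypothetical, not taken from the benchmark):

```python
def design_risk_ratio(native_designs, total_designs,
                      native_sampled, total_sampled):
    """DRR: frequency of native recovery among output designs divided by
    the frequency with which the native was sampled during the Monte Carlo
    design procedure. Values above 1.0 mean the native is being selected
    more often than its sampling rate alone would predict."""
    return ((native_designs / total_designs)
            / (native_sampled / total_sampled))

def antigen_risk_ratio(freq_with_antigen, freq_without_antigen):
    """ARR: frequency of native features in decoys designed with the
    antigen present divided by the frequency with the antigen absent."""
    return freq_with_antigen / freq_without_antigen
```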

  3. RosettaAntibodyDesign (RAbD): A general framework for computational antibody design

    Science.gov (United States)

    Adolf-Bryfogle, Jared; Kalyuzhniy, Oleks; Kubitz, Michael; Hu, Xiaozhen; Adachi, Yumiko; Schief, William R.

    2018-01-01

    A structural-bioinformatics-based computational methodology and framework have been developed for the design of antibodies to targets of interest. RosettaAntibodyDesign (RAbD) samples the diverse sequence, structure, and binding space of an antibody to an antigen in highly customizable protocols for the design of antibodies in a broad range of applications. The program samples antibody sequences and structures by grafting structures from a widely accepted set of the canonical clusters of CDRs (North et al., J. Mol. Biol., 406:228–256, 2011). It then performs sequence design according to amino acid sequence profiles of each cluster, and samples CDR backbones using a flexible-backbone design protocol incorporating cluster-based CDR constraints. Starting from an existing experimental or computationally modeled antigen-antibody structure, RAbD can be used to redesign a single CDR or multiple CDRs with loops of different length, conformation, and sequence. We rigorously benchmarked RAbD on a set of 60 diverse antibody–antigen complexes, using two design strategies—optimizing total Rosetta energy and optimizing interface energy alone. We utilized two novel metrics for measuring success in computational protein design. The design risk ratio (DRR) is equal to the frequency of recovery of native CDR lengths and clusters divided by the frequency of sampling of those features during the Monte Carlo design procedure. Ratios greater than 1.0 indicate that the design process is picking out the native more frequently than expected from their sampled rate. We achieved DRRs for the non-H3 CDRs of between 2.4 and 4.0. The antigen risk ratio (ARR) is the ratio of frequencies of the native amino acid types, CDR lengths, and clusters in the output decoys for simulations performed in the presence and absence of the antigen. For CDRs, we achieved cluster ARRs as high as 2.5 for L1 and 1.5 for H2. For sequence design simulations without CDR grafting, the overall recovery for the

  4. A combined experimental and computational study of 3-bromo-5-(2,5-difluorophenyl) pyridine and 3,5-bis(naphthalen-1-yl)pyridine: Insight into the synthesis, spectroscopic, single crystal XRD, electronic, nonlinear optical and biological properties

    Science.gov (United States)

    Ghiasuddin; Akram, Muhammad; Adeel, Muhammad; Khalid, Muhammad; Tahir, Muhammad Nawaz; Khan, Muhammad Usman; Asghar, Muhammad Adnan; Ullah, Malik Aman; Iqbal, Muhammad

    2018-05-01

    Carbon-carbon coupling plays a vital role in synthetic organic chemistry. Two novel pyridine derivatives, 3-bromo-5-(2,5-difluorophenyl)pyridine (1) and 3,5-bis(naphthalen-1-yl)pyridine (2), were synthesized via carbon-carbon coupling, characterized by XRD and spectroscopic techniques, and investigated using density functional theory (DFT). The XRD data and the DFT-optimized structures are in good agreement with each other. UV-Vis analysis of compounds (1) and (2) was carried out at the TD-DFT/B3LYP/6-311+G(d,p) level of theory to explain the vertical transitions. The calculated FT-IR and UV-Vis results agree well with the experimental findings. Natural bond orbital (NBO) analysis was performed at the B3LYP/6-311+G(d,p) level to find the most stable molecular structure of the compounds. Frontier molecular orbital (FMO) analysis, performed at the same level of theory, indicates that the molecules might be bioactive. Moreover, the bioactivity of compounds (1) and (2) was confirmed experimentally in terms of zones of inhibition against bacteria and fungi. The chemical reactivity of compounds (1) and (2) was indicated by mapping the molecular electrostatic potential (MEP) over the optimized geometries. The nonlinear optical properties, computed at the B3LYP/6-311+G(d,p) level of theory, were found to be greater than those of urea owing to the conjugation effect. A two-state model was further employed to explain the nonlinear optical properties of the compounds under investigation.

  5. On the Computational Complexity of the Languages of General Symbolic Dynamical Systems and Beta-Shifts

    DEFF Research Database (Denmark)

    Simonsen, Jakob Grue

    2009-01-01

    We consider the computational complexity of languages of symbolic dynamical systems. In particular, we study complexity hierarchies and membership of the non-uniform class P/poly. We prove: 1. For every time-constructible, non-decreasing function t(n) = ω(n), there is a symbolic dynamical system with language decidable in deterministic time O(n²t(n)), but not in deterministic time o(t(n)). 2. For every space-constructible, non-decreasing function s(n) = ω(n), there is a symbolic dynamical system with language decidable in deterministic space O(s(n)), but not in deterministic space o(s(n)). 3. There are symbolic dynamical systems having hard and complete languages under ≤m^logs- and ≤m^p-reduction for every complexity class above LOGSPACE in the backbone hierarchy (hence, P-complete, NP-complete, coNP-complete, PSPACE-complete, and EXPTIME-complete sets). 4. There are decidable languages of symbolic...

  6. China’s Rare Earths Supply Forecast in 2025: A Dynamic Computable General Equilibrium Analysis

    Directory of Open Access Journals (Sweden)

    Jianping Ge

    2016-09-01

    The supply of rare earths in China has been the focus of significant attention in recent years. Due to changes in regulatory policies and the development of strategic emerging industries, it is critical to investigate the scenario of rare earth supply in 2025. To address this question, this paper constructs a dynamic computable general equilibrium (DCGE) model to forecast the production, domestic supply, and export of China's rare earths in 2025. Based on our analysis, and given the extraction controls in force during 2011–2016, production will increase by 10.8%–12.6%, reaching 116,335–118,260 tons of rare-earth oxide (REO) in 2025. Moreover, domestic supply and export will be 75,081–76,800 tons REO and 38,797–39,400 tons REO, respectively. Technological improvements in substitution and recycling will significantly decrease the supply and mining activities of rare earths. From a policy perspective, we find that the elimination of export regulations, including export quotas and export taxes, does have a negative impact on China's future domestic supply of rare earths. Policy conflicts between increased investment in strategic emerging industries and increased resource and environmental taxes on rare earths will also affect China's rare earth supply in the future.

  7. A computational code for resolution of general compartment models applied to internal dosimetry

    Energy Technology Data Exchange (ETDEWEB)

    Claro, Thiago R.; Todo, Alberto S., E-mail: claro@usp.br, E-mail: astodo@ipen.br [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil)

    2011-07-01

    The dose resulting from internal contamination can be estimated using biokinetic models combined with experimental results obtained from bioassay and knowledge of the time of intake. The biokinetic models can be represented by a set of compartments expressing the transport, retention and elimination of radionuclides from the body. ICRP Publications 66, 78 and 100 present compartmental models for the respiratory tract, the gastrointestinal tract and for systemic distribution for an array of radionuclides of interest to radiological protection. The objective of this work is to develop a computational code for the design, visualization and resolution of compartmental models of any nature. Four different techniques are available for solving the system of differential equations, including semi-analytical and numerical methods. The software was developed in C#, using a Microsoft Access database and XML standards for file exchange with other applications. Compartmental models for uranium, thorium and iodine radionuclides were generated for the validation of the CBT software. The models were subsequently solved by the SSID software and the results compared with the values published in ICRP Publication 78. In all cases the results are in accordance with the values published by the ICRP. (author)
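
    A minimal sketch of how such a compartment system can be integrated numerically (plain forward Euler here; the code described in the paper offers four solution techniques, including semi-analytical ones, and the two-compartment chain below is a hypothetical example, not an ICRP model):

```python
def integrate_compartments(M, a0, t_end, dt=1e-4):
    """Forward-Euler solution of the linear compartment system dA/dt = M A,
    where M[i][j] is the rate coupling compartment j into compartment i
    (diagonal entries are total removal rates, e.g. transfer + decay)."""
    a = list(a0)
    n = len(a)
    steps = int(round(t_end / dt))
    for _ in range(steps):
        da = [sum(M[i][j] * a[j] for j in range(n)) for i in range(n)]
        a = [a[i] + dt * da[i] for i in range(n)]
    return a

# Hypothetical chain: compartment 0 empties into compartment 1 with
# rate k1; compartment 1 eliminates from the body with rate k2.
k1, k2 = 0.5, 0.2
M = [[-k1, 0.0],
     [ k1, -k2]]
```

    For this chain the analytic solution is A0(t) = e^(−k1·t) and A1(t) = k1/(k2 − k1)·(e^(−k1·t) − e^(−k2·t)), which makes the integrator easy to validate.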

  8. Tools for Brain-Computer Interaction: a general concept for a hybrid BCI (hBCI)

    Directory of Open Access Journals (Sweden)

    Gernot R. Mueller-Putz

    2011-11-01

    The aim of this work is to present the development of a hybrid Brain-Computer Interface (hBCI) which combines existing input devices with a BCI. The BCI should be available if the user wishes to extend the types of inputs available to an assistive technology system, but the user can also choose not to use the BCI at all; the BCI remains active in the background. The hBCI might, on the one hand, decide which input channel(s) offer the most reliable signal(s) and switch between input channels to improve information transfer rate, usability, or other factors, or, on the other hand, fuse various input channels. One major goal is therefore to bring BCI technology to a level where it can be used in a maximum number of scenarios in a simple way. To achieve this, it is of great importance that the hBCI is able to operate reliably for long periods, recognizing and adapting to changes as it does so. This goal is only possible if many different subsystems in the hBCI can work together. Since one research institute alone cannot provide all of this functionality, collaboration between institutes is necessary. To allow for such collaboration, a common software framework was investigated.

  9. Computational integration of homolog and pathway gene module expression reveals general stemness signatures.

    Directory of Open Access Journals (Sweden)

    Martina Koeva

    The stemness hypothesis states that all stem cells use common mechanisms to regulate self-renewal and multi-lineage potential. However, gene expression meta-analyses at the single gene level have failed to identify a significant number of genes selectively expressed by a broad range of stem cell types. We hypothesized that stemness may be regulated by modules of homologs. While the expression of any single gene within a module may vary from one stem cell type to the next, it is possible that the expression of the module as a whole is required so that the expression of different, yet functionally-synonymous, homologs is needed in different stem cells. Thus, we developed a computational method to test for stem cell-specific gene expression patterns from a comprehensive collection of 49 murine datasets covering 12 different stem cell types. We identified 40 individual genes and 224 stemness modules with reproducible and specific up-regulation across multiple stem cell types. The stemness modules included families regulating chromatin remodeling, DNA repair, and Wnt signaling. Strikingly, the majority of modules represent evolutionarily related homologs. Moreover, a score based on the discovered modules could accurately distinguish stem cell-like populations from other cell types in both normal and cancer tissues. This scoring system revealed that both mouse and human metastatic populations exhibit higher stemness indices than non-metastatic populations, providing further evidence for a stem cell-driven component underlying the transformation to metastatic disease.
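
    The module idea, that a module counts as expressed when any of its functionally synonymous homologs is expressed, can be sketched as a toy score (purely illustrative; the paper's stemness index is derived from its 224 discovered modules, and the gene names below are hypothetical):

```python
def stemness_score(expression, modules):
    """Toy module-level score: for each homolog module take the maximum
    expression among its members (different stem cell types may express
    different homologs of the same family), then average across modules."""
    return (sum(max(expression[g] for g in module) for module in modules)
            / len(modules))
```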

  10. A computational code for resolution of general compartment models applied to internal dosimetry

    International Nuclear Information System (INIS)

    Claro, Thiago R.; Todo, Alberto S.

    2011-01-01

    The dose resulting from internal contamination can be estimated with the use of biokinetic models combined with experimental results obtained from bio analysis and the knowledge of the incorporation time. The biokinetics models can be represented by a set of compartments expressing the transportation, retention and elimination of radionuclides from the body. The ICRP publications, number 66, 78 and 100, present compartmental models for the respiratory tract, gastrointestinal tract and for systemic distribution for an array of radionuclides of interest for the radiological protection. The objective of this work is to develop a computational code for designing, visualization and resolution of compartmental models of any nature. There are available four different techniques for the resolution of system of differential equations, including semi-analytical and numerical methods. The software was developed in C≠ programming, using a Microsoft Access database and XML standards for file exchange with other applications. Compartmental models for uranium, thorium and iodine radionuclides were generated for the validation of the CBT software. The models were subsequently solved by SSID software and the results compared with the values published in the issue 78 of ICRP. In all cases the system is in accordance with the values published by ICRP. (author)

  11. Transportable GPU (General Processor Units) chip set technology for standard computer architectures

    Science.gov (United States)

    Fosdick, R. E.; Denison, H. C.

    1982-11-01

    The USAFR-developed GPU Chip Set has been utilized by Tracor to implement both USAF and Navy Standard 16-Bit Airborne Computer Architectures. Both configurations are currently being delivered into DOD full-scale development programs. Leadless Hermetic Chip Carrier packaging has facilitated implementation of both architectures on single 4½ x 5 substrates. The CMOS and CMOS/SOS implementations of the GPU Chip Set have allowed both CPU implementations to use less than 3 watts of power each. Recent efforts by Tracor for USAF have included the definition of a next-generation GPU Chip Set that will retain the application-proven architecture of the current chip set while offering the added cost advantages of transportability across ISO-CMOS and CMOS/SOS processes and across numerous semiconductor manufacturers using a newly-defined set of common design rules. The Enhanced GPU Chip Set will increase speed by an approximate factor of 3 while significantly reducing chip counts and costs of standard CPU implementations.

  12. ACUTRI: a computer code for assessing doses to the general public due to acute tritium releases

    Energy Technology Data Exchange (ETDEWEB)

    Yokoyama, Sumi; Noguchi, Hiroshi [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment; Ryufuku, Susumu; Sasaki, Toshihisa; Kurosawa, Naohiro [Visible Information Center, Inc., Tokai, Ibaraki (Japan)

    2002-11-01

    Tritium, which is used as a fuel in a D-T burning fusion reactor, is the most important radionuclide for the safety assessment of a nuclear fusion experimental reactor such as ITER. Thus, a computer code, ACUTRI, which calculates the radiological impact of tritium released accidentally to the atmosphere, has been developed, intended for use in licensing discussions for a fusion experimental reactor and in environmental safety evaluation in Japan. ACUTRI calculates an individual tritium dose based on environmental transfer models specific to tritium and on ICRP dose models. It is also possible to perform a statistical analysis of meteorological conditions in the same way as in conventional dose assessment methods, according to the meteorological guide of the Nuclear Safety Commission of Japan. A Gaussian plume model is used for calculating the atmospheric dispersion of tritium gas (HT) and/or tritiated water (HTO). The environmental pathway model in ACUTRI considers the following internal exposures: inhalation from a primary plume (HT and/or HTO) released from the facility and inhalation from a secondary plume (HTO) re-emitted from the ground following deposition of HT and HTO. This report describes an outline of the ACUTRI code, a user guide and the results of test calculations. (author)
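
    The Gaussian plume concentration used in such assessments can be sketched in its standard textbook form with ground reflection (the symbols here are generic, not ACUTRI's actual interface, and the dispersion parameters would normally come from a stability-class correlation):

```python
import math

def plume_concentration(Q, u, y, z, sigma_y, sigma_z, H):
    """Ground-reflected Gaussian plume concentration (e.g. Bq/m^3) at
    crosswind offset y and height z, for source strength Q (Bq/s),
    wind speed u (m/s), dispersion parameters sigma_y, sigma_z (m)
    evaluated at the downwind distance of interest, and effective
    release height H (m)."""
    lateral = math.exp(-y ** 2 / (2.0 * sigma_y ** 2))
    vertical = (math.exp(-(z - H) ** 2 / (2.0 * sigma_z ** 2))
                + math.exp(-(z + H) ** 2 / (2.0 * sigma_z ** 2)))
    return Q / (2.0 * math.pi * u * sigma_y * sigma_z) * lateral * vertical
```

    The second vertical term is the image source that models reflection of the plume at the ground; for a ground-level release (H = 0) it doubles the centerline concentration.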

  13. Feasibility and impact of a computer-guided consultation on guideline-based management of COPD in general practice.

    Science.gov (United States)

    Angus, Robert M; Thompson, Elizabeth B; Davies, Lisa; Trusdale, Ann; Hodgson, Chris; McKnight, Eddie; Davies, Andrew; Pearson, Mike G

    2012-12-01

    Applying guidelines is a universal challenge that is often not met. Intelligent software systems that facilitate real-time management during a clinical interaction may offer a solution. To determine if the use of a computer-guided consultation that facilitates the National Institute for Health and Clinical Excellence-based chronic obstructive pulmonary disease (COPD) guidance and prompts clinical decision-making is feasible in primary care and to assess its impact on diagnosis and management in reviews of COPD patients. Practice nurses, one-third of whom had no specific respiratory training, undertook a computer-guided review in the usual consulting room setting using a laptop computer with the screen visible to them and to the patient. A total of 293 patients (mean (SD) age 69.7 (10.1) years, 163 (55.6%) male) with a diagnosis of COPD were randomly selected from GP databases in 16 practices and assessed. Of 236 patients who had spirometry, 45 (19%) did not have airflow obstruction and the guided clinical history changed the primary diagnosis from COPD in a further 24 patients. In the 191 patients with confirmed COPD, the consultations prompted management changes including 169 recommendations for altered prescribing of inhalers (addition or discontinuation, inhaler dose or device). In addition, 47% of the 55 current smokers were referred for smoking cessation support, 12 (6%) for oxygen assessment, and 47 (24%) for pulmonary rehabilitation. Computer-guided consultations are practicable in general practice. Primary care COPD databases were confirmed to contain a significant proportion of incorrectly assigned patients. They resulted in interventions and the rationalisation of prescribing in line with recommendations. Only in 22 (12%) of those fully assessed was no management change suggested. The introduction of a computer-guided consultation offers the prospect of comprehensive guideline quality management.

  14. On the computation of the higher-order statistics of the channel capacity over generalized fading channels

    KAUST Repository

    Yilmaz, Ferkan

    2012-12-01

    The higher-order statistics (HOS) of the channel capacity, μ_n = E[log^n(1 + γ_end)], where n ∈ ℕ denotes the order of the statistics, have received relatively little attention in the literature, due in part to the intractability of their analysis. In this letter, we propose a novel and unified analysis, based on the moment generating function (MGF) technique, to compute the HOS of the channel capacity exactly. More precisely, our mathematical formalism can be readily applied to maximal-ratio-combining (MRC) receivers operating in generalized fading environments. The formalism is illustrated by numerical examples focusing on correlated generalized fading environments. © 2012 IEEE.
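
    A hedged Monte-Carlo cross-check of the quantity μ_n = E[log^n(1 + γ)] for the simplest special case, Rayleigh fading, where the SNR γ is exponentially distributed (this sanity check is an illustration of the quantity itself, not the letter's MGF-based analysis):

```python
import math
import random

def capacity_hos_mc(n, mean_snr, samples=200_000, seed=1):
    """Monte-Carlo estimate of mu_n = E[log^n(1 + gamma)] under Rayleigh
    fading, where the instantaneous SNR gamma is exponentially
    distributed with mean mean_snr."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(samples):
        gamma = rng.expovariate(1.0 / mean_snr)
        total += math.log1p(gamma) ** n
    return total / samples
```

    With n = 1 this is the ergodic capacity (in nats); n = 2 gives the second moment, from which the variance of the capacity follows as μ₂ − μ₁².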

  15. On the computation of the higher-order statistics of the channel capacity over generalized fading channels

    KAUST Repository

    Yilmaz, Ferkan; Alouini, Mohamed-Slim

    2012-01-01

    The higher-order statistics (HOS) of the channel capacity, μ_n = E[log^n(1 + γ_end)], where n ∈ ℕ denotes the order of the statistics, have received relatively little attention in the literature, due in part to the intractability of their analysis. In this letter, we propose a novel and unified analysis, based on the moment generating function (MGF) technique, to compute the HOS of the channel capacity exactly. More precisely, our mathematical formalism can be readily applied to maximal-ratio-combining (MRC) receivers operating in generalized fading environments. The formalism is illustrated by numerical examples focusing on correlated generalized fading environments. © 2012 IEEE.

  16. A general method for assessing brain-computer interface performance and its limitations

    Science.gov (United States)

    Hill, N. Jeremy; Häuser, Ann-Katrin; Schalk, Gerwin

    2014-04-01

    Objective. When researchers evaluate brain-computer interface (BCI) systems, we want quantitative answers to questions such as: How good is the system’s performance? How good does it need to be? and: Is it capable of reaching the desired level in future? In response to the current lack of objective, quantitative, study-independent approaches, we introduce methods that help to address such questions. We identified three challenges: (I) the need for efficient measurement techniques that adapt rapidly and reliably to capture a wide range of performance levels; (II) the need to express results in a way that allows comparison between similar but non-identical tasks; (III) the need to measure the extent to which certain components of a BCI system (e.g. the signal processing pipeline) not only support BCI performance, but also potentially restrict the maximum level it can reach. Approach. For challenge (I), we developed an automatic staircase method that adjusted task difficulty adaptively along a single abstract axis. For challenge (II), we used the rate of information gain between two Bernoulli distributions: one reflecting the observed success rate, the other reflecting chance performance estimated by a matched random-walk method. This measure includes Wolpaw’s information transfer rate as a special case, but addresses the latter’s limitations including its restriction to item-selection tasks. To validate our approach and address challenge (III), we compared four healthy subjects’ performance using an EEG-based BCI, a ‘Direct Controller’ (a high-performance hardware input device), and a ‘Pseudo-BCI Controller’ (the same input device, but with control signals processed by the BCI signal processing pipeline). Main results. Our results confirm the repeatability and validity of our measures, and indicate that our BCI signal processing pipeline reduced attainable performance by about 33% (21 bits min⁻¹). Significance. Our approach provides a flexible basis
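
    Wolpaw's information transfer rate, which the proposed measure includes as a special case, and the information gain between an observed and a chance Bernoulli distribution can be sketched from their standard formulas (a textbook-formula sketch, not the authors' implementation):

```python
import math

def wolpaw_bits_per_selection(n_targets, accuracy):
    """Wolpaw's information transferred per selection, for N equiprobable
    targets and selection accuracy P."""
    N, P = n_targets, accuracy
    bits = math.log2(N)
    if P > 0.0:
        bits += P * math.log2(P)
    if P < 1.0:
        bits += (1.0 - P) * math.log2((1.0 - P) / (N - 1))
    return bits

def bernoulli_info_gain_bits(p_observed, p_chance):
    """KL divergence (in bits) between the observed success distribution
    and the chance-performance distribution."""
    def term(a, b):
        return a * math.log2(a / b) if a > 0.0 else 0.0
    return (term(p_observed, p_chance)
            + term(1.0 - p_observed, 1.0 - p_chance))
```

    Dividing either quantity by the time per trial turns it into a rate (bits per minute), which is the form usually reported.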

  17. Limits to high-speed simulations of spiking neural networks using general-purpose computers.

    Science.gov (United States)

    Zenke, Friedemann; Gerstner, Wulfram

    2014-01-01

    To understand how the central nervous system performs computations using recurrent neuronal circuitry, simulations have become an indispensable tool for theoretical neuroscience. To study neuronal circuits and their ability to self-organize, increasing attention has been directed toward synaptic plasticity. In particular, spike-timing-dependent plasticity (STDP) creates specific demands for simulations of spiking neural networks. On the one hand, a high temporal resolution is required to capture the millisecond timescale of typical STDP windows. On the other hand, network simulations have to evolve over hours up to days, to capture the timescale of long-term plasticity. To do this efficiently, fast simulation speed is the crucial ingredient rather than large neuron numbers. Using different medium-sized network models consisting of several thousands of neurons and off-the-shelf hardware, we compare the simulation speed of the simulators Brian, NEST and NEURON, as well as our own simulator Auryn. Our results show that real-time simulations of different plastic network models are possible in parallel simulations in which numerical precision is not a primary concern. Even so, the speed-up margin of parallelism is limited, and boosting simulation speeds beyond one tenth of real-time is difficult. By profiling simulation code we show that the run times of typical plastic network simulations encounter a hard boundary. This limit is partly due to latencies in the inter-process communications and thus cannot be overcome by increased parallelism. Overall, these results show that adequate simulation tools for studying plasticity in medium-sized spiking neural networks are readily available and run efficiently on small clusters. However, to run simulations substantially faster than real-time, special hardware is a prerequisite.
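    The millisecond STDP window is why sub-millisecond integration steps are typically required, even though the network must then be advanced for hours of simulated time. A minimal forward-Euler leaky integrate-and-fire sketch illustrates the timestep bookkeeping (parameters are illustrative and not taken from any of the benchmarked simulators):

```python
def simulate_lif(i_ext, dt=1e-4, t_end=0.5, tau=0.02, v_rest=-0.07,
                 v_thresh=-0.05, v_reset=-0.07, r_m=1e8):
    """Forward-Euler leaky integrate-and-fire neuron under constant input
    current i_ext (A); returns the list of spike times in seconds.
    dt = 0.1 ms: fine enough to resolve millisecond STDP windows, yet
    t_end / dt steps are needed, which is what dominates run time."""
    v, spikes = v_rest, []
    for step in range(int(t_end / dt)):
        # membrane equation: tau dv/dt = -(v - v_rest) + R * I
        v += dt * (-(v - v_rest) + r_m * i_ext) / tau
        if v >= v_thresh:
            spikes.append(step * dt)
            v = v_reset
    return spikes
```

With a suprathreshold current (e.g. 0.3 nA here), the neuron fires regularly at roughly tau·ln 3 ≈ 22 ms intervals; simulating one wall-clock day of plasticity at this dt requires ~10⁹ steps per neuron, which is the scaling bottleneck the abstract describes.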

  18. On-line data processing apparatus for spectroscopic measurements of atomic uranium

    International Nuclear Information System (INIS)

    Miron, E.; Levin, L.A.; Erez, G.; Baumatz, D.; Goren, I.; Shpancer, I.

    1977-01-01

    A computer-based apparatus for on-line spectroscopic measurements of atomic uranium is described. The system is capable of enhancing the signal-to-noise ratio by averaging, and of performing calculations. Computation flow charts and programs are included.
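    Averaging N repeated sweeps of a signal with uncorrelated noise improves the signal-to-noise ratio by a factor of about √N, which is the enhancement such an apparatus exploits. An illustrative sketch (not the original apparatus software):

```python
import random

def averaged_trace(signal, n_sweeps, noise_sd, rng):
    """Accumulate n_sweeps noisy copies of a signal and return their mean,
    as in on-line signal averaging."""
    acc = [0.0] * len(signal)
    for _ in range(n_sweeps):
        for i, s in enumerate(signal):
            acc[i] += s + rng.gauss(0.0, noise_sd)
    return [a / n_sweeps for a in acc]

# residual noise power after averaging N sweeps falls as 1/N
# (i.e. the noise amplitude falls as 1/sqrt(N), so SNR gains sqrt(N))
rng = random.Random(0)
trace = averaged_trace([0.0] * 2000, 100, 1.0, rng)
noise_power = sum(v * v for v in trace) / len(trace)   # ~0.01 for N = 100
```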

  19. A k-distribution-based radiation code and its computational optimization for an atmospheric general circulation model

    International Nuclear Information System (INIS)

    Sekiguchi, Miho; Nakajima, Teruyuki

    2008-01-01

    The gas absorption process scheme in the broadband radiative transfer code 'mstrn8', which is used to calculate atmospheric radiative transfer efficiently in a general circulation model, is improved. Three major improvements are made. The first is an update of the database of line absorption parameters and of the continuum absorption model. The second is a change to the definition of the selection rule for gas absorption used to choose which absorption bands to include. The last is an upgrade of the optimization method used to decrease the number of quadrature points used for numerical integration in the correlated k-distribution approach, thereby realizing higher computational efficiency without losing accuracy. The new radiation package, termed 'mstrnX', computes radiation fluxes and heating rates with errors of less than 0.6 W/m² and 0.3 K/day, respectively, through the troposphere and the lower stratosphere for any of the standard AFGL atmospheres. A serious cold-bias problem in an atmospheric general circulation model using the ancestor code 'mstrn8' is almost entirely resolved by the upgrade to 'mstrnX'.
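    The k-distribution idea exploits the fact that a band-mean transmittance depends only on the distribution of absorption coefficients within the band, not on their spectral arrangement: sorting k into a smooth cumulative curve k(g) lets a handful of quadrature points stand in for thousands of spectral points. A toy sketch of the idea (a simplified illustration, not the mstrnX scheme):

```python
import math

def band_transmittance_lbl(k_values, u):
    """Band-mean transmittance by brute-force summation over all
    (equally weighted) spectral points: the 'line-by-line' reference."""
    return sum(math.exp(-k * u) for k in k_values) / len(k_values)

def band_transmittance_kdist(k_values, u, n_quad=16):
    """k-distribution: sort k into a smooth k(g) curve over cumulative
    probability g and integrate with a few midpoint quadrature points
    (real schemes use optimized quadrature, as in the abstract)."""
    k_sorted = sorted(k_values)
    n = len(k_sorted)
    acc = 0.0
    for j in range(n_quad):
        g = (j + 0.5) / n_quad           # midpoint in g within (0, 1)
        acc += math.exp(-k_sorted[int(g * n)] * u)
    return acc / n_quad

# a toy band whose absorption coefficient spans four orders of magnitude
k_band = [10.0 ** (-3.0 + 4.0 * i / 999.0) for i in range(1000)]
exact = band_transmittance_lbl(k_band, 1.0)       # 1000 exponentials
approx = band_transmittance_kdist(k_band, 1.0)    # only 16 exponentials
```

The savings scale directly with the number of quadrature points, which is why the optimization of that number in the abstract pays off in a GCM.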

  20. Positron Emission Tomography Computed Tomography: A Guide for the General Radiologist.

    Science.gov (United States)

    Beadsmoore, Clare; Newman, David; MacIver, Duncan; Pawaroo, Davina

    2015-11-01

    Cancer remains a leading cause of death in Canada and worldwide. Whilst advances in anatomical imaging to detect and monitor malignant disease have continued over the last few decades, limitations remain. Functional imaging, such as positron emission tomography (PET), has improved the sensitivity and specificity of detecting malignant disease. In combination with computed tomography (CT), PET is now commonly used in the oncology setting and is an integral part of many cancer patients' pathways. Although initially the CT component of the study was purely for attenuation correction of the PET imaging and to provide anatomical coregistration, many centers now combine the PET study with a diagnostic-quality contrast-enhanced CT to provide one-stop staging, thus refining the patient's pathway. The commonest tracer used in everyday practice is FDG (18F-fluorodeoxyglucose). There are many more tracers in routine clinical practice, and others with emerging roles, such as 11C-choline, useful in the imaging of prostate cancer; 11C-methionine, useful in imaging brain tumours; 11C-acetate, used in imaging hepatocellular carcinomas; 18F-FLT, which can be used as a marker of cellular proliferation in various malignancies; and 18F-DOPA and various 68Ga-somatostatin analogues, used in patients with neuroendocrine tumours. In this article we concentrate on FDG PET-CT, as this is the most commonly available and most widely utilised tracer, now used to routinely stage a number of cancers. PET-CT alters the stage in approximately one-third of patients compared to anatomical imaging alone. Increasingly, PET-CT is being used to assess early metabolic response to treatment. Metabolic response can be seen much earlier than a change in the size/volume of the disease measured by standard CT imaging. This can aid treatment decisions, both in terms of modifying therapy and by providing important prognostic information. Furthermore, it is helpful in patients with distorted anatomy from surgery.

  1. A general model for likelihood computations of genetic marker data accounting for linkage, linkage disequilibrium, and mutations.

    Science.gov (United States)

    Kling, Daniel; Tillmar, Andreas; Egeland, Thore; Mostad, Petter

    2015-09-01

    Several applications necessitate an unbiased determination of relatedness, be it in linkage or association studies or in a forensic setting. An appropriate model to compute the joint probability of some genetic data for a set of persons given some hypothesis about the pedigree structure is then required. The increasing number of markers available through high-density SNP microarray typing and NGS technologies intensifies the demand; at the same time, using a large number of markers may lead to biased results due to strong dependencies between closely located loci, both within pedigrees (linkage) and in the population (allelic association, or linkage disequilibrium (LD)). We present a new general model, based on a Markov chain for inheritance patterns and another Markov chain for founder allele patterns, the latter allowing us to account for LD. We also demonstrate a specific implementation for X chromosomal markers that allows for computation of likelihoods based on hypotheses of alleged relationships and genetic marker data. The algorithm can simultaneously account for linkage, LD, and mutations. We demonstrate its feasibility using simulated examples. The algorithm is implemented in the software FamLinkX, which provides a user-friendly GUI for Windows systems (FamLinkX, as well as further usage instructions, is freely available at www.famlink.se). Our software provides the necessary means to solve cases where no previous implementation exists. In addition, the software can perform simulations in order to further study the impact of linkage and LD on computed likelihoods for an arbitrary set of markers.
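    At its core, a likelihood of this kind is computed by a forward pass over a hidden Markov chain whose states are inheritance (or founder-allele) patterns and whose steps are the ordered loci. A generic forward-algorithm sketch (illustrative; not the FamLinkX implementation):

```python
def forward_likelihood(init, trans, emit):
    """Likelihood of observed marker data under a Markov chain of hidden
    states (e.g. inheritance patterns), via the forward algorithm.
    init[s]        : prior probability of state s at the first locus
    trans[l][s][t] : transition probability s -> t between loci l and l+1
                     (linkage enters here, via recombination fractions)
    emit[l][s]     : probability of the observed genotype data at locus l
                     given state s (LD and mutations enter here)"""
    n_states = len(init)
    alpha = [init[s] * emit[0][s] for s in range(n_states)]
    for l in range(1, len(emit)):
        alpha = [
            sum(alpha[s] * trans[l - 1][s][t] for s in range(n_states))
            * emit[l][t]
            for t in range(n_states)
        ]
    return sum(alpha)
```

The forward pass costs O(loci × states²) rather than the exponential cost of summing over all state paths, which is what makes many-marker likelihoods tractable.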

  2. SYSTEM OF COMPUTER MODELING OBJECTS AND PROCESSES AND FEATURES OF ITS USE IN THE EDUCATIONAL PROCESS OF GENERAL SECONDARY EDUCATION

    Directory of Open Access Journals (Sweden)

    Svitlana G. Lytvynova

    2018-04-01

    The article analyzes the historical aspect of the formation of computer modeling as one of the promising directions of educational process development. The notion of “system of computer modeling” and a conceptual model of a system of computer modeling (SCMod), with its components (mathematical, animation, graphic, strategic), functions, principles and purposes of use, are grounded. The features of organizing students’ work with SCMod, both individual and in groups, and the formation of subject competencies are described; the aspect of students’ motivation to learn is considered. It is established that educational institutions can use SCMod at different levels and stages of training and in different contexts, which consist of interrelated physical, social, cultural and technological aspects. It is determined that the use of SCMod in general secondary education would increase the capacity of teachers to improve the training of students in natural and mathematical subjects and contribute to the individualization of the learning process, in order to meet the pace, educational interests and capabilities of each particular student. It is substantiated that the use of SCMod in the study of natural-mathematical subjects contributes to the formation of subject competencies, develops the skills of analysis and decision-making, increases the level of digital communication, develops vigilance, raises the level of knowledge, and increases the duration of students’ attention. Further research should address the process of forming students’ competencies in natural-mathematical subjects and the design of cognitive tasks using SCMod.

  3. GEM-E3: A computable general equilibrium model applied for Switzerland

    Energy Technology Data Exchange (ETDEWEB)

    Bahn, O. [Paul Scherrer Inst., CH-5232 Villigen PSI (Switzerland); Frei, C. [Ecole Polytechnique Federale de Lausanne (EPFL) and Paul Scherrer Inst. (Switzerland)

    2000-01-01

    The objectives of the European Research Project GEM-E3-ELITE, funded by the European Commission and coordinated by the Centre for European Economic Research (Germany), were to further develop the general equilibrium model GEM-E3 (Capros et al., 1995, 1997) and to conduct policy analysis through case studies. GEM-E3 is an applied general equilibrium model that analyses the macro-economy and its interaction with the energy system and the environment through the balancing of energy supply and demand, atmospheric emissions and pollution control, together with the fulfillment of overall equilibrium conditions. PSI's research objectives within GEM-E3-ELITE were to implement and apply GEM-E3 for Switzerland. The first objective required in particular the development of a Swiss database for each of the GEM-E3 modules (economic module and environmental module). For the second objective, strategies to reduce CO{sub 2} emissions were evaluated for Switzerland. In order to develop the economic database, PSI collaborated with the Laboratory of Applied Economics (LEA) of the University of Geneva and the Laboratory of Energy Systems (LASEN) of the Federal Institute of Technology in Lausanne (EPFL). The Swiss Federal Statistical Office (SFSO) and the Institute for Business Cycle Research (KOF) of the Swiss Federal Institute of Technology (ETH Zurich) also contributed data. The Swiss environmental database consists mainly of an Energy Balance Table and an Emission Coefficients Table. Both were designed using national and international official statistics. The Emission Coefficients Table is furthermore based on know-how from the PSI GaBE Project. Using GEM-E3 Switzerland, two strategies to reduce Swiss CO{sub 2} emissions were evaluated: a carbon tax ('tax only' strategy), and the combination of a carbon tax with the buying of CO{sub 2} emission permits ('permits and tax' strategy). In the first strategy, Switzerland would impose the necessary carbon tax to achieve

  4. GEM-E3: A computable general equilibrium model applied for Switzerland

    International Nuclear Information System (INIS)

    Bahn, O.; Frei, C.

    2000-01-01

    The objectives of the European Research Project GEM-E3-ELITE, funded by the European Commission and coordinated by the Centre for European Economic Research (Germany), were to further develop the general equilibrium model GEM-E3 (Capros et al., 1995, 1997) and to conduct policy analysis through case studies. GEM-E3 is an applied general equilibrium model that analyses the macro-economy and its interaction with the energy system and the environment through the balancing of energy supply and demand, atmospheric emissions and pollution control, together with the fulfillment of overall equilibrium conditions. PSI's research objectives within GEM-E3-ELITE were to implement and apply GEM-E3 for Switzerland. The first objective required in particular the development of a Swiss database for each of the GEM-E3 modules (economic module and environmental module). For the second objective, strategies to reduce CO2 emissions were evaluated for Switzerland. In order to develop the economic database, PSI collaborated with the Laboratory of Applied Economics (LEA) of the University of Geneva and the Laboratory of Energy Systems (LASEN) of the Federal Institute of Technology in Lausanne (EPFL). The Swiss Federal Statistical Office (SFSO) and the Institute for Business Cycle Research (KOF) of the Swiss Federal Institute of Technology (ETH Zurich) also contributed data. The Swiss environmental database consists mainly of an Energy Balance Table and an Emission Coefficients Table. Both were designed using national and international official statistics. The Emission Coefficients Table is furthermore based on know-how from the PSI GaBE Project. Using GEM-E3 Switzerland, two strategies to reduce Swiss CO2 emissions were evaluated: a carbon tax ('tax only' strategy), and the combination of a carbon tax with the buying of CO2 emission permits ('permits and tax' strategy). In the first strategy, Switzerland would impose the necessary carbon tax to achieve the reduction target, and use the tax

  5. The economic impact of more sustainable water use in agriculture: A computable general equilibrium analysis

    Science.gov (United States)

    Calzadilla, Alvaro; Rehdanz, Katrin; Tol, Richard S. J.

    2010-04-01

    Summary: Agriculture is the largest consumer of freshwater resources: around 70 percent of all freshwater withdrawals are used for food production. These agricultural products are traded internationally. A full understanding of water use is, therefore, impossible without understanding the international market for food and related products, such as textiles. Based on the global general equilibrium model GTAP-W, we offer a method for investigating the role of green (rain) and blue (irrigation) water resources in agriculture within the context of international trade. We use future projections of allowable water withdrawals for surface water and groundwater to define two alternative water management scenarios. The first scenario explores a deterioration of current trends and policies in the water sector (water crisis scenario). The second scenario assumes an improvement in policies and trends in the water sector, eliminating groundwater overdraft world-wide and increasing water allocation for the environment (sustainable water use scenario). In both scenarios, welfare gains or losses are not only associated with changes in agricultural water consumption. Under the water crisis scenario, welfare rises not only for regions where water consumption increases (China, South East Asia and the USA); welfare gains are considerable for Japan and South Korea, Southeast Asia and Western Europe as well. These regions benefit from higher levels of irrigated production and lower food prices. Conversely, under the sustainable water use scenario, welfare losses are not confined to regions where overdrafting is occurring; welfare decreases in other regions as well. These results indicate that, for water use, there is a clear trade-off between economic welfare and environmental sustainability.

  6. Numerical computation of gravitational field of general extended body and its application to rotation curve study of galaxies

    Science.gov (United States)

    Fukushima, Toshio

    2017-06-01

    Reviewed are recently developed methods for the numerical integration of the gravitational field of general two- or three-dimensional bodies with arbitrary shape and mass density distribution: (i) an axisymmetric infinitely thin disc (Fukushima 2016a, MNRAS, 456, 3702), (ii) a general infinitely thin plate (Fukushima 2016b, MNRAS, 459, 3825), (iii) a plane-symmetric and axisymmetric ring-like object (Fukushima 2016c, AJ, 152, 35), (iv) an axisymmetric thick disc (Fukushima 2016d, MNRAS, 462, 2138), and (v) a general three-dimensional body (Fukushima 2016e, MNRAS, 463, 1500). The key techniques employed are (a) the split quadrature method using the double exponential rule (Takahashi and Mori, 1973, Numer. Math., 21, 206), (b) the precise and fast computation of complete elliptic integrals (Fukushima 2015, J. Comp. Appl. Math., 282, 71), (c) Ridder's algorithm of numerical differentiation (Ridder 1982, Adv. Eng. Softw., 4, 75), (d) the recursive computation of the zonal toroidal harmonics, and (e) the transformation of the integration variables to local spherical polar coordinates. These devices successfully regularize the Newton kernel in the integrands so as to provide accurate integral values. For example, the general 3D potential is regularly integrated as Φ(\vec{x}) = -G \int_0^∞ (\int_{-1}^1 (\int_0^{2π} ρ(\vec{x}+\vec{q}) dψ) dγ) q dq, where \vec{q} = q(√(1-γ²) cos ψ, √(1-γ²) sin ψ, γ) is the relative position vector referred to \vec{x}, the position vector at which the potential is evaluated. As a result, the new methods can compute the potential and acceleration vector very accurately. In fact, the axisymmetric integration reproduces the Miyamoto-Nagai potential to 14 correct digits. The developed methods are applied to the gravitational field study of galaxies and protoplanetary discs. Among them, the investigation of the rotation curve of M33 supports a disc-like structure of the dark matter with a double-power-law surface
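    The quoted triple integral can be checked directly with a crude midpoint rule in the local spherical polar variables (q, γ, ψ). The sketch below does so for a uniform unit-density sphere with G = 1, evaluated outside the body where the exact potential is -GM/|x| (illustrative only; the paper's methods use far more careful quadrature):

```python
import numpy as np

def potential_uniform_sphere(x, n_q=160, n_gamma=120, n_psi=72):
    """Midpoint quadrature of Phi(x) = -G ∫(∫(∫ rho dpsi) dgamma) q dq in
    local spherical polar coordinates about x, for a uniform unit sphere
    (rho = 1, G = 1) centred at the origin. The q-weighting regularizes
    the 1/r Newton kernel, as described in the abstract."""
    x = np.asarray(x, dtype=float)
    q_max = np.linalg.norm(x) + 1.0          # rho vanishes beyond this radius
    q = (np.arange(n_q) + 0.5) * q_max / n_q
    gamma = -1.0 + (np.arange(n_gamma) + 0.5) * 2.0 / n_gamma
    psi = (np.arange(n_psi) + 0.5) * 2.0 * np.pi / n_psi
    Q, Gm, P = np.meshgrid(q, gamma, psi, indexing="ij")
    s = np.sqrt(1.0 - Gm ** 2)
    # sample point: x + q * (s cos psi, s sin psi, gamma)
    px = x[0] + Q * s * np.cos(P)
    py = x[1] + Q * s * np.sin(P)
    pz = x[2] + Q * Gm
    rho = (px ** 2 + py ** 2 + pz ** 2 <= 1.0).astype(float)
    dq, dgamma, dpsi = q_max / n_q, 2.0 / n_gamma, 2.0 * np.pi / n_psi
    return -np.sum(rho * Q) * dq * dgamma * dpsi
```

At x = (2, 0, 0) this should approach the exact point-mass value -GM/2 with M = 4π/3, to within roughly a percent at these grid sizes.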

  7. COMPUTING

    CERN Multimedia

    M. Kasemann

    Overview In autumn the main focus was to process and handle CRAFT data and to perform the Summer08 MC production. The operational aspects were well covered by regular Computing Shifts, experts on duty and Computing Run Coordination. At the Computing Resource Board (CRB) in October, a model to account for service work at Tier 2s was approved. The computing resources for 2009 were reviewed for presentation at the C-RRB. The quarterly resource monitoring is continuing. Facilities/Infrastructure operations Operations during CRAFT data taking ran fine. This proved to be a very valuable experience for T0 workflows and operations. The transfers of custodial data to most T1s went smoothly. A first round of reprocessing started at the Tier-1 centers at the end of November; it will take about two weeks. The Computing Shifts procedure was tested at full scale during this period and proved to be very efficient: 30 Computing Shifts Persons (CSP) and 10 Computing Resources Coordinators (CRC). The shift program for the shut down w...

  8. A general formula for computing maximum proportion correct scores in various psychophysical paradigms with arbitrary probability distributions of stimulus observations.

    Science.gov (United States)

    Dai, Huanping; Micheyl, Christophe

    2015-05-01

    Proportion correct (Pc) is a fundamental measure of task performance in psychophysics. The maximum Pc score that can be achieved by an optimal (maximum-likelihood) observer in a given task is of both theoretical and practical importance, because it sets an upper limit on human performance. Within the framework of signal detection theory, analytical solutions for computing the maximum Pc score have been established for several common experimental paradigms under the assumption of Gaussian additive internal noise. However, as the scope of applications of psychophysical signal detection theory expands, the need is growing for psychophysicists to compute maximum Pc scores for situations involving non-Gaussian (internal or stimulus-induced) noise. In this article, we provide a general formula for computing the maximum Pc in various psychophysical experimental paradigms for arbitrary probability distributions of sensory activity. Moreover, easy-to-use MATLAB code implementing the formula is provided. Practical applications of the formula are illustrated, and its accuracy is evaluated, for two paradigms and two types of probability distributions (uniform and Gaussian). The results demonstrate that Pc scores computed using the formula remain accurate even for continuous probability distributions, as long as the conversion from continuous probability density functions to discrete probability mass functions is supported by a sufficiently high sampling resolution. We hope that the exposition in this article, and the freely available MATLAB code, facilitates calculations of maximum performance for a wider range of experimental situations, as well as explorations of the impact of different assumptions concerning internal-noise distributions on maximum performance in psychophysical experiments.
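    For distributions without analytical solutions, the maximum Pc of the ideal observer can also be estimated by Monte Carlo: on each simulated trial, the maximum-likelihood observer picks the interval whose observation has the larger likelihood ratio. A sketch for a two-interval 2AFC task (illustrative; not the authors' MATLAB code), using the equal-variance Gaussian case where the analytic answer is Pc = Φ(d′/√2):

```python
import math
import random

def max_pc_2afc(sample_target, sample_standard, log_lr,
                n_trials=200_000, seed=1):
    """Monte Carlo estimate of the maximum proportion correct in a
    two-interval 2AFC task: the ML observer chooses the interval whose
    observation has the larger log likelihood ratio (target vs standard);
    ties are broken by a fair coin."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(n_trials):
        lt, ls = log_lr(sample_target(rng)), log_lr(sample_standard(rng))
        if lt > ls or (lt == ls and rng.random() < 0.5):
            correct += 1
    return correct / n_trials

# equal-variance Gaussian example: the log likelihood ratio is monotonic
# in x, so "pick the larger observation" is the ML rule
d_prime = 1.0
pc = max_pc_2afc(lambda r: r.gauss(d_prime, 1.0),
                 lambda r: r.gauss(0.0, 1.0),
                 lambda x: x)
```

Swapping in non-Gaussian samplers and the matching `log_lr` covers the arbitrary-distribution cases the article targets, at the cost of Monte Carlo rather than quadrature accuracy.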

  9. COMPUTING

    CERN Multimedia

    I. Fisk

    2011-01-01

    Introduction CMS distributed computing system performed well during the 2011 start-up. The events in 2011 have more pile-up and are more complex than last year; this results in longer reconstruction times and harder events to simulate. Significant increases in computing capacity were delivered in April for all computing tiers, and the utilisation and load is close to the planning predictions. All computing centre tiers performed their expected functionalities. Heavy-Ion Programme The CMS Heavy-Ion Programme had a very strong showing at the Quark Matter conference. A large number of analyses were shown. The dedicated heavy-ion reconstruction facility at the Vanderbilt Tier-2 is still involved in some commissioning activities, but is available for processing and analysis. Facilities and Infrastructure Operations Facility and Infrastructure operations have been active with operations and several important deployment tasks. Facilities participated in the testing and deployment of WMAgent and WorkQueue+Request...

  10. COMPUTING

    CERN Multimedia

    P. McBride

    The Computing Project is preparing for a busy year where the primary emphasis of the project moves towards steady operations. Following the very successful completion of Computing Software and Analysis challenge, CSA06, last fall, we have reorganized and established four groups in computing area: Commissioning, User Support, Facility/Infrastructure Operations and Data Operations. These groups work closely together with groups from the Offline Project in planning for data processing and operations. Monte Carlo production has continued since CSA06, with about 30M events produced each month to be used for HLT studies and physics validation. Monte Carlo production will continue throughout the year in the preparation of large samples for physics and detector studies ramping to 50 M events/month for CSA07. Commissioning of the full CMS computing system is a major goal for 2007. Site monitoring is an important commissioning component and work is ongoing to devise CMS specific tests to be included in Service Availa...

  11. COMPUTING

    CERN Multimedia

    M. Kasemann

    Overview During the past three months activities were focused on data operations, testing and re-enforcing shift and operational procedures for data production and transfer, MC production and on user support. Planning of the computing resources in view of the new LHC calendar is ongoing. Two new task forces were created for supporting the integration work: Site Commissioning, which develops tools helping distributed sites to monitor job and data workflows, and Analysis Support, collecting the user experience and feedback during analysis activities and developing tools to increase efficiency. The development plan for DMWM for 2009/2011 was developed at the beginning of the year, based on the requirements from the Physics, Computing and Offline groups (see Offline section). The Computing management meeting at FermiLab on February 19th and 20th was an excellent opportunity for discussing the impact of, and addressing issues and solutions to, the main challenges facing CMS computing. The lack of manpower is particul...

  12. Constrained Total Generalized p-Variation Minimization for Few-View X-Ray Computed Tomography Image Reconstruction.

    Science.gov (United States)

    Zhang, Hanming; Wang, Linyuan; Yan, Bin; Li, Lei; Cai, Ailong; Hu, Guoen

    2016-01-01

    Total generalized variation (TGV)-based computed tomography (CT) image reconstruction, which utilizes high-order image derivatives, is superior to total variation-based methods in terms of the preservation of edge information and the suppression of unfavorable staircase effects. However, conventional TGV regularization employs an l1-based form, which is not the most direct way to maximize the sparsity prior. In this study, we propose a total generalized p-variation (TGpV) regularization model to improve the sparsity exploitation of TGV and offer efficient solutions to few-view CT image reconstruction problems. To solve the nonconvex optimization problem of the TGpV minimization model, we present an efficient iterative algorithm based on the alternating minimization of an augmented Lagrangian function. All of the resulting subproblems decoupled by variable splitting admit explicit solutions obtained by applying the alternating minimization method and a generalized p-shrinkage mapping. In addition, approximate solutions that can be easily performed and quickly calculated through the fast Fourier transform are derived using the proximal point method to reduce the cost of the inner subproblems. Results on simulated and real data are qualitatively and quantitatively evaluated to validate the efficiency and feasibility of the proposed method. Overall, the proposed method exhibits reasonable performance and outperforms the original TGV-based method when applied to few-view problems.
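    The generalized p-shrinkage mapping used for such subproblems has a simple closed form (following Chartrand's p-shrinkage; shown here for scalars, whereas the paper applies it to gradient-magnitude fields). For p = 1 it reduces to ordinary soft thresholding:

```python
def p_shrink(x, lam, p):
    """Generalized p-shrinkage: sign(x) * max(|x| - lam^(2-p) * |x|^(p-1), 0).
    At p = 1 this is soft thresholding; for p < 1 small coefficients are
    suppressed more aggressively, promoting stronger sparsity."""
    mag = abs(x)
    if mag == 0.0:
        return 0.0
    shrunk = mag - lam ** (2.0 - p) * mag ** (p - 1.0)
    return (x / mag) * max(shrunk, 0.0)
```

In an alternating-minimization loop, this mapping is applied elementwise to the split auxiliary variables while the remaining quadratic subproblem is solved in closed form (via FFTs, in the paper's setting).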

  13. Predicting oropharyngeal tumor volume throughout the course of radiation therapy from pretreatment computed tomography data using general linear models

    International Nuclear Information System (INIS)

    Yock, Adam D.; Kudchadker, Rajat J.; Rao, Arvind; Dong, Lei; Beadle, Beth M.; Garden, Adam S.; Court, Laurence E.

    2014-01-01

    Purpose: The purpose of this work was to develop and evaluate the accuracy of several predictive models of variation in tumor volume throughout the course of radiation therapy. Methods: Nineteen patients with oropharyngeal cancers were imaged daily with CT-on-rails for image-guided alignment per an institutional protocol. The daily volumes of 35 tumors in these 19 patients were determined and used to generate (1) a linear model in which tumor volume changed at a constant rate, (2) a general linear model that utilized the power fit relationship between the daily and initial tumor volumes, and (3) a functional general linear model that identified and exploited the primary modes of variation between time series describing the changing tumor volumes. Primary and nodal tumor volumes were examined separately. The accuracy of these models in predicting daily tumor volumes was compared with that of static and linear reference models using leave-one-out cross-validation. Results: In predicting the daily volume of primary tumors, the general linear model and the functional general linear model were more accurate than the static reference model by 9.9% (range: −11.6%–23.8%) and 14.6% (range: −7.3%–27.5%), respectively, and were more accurate than the linear reference model by 14.2% (range: −6.8%–40.3%) and 13.1% (range: −1.5%–52.5%), respectively. In predicting the daily volume of nodal tumors, only the 14.4% (range: −11.1%–20.5%) improvement in accuracy of the functional general linear model compared to the static reference model was statistically significant. Conclusions: A general linear model and a functional general linear model trained on data from a small population of patients can predict the primary tumor volume throughout the course of radiation therapy with greater accuracy than standard reference models. These more accurate models may increase the prognostic value of information about the tumor garnered from pretreatment computed tomography.
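    A "power fit relationship between the daily and initial tumor volumes" can be estimated by ordinary least squares in log space. An illustrative stand-in for that component of the model (not the authors' exact formulation):

```python
import math

def fit_power_law(v0_list, vd_list):
    """Least-squares fit of log(V_d) = log(c) + b * log(V_0), i.e. the
    power law V_d = c * V_0^b relating a day's tumor volume V_d to the
    initial volume V_0 (illustrative sketch)."""
    xs = [math.log(v) for v in v0_list]
    ys = [math.log(v) for v in vd_list]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    c = math.exp(my - b * mx)
    return c, b
```

Fitting one such pair (c, b) per treatment day, from a training cohort, yields a per-day prediction of volume from the pretreatment scan alone.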

  14. COMPUTING

    CERN Multimedia

    I. Fisk

    2013-01-01

    Computing activity had ramped down after the completion of the reprocessing of the 2012 data and parked data, but is increasing with new simulation samples for analysis and upgrade studies. Much of the Computing effort is currently involved in activities to improve the computing system in preparation for 2015. Operations Office Since the beginning of 2013, the Computing Operations team successfully re-processed the 2012 data in record time, not only by using opportunistic resources like the San Diego Supercomputer Center which was accessible, to re-process the primary datasets HTMHT and MultiJet in Run2012D much earlier than planned. The Heavy-Ion data-taking period was successfully concluded in February collecting almost 500 T. Figure 3: Number of events per month (data) In LS1, our emphasis is to increase efficiency and flexibility of the infrastructure and operation. Computing Operations is working on separating disk and tape at the Tier-1 sites and the full implementation of the xrootd federation ...

  15. Generalization of the Lord-Wingersky Algorithm to Computing the Distribution of Summed Test Scores Based on Real-Number Item Scores

    Science.gov (United States)

    Kim, Seonghoon

    2013-01-01

    With known item response theory (IRT) item parameters, Lord and Wingersky provided a recursive algorithm for computing the conditional frequency distribution of number-correct test scores, given proficiency. This article presents a generalized algorithm for computing the conditional distribution of summed test scores involving real-number item…
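    For dichotomous items, the original Lord-Wingersky recursion takes only a few lines; the article's generalization replaces the integer score increments with real-number item scores, but the recursive structure is the same. A sketch of the classical integer-score case:

```python
def lord_wingersky(p_correct):
    """Lord-Wingersky recursion: conditional distribution of the
    number-correct score over n dichotomous items, given per-item
    success probabilities p_correct[k] = P(item k correct | theta).
    Returns a list dist where dist[s] = P(score == s)."""
    dist = [1.0]                        # zero items: score 0 with prob 1
    for p in p_correct:
        new = [0.0] * (len(dist) + 1)
        for score, prob in enumerate(dist):
            new[score] += prob * (1.0 - p)      # item answered incorrectly
            new[score + 1] += prob * p          # item answered correctly
        dist = new
    return dist
```

Each item adds one convolution step, so the full distribution over n items costs O(n²) rather than the 2ⁿ enumeration of response patterns.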

  16. The impact of increased efficiency in the industrial use of energy: A computable general equilibrium analysis for the United Kingdom

    International Nuclear Information System (INIS)

    Allan, Grant; Hanley, Nick; McGregor, Peter; Swales, Kim; Turner, Karen

    2007-01-01

    The conventional wisdom is that improving energy efficiency will lower energy use. However, there is an extensive debate in the energy economics/policy literature concerning 'rebound' effects. These occur because an improvement in energy efficiency produces a fall in the effective price of energy services. The response of the economic system to this price fall at least partially offsets the expected beneficial impact of the energy efficiency gain. In this paper we use an economy-energy-environment computable general equilibrium (CGE) model for the UK to measure the impact of a 5% across the board improvement in the efficiency of energy use in all production sectors. We identify rebound effects of the order of 30-50%, but no backfire (no increase in energy use). However, these results are sensitive to the assumed structure of the labour market, key production elasticities, the time period under consideration and the mechanism through which increased government revenues are recycled back to the economy
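    The rebound figures quoted above follow the standard definition: the share of the engineering-expected energy saving that the economy's response erodes; 100% or more ("backfire") would mean no net saving at all. Illustrative arithmetic:

```python
def rebound_pct(expected_saving, actual_saving):
    """Rebound effect in percent. 0% means the full engineering saving is
    realized; 100% means no net saving; values above 100% would be
    'backfire' (energy use rises), which the study does not find."""
    return 100.0 * (1.0 - actual_saving / expected_saving)

# e.g. an efficiency gain expected to cut energy use by 5 units that in
# general equilibrium only cuts it by 3 units exhibits a 40% rebound,
# within the 30-50% range reported in the abstract
example = rebound_pct(5.0, 3.0)
```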

  17. Computer-assisted analyses of (¹⁴C)2-DG autoradiographs employing a general purpose image processing system

    Energy Technology Data Exchange (ETDEWEB)

    Porro, C; Biral, G P [Modena Univ. (Italy). Ist. di Fisiologia Umana; Fonda, S; Baraldi, P [Modena Univ. (Italy). Lab. di Bioingegneria della Clinica Oculistica; Cavazzuti, M [Modena Univ. (Italy). Clinica Neurologica

    1984-09-01

    A general purpose image processing system is described, including a B/W TV camera, a high-resolution image processor and display system (TESAK VDC 501), a computer (DEC PDP 11/23) and monochrome and color monitors. Images may be acquired from a microscope equipped with a TV camera or using the TV camera in direct viewing; the A/D converter and the image processor provide fast (40 ms) and precise (512x512 data points) digitization of the TV signal with a maximum resolution of 256 gray levels. Computer programs, written in FORTRAN and MACRO-11 Assembly Language, have been developed to perform qualitative and quantitative analyses of autoradiographs obtained with the 2-DG method. They include: (1) procedures designed to recognize acquisition errors due to possible image shading and correct them in software; (2) routines for qualitative analyses of the whole image or selected regions of it, providing pseudocolor coding, statistics and graphic overlays; (3) programs that convert gray levels into metabolic rates of glucose utilization and display gray- or color-coded metabolic maps.
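The gray-level-to-metabolic-rate conversion can be sketched as a calibration lookup followed by pseudocolor quantization. The calibration points and the piecewise-linear interpolation below are illustrative assumptions, not the original FORTRAN/MACRO-11 code.

```python
import numpy as np

# Hypothetical calibration: gray levels of co-exposed 14C standards vs
# known rates; real calibration curves are usually nonlinear.
std_gray = np.array([30.0, 90.0, 150.0, 210.0])
std_rate = np.array([0.2, 0.5, 0.8, 1.1])  # umol/100 g/min, illustrative

def gray_to_rate(img):
    """Interpolate each pixel's gray level onto the calibration curve."""
    return np.interp(img, std_gray, std_rate)

def pseudocolor(rates, n_bins=8):
    """Quantize metabolic rates into n_bins color-map indices."""
    lo, hi = rates.min(), rates.max()
    idx = ((rates - lo) / (hi - lo) * n_bins).astype(int)
    return np.minimum(idx, n_bins - 1)

img = np.array([[30.0, 90.0], [150.0, 210.0]])  # toy 2x2 autoradiograph
rates = gray_to_rate(img)
print(rates)              # rates at the calibration knots
print(pseudocolor(rates)) # integer indices into a color table
```

A shading-correction step (item 1 in the abstract) would typically divide the image by a reference flat-field frame before this lookup.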

  18. 23rd October 2010 - UNESCO Director-General I. Bokova signing the Guest Book with CERN Director for Research and Scientific Computing S. Bertolucci and CERN Director-General R. Heuer.

    CERN Multimedia

    Maximilien Brice

    2010-01-01

    CERN-HI-1010244 37: in the SM18 hall: UNESCO Communication Officer J. Sopova; Director of the Division of Basic & Engineering Sciences M. Nalecz; Assistant Director-General for the Natural Sciences G. Kalonji; Former CERN Director-General H. Schopper; CERN Head of Education R. Landua; UNESCO Director-General I. Bokova; CERN Adviser M. Bona; CERN Director for Research and Scientific Computing S. Bertolucci and UNESCO Office in Geneva Director Luis M. Tiburcio.

  19. COMPUTING

    CERN Multimedia

    I. Fisk

    2010-01-01

    Introduction It has been a very active quarter in Computing with interesting progress in all areas. The activity level at the computing facilities, driven by both organised processing from data operations and user analysis, has been steadily increasing. The large-scale production of simulated events that has been progressing throughout the fall is wrapping up, and reprocessing with pile-up will continue. A large reprocessing of all the proton-proton data has just been released and another will follow shortly. The number of analysis jobs run by users each day, which was already hitting the computing model expectations at the time of ICHEP, is now 33% higher. We are expecting a busy holiday break to ensure samples are ready in time for the winter conferences. Heavy Ion An activity that is still in progress is computing for the heavy-ion program. The heavy-ion events are collected without zero suppression, so the event size is much larger, at roughly 11 MB of RAW data per event. The central collisions are more complex and...

  20. COMPUTING

    CERN Multimedia

    M. Kasemann P. McBride Edited by M-C. Sawley with contributions from: P. Kreuzer D. Bonacorsi S. Belforte F. Wuerthwein L. Bauerdick K. Lassila-Perini M-C. Sawley

    Introduction More than seventy CMS collaborators attended the Computing and Offline Workshop in San Diego, California, April 20-24th to discuss the state of readiness of software and computing for collisions. Focus and priority were given to preparations for data taking and providing room for ample dialog between groups involved in Commissioning, Data Operations, Analysis and MC Production. Throughout the workshop, aspects of software, operating procedures and issues addressing all parts of the computing model were discussed. Plans for the CMS participation in STEP’09, the combined scale testing for all four experiments due in June 2009, were refined. The article in CMS Times by Frank Wuerthwein gave a good recap of the highly collaborative atmosphere of the workshop. Many thanks to UCSD and to the organizers for taking care of this workshop, which resulted in a long list of action items and was definitely a success. A considerable amount of effort and care is invested in the estimate of the comput...

  1. A subspace approach to high-resolution spectroscopic imaging.

    Science.gov (United States)

    Lam, Fan; Liang, Zhi-Pei

    2014-04-01

    To accelerate spectroscopic imaging using sparse sampling of (k,t)-space and subspace (or low-rank) modeling to enable high-resolution metabolic imaging with good signal-to-noise ratio. The proposed method, called SPectroscopic Imaging by exploiting spatiospectral CorrElation, exploits a unique property known as partial separability of spectroscopic signals. This property indicates that high-dimensional spectroscopic signals reside in a very low-dimensional subspace and enables special data acquisition and image reconstruction strategies to be used to obtain high-resolution spatiospectral distributions with good signal-to-noise ratio. More specifically, a hybrid chemical shift imaging/echo-planar spectroscopic imaging pulse sequence is proposed for sparse sampling of (k,t)-space, and a low-rank model-based algorithm is proposed for subspace estimation and image reconstruction from sparse data with the capability to incorporate prior information and field inhomogeneity correction. The performance of the proposed method has been evaluated using both computer simulations and phantom studies, which produced very encouraging results. For two-dimensional spectroscopic imaging experiments on a metabolite phantom, a factor of 10 acceleration was achieved with a minimal loss in signal-to-noise ratio compared to the long chemical shift imaging experiments and with a significant gain in signal-to-noise ratio compared to the accelerated echo-planar spectroscopic imaging experiments. The proposed method, SPectroscopic Imaging by exploiting spatiospectral CorrElation, is able to significantly accelerate spectroscopic imaging experiments, making high-resolution metabolic imaging possible. Copyright © 2014 Wiley Periodicals, Inc.
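The partial-separability (low-rank) property at the heart of this method can be illustrated with a toy factorization: if the Casorati matrix of the spatiotemporal signal has rank L, a few temporal basis functions estimated from a small training set represent the whole dataset. This synthetic sketch is not the authors' reconstruction pipeline; names and dimensions are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n_vox, n_t, L = 200, 64, 3          # voxels, time points, model order

U_true = rng.standard_normal((n_vox, L))   # spatial coefficients
V_true = rng.standard_normal((L, n_t))     # temporal (spectral) basis
C = U_true @ V_true                        # Casorati matrix, rank L

# Estimate the temporal subspace from an SVD of a few "training" rows
# (standing in for the fully sampled central k-space data).
_, s, Vt = np.linalg.svd(C[:20], full_matrices=False)
V_est = Vt[:L]                             # estimated orthonormal basis

# Fit spatial coefficients by least squares and reconstruct.
U_est = C @ V_est.T @ np.linalg.inv(V_est @ V_est.T)
C_rec = U_est @ V_est
print(np.allclose(C, C_rec))  # True: rank-L data is exactly recovered
```

In the actual method the spatial coefficients are fit to sparsely sampled (k,t)-space data with regularization and field-inhomogeneity correction rather than to the full matrix.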

  2. Impuestos al capital y al trabajo en Colombia: un análisis mediante equilibrio general computable / Effect of Taxes on Capital and Labor in Colombia: A Computable General Equilibrium Analysis

    Directory of Open Access Journals (Sweden)

    Jesús Botero Garcia

    2011-10-01

    Using a computable general equilibrium model calibrated for Colombia, we analyze the impact of various economic policies that affect the relative prices of the factors of production. We conclude that investment incentives, which can be interpreted as actions that lower the price of capital, encourage capital accumulation and thereby raise labour productivity, generating net positive effects on employment. The elimination of payroll contributions, for its part, reduces the cost of labour, but its overall effect on employment is partially offset by the fiscal measures needed to raise alternative revenue to maintain the benefits financed by those contributions. We suggest that the ideal scheme would combine investment incentives targeted at employment-intensive sectors with adequate social safety nets to address the problems associated with poverty.

  3. Influence of Cone-beam Computed Tomography on Endodontic Retreatment Strategies among General Dental Practitioners and Endodontists.

    Science.gov (United States)

    Rodríguez, Gustavo; Patel, Shanon; Durán-Sindreu, Fernando; Roig, Miguel; Abella, Francesc

    2017-09-01

    Treatment options for endodontic failure include nonsurgical or surgical endodontic retreatment, intentional replantation, and extraction with or without replacement of the tooth. The aim of the present study was to determine the impact of cone-beam computed tomographic (CBCT) imaging on clinical decision making among general dental practitioners and endodontists after failed root canal treatment. A second objective was to assess the self-reported level of difficulty in making a treatment choice before and after viewing a preoperative CBCT scan. Eight patients with endodontically treated teeth diagnosed with symptomatic apical periodontitis, acute apical abscess, or chronic apical abscess were selected. In the first session, the examiners were given the details of each case, including any relevant radiographs, and were asked to choose 1 of the proposed treatment alternatives and assess the difficulty of making a decision. One month later, the examiners reviewed the same 8 cases, in random order, with the additional information from the CBCT data. The examiners altered their treatment plan after viewing the CBCT scan in 49.8% of the cases. A significant difference in the treatment plan between the 2 imaging modalities was recorded for both endodontists and general practitioners (P < .05). After CBCT evaluation, neither group altered their self-reported level of difficulty when choosing a treatment plan (P = .0524). The proportion choosing extraction rose significantly, to 20%, after viewing the CBCT scan (P < .05). CBCT imaging directly influences endodontic retreatment strategies among general dental practitioners and endodontists. Copyright © 2017 American Association of Endodontists. Published by Elsevier Inc. All rights reserved.

  4. Incidence of lumbar spondylolysis in the general population in Japan based on multidetector computed tomography scans from two thousand subjects.

    Science.gov (United States)

    Sakai, Toshinori; Sairyo, Koichi; Takao, Shoichiro; Nishitani, Hiromu; Yasui, Natsuo

    2009-10-01

    Epidemiological analysis using CT. To investigate the true incidence of lumbar spondylolysis in the general population in Japan. Although there have been several reports on the incidence of lumbar spondylolysis, they have some weaknesses. One concerns the subjects investigated: the incidence of lumbar spondylolysis varies considerably, and some patients are asymptomatic. In addition, most past studies used plain radiographs or skeletal investigation, so the previously reported incidence may not correspond to that of the general population. We reviewed the computed tomography (CT) scans of 2000 subjects (age: 20-92 years) who had undergone abdominal and pelvic CT on a single multidetector CT scanner for reasons unrelated to low back pain. We reviewed them for spondylolysis, spondylolytic spondylolisthesis, and spina bifida occulta (SBO) in the lumbosacral region. The grade (I-IV) of spondylolisthesis was measured using midsagittal reconstructions. Lumbar spondylolysis was found in 117 subjects (5.9%). Their male-female ratio was 2:1. Multiple-level spondylolysis was found in 5 subjects (0.3%). Among these 117 subjects, there were 124 vertebrae with spondylolysis; 112 (90.3%) were at L5, and 26 (21.0%) were unilateral. SBO was found in 154 subjects, of whom 25 had spondylolysis (16.2%), whereas among the 1846 subjects without SBO, 92 had spondylolysis (5.0%). The incidence of spondylolysis among subjects with SBO was thus significantly higher than among those without (odds ratio 3.7). Of the 124 vertebrae with spondylolysis, 75 (60.5%) showed low-grade (Meyerding grade I or II) spondylolisthesis, and no subject presented high-grade spondylolisthesis. Spondylolisthesis was found in 74.5% of subjects with bilateral spondylolysis and in 7.7% of those with unilateral spondylolysis. The incidence of lumbar spondylolysis in the Japanese general population was 5.9% (males: 7.9%, females: 3.9%).
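The reported roughly 3.7-fold odds ratio can be checked directly from the counts given in the abstract (25 of 154 subjects with SBO had spondylolysis versus 92 of 1846 without):

```python
def odds_ratio(a, b, c, d):
    """OR for a 2x2 table: a/b = exposed with/without outcome,
    c/d = unexposed with/without outcome."""
    return (a / b) / (c / d)

# SBO group: 25 with spondylolysis, 154 - 25 = 129 without.
# No-SBO group: 92 with spondylolysis, 1846 - 92 = 1754 without.
orr = odds_ratio(25, 154 - 25, 92, 1846 - 92)
print(round(orr, 2))  # ~3.7, consistent with the reported value
```
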

  5. An adaptive maneuvering logic computer program for the simulation of one-on-one air-to-air combat. Volume 1: General description

    Science.gov (United States)

    Burgin, G. H.; Fogel, L. J.; Phelps, J. P.

    1975-01-01

    A technique for computer simulation of air combat is described. Volume 1 describes the computer program and its development in general terms. Two versions of the program exist. Both incorporate a logic for selecting and executing air combat maneuvers together with performance models of specific fighter aircraft. In the batch-processing version, the flight paths of two aircraft engaged in interactive aerial combat, both controlled by the same logic, are computed. The real-time version permits human pilots to fly air-to-air combat against the adaptive maneuvering logic (AML) in the Langley Differential Maneuvering Simulator (DMS). Volume 2 consists of a detailed description of the computer programs.

  6. COMPUTING

    CERN Multimedia

    P. McBride

    It has been a very active year for the computing project with strong contributions from members of the global community. The project has focused on site preparation and Monte Carlo production. The operations group has begun processing data from P5 as part of the global data commissioning. Improvements in transfer rates and site availability have been seen as computing sites across the globe prepare for large scale production and analysis as part of CSA07. Preparations for the upcoming Computing Software and Analysis Challenge CSA07 are progressing. Ian Fisk and Neil Geddes have been appointed as coordinators for the challenge. CSA07 will include production tests of the Tier-0 production system, reprocessing at the Tier-1 sites and Monte Carlo production at the Tier-2 sites. At the same time there will be a large analysis exercise at the Tier-2 centres. Pre-production simulation of the Monte Carlo events for the challenge is beginning. Scale tests of the Tier-0 will begin in mid-July and the challenge it...

  7. COMPUTING

    CERN Multimedia

    M. Kasemann

    Introduction During the past six months, Computing participated in the STEP09 exercise, had a major involvement in the October exercise and has been working with CMS sites on improving open issues relevant for data taking. At the same time operations for MC production, real data reconstruction and re-reconstructions and data transfers at large scales were performed. STEP09 was successfully conducted in June as a joint exercise with ATLAS and the other experiments. It gave good indication about the readiness of the WLCG infrastructure with the two major LHC experiments stressing the reading, writing and processing of physics data. The October Exercise, in contrast, was conducted as an all-CMS exercise, where Physics, Computing and Offline worked on a common plan to exercise all steps to efficiently access and analyze data. As one of the major results, the CMS Tier-2s demonstrated to be fully capable for performing data analysis. In recent weeks, efforts were devoted to CMS Computing readiness. All th...

  8. COMPUTING

    CERN Multimedia

    I. Fisk

    2011-01-01

    Introduction It has been a very active quarter in Computing with interesting progress in all areas. The activity level at the computing facilities, driven by both organised processing from data operations and user analysis, has been steadily increasing. The large-scale production of simulated events that has been progressing throughout the fall is wrapping up, and reprocessing with pile-up will continue. A large reprocessing of all the proton-proton data has just been released and another will follow shortly. The number of analysis jobs run by users each day, which was already hitting the computing model expectations at the time of ICHEP, is now 33% higher. We are expecting a busy holiday break to ensure samples are ready in time for the winter conferences. Heavy Ion The Tier 0 infrastructure was able to repack and promptly reconstruct heavy-ion collision data. Two copies were made of the data at CERN using a large CASTOR disk pool, and the core physics sample was replicated ...

  9. COMPUTING

    CERN Multimedia

    I. Fisk

    2012-01-01

    Introduction Computing continued with a high level of activity over the winter in preparation for conferences and the start of the 2012 run. 2012 brings new challenges with a new energy, more complex events, and the need to make the best use of the available time before the Long Shutdown. We expect to be resource constrained on all tiers of the computing system in 2012 and are working to ensure the high-priority goals of CMS are not impacted. Heavy ions After a successful 2011 heavy-ion run, the programme is moving to analysis. During the run, the CAF resources were well used for prompt analysis. Since then in 2012 on average 200 job slots have been used continuously at Vanderbilt for analysis workflows. Operations Office As of 2012, the Computing Project emphasis has moved from commissioning to operation of the various systems. This is reflected in the new organisation structure where the Facilities and Data Operations tasks have been merged into a common Operations Office, which now covers everything ...

  10. COMPUTING

    CERN Multimedia

    M. Kasemann

    CCRC’08 challenges and CSA08 During the February campaign of the Common Computing readiness challenges (CCRC’08), the CMS computing team had achieved very good results. The link between the detector site and the Tier0 was tested by gradually increasing the number of parallel transfer streams well beyond the target. Tests covered the global robustness at the Tier0, processing a massive number of very large files and with a high writing speed to tapes.  Other tests covered the links between the different Tiers of the distributed infrastructure and the pre-staging and reprocessing capacity of the Tier1’s: response time, data transfer rate and success rate for Tape to Buffer staging of files kept exclusively on Tape were measured. In all cases, coordination with the sites was efficient and no serious problem was found. These successful preparations prepared the ground for the second phase of the CCRC’08 campaign, in May. The Computing Software and Analysis challen...

  11. COMPUTING

    CERN Multimedia

    I. Fisk

    2010-01-01

    Introduction The first data taking period of November produced a first scientific paper, and this is a very satisfactory step for Computing. It also gave the invaluable opportunity to learn and debrief from this first, intense period, and make the necessary adaptations. The alarm procedures between different groups (DAQ, Physics, T0 processing, Alignment/calibration, T1 and T2 communications) have been reinforced. A major effort has also been invested into remodeling and optimizing operator tasks in all activities in Computing, in parallel with the recruitment of new Cat A operators. The teams are being completed and by mid year the new tasks will have been assigned. CRB (Computing Resource Board) The Board met twice since last CMS week. In December it reviewed the experience of the November data-taking period and could measure the positive improvements made for the site readiness. It also reviewed the policy under which Tier-2 are associated with Physics Groups. Such associations are decided twice per ye...

  12. COMPUTING

    CERN Multimedia

    M. Kasemann

    Introduction More than seventy CMS collaborators attended the Computing and Offline Workshop in San Diego, California, April 20-24th to discuss the state of readiness of software and computing for collisions. Focus and priority were given to preparations for data taking and providing room for ample dialog between groups involved in Commissioning, Data Operations, Analysis and MC Production. Throughout the workshop, aspects of software, operating procedures and issues addressing all parts of the computing model were discussed. Plans for the CMS participation in STEP’09, the combined scale testing for all four experiments due in June 2009, were refined. The article in CMS Times by Frank Wuerthwein gave a good recap of the highly collaborative atmosphere of the workshop. Many thanks to UCSD and to the organizers for taking care of this workshop, which resulted in a long list of action items and was definitely a success. A considerable amount of effort and care is invested in the estimate of the co...

  13. Sub-THz spectroscopic characterization of vibrational modes in artificially designed DNA monocrystal

    International Nuclear Information System (INIS)

    Sizov, Igor; Rahman, Masudur; Gelmont, Boris; Norton, Michael L.; Globus, Tatiana

    2013-01-01

    Highlights: • Sub-THz spectroscopy is used to characterize an artificially designed DNA monocrystal. • Results are obtained using a novel near-field, room-temperature, frequency-domain spectrometer. • Narrow resonances of 0.1 cm⁻¹ width are observed in the absorption spectra of the crystal. • The signature measured between 310 and 490 GHz is reproducible and well resolved. • The absorption pattern is explained in part by simulation results from a dsDNA fragment. - Abstract: Sub-terahertz (sub-THz) vibrational spectroscopy is a new spectroscopic branch for characterizing biological macromolecules. In this work, highly resolved sub-THz resonance spectroscopy is used to characterize engineered molecular structures, an artificially designed DNA monocrystal built from a short DNA sequence. Using a recently developed frequency-domain spectroscopic instrument operating at room temperature with high spectral and spatial resolution, we demonstrated very intense and specific spectral lines from a DNA crystal, in general agreement with a computational molecular dynamics (MD) simulation of a short double-stranded DNA fragment. The spectroscopic signature measured in the frequency range between 310 and 490 GHz is rich in well-resolved and reproducible spectral features, demonstrating the capability of THz resonance spectroscopy for characterizing custom macromolecules and structures designed and implemented via nanotechnology for a wide variety of application domains. Analysis of the MD simulation indicates that intense and narrow vibrational modes with atomic movements perpendicular (transverse) and parallel (longitudinal) to the long DNA axis coexist in dsDNA, with a much higher contribution from longitudinal vibrations.

  14. The HITRAN2016 molecular spectroscopic database

    Energy Technology Data Exchange (ETDEWEB)

    Gordon, I. E.; Rothman, L. S.; Hill, C.; Kochanov, R. V.; Tan, Y.; Bernath, P. F.; Birk, M.; Boudon, V.; Campargue, A.; Chance, K. V.; Drouin, B. J.; Flaud, J. -M.; Gamache, R. R.; Hodges, J. T.; Jacquemart, D.; Perevalov, V. I.; Perrin, A.; Shine, K. P.; Smith, M. -A. H.; Tennyson, J.; Toon, G. C.; Tran, H.; Tyuterev, V. G.; Barbe, A.; Császár, A. G.; Devi, V. M.; Furtenbacher, T.; Harrison, J. J.; Hartmann, J. -M.; Jolly, A.; Johnson, T. J.; Karman, T.; Kleiner, I.; Kyuberis, A. A.; Loos, J.; Lyulin, O. M.; Massie, S. T.; Mikhailenko, S. N.; Moazzen-Ahmadi, N.; Müller, H. S. P.; Naumenko, O. V.; Nikitin, A. V.; Polyansky, O. L.; Rey, M.; Rotger, M.; Sharpe, S. W.; Sung, K.; Starikova, E.; Tashkun, S. A.; Auwera, J. Vander; Wagner, G.; Wilzewski, J.; Wcisło, P.; Yu, S.; Zak, E. J.

    2017-12-01

    This paper describes the contents of the 2016 edition of the HITRAN molecular spectroscopic compilation. The new edition replaces the previous HITRAN edition of 2012 and its updates during the intervening years. The HITRAN molecular absorption compilation comprises five major components: the traditional line-by-line spectroscopic parameters required for high-resolution radiative-transfer codes, infrared absorption cross-sections for molecules not yet amenable to representation in a line-by-line form, collision-induced absorption data, aerosol indices of refraction, and general tables such as partition sums that apply globally to the data. The new HITRAN is greatly extended in terms of accuracy, spectral coverage, additional absorption phenomena, added line-shape formalisms, and validity. Moreover, molecules, isotopologues, and perturbing gases have been added that address the issues of atmospheres beyond the Earth. Of considerable note, experimental IR cross-sections for almost 200 additional significant molecules have been added to the database.
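As an illustration of how line-by-line parameters of the kind HITRAN provides (line position, intensity, half-width) feed a radiative-transfer calculation, here is a hedged sketch that sums Lorentzian lines into an absorption coefficient; the three lines are hypothetical values, not HITRAN entries, and real calculations use more elaborate line shapes (e.g. Voigt) and temperature/pressure scaling.

```python
import numpy as np

# Hypothetical lines: (center nu0 [cm-1], integrated intensity S, half-width gamma [cm-1])
lines = [
    (1000.0, 1.0, 0.08),
    (1000.5, 0.4, 0.06),
    (1001.2, 0.7, 0.10),
]

def absorption_coeff(nu, lines):
    """k(nu) = sum_i S_i * Lorentz(nu; nu0_i, gamma_i), with each
    Lorentzian normalized to unit area."""
    k = np.zeros_like(nu)
    for nu0, S, g in lines:
        k += S * (g / np.pi) / ((nu - nu0) ** 2 + g ** 2)
    return k

nu = np.linspace(999.0, 1002.0, 601)   # wavenumber grid [cm-1]
k = absorption_coeff(nu, lines)
tau = np.exp(-k * 1.0)                 # Beer-Lambert transmittance, unit path
print(float(k.max()), float(tau.min()))
```

Because each Lorentzian is area-normalized, the integral of k over a wide window approaches the sum of the line intensities.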

  15. The HITRAN 2008 molecular spectroscopic database

    International Nuclear Information System (INIS)

    Rothman, L.S.; Gordon, I.E.; Barbe, A.; Benner, D.Chris; Bernath, P.F.; Birk, M.; Boudon, V.; Brown, L.R.; Campargue, A.; Champion, J.-P.; Chance, K.; Coudert, L.H.; Dana, V.; Devi, V.M.; Fally, S.; Flaud, J.-M.

    2009-01-01

    This paper describes the status of the 2008 edition of the HITRAN molecular spectroscopic database. The new edition is the first official public release since the 2004 edition, although a number of crucial updates had been made available online since 2004. The HITRAN compilation consists of several components that serve as input for radiative-transfer calculation codes: individual line parameters for the microwave through visible spectra of molecules in the gas phase; absorption cross-sections for molecules having dense spectral features, i.e. spectra in which the individual lines are not resolved; individual line parameters and absorption cross-sections for bands in the ultraviolet; refractive indices of aerosols; tables and files of general properties associated with the database; and database management software. The line-by-line portion of the database contains spectroscopic parameters for 42 molecules including many of their isotopologues.

  16. Assessment regarding the use of the computer aided analytical models in the calculus of the general strength of a ship hull

    Science.gov (United States)

    Hreniuc, V.; Hreniuc, A.; Pescaru, A.

    2017-08-01

    Solving a general strength problem of a ship hull may be approached analytically: the distribution of buoyancy forces and of weight forces along the hull, together with the geometrical characteristics of the sections, is deduced first. These data are used to draw the free-body diagrams and to compute the stresses. General strength problems require a large amount of calculation, so it is of interest how a computer may be used to solve them. Using computer programming, an engineer may build software instruments based on analytical approaches. However, before developing the computer code the research topic must be thoroughly analysed, in this way reaching a meta-level of understanding of the problem. The following stage is to devise an appropriate development strategy for the original software instruments, useful for the rapid development of computer-aided analytical models. The geometrical characteristics of the sections may be computed using a Boolean algebra that operates on 'simple' geometrical shapes; by 'simple' we mean shapes for which direct calculation formulas are available. The set of 'simple' shapes also includes geometrical entities bounded by curves approximated as spline functions or as polygons. To conclude, computer programming offers the necessary support for solving general-strength ship-hull problems using analytical methods.
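The composite-shape idea can be sketched as follows: section properties are obtained by adding shapes with closed-form formulas and subtracting holes, combining them with the parallel-axis theorem. The `Rect` class and the hollow-box example are illustrative assumptions, not the authors' software.

```python
from dataclasses import dataclass

@dataclass
class Rect:
    width: float   # b
    height: float  # h
    yc: float      # centroid height above the baseline
    sign: int = 1  # +1 to add material, -1 to subtract a hole

    @property
    def area(self):
        return self.sign * self.width * self.height

    @property
    def i_own(self):
        """Second moment about the shape's own horizontal centroidal axis."""
        return self.sign * self.width * self.height ** 3 / 12.0

def section_properties(shapes):
    """Area, centroid height and second moment (about the composite
    centroid) of a composite section."""
    area = sum(s.area for s in shapes)
    y_bar = sum(s.area * s.yc for s in shapes) / area
    # parallel-axis theorem applied to each constituent shape
    inertia = sum(s.i_own + s.area * (s.yc - y_bar) ** 2 for s in shapes)
    return area, y_bar, inertia

# Hollow box: 2.0 x 1.0 outer rectangle minus a 1.8 x 0.8 inner hole,
# both centred 0.5 above the baseline.
box = [Rect(2.0, 1.0, 0.5), Rect(1.8, 0.8, 0.5, sign=-1)]
A, y, I = section_properties(box)
print(A, y, round(I, 6))  # area ~0.56, centroid 0.5, I ~0.089867
```

Curved boundaries approximated as polygons would slot into the same scheme as additional shape classes exposing `area`, `yc` and `i_own`.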

  17. COMPUTING

    CERN Multimedia

    2010-01-01

    Introduction Just two months after the “LHC First Physics” event of 30th March, the analysis of the O(200) million 7 TeV collision events in CMS accumulated during the first 60 days is well under way. The consistency of the CMS computing model has been confirmed during these first weeks of data taking. This model is based on a hierarchy of use-cases deployed between the different tiers and, in particular, the distribution of RECO data to T1s, who then serve data on request to T2s, along a topology known as “fat tree”. Indeed, during this period this model was further extended by almost full “mesh” commissioning, meaning that RECO data were shipped to T2s whenever possible, enabling additional physics analyses compared with the “fat tree” model. Computing activities at the CMS Analysis Facility (CAF) have been marked by a good time response for a load almost evenly shared between ALCA (Alignment and Calibration tasks - highest p...

  18. COMPUTING

    CERN Multimedia

    Contributions from I. Fisk

    2012-01-01

    Introduction The start of the 2012 run has been busy for Computing. We have reconstructed, archived, and served a larger sample of new data than in 2011, and we are in the process of producing an even larger new sample of simulations at 8 TeV. The running conditions and system performance are largely what was anticipated in the plan, thanks to the hard work and preparation of many people. Heavy ions Heavy Ions has been actively analysing data and preparing for conferences.  Operations Office Figure 6: Transfers from all sites in the last 90 days For ICHEP and the Upgrade efforts, we needed to produce and process record amounts of MC samples while supporting the very successful data-taking. This was a large burden, especially on the team members. Nevertheless the last three months were very successful and the total output was phenomenal, thanks to our dedicated site admins who keep the sites operational and the computing project members who spend countless hours nursing the...

  19. COMPUTING

    CERN Multimedia

    M. Kasemann

    Introduction A large fraction of the effort during the last period was focused on the preparation and monitoring of the February tests of the Common VO Computing Readiness Challenge 08 (CCRC08). CCRC08 is being run by the WLCG collaboration in two phases, involving the centres and all experiments. The February test is dedicated to functionality tests, while the May challenge will consist of running at all centres with full workflows. For this first period, a number of functionality checks of the computing power, data repositories and archives, as well as network links, are planned. This will help assess the reliability of the systems under a variety of loads and identify possible bottlenecks. Many tests are scheduled together with other VOs, allowing full-scale stress tests. The data rates (writing, accessing and transferring) are being checked under a variety of loads and operating conditions, as well as the reliability and transfer rates of the links between Tier-0 and Tier-1s. In addition, the capa...

  20. COMPUTING

    CERN Multimedia

    Matthias Kasemann

    Overview The main focus during the summer was to handle data coming from the detector and to perform Monte Carlo production. The lessons learned during the CCRC and CSA08 challenges in May were addressed by dedicated PADA campaigns led by the Integration team. Big improvements were achieved in the stability and reliability of the CMS Tier1 and Tier2 centres by regular and systematic follow-up of faults and errors with the help of the Savannah bug tracking system. In preparation for data taking, the roles of a Computing Run Coordinator and of regular computing shifts, monitoring the services and infrastructure as well as interfacing to the data operations tasks, are being defined. The shift plan until the end of 2008 is being put together. User support worked on documentation and organized several training sessions. The ECoM task force delivered the report on “Use Cases for Start-up of pp Data-Taking” with recommendations and a set of tests to be performed for trigger rates much higher than the ...

  1. COMPUTING

    CERN Multimedia

    P. MacBride

    The Computing Software and Analysis Challenge CSA07 has been the main focus of the Computing Project for the past few months. Activities began over the summer with the preparation of the Monte Carlo data sets for the challenge and tests of the new production system at the Tier-0 at CERN. The pre-challenge Monte Carlo production was done in several steps: physics generation, detector simulation, digitization, conversion to RAW format and the samples were run through the High Level Trigger (HLT). The data was then merged into three "Soups": Chowder (ALPGEN), Stew (Filtered Pythia) and Gumbo (Pythia). The challenge officially started when the first Chowder events were reconstructed on the Tier-0 on October 3rd. The data operations teams were very busy during the challenge period. The MC production teams continued with signal production and processing while the Tier-0 and Tier-1 teams worked on splitting the Soups into Primary Data Sets (PDS), reconstruction and skimming. The storage sys...

  2. COMPUTING

    CERN Multimedia

    I. Fisk

    2013-01-01

    Computing operations have been running at a lower level as the Run 1 samples are completing and smaller samples for upgrades and preparations are ramping up. Much of the computing activity is focusing on preparations for Run 2 and improvements in data access and flexibility of using resources. Operations Office Data processing was slow in the second half of 2013 with only the legacy re-reconstruction pass of 2011 data being processed at the sites.   Figure 1: MC production and processing was more in demand with a peak of over 750 Million GEN-SIM events in a single month.   Figure 2: The transfer system worked reliably and efficiently and transferred on average close to 520 TB per week with peaks at close to 1.2 PB.   Figure 3: The volume of data moved between CMS sites in the last six months   The tape utilisation was a focus for the operation teams with frequent deletion campaigns from deprecated 7 TeV MC GEN-SIM samples to INVALID datasets, which could be cleaned up...

  3. COMPUTING

    CERN Multimedia

    I. Fisk

    2012-01-01

      Introduction Computing activity has been running at a sustained, high rate as we collect data at high luminosity, process simulation, and begin to process the parked data. The system is functional, though a number of improvements are planned during LS1. Many of the changes will impact users, hopefully only in positive ways. We are trying to improve the distributed analysis tools as well as the ability to access more data samples more transparently.  Operations Office Figure 2: Number of events per month, for 2012 Since the June CMS Week, Computing Operations teams successfully completed data re-reconstruction passes and finished the CMSSW_53X MC campaign with over three billion events available in AOD format. Recorded data was successfully processed in parallel, exceeding 1.2 billion raw physics events per month for the first time in October 2012 due to the increase in data-parking rate. In parallel, large efforts were dedicated to WMAgent development and integrati...

  4. Computational Model of D-Region Ion Production Caused by Energetic Electron Precipitations Based on General Monte Carlo Transport Calculations

    Science.gov (United States)

    Kouznetsov, A.; Cully, C. M.

    2017-12-01

    During enhanced magnetic activities, large ejections of energetic electrons from the radiation belts are deposited in the upper polar atmosphere, where they play important roles in its physical and chemical processes, including the subionospheric propagation of VLF signals. Electron deposition can affect D-region ionization, which is estimated based on ionization rates derived from energy deposition. We present a model of D-region ion production caused by an arbitrary (in energy and pitch angle) distribution of fast (10 keV - 1 MeV) electrons. The model relies on a set of pre-calculated results obtained using a general Monte Carlo approach with the latest version of the MCNP6 (Monte Carlo N-Particle) code for explicit electron tracking in magnetic fields. By expressing those results as ionization yield functions, the pre-calculated results are extended to cover arbitrary magnetic field inclinations and atmospheric density profiles, allowing ionization rate altitude profiles to be computed in the range of 20 to 200 km at any geographic point of interest and date/time by adopting results from an external atmospheric density model (e.g. NRLMSISE-00). The pre-calculated MCNP6 results are stored in a CDF (Common Data Format) file, and an IDL routine library is provided as an end-user interface to the model.
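A minimal numerical sketch of the yield-function approach described above: pre-computed, per-electron ionization yields Y(E, z) are folded with an incident electron spectrum to obtain an ionization-rate altitude profile. The yield table, grids, and power-law spectrum below are toy assumptions standing in for the tabulated MCNP6 results.

```python
import numpy as np

# Energy grid 10 keV .. 1 MeV and the 20-200 km altitude range quoted above.
energies = np.logspace(1, 3, 50)           # keV
altitudes = np.linspace(20.0, 200.0, 90)   # km

# Toy yield table Y(E, z): deposition peaks lower in the atmosphere for
# higher energies (a stand-in for the pre-calculated MCNP6 results).
peak_alt = 120.0 - 30.0 * np.log10(energies / 10.0)
yields = np.exp(-0.5 * ((altitudes[None, :] - peak_alt[:, None]) / 8.0) ** 2)

# Incident differential flux: an assumed power-law spectrum for the demo.
phi = energies ** -2.0

# q(z) = integral over E of Y(E, z) * phi(E) dE, by the trapezoidal rule.
q = np.trapz(yields * phi[:, None], energies, axis=0)

print(q.shape)                     # one ionization rate per altitude bin
print(altitudes[np.argmax(q)])     # altitude of peak ion production
```

Any pitch-angle dependence would enter as an extra axis of the yield table, integrated out the same way.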

  5. Air pollution-induced health impacts on the national economy of China: demonstration of a computable general equilibrium approach.

    Science.gov (United States)

    Wan, Yue; Yang, Hongwei; Masui, Toshihiko

    2005-01-01

    At the present time, ambient air pollution is a serious public health problem in China. Based on the concentration-response relationships provided by international and domestic epidemiologic studies, the authors estimated the mortality and morbidity induced by the ambient air pollution of 2000. To address the mechanism of the health impact on the national economy, the authors applied a computable general equilibrium (CGE) model, named AIM/Material China, containing 39 production sectors and 32 commodities. AIM/Material analyzes changes in gross domestic product (GDP), final demand, and production activity originating from health damages. If ambient air quality had met Grade II of China's air quality standard in 2000, the avoidable GDP loss would have been 0.38% of the national total, of which 95% was led by labor loss. Comparatively, medical expenditure had less impact on the national economy, which is explained in terms of final demand by commodity and production activity by sector. The authors conclude that the CGE model is a suitable tool for assessing health impacts from the point of view of the national economy, through the discussion of its applicability.

  6. Analysis of Future Vehicle Energy Demand in China Based on a Gompertz Function Method and Computable General Equilibrium Model

    Directory of Open Access Journals (Sweden)

    Tian Wu

    2014-11-01

    This paper presents a model for the projection of Chinese vehicle stocks and road vehicle energy demand through 2050 based on low-, medium-, and high-growth scenarios. To derive a gross domestic product (GDP)-dependent Gompertz function, Chinese GDP is estimated using a recursive dynamic Computable General Equilibrium (CGE) model. The Gompertz function is estimated using historical data on vehicle development trends in North America, the Pacific Rim and Europe to overcome the problem of insufficient long-running data on Chinese vehicle ownership. Results indicate that the projected vehicle stock for 2050 is 300, 455 and 463 million for the low-, medium-, and high-growth scenarios, respectively. Furthermore, the growth in China’s vehicle stock will pass the inflection point of the Gompertz curve by 2020, but will not reach saturation during the period 2014–2050. Of the major road vehicle categories, cars are the largest energy consumers, followed by trucks and buses. Growth in Chinese vehicle demand is primarily determined by per capita GDP. Vehicle saturation levels solely influence the shape of the Gompertz curve, and population growth only weakly affects vehicle demand. The projected total energy consumption of road vehicles in 2050 is 380, 575 and 586 million tonnes of oil equivalent for the three scenarios, respectively.
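The GDP-dependent Gompertz projection described above can be sketched in a few lines; the saturation level and shape parameters below are illustrative assumptions, not the paper's estimates.

```python
import numpy as np

V_SAT = 500.0           # saturation level, million vehicles (assumed)
A, B = -6.0, -0.00025   # Gompertz shape parameters, both negative (assumed)

def vehicle_stock(gdp_per_capita):
    """Gompertz curve V(g) = V_sat * exp(a * exp(b * g))."""
    return V_SAT * np.exp(A * np.exp(B * gdp_per_capita))

# The inflection point of a Gompertz curve sits at V = V_sat / e,
# reached where a * exp(b * g) = -1, i.e. g = ln(-1/a) / b.
g_inflect = np.log(-1.0 / A) / B
print(vehicle_stock(g_inflect) / V_SAT)  # -> 1/e ≈ 0.3679
```

The statement in the abstract that growth passes the inflection point by 2020 corresponds to per capita GDP crossing `g_inflect` by that year, i.e. the stock exceeding V_sat/e.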

  7. Feasibility Study of a Generalized Framework for Developing Computer-Aided Detection Systems-a New Paradigm.

    Science.gov (United States)

    Nemoto, Mitsutaka; Hayashi, Naoto; Hanaoka, Shouhei; Nomura, Yukihiro; Miki, Soichiro; Yoshikawa, Takeharu

    2017-10-01

    We propose a generalized framework for developing computer-aided detection (CADe) systems whose characteristics depend only on those of the training dataset. The purpose of this study is to show the feasibility of the framework. Two different CADe systems were experimentally developed with a prototype of the framework, using different training datasets. The CADe systems include four components: preprocessing, candidate area extraction, candidate detection, and candidate classification. Four pretrained algorithms with dedicated optimization/setting methods corresponding to the respective components were prepared in advance. The pretrained algorithms were sequentially trained in the order of processing of the components. In this study, two different datasets, brain MRA with cerebral aneurysms and chest CT with lung nodules, were collected to develop two different types of CADe systems within the framework. The performance of the developed CADe systems was evaluated by threefold cross-validation. The CADe systems for detecting cerebral aneurysms in brain MRAs and for detecting lung nodules in chest CTs were successfully developed using the respective datasets. The framework was shown to be feasible by the successful development of the two different types of CADe systems. This feasibility shows promise for a new paradigm in the development of CADe systems: development of CADe systems without any lesion-specific algorithm design.

  8. Economic Impacts of Potential Foot and Mouth Disease Agro-terrorism in the United States: A Computable General Equilibrium Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Oladosu, Gbadebo A [ORNL; Rose, Adam [University of Southern California, Los Angeles; Bumsoo, Lee [University of Illinois

    2013-01-01

    The foot and mouth disease (FMD) virus has high agro-terrorism potential because it is contagious, can be easily transmitted via inanimate objects and can be spread by wind. An outbreak of FMD in developed countries results in massive slaughtering of animals (for disease control) and disruptions in meat supply chains and trade, with potentially large economic losses. Although the United States has been FMD-free since 1929, the potential of FMD as a deliberate terrorist weapon calls for estimates of the physical and economic damage that could result from an outbreak. This paper estimates the economic impacts of three alternative scenarios of potential FMD attacks using a computable general equilibrium (CGE) model of the US economy. The three scenarios range from a small outbreak successfully contained within a state to a large multi-state attack resulting in slaughtering of 30 percent of the national livestock. Overall, the value of total output losses in our simulations ranges from $37 billion (0.15% of 2006 baseline economic output) to $228 billion (0.92%). Major impacts stem from the supply constraint on livestock due to massive animal slaughtering. As expected, the economic losses are heavily concentrated in the agriculture and food manufacturing sectors, with losses ranging from $23 billion to $61 billion in the two industries.

  9. Comprehensive optimisation of China’s energy prices, taxes and subsidy policies based on the dynamic computable general equilibrium model

    International Nuclear Information System (INIS)

    He, Y.X.; Liu, Y.Y.; Du, M.; Zhang, J.X.; Pang, Y.X.

    2015-01-01

    Highlights: • Energy policy is defined as a compilation of energy price, tax and subsidy policies. • The maximisation of total social benefit is the optimisation objective. • A more rational carbon tax ranges from 10 to 20 Yuan/ton under the current situation. • The optimal coefficient pricing is more conducive to maximising total social benefit. - Abstract: Under conditions of increasingly serious environmental pollution, rational energy policy plays an important practical role in energy conservation and emission reduction. This paper defines energy policy as the compilation of energy price, tax and subsidy policies. Moreover, it establishes an optimisation model of China’s energy policy based on the dynamic computable general equilibrium model, which maximises total social benefit, in order to explore the comprehensive influence of a carbon tax, the sales pricing mechanism and the renewable energy fund policy. The results show that when the change rates of gross domestic product and the consumer price index are ±2%, ±5% and the renewable energy supply structure ratio is 7%, the more reasonable carbon tax ranges from 10 to 20 Yuan/ton, and the optimal coefficient pricing mechanism is more conducive to the objective of maximising total social benefit. From the perspective of optimising the overall energy policy, if the upper limit of the change rate in the consumer price index is 2.2%, the existing renewable energy fund should be improved.

  10. Comparison of stresses on homogeneous spheroids in the optical stretcher computed with geometrical optics and generalized Lorenz-Mie theory.

    Science.gov (United States)

    Boyde, Lars; Ekpenyong, Andrew; Whyte, Graeme; Guck, Jochen

    2012-11-20

    We present two electromagnetic frameworks to compare the surface stresses on spheroidal particles in the optical stretcher (a dual-beam laser trap that can be used to capture and deform biological cells). The first model is based on geometrical optics (GO) and is limited in its applicability to particles that are much larger than the incident wavelength. The second framework is more sophisticated and hinges on the generalized Lorenz-Mie theory (GLMT). Despite the difference in complexity between the two theories, the stress profiles computed with GO and GLMT are in good agreement with each other (relative errors are on the order of 1-10%). Both models predict a decrease in the stresses for larger wavelengths and a strong increase in the stresses for shorter laser-cell distances. Results indicate that surface stresses on a spheroid with an aspect ratio of 1.2 hardly differ from the stresses on a sphere of similar size. Knowledge of the surface stresses, and whether or not they redistribute during the stretching process, is of crucial importance in real-time applications of the stretcher that aim to discern the viscoelastic properties of cells for purposes of cell characterization, sorting, and medical diagnostics.

  11. The Optimal Price Ratio of Typical Energy Sources in Beijing Based on the Computable General Equilibrium Model

    Directory of Open Access Journals (Sweden)

    Yongxiu He

    2014-04-01

    In Beijing, China, the rational consumption of energy is affected by the insufficient linkage mechanism of the energy pricing system, unreasonable price ratios and other issues. This paper combines the characteristics of Beijing’s energy market, proposing maximisation of the society-economy equilibrium indicator R, taking the mitigation cost into consideration, to determine a reasonable price-ratio range. Based on the computable general equilibrium (CGE) model, and dividing four kinds of energy sources into three groups, the impact of price fluctuations of electricity and natural gas on Gross Domestic Product (GDP), the Consumer Price Index (CPI), energy consumption and CO2 and SO2 emissions can be simulated for various scenarios. On this basis, the integrated effects of electricity and natural gas price shocks on the Beijing economy and environment can be calculated. The results show that, relative to coal prices, the electricity and natural gas prices in Beijing are currently below reasonable levels; the solution to these unreasonable energy price ratios should begin with improving the energy pricing mechanism, through means such as the establishment of a sound dynamic adjustment mechanism between regulated prices and market prices. This provides a new idea for exploring the rationality of energy price ratios in imperfectly competitive energy markets.

  12. Development of effect assessment methodology for the deployment of fast reactor cycle system with dynamic computable general equilibrium model

    International Nuclear Information System (INIS)

    Shiotani, Hiroki; Ono, Kiyoshi

    2009-01-01

    The Global Trade and Analysis Project (GTAP) model is a widely used computable general equilibrium (CGE) model developed by Purdue University. Although the GTAP-E, an energy-environmental version of the GTAP model, is useful for surveying the energy-economy-environment-trade linkage in economic policy analysis, it does not include a decomposed model of the electricity sector, and its analyses are comparative-static. In this study, a recursive dynamic CGE model with a detailed electricity technology bundle, including nuclear power generation with FR, was developed based on the GTAP-E to evaluate the long-term socioeconomic effects of FR deployment. The capital stock changes caused by international investments and some dynamic constraints of FR deployment and operation (e.g., load-following capability and plutonium mass balance) were incorporated in the analyses. The long-term socioeconomic effects resulting from the deployment of economically competitive FR with innovative technologies can thus be assessed; the cumulative effects of FR deployment on GDP calculated using this model amounted to over 40 trillion yen in Japan and 400 trillion yen worldwide, several times more than the effects calculated using the conventional cost-benefit analysis tool, because of ripple effects and energy substitutions among others. (author)

  13. COMPUTING

    CERN Multimedia

    I. Fisk

    2011-01-01

    Introduction The Computing Team successfully completed the storage, initial processing, and distribution for analysis of proton-proton data in 2011. There are still a variety of activities ongoing to support winter conference activities and preparations for 2012. Heavy ions The heavy-ion run for 2011 started in early November and has already demonstrated good machine performance and success of some of the more advanced workflows planned for 2011. Data collection will continue until early December. Facilities and Infrastructure Operations Operational and deployment support for the WMAgent and WorkQueue+Request Manager components, routinely used in production by Data Operations, is provided. GlideInWMS and its components are now also installed at CERN, adding to the GlideInWMS factory in the US. There has been new operational collaboration between the CERN team and the UCSD GlideIn factory operators, covering each other's time zones by monitoring/debugging pilot jobs sent from the facto...

  14. Data management and language enhancement for generalized set theory computer language for operation of large relational databases

    Science.gov (United States)

    Finley, Gail T.

    1988-01-01

    This report covers the study of the relational database implementation in the NASCAD computer program system. The existing system is used primarily for computer aided design. Attention is also directed to a hidden-surface algorithm for final drawing output.

  15. Emission spectroscopic 15N analysis 1985

    International Nuclear Information System (INIS)

    Meier, G.

    1986-01-01

    The state of the art of emission spectroscopic 15 N analysis is demonstrated taking the NOI-6e 15 N analyzer as an example. The analyzer is equipped with a microcomputer to ensure convenient operation, computer control, and both data acquisition and data processing. In small amounts of nitrogen-containing substances (10 to 50 μg N 2 ), the 15 N abundance can be determined very quickly in standard discharge tubes or in aqueous ammonium salt solutions, with a standard deviation of less than 0.6 percent.

  16. COMPUTING

    CERN Multimedia

    M. Kasemann

    CMS relies on a well-functioning, distributed computing infrastructure. The Site Availability Monitoring (SAM) and the Job Robot submission have been very instrumental for site commissioning, increasing the availability of sites so that they can participate in CSA07 and are ready to be used for analysis. The commissioning process has been further developed, including "lessons learned" documentation via the CMS twiki. Recently the visualization, presentation and summarizing of SAM tests for sites has been redesigned; it is now developed by the central ARDA project of WLCG. Work to test the new gLite Workload Management System was performed; a 4-times increase in throughput with respect to the LCG Resource Broker is observed. CMS has designed and launched a new-generation traffic load generator called "LoadTest" to commission and to keep exercised all data transfer routes in the CMS PhEDEx topology. Since mid-February, a transfer volume of about 12 P...

  17. The economy-wide impact of pandemic influenza on the UK: a computable general equilibrium modelling experiment.

    Science.gov (United States)

    Smith, Richard D; Keogh-Brown, Marcus R; Barnett, Tony; Tait, Joyce

    2009-11-19

    To estimate the potential economic impact of pandemic influenza, associated behavioural responses, school closures, and vaccination on the United Kingdom. A computable general equilibrium model of the UK economy was specified for various combinations of mortality and morbidity from pandemic influenza, vaccine efficacy, school closures, and prophylactic absenteeism using published data. The 2004 UK economy (the most up-to-date available with suitable economic data). The economic impact of various scenarios with different pandemic severity, vaccination, school closure, and prophylactic absenteeism, specified in terms of gross domestic product, output from different economic sectors, and equivalent variation. The costs related to illness alone ranged between 0.5% and 1.0% of gross domestic product (£8.4bn to £16.8bn) for low-fatality scenarios, 3.3% and 4.3% (£55.5bn to £72.3bn) for high-fatality scenarios, and larger still for an extreme pandemic. School closure increases the economic impact, particularly for mild pandemics. If widespread behavioural change takes place and there is large-scale prophylactic absence from work, the economic impact would be notably increased with few health benefits. Vaccination with a pre-pandemic vaccine could save 0.13% to 2.3% of gross domestic product (£2.2bn to £38.6bn); a single dose of a matched vaccine could save 0.3% to 4.3% (£5.0bn to £72.3bn); and two doses of a matched vaccine could limit the overall economic impact to about 1% of gross domestic product for all disease scenarios. Balancing school closure against "business as usual" and obtaining sufficient stocks of effective vaccine are more important factors in determining the economic impact of an influenza pandemic than is the disease itself. Prophylactic absence from work in response to fear of infection can add considerably to the economic impact.
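As a quick sanity check, the absolute cost figures above agree (to rounding) with the stated GDP shares. The roughly £1,680bn baseline for 2004 UK GDP used below is an assumption inferred from those numbers, not a figure taken from the article.

```python
# Each quoted cost should equal its stated share of the assumed GDP baseline.
gdp_bn = 1680.0  # assumed 2004 UK GDP in £bn, inferred from the ranges

for share, quoted_bn in [(0.005, 8.4), (0.010, 16.8),
                         (0.033, 55.5), (0.043, 72.3)]:
    implied = share * gdp_bn
    print(f"{share:.1%} of GDP -> £{implied:.1f}bn (quoted £{quoted_bn}bn)")
```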

  18. General general game AI

    OpenAIRE

    Togelius, Julian; Yannakakis, Georgios N.; 2016 IEEE Conference on Computational Intelligence and Games (CIG)

    2016-01-01

    Arguably the grand goal of artificial intelligence research is to produce machines with general intelligence: the capacity to solve multiple problems, not just one. Artificial intelligence (AI) has investigated the general intelligence capacity of machines within the domain of games more than any other domain given the ideal properties of games for that purpose: controlled yet interesting and computationally hard problems. This line of research, however, has so far focuse...

  19. Summary of computational support and general documentation for computer code (GENTREE) used in Office of Nuclear Waste Isolation Pilot Salt Site Selection Project

    International Nuclear Information System (INIS)

    Beatty, J.A.; Younker, J.L.; Rousseau, W.F.; Elayat, H.A.

    1983-01-01

    A Decision Tree Computer Model was adapted for the purposes of a Pilot Salt Site Selection Project conducted by the Office of Nuclear Waste Isolation (ONWI). A deterministic computer model was developed to structure the site selection problem with submodels reflecting the five major outcome categories (Cost, Safety, Delay, Environment, Community Impact) to be evaluated in the decision process. Time-saving modifications were made in the tree code as part of the effort. In addition, format changes allowed retention of information items which are valuable in directing future research and in isolation of key variabilities in the Site Selection Decision Model. The deterministic code was linked to the modified tree code and the entire program was transferred to the ONWI-VAX computer for future use by the ONWI project

  20. Assessing the economic impact of North China’s water scarcity mitigation strategy: a multi-region, water-extended computable general equilibrium analysis

    NARCIS (Netherlands)

    Qin, Changbo; Qin, C.; Su, Zhongbo; Bressers, Johannes T.A.; Jia, Y.; Wang, H.

    2013-01-01

    This paper describes a multi-region computable general equilibrium model for analyzing the effectiveness of measures and policies for mitigating North China’s water scarcity with respect to three different groups of scenarios. The findings suggest that a reduction in groundwater use would negatively

  1. Controlled trial of effect of computer-based nutrition course on knowledge and practice of general practitioner trainees

    NARCIS (Netherlands)

    Maiburg, Bas H. J.; Rethans, Jan-Joost E.; Schuwirth, Lambert W. T.; Mathus-Vliegen, Lisbeth M. H.; van Ree, Jan W.

    2003-01-01

    Nutrition education is not an integral part of either undergraduate or postgraduate medical education. Computer-based instruction on nutrition might be an attractive and appropriate tool to fill this gap. The study objective was to assess the degree to which computer-based instruction on nutrition

  2. Handling of computational in vitro/in vivo correlation problems by Microsoft Excel: I. Principles and some general algorithms.

    Science.gov (United States)

    Langenbucher, Frieder

    2002-01-01

    Most computations in the field of in vitro/in vivo correlations can be handled directly by Excel worksheets, without the need for specialized software. Following a summary of Excel features, applications are illustrated for numerical computation of AUC and Mean, Wagner-Nelson and Loo-Riegelman absorption plots, and polyexponential curve fitting.
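The worksheet computations mentioned above (numerical AUC and the mean) translate directly into a few lines of code. The following is an illustrative Python re-implementation of the trapezoidal AUC and the derived mean residence time, not the author's Excel workbook; the concentration-time data are invented.

```python
import numpy as np

def auc_trapezoid(t, c):
    """Area under the concentration-time curve by the trapezoidal rule."""
    t, c = np.asarray(t, float), np.asarray(c, float)
    return float(np.sum(0.5 * (c[1:] + c[:-1]) * np.diff(t)))

def mean_residence_time(t, c):
    """MRT = AUMC / AUC, where AUMC is the area under t * C(t)."""
    t = np.asarray(t, float)
    return auc_trapezoid(t, t * np.asarray(c, float)) / auc_trapezoid(t, c)

t = [0, 1, 2, 4, 8]            # sampling times, h
c = [0.0, 4.0, 3.0, 1.5, 0.2]  # concentrations, mg/L
print(auc_trapezoid(t, c))     # ≈ 13.4 mg·h/L
```

An Excel worksheet performs the same sums with `SUMPRODUCT` over adjacent rows; the point of the abstract is that no specialized software is needed.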

  3. Computer group

    International Nuclear Information System (INIS)

    Bauer, H.; Black, I.; Heusler, A.; Hoeptner, G.; Krafft, F.; Lang, R.; Moellenkamp, R.; Mueller, W.; Mueller, W.F.; Schati, C.; Schmidt, A.; Schwind, D.; Weber, G.

    1983-01-01

    The computer group has been reorganized to take charge of the general-purpose computers DEC10 and VAX and the computer network (Dataswitch, DECnet, IBM connections to GSI and IPP, preparation for Datex-P). (orig.)

  4. A SCILAB Program for Computing General-Relativistic Models of Rotating Neutron Stars by Implementing Hartle's Perturbation Method

    Science.gov (United States)

    Papasotiriou, P. J.; Geroyannis, V. S.

    We implement Hartle's perturbation method for the computation of relativistic rigidly rotating neutron star models. The program has been written in SCILAB (© INRIA ENPC), a matrix-oriented high-level programming language. The numerical method is described in great detail and applied to many models in slow or fast rotation. We show that, although the method is perturbative, it gives accurate results for all practical purposes and should prove an efficient tool for computing rapidly rotating pulsars.
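For context, Hartle's method expands the metric of a slowly, rigidly rotating star to second order in the stellar angular velocity. A standard form of this slow-rotation expansion (textbook material, stated here as background rather than taken from the paper) is:

```latex
ds^2 = -e^{2\nu(r)}\bigl[1 + 2h(r,\theta)\bigr]\,dt^2
     + e^{2\lambda(r)}\Bigl[1 + \frac{2m(r,\theta)}{r - 2M(r)}\Bigr]\,dr^2
     + r^2\bigl[1 + 2k(r,\theta)\bigr]
       \Bigl[d\theta^2 + \sin^2\theta\,\bigl(d\phi - \omega(r)\,dt\bigr)^2\Bigr]
```

Here ω(r) is the first-order frame-dragging angular velocity, and the second-order perturbation functions are expanded in Legendre polynomials, e.g. h(r,θ) = h₀(r) + h₂(r) P₂(cos θ).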

  5. Computational, electrochemical, and spectroscopic studies of two mononuclear cobaloximes: the influence of an axial pyridine and solvent on the redox behaviour and evidence for pyridine coordination to cobalt(I) and cobalt(II) metal centres†

    Science.gov (United States)

    Lawrence, Mark A. W.; Celestine, Michael J.; Artis, Edward T.; Joseph, Lorne S.; Esquivel, Deisy L.; Ledbetter, Abram J.; Cropek, Donald M.; Jarrett, William L.; Bayse, Craig A.; Brewer, Matthew I.; Holder, Alvin A.

    2018-01-01

    [Co(dmgBF2)2(H2O)2] 1 (where dmgBF2 = difluoroboryldimethylglyoximato) was used to synthesize [Co(dmgBF2)2(H2O)(py)]·0.5(CH3)2CO 2 (where py = pyridine) in acetone. The formulation of complex 2 was confirmed by elemental analysis, high resolution MS, and various spectroscopic techniques. The complex [Co(dmgBF2)2(solv)(py)] (where solv = solvent) was readily formed in situ upon the addition of pyridine to complex 1. A spectrophotometric titration involving complex 1 and pyridine proved the formation of such a species, with formation constants, log K = 5.5, 5.1, 5.0, 4.4, and 3.1 in 2-butanone, dichloromethane, acetone, 1,2-difluorobenzene/acetone (4 : 1, v/v), and acetonitrile, respectively, at 20 °C. In strongly coordinating solvents, such as acetonitrile, the lower magnitude of K along with cyclic voltammetry, NMR, and UV-visible spectroscopic measurements indicated extensive dissociation of the axial pyridine. In strongly coordinating solvents, [Co(dmgBF2)2(solv)(py)] can only be distinguished from [Co(dmgBF2)2(solv)2] upon addition of an excess of pyridine; in weakly coordinating solvents, however, the distinctions were apparent without the need for excess pyridine. The coordination of pyridine to the cobalt(II) centre diminished the peak current at the Epc value of the CoI/0 redox couple, which was indicative of the relative position of the reaction equilibrium. Herein we report the first experimental and theoretical 59Co NMR spectroscopic data for the formation of Co(I) species of reduced cobaloximes in the presence and absence of py (and its derivatives) in CD3CN. From spectroelectrochemical studies, it was found that pyridine coordination to a cobalt(I) metal centre is more favourable than coordination to a cobalt(II) metal centre, as evidenced by the larger formation constant, log K = 4.6 versus 3.1, respectively, in acetonitrile at 20 °C. The electrosynthesis of hydrogen by complexes 1 and 2 in various solvents demonstrated the dramatic effects of the axial

  6. Spectroscopic Studies of Molecular Systems relevant in Astrobiology

    Science.gov (United States)

    Fornaro, Teresa

    2016-01-01

    In the Astrobiology context, the study of the physico-chemical interactions involving "building blocks of life" in plausible prebiotic and space-like conditions is fundamental to shed light on the processes that led to the emergence of life on Earth as well as to molecular chemical evolution in space. In this PhD Thesis, such issues have been addressed both experimentally and computationally by employing vibrational spectroscopy, which has proven to be an effective tool to investigate the variety of intermolecular interactions that play a key role in the self-assembly mechanisms of nucleic acid components and their binding to mineral surfaces. In particular, in order to dissect the contributions of the different interactions to the overall spectroscopic signals and shed light on the intricate experimental data, feasible computational protocols have been developed for the characterization of the spectroscopic properties of such complex systems. This study has been carried out through a multi-step strategy, starting the investigation from the spectroscopic properties of the isolated nucleobases, then studying the perturbation induced by the interaction with another molecule (molecular dimers), towards condensed phases like the molecular solid, up to the case of nucleic acid components adsorbed on minerals. Proper modeling of these weakly bound molecular systems has required, firstly, a validation of dispersion-corrected Density Functional Theory methods for simulating anharmonic vibrational properties. The isolated nucleobases and some of their dimers have been used as a benchmark set for identifying a general, reliable and effective computational procedure based on fully anharmonic quantum mechanical computations of the vibrational wavenumbers and infrared intensities within the generalized second order vibrational perturbation theory (GVPT2) approach, combined with the cost-effective dispersion-corrected density functional B3LYP-D3, in conjunction with basis sets of

  7. Spectroscopic analysis of optoelectronic semiconductors

    CERN Document Server

    Jimenez, Juan

    2016-01-01

This book deals with standard spectroscopic techniques that can be used to analyze semiconductor samples or devices at bulk, micrometer and submicrometer scales. The book aims to help experimental physicists and engineers choose the right analytical spectroscopic technique to obtain the specific information they need. For this purpose, the techniques are described together with technical details such as the apparatus and the probed sample region. More importantly, the expected outcome of the experiments is also provided. This includes the link to theory, which is not the subject of this book, and the link to current experimental results in the literature, which are presented in a review-like style. Many special spectroscopic techniques are introduced and their relationship to the standard techniques is revealed. The book thus also serves as a guide or reference for researchers in optical spectroscopy of semiconductors.

  8. sick: The Spectroscopic Inference Crank

    Science.gov (United States)

    Casey, Andrew R.

    2016-03-01

    There exists an inordinate amount of spectral data in both public and private astronomical archives that remain severely under-utilized. The lack of reliable open-source tools for analyzing large volumes of spectra contributes to this situation, which is poised to worsen as large surveys successively release orders of magnitude more spectra. In this article I introduce sick, the spectroscopic inference crank, a flexible and fast Bayesian tool for inferring astrophysical parameters from spectra. sick is agnostic to the wavelength coverage, resolving power, or general data format, allowing any user to easily construct a generative model for their data, regardless of its source. sick can be used to provide a nearest-neighbor estimate of model parameters, a numerically optimized point estimate, or full Markov Chain Monte Carlo sampling of the posterior probability distributions. This generality empowers any astronomer to capitalize on the plethora of published synthetic and observed spectra, and make precise inferences for a host of astrophysical (and nuisance) quantities. Model intensities can be reliably approximated from existing grids of synthetic or observed spectra using linear multi-dimensional interpolation, or a Cannon-based model. Additional phenomena that transform the data (e.g., redshift, rotational broadening, continuum, spectral resolution) are incorporated as free parameters and can be marginalized away. Outlier pixels (e.g., cosmic rays or poorly modeled regimes) can be treated with a Gaussian mixture model, and a noise model is included to account for systematically underestimated variance. Combining these phenomena into a scalar-justified, quantitative model permits precise inferences with credible uncertainties on noisy data. I describe the common model features, the implementation details, and the default behavior, which is balanced to be suitable for most astronomical applications. Using a forward model on low-resolution, high signal
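The noise model and outlier treatment described above can be sketched as a per-pixel Gaussian mixture log-likelihood. Everything below (the function name, the parameterization, the `ln_f` variance-inflation term) is an illustrative assumption, not sick's actual API:

```python
import numpy as np

def ln_likelihood(flux, model, sigma, ln_f, p_out, var_out):
    """Per-pixel mixture likelihood: an inlier Gaussian whose variance is
    inflated by exp(2*ln_f)*model**2 to absorb systematically underestimated
    errors, plus a broad Gaussian for outlier pixels (cosmic rays, poorly
    modeled regimes).  Names and parameterization are illustrative only."""
    var_in = sigma**2 + np.exp(2.0 * ln_f) * model**2   # inflated variance
    resid2 = (flux - model) ** 2
    pdf_in = np.exp(-0.5 * resid2 / var_in) / np.sqrt(2 * np.pi * var_in)
    var_o = var_in + var_out                            # broad outlier component
    pdf_out = np.exp(-0.5 * resid2 / var_o) / np.sqrt(2 * np.pi * var_o)
    return float(np.sum(np.log((1 - p_out) * pdf_in + p_out * pdf_out)))
```

Marginalizing the posterior over `ln_f`, `p_out` and `var_out` then yields the "credible uncertainties on noisy data" the abstract refers to.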

  9. SICK: THE SPECTROSCOPIC INFERENCE CRANK

    Energy Technology Data Exchange (ETDEWEB)

Casey, Andrew R., E-mail: arc@ast.cam.ac.uk [Institute of Astronomy, University of Cambridge, Madingley Road, Cambridge, CB3 0HA (United Kingdom)

    2016-03-15

    There exists an inordinate amount of spectral data in both public and private astronomical archives that remain severely under-utilized. The lack of reliable open-source tools for analyzing large volumes of spectra contributes to this situation, which is poised to worsen as large surveys successively release orders of magnitude more spectra. In this article I introduce sick, the spectroscopic inference crank, a flexible and fast Bayesian tool for inferring astrophysical parameters from spectra. sick is agnostic to the wavelength coverage, resolving power, or general data format, allowing any user to easily construct a generative model for their data, regardless of its source. sick can be used to provide a nearest-neighbor estimate of model parameters, a numerically optimized point estimate, or full Markov Chain Monte Carlo sampling of the posterior probability distributions. This generality empowers any astronomer to capitalize on the plethora of published synthetic and observed spectra, and make precise inferences for a host of astrophysical (and nuisance) quantities. Model intensities can be reliably approximated from existing grids of synthetic or observed spectra using linear multi-dimensional interpolation, or a Cannon-based model. Additional phenomena that transform the data (e.g., redshift, rotational broadening, continuum, spectral resolution) are incorporated as free parameters and can be marginalized away. Outlier pixels (e.g., cosmic rays or poorly modeled regimes) can be treated with a Gaussian mixture model, and a noise model is included to account for systematically underestimated variance. Combining these phenomena into a scalar-justified, quantitative model permits precise inferences with credible uncertainties on noisy data. I describe the common model features, the implementation details, and the default behavior, which is balanced to be suitable for most astronomical applications. Using a forward model on low-resolution, high signal

  10. SICK: THE SPECTROSCOPIC INFERENCE CRANK

    International Nuclear Information System (INIS)

    Casey, Andrew R.

    2016-01-01

    There exists an inordinate amount of spectral data in both public and private astronomical archives that remain severely under-utilized. The lack of reliable open-source tools for analyzing large volumes of spectra contributes to this situation, which is poised to worsen as large surveys successively release orders of magnitude more spectra. In this article I introduce sick, the spectroscopic inference crank, a flexible and fast Bayesian tool for inferring astrophysical parameters from spectra. sick is agnostic to the wavelength coverage, resolving power, or general data format, allowing any user to easily construct a generative model for their data, regardless of its source. sick can be used to provide a nearest-neighbor estimate of model parameters, a numerically optimized point estimate, or full Markov Chain Monte Carlo sampling of the posterior probability distributions. This generality empowers any astronomer to capitalize on the plethora of published synthetic and observed spectra, and make precise inferences for a host of astrophysical (and nuisance) quantities. Model intensities can be reliably approximated from existing grids of synthetic or observed spectra using linear multi-dimensional interpolation, or a Cannon-based model. Additional phenomena that transform the data (e.g., redshift, rotational broadening, continuum, spectral resolution) are incorporated as free parameters and can be marginalized away. Outlier pixels (e.g., cosmic rays or poorly modeled regimes) can be treated with a Gaussian mixture model, and a noise model is included to account for systematically underestimated variance. Combining these phenomena into a scalar-justified, quantitative model permits precise inferences with credible uncertainties on noisy data. I describe the common model features, the implementation details, and the default behavior, which is balanced to be suitable for most astronomical applications. Using a forward model on low-resolution, high signal

  11. ON THE SPECTROSCOPIC CLASSES OF NOVAE IN M33

    International Nuclear Information System (INIS)

    Shafter, A. W.; Darnley, M. J.; Bode, M. F.; Ciardullo, R.

    2012-01-01

We report the initial results from an ongoing multi-year spectroscopic survey of novae in M33. The survey resulted in the spectroscopic classification of six novae (M33N 2006-09a, 2007-09a, 2009-01a, 2010-10a, 2010-11a, and 2011-12a) and a determination of rates of decline (t2 times) for four of them (2006-09a, 2007-09a, 2009-01a, and 2010-10a). When these data are combined with existing spectroscopic data for two additional M33 novae (2003-09a and 2008-02a), we find that five of the eight novae with available spectroscopic class appear to be members of either the He/N or Fe IIb (hybrid) classes, with only two clear members of the Fe II spectroscopic class. This initial finding is very different from what would be expected based on the results for M31 and the Galaxy where Fe II novae dominate, and the He/N and Fe IIb classes together make up only ∼20% of the total. It is plausible that the increased fraction of He/N and Fe IIb novae observed in M33 thus far may be the result of the younger stellar population that dominates this galaxy, which is expected to produce novae that harbor generally more massive white dwarfs than those typically associated with novae in M31 or the Milky Way.

  12. Spectroscopic Evidence for Nonuniform Starspot Properties on II Pegasi

    Science.gov (United States)

    ONeal, Douglas; Saar, Steven H.; Neff, James E.

    1998-01-01

We present spectroscopic evidence for multiple spot temperatures on the RS CVn star II Pegasi (HD 224085). We model the strengths of the 7055 and 8860 Å TiO absorption bands in the spectrum of II Peg using weighted sums of inactive comparison spectra: a K star to represent the nonspotted photosphere and an M star to represent the spots. The best fit yields independent measurements of the starspot filling factor (f_s) and mean spot temperature (T_s) averaged over the visible hemisphere of the star. During three-fourths of a rotation of II Peg in late 1996, we measure a constant f_s of approximately 55% +/- 5%. However, T_s varies from 3350 +/- 60 to 3550 +/- 70 K. We compute T_s for two simple models: (1) a star with two distinct spot temperatures, and (2) a star with different umbral/penumbral area ratios. The changing T_s correlates with emission strengths of Hα and the Ca II infrared triplet, in the sense that cooler T_s accompanies weaker emission. We explore possible implications of these results for the physical properties of the spots on II Peg and for stellar surface structure in general.
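In its simplest form, the weighted-sum fit described above reduces to a one-parameter least-squares problem for the filling factor. The sketch below uses invented spectra and ignores the continuum surface-brightness weighting a real TiO-band analysis would include:

```python
import numpy as np

def fit_filling_factor(obs, spec_phot, spec_spot):
    """Model obs ~ f*spec_spot + (1-f)*spec_phot and solve for the spot
    filling factor f in closed form (linear least squares in f)."""
    d = spec_spot - spec_phot
    f = np.dot(obs - spec_phot, d) / np.dot(d, d)
    return float(np.clip(f, 0.0, 1.0))
```

Fitting two bands of different temperature sensitivity simultaneously is what lets the filling factor and the spot temperature be separated.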

  13. II Peg: Spectroscopic Evidence for Multiple Starspot Temperatures

    Science.gov (United States)

O'Neal, Douglas; Saar, Steven H.; Neff, James E.

We present spectroscopic evidence for multiple spot temperatures on the RS CVn star II Pegasi (HD 224085). We fit the strengths of the 7055 Å and 8860 Å TiO absorption bands in the spectrum of an active star using weighted sums of comparison spectra: the spectrum of an inactive K star to represent the non-spotted photosphere and the spectrum of an M star to represent the spots. We can thus independently measure starspot filling factor (fspot) and temperature (tspot). During 3/4 of a rotation of II Peg in Sept.-Oct. 1996, we measure fspot approximately constant at 55+/-5%. However, tspot varies from 3350 K to 3500 K. Since our method yields one derived tspot integrated over the visible hemisphere of the star, we present the results of simple models of a star with two distinct spot temperatures and compute the tspot we would derive in those cases. The changing tspot correlates with emission strengths of Hα and the Ca II infrared triplet, in the sense that cooler tspot accompanies weaker emission. We explore the consequences of these results for the physical properties of the spots on II Peg and for stellar surface structure in general.

  14. Demographics of undergraduates studying games in the United States: a comparison of computer science students and the general population

    Science.gov (United States)

    McGill, Monica M.; Settle, Amber; Decker, Adrienne

    2013-06-01

    Our study gathered data to serve as a benchmark of demographics of undergraduate students in game degree programs. Due to the high number of programs that are cross-disciplinary with computer science programs or that are housed in computer science departments, the data is presented in comparison to data from computing students (where available) and the US population. Participants included students studying games at four nationally recognized postsecondary institutions. The results of the study indicate that there is no significant difference between the ratio of men to women studying in computing programs or in game degree programs, with women being severely underrepresented in both. Women, blacks, Hispanics/Latinos, and heterosexuals are underrepresented compared to the US population. Those with moderate and conservative political views and with religious affiliations are underrepresented in the game student population. Participants agree that workforce diversity is important and that their programs are adequately diverse, but only one-half of the participants indicated that diversity has been discussed in any of their courses.

  15. A Comparison of Computer-Based Classification Testing Approaches Using Mixed-Format Tests with the Generalized Partial Credit Model

    Science.gov (United States)

    Kim, Jiseon

    2010-01-01

    Classification testing has been widely used to make categorical decisions by determining whether an examinee has a certain degree of ability required by established standards. As computer technologies have developed, classification testing has become more computerized. Several approaches have been proposed and investigated in the context of…

  16. High throughput assessment of cells and tissues: Bayesian classification of spectral metrics from infrared vibrational spectroscopic imaging data.

    Science.gov (United States)

    Bhargava, Rohit; Fernandez, Daniel C; Hewitt, Stephen M; Levin, Ira W

    2006-07-01

    Vibrational spectroscopy allows a visualization of tissue constituents based on intrinsic chemical composition and provides a potential route to obtaining diagnostic markers of diseases. Characterizations utilizing infrared vibrational spectroscopy, in particular, are conventionally low throughput in data acquisition, generally lacking in spatial resolution with the resulting data requiring intensive numerical computations to extract information. These factors impair the ability of infrared spectroscopic measurements to represent accurately the spatial heterogeneity in tissue, to incorporate robustly the diversity introduced by patient cohorts or preparative artifacts and to validate developed protocols in large population studies. In this manuscript, we demonstrate a combination of Fourier transform infrared (FTIR) spectroscopic imaging, tissue microarrays (TMAs) and fast numerical analysis as a paradigm for the rapid analysis, development and validation of high throughput spectroscopic characterization protocols. We provide an extended description of the data treatment algorithm and a discussion of various factors that may influence decision-making using this approach. Finally, a number of prostate tissue biopsies, arranged in an array modality, are employed to examine the efficacy of this approach in histologic recognition of epithelial cell polarization in patients displaying a variety of normal, malignant and hyperplastic conditions. An index of epithelial cell polarization, derived from a combined spectral and morphological analysis, is determined to be a potentially useful diagnostic marker.
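As a rough illustration of Bayesian classification applied to a few spectral metrics (e.g. band ratios), a Gaussian naive-Bayes classifier can be sketched as follows; the class structure and data are invented, and the authors' actual algorithm is more elaborate:

```python
import numpy as np

class GaussianNB:
    """Toy Gaussian naive Bayes over per-pixel spectral metrics."""
    def fit(self, X, y):
        self.classes = np.unique(y)
        self.mu = np.array([X[y == c].mean(axis=0) for c in self.classes])
        self.var = np.array([X[y == c].var(axis=0) + 1e-9 for c in self.classes])
        self.prior = np.array([np.mean(y == c) for c in self.classes])
        return self
    def predict(self, X):
        # log posterior (up to a constant): log prior + sum of per-metric
        # log Gaussian densities, evaluated for every class at once
        lp = np.log(self.prior) - 0.5 * np.sum(
            np.log(2 * np.pi * self.var)
            + (X[:, None, :] - self.mu) ** 2 / self.var, axis=2)
        return self.classes[np.argmax(lp, axis=1)]
```

Applied pixel-by-pixel to an FTIR image, such a classifier produces the histologic class maps the abstract alludes to.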

  17. Handling of computational in vitro/in vivo correlation problems by Microsoft Excel: IV. Generalized matrix analysis of linear compartment systems.

    Science.gov (United States)

    Langenbucher, Frieder

    2005-01-01

A linear system comprising n compartments is completely defined by the rate constants between any of the compartments and by the initial condition specifying in which compartment(s) the drug is present at the beginning. The generalized solution is the time profile of the drug amount in each compartment, described by polyexponential equations. Based on standard matrix operations, an Excel worksheet computes the rate constants and the coefficients, and finally the full time profiles for a specified range of time values.
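The matrix analysis summarized above can be sketched outside Excel as well: for dA/dt = K·A, the eigendecomposition of the rate-constant matrix K yields the polyexponential time profiles directly (a minimal sketch, assuming K is diagonalizable):

```python
import numpy as np

def amounts(K, A0, times):
    """Amounts in each compartment of a linear system dA/dt = K @ A.
    Eigenvalues of K are the exponents; the coefficients follow from the
    initial condition A0, giving the polyexponential solution."""
    lam, V = np.linalg.eig(K)
    c = np.linalg.solve(V, A0)            # coefficients in the eigenbasis
    return np.array([(V * np.exp(lam * t)) @ c for t in times]).real
```

For a two-compartment chain with rate constants k1 = 1 and k2 = 2, this reproduces the textbook solution A1(t) = e^(-t), A2(t) = e^(-t) - e^(-2t).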

  18. 1-Amino-4-hydroxy-9,10-anthraquinone - An analogue of anthracycline anticancer drugs, interacts with DNA and induces apoptosis in human MDA-MB-231 breast adinocarcinoma cells: Evaluation of structure-activity relationship using computational, spectroscopic and biochemical studies.

    Science.gov (United States)

    Mondal, Palash; Roy, Sanjay; Loganathan, Gayathri; Mandal, Bitapi; Dharumadurai, Dhanasekaran; Akbarsha, Mohammad A; Sengupta, Partha Sarathi; Chattopadhyay, Shouvik; Guin, Partha Sarathi

    2015-12-01

    The X-ray diffraction and spectroscopic properties of 1-amino-4-hydroxy-9,10-anthraquinone (1-AHAQ), a simple analogue of anthracycline chemotherapeutic drugs were studied by adopting experimental and computational methods. The optimized geometrical parameters obtained from computational methods were compared with the results of X-ray diffraction analysis and the two were found to be in reasonably good agreement. X-ray diffraction study, Density Functional Theory (DFT) and natural bond orbital (NBO) analysis indicated two types of hydrogen bonds in the molecule. The IR spectra of 1-AHAQ were studied by Vibrational Energy Distribution Analysis (VEDA) using potential energy distribution (PED) analysis. The electronic spectra were studied by TDDFT computation and compared with the experimental results. Experimental and theoretical results corroborated each other to a fair extent. To understand the biological efficacy of 1-AHAQ, it was allowed to interact with calf thymus DNA and human breast adino-carcinoma cell MDA-MB-231. It was found that the molecule induces apoptosis in this adinocarcinoma cell, with little, if any, cytotoxic effect in HBL-100 normal breast epithelial cell.

  19. On the analysis of glow curves with the general order kinetics: Reliability of the computed trap parameters

    Energy Technology Data Exchange (ETDEWEB)

    Ortega, F. [Facultad de Ingeniería (UNCPBA) and CIFICEN (UNCPBA – CICPBA – CONICET), Av. del Valle 5737, 7400 Olavarría (Argentina); Santiago, M.; Martinez, N.; Marcazzó, J.; Molina, P.; Caselli, E. [Instituto de Física Arroyo Seco (UNCPBA) and CIFICEN (UNCPBA – CICPBA – CONICET), Pinto 399, 7000 Tandil (Argentina)

    2017-04-15

The kinetics most widely employed nowadays for analyzing glow curves is the general order (GO) kinetics proposed by C. E. May and J. A. Partridge. As shown in many articles, this kinetics might yield wrong parameters characterizing traps and recombination centers. In this article the GO kinetics is compared with the modified general order kinetics put forward by M. S. Rasheedy by analyzing synthetic glow curves. The results show that the modified kinetics gives parameters that are more accurate than those yielded by the original general order kinetics. A criterion is reported to evaluate the accuracy of the trap parameters found by deconvolving glow curves. This criterion was employed to assess the reliability of the trap parameters of the YVO4:Eu3+ compounds.
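For reference, a synthetic glow curve of the kind analyzed here can be generated from the May-Partridge general-order expression; the parameter values below are illustrative, and the temperature integral is approximated by a trapezoidal cumulative sum:

```python
import numpy as np

KB = 8.617e-5  # Boltzmann constant, eV/K

def glow_curve(T, E, s, b, beta=1.0, n0=1.0):
    """May-Partridge general-order TL intensity for a linear heating ramp:
    I(T) = s*n0*exp(-E/kT) * [1 + (b-1)*(s/beta)*Int_T0^T exp(-E/kT')dT']^(-b/(b-1))
    with trap depth E (eV), frequency factor s (1/s), kinetic order b,
    heating rate beta (K/s) and initial trapped charge n0."""
    boltz = np.exp(-E / (KB * T))
    # trapezoidal running integral of exp(-E/kT') from T[0] to each T
    integ = np.concatenate(
        ([0.0], np.cumsum(0.5 * (boltz[1:] + boltz[:-1]) * np.diff(T))))
    return s * n0 * boltz * (1.0 + (b - 1.0) * (s / beta) * integ) ** (-b / (b - 1.0))
```

Deconvolution then amounts to fitting such curves (one per trap) to the measured glow curve and judging, as the article does, how reliably E, s and b are recovered.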

  20. Universal relation between spectroscopic constants

    Indian Academy of Sciences (India)

    (3) The author has used eq. (6) of his paper to calculate De. This relation leads to a large deviation from the correct value depending upon the extent to which experimental values are known. Guided by this fact, in our work, we used experimentally observed De values to derive the relation between spectroscopic constants.
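The kind of deviation mentioned above can be illustrated with the Morse-oscillator relation De = ωe²/(4ωexe), which is known to differ appreciably from experimentally observed dissociation energies; the H2 constants below are approximate literature values in cm⁻¹:

```python
# Morse-oscillator estimate of the dissociation energy from the harmonic
# frequency we and anharmonicity constant wexe (approximate H2 values, cm^-1)
we, wexe = 4401.2, 121.3
De = we**2 / (4 * wexe)   # ~4.0e4 cm^-1; the experimental De is noticeably lower
```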

  1. The VANDELS ESO spectroscopic survey

    Science.gov (United States)

    McLure, R. J.; Pentericci, L.; Cimatti, A.; Dunlop, J. S.; Elbaz, D.; Fontana, A.; Nandra, K.; Amorin, R.; Bolzonella, M.; Bongiorno, A.; Carnall, A. C.; Castellano, M.; Cirasuolo, M.; Cucciati, O.; Cullen, F.; De Barros, S.; Finkelstein, S. L.; Fontanot, F.; Franzetti, P.; Fumana, M.; Gargiulo, A.; Garilli, B.; Guaita, L.; Hartley, W. G.; Iovino, A.; Jarvis, M. J.; Juneau, S.; Karman, W.; Maccagni, D.; Marchi, F.; Mármol-Queraltó, E.; Pompei, E.; Pozzetti, L.; Scodeggio, M.; Sommariva, V.; Talia, M.; Almaini, O.; Balestra, I.; Bardelli, S.; Bell, E. F.; Bourne, N.; Bowler, R. A. A.; Brusa, M.; Buitrago, F.; Caputi, K. I.; Cassata, P.; Charlot, S.; Citro, A.; Cresci, G.; Cristiani, S.; Curtis-Lake, E.; Dickinson, M.; Fazio, G. G.; Ferguson, H. C.; Fiore, F.; Franco, M.; Fynbo, J. P. U.; Galametz, A.; Georgakakis, A.; Giavalisco, M.; Grazian, A.; Hathi, N. P.; Jung, I.; Kim, S.; Koekemoer, A. M.; Khusanova, Y.; Le Fèvre, O.; Lotz, J. M.; Mannucci, F.; Maltby, D. T.; Matsuoka, K.; McLeod, D. J.; Mendez-Hernandez, H.; Mendez-Abreu, J.; Mignoli, M.; Moresco, M.; Mortlock, A.; Nonino, M.; Pannella, M.; Papovich, C.; Popesso, P.; Rosario, D. P.; Salvato, M.; Santini, P.; Schaerer, D.; Schreiber, C.; Stark, D. P.; Tasca, L. A. M.; Thomas, R.; Treu, T.; Vanzella, E.; Wild, V.; Williams, C. C.; Zamorani, G.; Zucca, E.

    2018-05-01

VANDELS is a uniquely deep spectroscopic survey of high-redshift galaxies with the VIMOS spectrograph on ESO's Very Large Telescope (VLT). The survey has obtained ultra-deep optical spectra (from 0.48 μm) suitable for detailed spectroscopic studies, using integration times calculated to produce an approximately constant signal-to-noise ratio (of order 20). The paper describes the survey motivation, survey design and target selection.

  2. Statistical investigation of spectroscopic binary stars

    International Nuclear Information System (INIS)

    Tutukov, A.V.; Yungelson, L.R.

    1980-01-01

A catalog of physical parameters of about 1000 spectroscopic binary stars (SB), based on the Batten catalog, its extensions, and newly published data has been compiled. Masses of the components (M1 and M2), mass ratios of the components (q = M1/M2) and orbital angular momenta are computed wherever possible. It is probable that the initial mass function of the primaries is non-monotonic and is described only approximately by a power law. A number of assumed 'initial' distributions of M1, q and the semiaxes of orbits were transformed with the aim of obtaining 'observed' distributions, taking into account the observational selection due to the luminosities of the components, their radial velocities, inclinations of the orbits, and the effects of matter exchange between the components. (Auth.)

  3. Micron scale spectroscopic analysis of materials

    International Nuclear Information System (INIS)

    James, David; Finlayson, Trevor; Prawer, Steven

    1991-01-01

The goal of this proposal is the establishment of a facility which will enable complete micron scale spectroscopic analysis of any sample which can be imaged in the optical microscope. Current applications include studies of carbon fibres, diamond thin films, ceramics (zirconia and high-Tc superconductors), semiconductors, wood pulp, wool fibres, mineral inclusions, proteins, plant cells, polymers, fluoride glasses, and optical fibres. The range of interests crosses traditional discipline boundaries and augurs well for a truly interdisciplinary collaboration. Developments in instrumentation such as confocal imaging are planned to achieve sub-micron resolution, and advances in computer software and hardware will enable the aforementioned spectroscopies to be used to map molecular and crystalline phases on the surfaces of materials. Coupled with existing compositional microprobes (e.g. the proton microprobe) the possibilities for the development of new, powerful, hybrid imaging technologies appear to be excellent.

  4. Infrared Spectroscopic Imaging: The Next Generation

    Science.gov (United States)

    Bhargava, Rohit

    2013-01-01

    Infrared (IR) spectroscopic imaging seemingly matured as a technology in the mid-2000s, with commercially successful instrumentation and reports in numerous applications. Recent developments, however, have transformed our understanding of the recorded data, provided capability for new instrumentation, and greatly enhanced the ability to extract more useful information in less time. These developments are summarized here in three broad areas— data recording, interpretation of recorded data, and information extraction—and their critical review is employed to project emerging trends. Overall, the convergence of selected components from hardware, theory, algorithms, and applications is one trend. Instead of similar, general-purpose instrumentation, another trend is likely to be diverse and application-targeted designs of instrumentation driven by emerging component technologies. The recent renaissance in both fundamental science and instrumentation will likely spur investigations at the confluence of conventional spectroscopic analyses and optical physics for improved data interpretation. While chemometrics has dominated data processing, a trend will likely lie in the development of signal processing algorithms to optimally extract spectral and spatial information prior to conventional chemometric analyses. Finally, the sum of these recent advances is likely to provide unprecedented capability in measurement and scientific insight, which will present new opportunities for the applied spectroscopist. PMID:23031693

  5. The HITRAN 2004 molecular spectroscopic database

    Energy Technology Data Exchange (ETDEWEB)

    Rothman, L.S. [Harvard-Smithsonian Center for Astrophysics, Atomic and Molecular Physics Division, Cambridge, MA 02138 (United States)]. E-mail: lrothman@cfa.harvard.edu; Jacquemart, D. [Harvard-Smithsonian Center for Astrophysics, Atomic and Molecular Physics Division, Cambridge, MA 02138 (United States); Barbe, A. [Universite de Reims-Champagne-Ardenne, Groupe de Spectrometrie Moleculaire et Atmospherique, 51062 Reims (France)] (and others)

    2005-12-01

    This paper describes the status of the 2004 edition of the HITRAN molecular spectroscopic database. The HITRAN compilation consists of several components that serve as input for radiative transfer calculation codes: individual line parameters for the microwave through visible spectra of molecules in the gas phase; absorption cross-sections for molecules having dense spectral features, i.e., spectra in which the individual lines are unresolvable; individual line parameters and absorption cross-sections for bands in the ultra-violet; refractive indices of aerosols; tables and files of general properties associated with the database; and database management software. The line-by-line portion of the database contains spectroscopic parameters for 39 molecules including many of their isotopologues. The format of the section of the database on individual line parameters of HITRAN has undergone the most extensive enhancement in almost two decades. It now lists the Einstein A-coefficients, statistical weights of the upper and lower levels of the transitions, a better system for the representation of quantum identifications, and enhanced referencing and uncertainty codes. In addition, there is a provision for making corrections to the broadening of line transitions due to line mixing.

  6. The HITRAN 2004 molecular spectroscopic database

    International Nuclear Information System (INIS)

    Rothman, L.S.; Jacquemart, D.; Barbe, A.

    2005-01-01

    This paper describes the status of the 2004 edition of the HITRAN molecular spectroscopic database. The HITRAN compilation consists of several components that serve as input for radiative transfer calculation codes: individual line parameters for the microwave through visible spectra of molecules in the gas phase; absorption cross-sections for molecules having dense spectral features, i.e., spectra in which the individual lines are unresolvable; individual line parameters and absorption cross-sections for bands in the ultra-violet; refractive indices of aerosols; tables and files of general properties associated with the database; and database management software. The line-by-line portion of the database contains spectroscopic parameters for 39 molecules including many of their isotopologues. The format of the section of the database on individual line parameters of HITRAN has undergone the most extensive enhancement in almost two decades. It now lists the Einstein A-coefficients, statistical weights of the upper and lower levels of the transitions, a better system for the representation of quantum identifications, and enhanced referencing and uncertainty codes. In addition, there is a provision for making corrections to the broadening of line transitions due to line mixing

  7. A third order accurate Lagrangian finite element scheme for the computation of generalized molecular stress function fluids

    DEFF Research Database (Denmark)

    Fasano, Andrea; Rasmussen, Henrik K.

    2017-01-01

A third order accurate, in time and space, finite element scheme for the numerical simulation of three-dimensional time-dependent flow of the molecular stress function type of fluids in a generalized formulation is presented. The scheme is an extension of the K-BKZ Lagrangian finite element me...

  8. A Generalized Least Squares Regression Approach for Computing Effect Sizes in Single-Case Research: Application Examples

    Science.gov (United States)

    Maggin, Daniel M.; Swaminathan, Hariharan; Rogers, Helen J.; O'Keeffe, Breda V.; Sugai, George; Horner, Robert H.

    2011-01-01

    A new method for deriving effect sizes from single-case designs is proposed. The strategy is applicable to small-sample time-series data with autoregressive errors. The method uses Generalized Least Squares (GLS) to model the autocorrelation of the data and estimate regression parameters to produce an effect size that represents the magnitude of…
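A minimal sketch of GLS estimation with AR(1) errors, in the spirit of the approach described (the AR(1) covariance form and the names below are assumptions, not the authors' exact formulation):

```python
import numpy as np

def gls(X, y, rho):
    """Generalized least squares with an AR(1) error covariance
    V[i, j] = rho**|i - j|: beta = (X' V^-1 X)^-1 X' V^-1 y."""
    n = len(y)
    V = rho ** np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
    Vinv = np.linalg.inv(V)
    beta = np.linalg.solve(X.T @ Vinv @ X, X.T @ Vinv @ y)
    cov = np.linalg.inv(X.T @ Vinv @ X)   # sampling covariance of beta
    return beta, cov
```

An effect size then follows by scaling the relevant regression coefficient (e.g. a phase-change term) by an estimate of the residual standard deviation.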

  9. TRIO a general computer code for reactor 3-D flows analysis. Application to a LMFBR hot plenum

    International Nuclear Information System (INIS)

    Magnaud, J.P.; Rouzaud, P.

    1985-09-01

    TRIO is a code developed at CEA to investigate general incompressible 2D and 3D viscous flows. Two calculations are presented: the lid driven cubic cavity at Re=400; steady state (velocity and temperature field) of a LMFBR hot plenum, carried out in order to prepare the calculation of a cold shock consecutive to a reactor scram. 8 refs., 26 figs.

  10. General Electromagnetic Model for the Analysis of Complex Systems (GEMACS) Computer Code Documentation (Version 3). Volume 3. Part 2.

    Science.gov (United States)

    1983-09-01

NAME: PLAINT (GTD). PURPOSE: To determine if a ray traveling from a given source location, or from the source image location in the reflected ray direction, passes through plate MP, i.e., to determine if a source ray reflection from plate MP occurs.

  11. Economic analysis of energy supply and national economy on the basis of general equilibrium models. Applications of the input-output decomposition analysis and the Computable General Equilibrium models shown by the example of Korea

    International Nuclear Information System (INIS)

    Ko, Jong-Hwan.

    1993-01-01

Firstly, this study investigates the causes of sectoral growth and structural change in the Korean economy. Secondly, it develops a consistent economic model in order to investigate simultaneously the different impacts of changes in energy and in the domestic economy. This is done with both Input-Output decomposition analysis and a Computable General Equilibrium model (CGE model). The CGE model eliminates the disadvantages of the IO model and allows the interdependence of the various energy sectors and the economy to be investigated. The Social Accounting Matrix serves as the data basis of the CGE model. Simulation experiments have been carried out with the help of the CGE model, indicating the likely impact of an oil price shock on the economy, both sectorally and in general. (orig.) [de]

  12. Precise and Fast Computation of the Gravitational Field of a General Finite Body and Its Application to the Gravitational Study of Asteroid Eros

    International Nuclear Information System (INIS)

    Fukushima, Toshio

    2017-01-01

    In order to obtain the gravitational field of a general finite body inside its Brillouin sphere, we developed a new method to compute the field accurately. First, the body is assumed to consist of some layers in a certain spherical polar coordinate system and the volume mass density of each layer is expanded as a Maclaurin series of the radial coordinate. Second, the line integral with respect to the radial coordinate is analytically evaluated in a closed form. Third, the resulting surface integrals are numerically integrated by the split quadrature method using the double exponential rule. Finally, the associated gravitational acceleration vector is obtained by numerically differentiating the numerically integrated potential. Numerical experiments confirmed that the new method is capable of computing the gravitational field independently of the location of the evaluation point, namely whether inside, on the surface of, or outside the body. It can also provide sufficiently precise field values, say of 14–15 digits for the potential and of 9–10 digits for the acceleration. Furthermore, its computational efficiency is better than that of the polyhedron approximation. This is because the computational error of the new method decreases much faster than that of the polyhedron models when the number of required transcendental function calls increases. As an application, we obtained the gravitational field of 433 Eros from its shape model expressed as the 24 × 24 spherical harmonic expansion by assuming homogeneity of the object.

  13. Precise and Fast Computation of the Gravitational Field of a General Finite Body and Its Application to the Gravitational Study of Asteroid Eros

    Energy Technology Data Exchange (ETDEWEB)

    Fukushima, Toshio, E-mail: Toshio.Fukushima@nao.ac.jp [National Astronomical Observatory/SOKENDAI, Ohsawa, Mitaka, Tokyo 181-8588 (Japan)

    2017-10-01

    In order to obtain the gravitational field of a general finite body inside its Brillouin sphere, we developed a new method to compute the field accurately. First, the body is assumed to consist of some layers in a certain spherical polar coordinate system and the volume mass density of each layer is expanded as a Maclaurin series of the radial coordinate. Second, the line integral with respect to the radial coordinate is analytically evaluated in a closed form. Third, the resulting surface integrals are numerically integrated by the split quadrature method using the double exponential rule. Finally, the associated gravitational acceleration vector is obtained by numerically differentiating the numerically integrated potential. Numerical experiments confirmed that the new method is capable of computing the gravitational field independently of the location of the evaluation point, namely whether inside, on the surface of, or outside the body. It can also provide sufficiently precise field values, say of 14–15 digits for the potential and of 9–10 digits for the acceleration. Furthermore, its computational efficiency is better than that of the polyhedron approximation. This is because the computational error of the new method decreases much faster than that of the polyhedron models when the number of required transcendental function calls increases. As an application, we obtained the gravitational field of 433 Eros from its shape model expressed as the 24 × 24 spherical harmonic expansion by assuming homogeneity of the object.
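The third step of the method relies on the double exponential rule. As a generic, self-contained sketch of that building block (not the authors' implementation), the tanh-sinh variant of double exponential quadrature on a finite interval can be written as follows; the step size `h` and truncation index `k_max` are illustrative choices:

```python
import math

def tanh_sinh(f, a, b, h=0.1, k_max=60):
    """Double exponential (tanh-sinh) quadrature on [a, b]: substitute
    x = c + d*tanh((pi/2)*sinh(t)); the weights then decay double-
    exponentially, so integrable endpoint singularities are harmless."""
    c, d = 0.5 * (b + a), 0.5 * (b - a)
    total = 0.0
    for k in range(-k_max, k_max + 1):
        t = k * h
        u = 0.5 * math.pi * math.sinh(t)
        x = c + d * math.tanh(u)
        if x <= a or x >= b:          # abscissa rounded onto an endpoint
            continue
        w = 0.5 * math.pi * math.cosh(t) / math.cosh(u) ** 2
        total += w * f(x)
    return d * h * total

# integrable endpoint singularity: the integral of x**-0.5 over [0, 1] is 2
print(tanh_sinh(lambda x: 1.0 / math.sqrt(x), 0.0, 1.0))
```

Even with the 1/√x singularity at the left endpoint, the equispaced trapezoidal sum in the transformed variable converges rapidly, which is the property that makes the rule attractive for near-surface field evaluation.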

  14. Spectroscopic (FT-IR, FT-Raman, UV, 1H and 13C NMR) insights, electronic profiling and DFT computations on ({(E)-[3-(1H-imidazol-1-yl)-1-phenylpropylidene]amino}oxy)(4-nitrophenyl)methanone, an imidazole-bearing anti-Candida agent

    Directory of Open Access Journals (Sweden)

    Al-Wahaibi Lamya H.

    2018-02-01

    Full Text Available The anti-Candida agent ({(E)-[3-(1H-imidazol-1-yl)-1-phenylpropylidene]amino}oxy)(4-nitrophenyl)methanone (IPAONM) was subjected to comprehensive spectroscopic (FT-IR, FT-Raman, UV–Vis, 1H and 13C NMR) characterization as well as Hartree-Fock and density functional theory (DFT) computational studies. The selected optimized geometric bond lengths and bond angles of the IPAONM molecule were compared with the experimental values. The calculated wavenumbers were scaled and compared with the experimental spectra. Mulliken charges and natural bond orbital analysis of the title molecule were calculated and interpreted. The excitation energies and oscillator strengths of the IPAONM molecule were calculated by time-dependent density functional theory (TD-DFT). In addition, the frontier molecular orbitals and molecular electrostatic potential diagram of the title compound were computed and analyzed. A study of the electronic properties, such as the HOMO, HOMO-1, LUMO and LUMO+1 energies, was carried out using the TD-DFT approach. The 1H and 13C NMR chemical shift values of the title compound were calculated by the gauge-independent atomic orbital method and compared with the experimental results.

  15. Forecasting of chalcogenide spinels of the general formula AB2X4 using the method of training of an electronic computer

    International Nuclear Information System (INIS)

    Kiseleva, N.N.; Savitskij, E.M.

    1979-01-01

    Experimental evidence on the existence of AB2X4 compounds in A-B-X systems (A and B are any elements of the periodic system, in particular rare earth elements; X is S or Se) and data on the properties of the elements and simple sulphides (selenides) were used in an attempt to find the probability of formation of AB2X4 compounds as a function of the component properties. The experimental evidence was analyzed by a computer employing concept-formation training algorithms. The results were used to predict new AB2X4 compounds. The computer training method proved an efficient means of revealing the connection between the probability of existence of spinel-type compounds and the component properties. Certain compounds of general formula AB2X4 were predicted, and the probability that they possess the spinel structure was evaluated

  16. Numerical solution to generalized Burgers'-Fisher equation using Exp-function method hybridized with heuristic computation.

    Directory of Open Access Journals (Sweden)

    Suheel Abdullah Malik

    Full Text Available In this paper, a new heuristic scheme for the approximate solution of the generalized Burgers'-Fisher equation is proposed. The scheme is based on the hybridization of the Exp-function method with a nature-inspired algorithm. The given nonlinear partial differential equation (NPDE) is converted through substitution into a nonlinear ordinary differential equation (NODE). The travelling wave solution is approximated by the Exp-function method with unknown parameters. The unknown parameters are estimated by transforming the NODE into an equivalent global error minimization problem by using a fitness function. The popular genetic algorithm (GA) is used to solve the minimization problem and to obtain the unknown parameters. The proposed scheme is successfully implemented to solve the generalized Burgers'-Fisher equation. The comparison of numerical results with the exact solutions, and with the solutions obtained using some traditional methods, including the Adomian decomposition method (ADM), homotopy perturbation method (HPM), and optimal homotopy asymptotic method (OHAM), shows that the suggested scheme is fairly accurate and viable for solving such problems.

  17. Computational Fluid Dynamics Modeling of Steam Condensation on Nuclear Containment Wall Surfaces Based on Semiempirical Generalized Correlations

    Directory of Open Access Journals (Sweden)

    Pavan K. Sharma

    2012-01-01

    Full Text Available In water-cooled nuclear power reactors, significant quantities of steam and hydrogen could be produced within the primary containment following the postulated design basis accidents (DBA) or beyond design basis accidents (BDBA). For accurate calculation of the temperature/pressure rise and the hydrogen transport in a nuclear reactor containment under such scenarios, a wall condensation heat transfer coefficient (HTC) is used. In the present work, the adaptation of a commercial CFD code with the implementation of models for steam condensation on wall surfaces in the presence of noncondensable gases is explained. Steam condensation has been modeled using the empirical average HTC, which was originally developed for “lumped-parameter” (volume-averaged) modeling of steam condensation in the presence of noncondensable gases. The present paper suggests a generalized HTC based on curve fitting of most of the reported semiempirical condensation models, which are valid for specific wall conditions. The present methodology has been validated against limited reported experimental data from the COPAIN experimental facility. This is the first step towards a CFD-based generalized analysis procedure for condensation modeling applicable to containment wall surfaces, which is being evolved further for specific wall surfaces within the multicompartment containment atmosphere.

  18. Numerical solution to generalized Burgers'-Fisher equation using Exp-function method hybridized with heuristic computation.

    Science.gov (United States)

    Malik, Suheel Abdullah; Qureshi, Ijaz Mansoor; Amir, Muhammad; Malik, Aqdas Naveed; Haq, Ihsanul

    2015-01-01

    In this paper, a new heuristic scheme for the approximate solution of the generalized Burgers'-Fisher equation is proposed. The scheme is based on the hybridization of the Exp-function method with a nature-inspired algorithm. The given nonlinear partial differential equation (NPDE) is converted through substitution into a nonlinear ordinary differential equation (NODE). The travelling wave solution is approximated by the Exp-function method with unknown parameters. The unknown parameters are estimated by transforming the NODE into an equivalent global error minimization problem by using a fitness function. The popular genetic algorithm (GA) is used to solve the minimization problem and to obtain the unknown parameters. The proposed scheme is successfully implemented to solve the generalized Burgers'-Fisher equation. The comparison of numerical results with the exact solutions, and with the solutions obtained using some traditional methods, including the Adomian decomposition method (ADM), homotopy perturbation method (HPM), and optimal homotopy asymptotic method (OHAM), shows that the suggested scheme is fairly accurate and viable for solving such problems.
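The pipeline described (Exp-function ansatz with unknown parameters, a residual-based fitness function, and a GA minimizer) can be sketched on a toy problem. Here the kink ODE u' = u(1 − u) stands in for the reduced Burgers'-Fisher NODE; the ansatz form, GA settings, and boundary-condition penalty are illustrative choices, not the paper's:

```python
import math
import random

def exp_ansatz(params, xi):
    """Exp-function ansatz u(xi) = (a0 + a1*e^xi) / (b0 + e^xi)."""
    a0, a1, b0 = params
    E = math.exp(xi)
    den = b0 + E
    return None if abs(den) < 1e-6 else (a0 + a1 * E) / den

def residual(params, xi):
    """Residual of the toy kink ODE u' = u(1 - u) under the ansatz."""
    a0, a1, b0 = params
    E = math.exp(xi)
    den = b0 + E
    if abs(den) < 1e-6:
        return 1e3                               # penalize near-singular ansatz
    u = (a0 + a1 * E) / den
    du = E * (a1 * b0 - a0) / den ** 2           # analytic u'(xi)
    return du - u * (1.0 - u)

def fitness(params, grid):
    """Global squared error plus penalties pinning u(-inf)=0, u(+inf)=1."""
    u_lo, u_hi = exp_ansatz(params, -8.0), exp_ansatz(params, 8.0)
    if u_lo is None or u_hi is None:
        return 1e6
    err = sum(residual(params, xi) ** 2 for xi in grid)
    return err + u_lo ** 2 + (u_hi - 1.0) ** 2

def ga_minimize(grid, pop_size=40, gens=120, seed=1):
    rng = random.Random(seed)
    pop = [[rng.uniform(-2.0, 2.0) for _ in range(3)] for _ in range(pop_size)]
    fits = [fitness(p, grid) for p in pop]
    best, best_fit = min(zip(pop, fits), key=lambda t: t[1])
    history = [best_fit]
    for _ in range(gens):
        new_pop = [best[:]]                      # elitism: keep best individual
        while len(new_pop) < pop_size:
            i = min(rng.sample(range(pop_size), 3), key=lambda j: fits[j])
            k = min(rng.sample(range(pop_size), 3), key=lambda j: fits[j])
            w = rng.random()                     # arithmetic crossover
            child = [w * a + (1.0 - w) * b for a, b in zip(pop[i], pop[k])]
            child = [c + rng.gauss(0.0, 0.1) for c in child]   # mutation
            new_pop.append(child)
        pop = new_pop
        fits = [fitness(p, grid) for p in pop]
        cand, cand_fit = min(zip(pop, fits), key=lambda t: t[1])
        if cand_fit < best_fit:
            best, best_fit = cand[:], cand_fit
        history.append(best_fit)
    return best, history

grid = [0.15 * i for i in range(-20, 21)]        # collocation points on [-3, 3]
best, history = ga_minimize(grid)
print("best params:", [round(v, 3) for v in best], "fitness:", history[-1])
```

Any parameter triple near (0, 1, b0) with b0 > 0 solves this toy problem exactly (a translated logistic kink), so the elitist GA drives the fitness toward zero.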

  19. Single nanoparticle tracking spectroscopic microscope

    Science.gov (United States)

    Yang, Haw [Moraga, CA; Cang, Hu [Berkeley, CA; Xu, Cangshan [Berkeley, CA; Wong, Chung M [San Gabriel, CA

    2011-07-19

    A system that can maintain and track the position of a single nanoparticle in three dimensions for a prolonged period has been disclosed. The system allows for continuously imaging the particle to observe any interactions it may have. The system also enables the acquisition of real-time sequential spectroscopic information from the particle. The apparatus holds great promise in performing single molecule spectroscopy and imaging on a non-stationary target.

  20. Mid-infrared spectroscopic investigation

    International Nuclear Information System (INIS)

    Walter, L.; Vergo, N.; Salisbury, J.W.

    1987-01-01

    Mid-infrared spectroscopic research efforts are discussed: the development of new instrumentation to permit advanced measurements in the mid-infrared region of the spectrum, the development of a spectral library of well-characterized mineral and rock specimens for the interpretation of remote sensing data, and cooperative measurements of the spectral signatures of analogues of materials that may be present on the surfaces of asteroids, planets or their moons

  1. Multi-pass spectroscopic ellipsometry

    International Nuclear Information System (INIS)

    Stehle, Jean-Louis; Samartzis, Peter C.; Stamataki, Katerina; Piel, Jean-Philippe; Katsoprinakis, George E.; Papadakis, Vassilis; Schimowski, Xavier; Rakitzis, T. Peter; Loppinet, Benoit

    2014-01-01

    Spectroscopic ellipsometry is an established technique, particularly useful for thickness measurements of thin films. It measures polarization rotation after a single reflection of a beam of light on the measured substrate at a given incidence angle. In this paper, we report the development of multi-pass spectroscopic ellipsometry where the light beam reflects multiple times on the sample. We have investigated both theoretically and experimentally the effect of sample reflectivity, number of reflections (passes), angles of incidence and detector dynamic range on ellipsometric observables tanΨ and cosΔ. The multiple pass approach provides increased sensitivity to small changes in Ψ and Δ, opening the way for single measurement determination of optical thickness T, refractive index n and absorption coefficient k of thin films, a significant improvement over the existing techniques. Based on our results, we discuss the strengths, the weaknesses and possible applications of this technique. - Highlights: • We present multi-pass spectroscopic ellipsometry (MPSE), a multi-pass approach to ellipsometry. • Different detectors, samples, angles of incidence and number of passes were tested. • N passes improve polarization ratio sensitivity to the power of N. • N reflections improve phase shift sensitivity by a factor of N. • MPSE can significantly improve thickness measurements in thin films
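The quoted sensitivity gains follow from compounding the complex ellipsometric ratio ρ = tanΨ·e^{iΔ} over N reflections: ρ_N = ρ^N, so tanΨ_N = tan^N Ψ and Δ_N = NΔ. A minimal sketch under idealized assumptions (identical, loss-free passes):

```python
import cmath
import math

def single_pass_rho(psi_deg, delta_deg):
    """Ellipsometric ratio rho = r_p / r_s = tan(Psi) * exp(i * Delta)."""
    return math.tan(math.radians(psi_deg)) * cmath.exp(1j * math.radians(delta_deg))

def multi_pass_observables(psi_deg, delta_deg, n_passes):
    """After N identical reflections the ratio compounds, rho_N = rho**N,
    so tan(Psi_N) = tan(Psi)**N and Delta_N = N * Delta (modulo 360)."""
    rho_n = single_pass_rho(psi_deg, delta_deg) ** n_passes
    psi_n = math.degrees(math.atan(abs(rho_n)))
    delta_n = math.degrees(cmath.phase(rho_n)) % 360.0
    return psi_n, delta_n

# a 1-degree single-pass phase shift is amplified five-fold after 5 passes
print(multi_pass_observables(44.0, 1.0, 5))
```

Because tanΨ < 1 off the principal angle, the compounded amplitude ratio drifts away from unity with each pass, which is the polarization-ratio gain the highlights describe.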

  2. Spectroscopic amplifier for pin diode

    International Nuclear Information System (INIS)

    Alonso M, M. S.; Hernandez D, V. M.; Vega C, H. R.

    2014-10-01

    The photodiode remains the basic choice for photo-detection and is widely used in optical communications, medical diagnostics and the field of corpuscular radiation. In radiation detection it has been used for monitoring radon and its progeny and in inexpensive spectrometric systems. The development of a spectroscopic amplifier for a Pin diode is presented, with the following characteristics: a pole-zero (P/Z) canceler with a time constant of 8 μs; a constant gain of 57, suitable for the acquisition system; a 4th-order Gaussian integrator to change the waveform from an exponential input to a semi-Gaussian output; and finally a baseline-restorer stage, which prevents any DC signal contribution to the next stage. The operational amplifier used is the TLE2074, in BiFET technology from Texas Instruments, with 10 MHz bandwidth, 25 V/μs slew rate and a noise floor of 17 nV/√Hz. The integrated circuit contains 4 operational amplifiers, and in it is contained the whole spectroscopic amplifier that is the goal of the electronic design. The results show how the exponential input signal is converted to a semi-Gaussian one, modifying only the amplitude according to the design specifications. The total system is formed by the detector, which is the Pin diode; a charge-sensitive preamplifier; the spectroscopic amplifier presented here; and finally a pulse height analyzer (MCA), where the spectrum is displayed. (Author)
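The shaping chain described (a pole-zero cancelled differentiator followed by a 4th-order Gaussian integrator) ideally turns the step-like exponential input into a semi-Gaussian pulse v(t) ∝ (t/τ)^n e^{−t/τ}, which peaks at t = nτ and returns to baseline. An illustrative sketch of that ideal response, using the abstract's τ = 8 μs and n = 4 (not a simulation of the authors' circuit):

```python
import math

def semi_gaussian(t, tau=8.0, n=4):
    """Ideal response of a pole-zero cancelled CR-(RC)^n shaper to a step:
    v(t) = (t/tau)^n * exp(-t/tau) / n!.  The long exponential tail of the
    detector pulse is replaced by a bell-shaped pulse peaking at t = n*tau."""
    if t < 0.0:
        return 0.0
    x = t / tau
    return x ** n * math.exp(-x) / math.factorial(n)

# sample the shaped pulse: with tau = 8 us and n = 4 the peak sits at 32 us
ts = [0.5 * i for i in range(321)]               # 0 .. 160 us in 0.5 us steps
vs = [semi_gaussian(t) for t in ts]
t_peak = ts[vs.index(max(vs))]
print("peaking time:", t_peak, "us")
```

The rapid return to baseline is what allows the subsequent baseline restorer and pulse-height analyzer to work at useful counting rates.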

  3. A transformation with symbolic computation and abundant new soliton-like solutions for the (1+2)-dimensional generalized Burgers equation

    International Nuclear Information System (INIS)

    Yan Zhenya

    2002-01-01

    In this paper, an auto-Bäcklund transformation is presented for the generalized Burgers equation: u_t + u_{xy} + αuu_y + αu_x ∂_x^{-1}u_y = 0 (α is a constant) by using an ansatz and symbolic computation. In particular, this equation is transformed into the (1+2)-dimensional generalized heat equation ω_t + ω_{xy} = 0 by the Cole-Hopf transformation. This shows that the equation is C-integrable. Abundant types of new soliton-like solutions are obtained by virtue of the obtained transformation. These solutions include n-soliton-like solutions, shock wave solutions and singular soliton-like solutions, which may be of significance in explaining some physical phenomena. The approach can also be extended to other types of nonlinear partial differential equations in mathematical physics.
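As a consistency check of the linearization claim, the Cole-Hopf substitution can be sketched as follows; the normalization is chosen so the algebra closes, and the paper's convention may differ by a constant factor:

```latex
% With v = \partial_x^{-1} u_y, the equation is a conservation law in x:
\begin{align*}
  u_t + u_{xy} + \alpha u u_y + \alpha u_x v
    &= u_t + u_{xy} + \alpha\,(u v)_x = 0.\\
\intertext{Substituting the Cole--Hopf form}
  u = \tfrac{1}{\alpha}\,(\ln\omega)_x,\qquad
  v &= \tfrac{1}{\alpha}\,(\ln\omega)_y,\\
\intertext{and using the identity
  $(\ln\omega)_{xy} + (\ln\omega)_x(\ln\omega)_y = \omega_{xy}/\omega$, gives}
  \partial_x\!\left[\frac{\omega_t + \omega_{xy}}{\omega}\right] &= 0,
\end{align*}
% so the generalized heat equation omega_t + omega_{xy} = 0 suffices.
```

Each solution ω of the linear heat equation thus generates a soliton-like solution u of the nonlinear equation, which is the sense in which the equation is C-integrable.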

  4. Accurate and computationally efficient prediction of thermochemical properties of biomolecules using the generalized connectivity-based hierarchy.

    Science.gov (United States)

    Sengupta, Arkajyoti; Ramabhadran, Raghunath O; Raghavachari, Krishnan

    2014-08-14

    In this study we have used the connectivity-based hierarchy (CBH) method to derive accurate heats of formation of a range of biomolecules: 18 amino acids and 10 barbituric acid/uracil derivatives. The hierarchy is based on the connectivity of the different atoms in a large molecule. It results in error-cancellation reaction schemes that are automated, general, and readily usable for a broad range of organic molecules and biomolecules. Herein, we first locate the stable conformational and tautomeric forms of these biomolecules using an accurate level of theory (viz. CCSD(T)/6-311++G(3df,2p)). Subsequently, the heats of formation of the amino acids are evaluated using the CBH-1 and CBH-2 schemes with routinely employed density functionals or wave function-based methods. The calculated heats of formation obtained herein using modest levels of theory are in very good agreement with those obtained using the more expensive W1-F12 and W2-F12 methods for the amino acids and with G3 results for the barbituric acid derivatives. Overall, the present study (a) highlights the small effect of including multiple conformers in determining the heats of formation of biomolecules and (b), in concurrence with previous CBH studies, shows that use of the more effective error-cancelling isoatomic scheme (CBH-2) results in more accurate heats of formation with modestly sized basis sets along with common density functionals or wave function-based methods.

  5. An Interaction of Economy and Environment in Dynamic Computable General Equilibrium Modelling with a Focus on Climate Change Issues in Korea : A Proto-type Model

    Energy Technology Data Exchange (ETDEWEB)

    Joh, Seung Hun; Dellink, Rob; Nam, Yunmi; Kim, Yong Gun; Song, Yang Hoon [Korea Environment Institute, Seoul (Korea)

    2000-12-01

    At the beginning of the 21st century, climate change is one of the hottest issues in both the international and the domestic environmental arenas. During the COP6 meeting held in The Hague, over 10,000 people gathered from around the world. This report is part of a series of policy studies on climate change in the context of Korea. The study addresses the interactions of economy and environment in a perfect-foresight dynamic computable general equilibrium model, with a focus on greenhouse gas mitigation strategy in Korea. The primary goal of this study is to evaluate greenhouse gas mitigation portfolios with changes in timing and magnitude, with a particular focus on developing a methodology to integrate bottom-up information on technical measures to reduce pollution into a top-down multi-sectoral computable general equilibrium framework. As a non-Annex I country, Korea has been under strong pressure to declare a GHG reduction commitment. Of particular concern are the economic consequences GHG mitigation would impose on society. Various economic assessments have so far been carried out to address the issue, including analyses of costs, ancillary benefits and emissions trading. In this vein, this study of the GHG mitigation commitment is a timely contribution to the climate change policy field. The empirical results, to become available next year, would be in high demand in this situation. 62 refs., 13 figs., 9 tabs.

  6. The inherent dangers of using computable general equilibrium models as a single integrated modelling framework for sustainability impact assessment. A critical note on Boehringer and Loeschel (2006)

    International Nuclear Information System (INIS)

    Scrieciu, S. Serban

    2007-01-01

    The search for methods of assessment that best evaluate and integrate the trade-offs and interactions between the economic, environmental and social components of development has been receiving new impetus from the requirement that sustainability concerns be incorporated into the policy formulation process. A paper forthcoming in Ecological Economics (Boehringer, C., Loeschel, A., in press. Computable general equilibrium models for sustainability impact assessment: status quo and prospects, Ecological Economics.) claims that Computable General Equilibrium (CGE) models may potentially represent the much-needed 'back-bone' tool for carrying out reliable integrated quantitative Sustainability Impact Assessments (SIAs). While acknowledging the usefulness of CGE models for some dimensions of SIA, this commentary questions the legitimacy of employing this particular economic modelling tool as a single integrating modelling framework for a comprehensive evaluation of the multi-dimensional, dynamic and complex interactions between policy and sustainability. It discusses several inherent dangers associated with the advocated prospects for the CGE modelling approach to contribute to comprehensive and reliable sustainability impact assessments. The paper warns that this reductionist viewpoint may seriously infringe upon the basic values underpinning the SIA process, namely a transparent, heterogeneous, balanced, inter-disciplinary, consultative and participatory approach to policy evaluation and to building the evidence base. (author)

  7. A general computation model based on inverse analysis principle used for rheological analysis of W/O rapeseed and soybean oil emulsions

    Science.gov (United States)

    Vintila, Iuliana; Gavrus, Adinel

    2017-10-01

    The present research paper proposes the validation of a rigorous computation model used as a numerical tool to identify the rheological behavior of complex W/O emulsions. Starting from a three-dimensional description of a general viscoplastic flow, the thermo-mechanical equations used to identify the rheological laws of fluids or soft materials from global experimental measurements are detailed. Analyses are conducted for complex W/O emulsions, which generally exhibit Bingham behavior, using a shear stress-strain rate dependency based on a power law and an improved analytical model. Experimental results are investigated for the rheological behavior of crude and refined rapeseed/soybean oils and four types of corresponding W/O emulsions with different physical-chemical compositions. The rheological behavior model was correlated with the thermo-mechanical analysis of a plane-plane rheometer, oil content, chemical composition, particle size and emulsifier concentration. The parameters of the rheological laws describing the behavior of the industrial oils and the concentrated W/O emulsions were computed from estimated shear stresses using a non-linear regression technique, and from experimental torques using the inverse analysis tool designed by A. Gavrus (1992-2000).
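The stress-strain-rate dependency mentioned (Bingham behavior combined with a power law) is commonly written in Herschel-Bulkley form, τ = τ₀ + K·γ̇ⁿ. The sketch below generates a synthetic flow curve and recovers K and n by a linearized (log-log) least-squares fit, a simplification of the non-linear regression used in the paper; all numbers are illustrative, not measured values:

```python
import math

def herschel_bulkley(gamma_dot, tau0, K, n):
    """Shear stress for a yield-stress fluid with power-law flow:
    tau = tau0 + K * gamma_dot**n (Herschel-Bulkley; Bingham when n = 1)."""
    return tau0 + K * gamma_dot ** n

def fit_power_law(gammas, taus, tau0):
    """Recover consistency K and flow index n by ordinary least squares on
    log(tau - tau0) versus log(gamma_dot)."""
    xs = [math.log(g) for g in gammas]
    ys = [math.log(t - tau0) for t in taus]
    m = float(len(xs))
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    n = (m * sxy - sx * sy) / (m * sxx - sx * sx)   # regression slope
    return math.exp((sy - n * sx) / m), n           # intercept gives log K

# synthetic, noise-free flow curve with illustrative parameters
gammas = [1.0, 2.0, 5.0, 10.0, 50.0, 100.0]         # shear rates, 1/s
taus = [herschel_bulkley(g, tau0=3.0, K=0.8, n=0.6) for g in gammas]
K_fit, n_fit = fit_power_law(gammas, taus, tau0=3.0)
print("K =", round(K_fit, 4), "n =", round(n_fit, 4))
```

With noise-free data the linearized fit recovers the generating parameters exactly; with real rheometer torques, the yield stress τ₀ itself must also be estimated, which is where a genuinely non-linear regression or inverse analysis becomes necessary.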

  8. ProtDCal: A program to compute general-purpose-numerical descriptors for sequences and 3D-structures of proteins.

    Science.gov (United States)

    Ruiz-Blanco, Yasser B; Paz, Waldo; Green, James; Marrero-Ponce, Yovani

    2015-05-16

    The exponential growth of protein structural and sequence databases is enabling multifaceted approaches to understanding the long-sought sequence-structure-function relationship. Advances in computation now make it possible to apply well-established data mining and pattern recognition techniques to these data to learn models that effectively relate structure and function. However, extracting meaningful numerical descriptors of protein sequence and structure is a key issue that requires an efficient and widely available solution. We here introduce ProtDCal, a new computational software suite capable of generating tens of thousands of features, considering both sequence-based and 3D-structural descriptors. We demonstrate, by means of principal component analysis and Shannon entropy tests, how ProtDCal's sequence-based descriptors provide new and more relevant information not encoded by currently available servers for sequence-based protein feature generation. The wide diversity of the 3D-structure-based features generated by ProtDCal is shown to provide additional complementary information and effectively completes its general protein encoding capability. As a demonstration of the utility of ProtDCal's features, prediction models of N-linked glycosylation sites are trained and evaluated. Classification performance compares favourably with that of contemporary predictors of N-linked glycosylation sites, in spite of not using domain-specific features as input information. ProtDCal provides a friendly and cross-platform graphical user interface, developed in the Java programming language, and is freely available at: http://bioinf.sce.carleton.ca/ProtDCal/ . ProtDCal introduces local and group-based encoding, which enhances the diversity of the information captured by the computed features. Furthermore, we have shown that adding structure-based descriptors contributes non-redundant additional information to the feature-based characterization of polypeptide systems. This

  9. A computer method for spectral classification

    International Nuclear Information System (INIS)

    Appenzeller, I.; Zekl, H.

    1978-01-01

    The authors describe the start of an attempt to improve the accuracy of spectroscopic parallaxes by evaluating spectroscopic temperature and luminosity criteria, such as those of the MK classification, on spectrograms analyzed automatically by means of a suitable computer program. (Auth.)

  10. Possible Radiation-Induced Damage to the Molecular Structure of Wooden Artifacts Due to Micro-Computed Tomography, Handheld X-Ray Fluorescence, and X-Ray Photoelectron Spectroscopic Techniques

    Directory of Open Access Journals (Sweden)

    Madalena Kozachuk

    2016-05-01

    Full Text Available This study was undertaken to ascertain whether radiation produced by X-ray photoelectron spectroscopy (XPS), micro-computed tomography (μCT) and/or portable handheld X-ray fluorescence (XRF) equipment might damage wood artifacts during analysis. Changes at the molecular level were monitored by Fourier transform infrared (FTIR) analysis. No significant changes in FTIR spectra were observed as a result of μCT or handheld XRF analysis. No substantial changes in the collected FTIR spectra were observed when XPS analytical times on the order of minutes were used. However, XPS analysis collected over tens of hours did produce significant changes in the FTIR spectra.

  11. Spectroscopic and computational studies of cobalamin species with variable lower axial ligation: implications for the mechanism of Co-C bond activation by class I cobalamin-dependent isomerases.

    Science.gov (United States)

    Conrad, Karen S; Jordan, Christopher D; Brown, Kenneth L; Brunold, Thomas C

    2015-04-20

    5'-deoxyadenosylcobalamin (coenzyme B12, AdoCbl) serves as the cofactor for several enzymes that play important roles in fermentation and catabolism. All of these enzymes initiate catalysis by promoting homolytic cleavage of the cofactor's Co-C bond in response to substrate binding to their active sites. Despite considerable research efforts, the role of the lower axial ligand in facilitating Co-C bond homolysis remains incompletely understood. In the present study, we characterized several derivatives of AdoCbl and its one-electron reduced form, Co(II)Cbl, by using electronic absorption and magnetic circular dichroism spectroscopies. To complement our experimental data, we performed computations on these species, as well as additional Co(II)Cbl analogues. The geometries of all species investigated were optimized using a quantum mechanics/molecular mechanics method, and the optimized geometries were used to compute absorption spectra with time-dependent density functional theory. Collectively, our results indicate that a reduction in the basicity of the lower axial ligand causes changes to the cofactor's electronic structure in the Co(II) state that replicate the effects seen upon binding of Co(II)Cbl to Class I isomerases, which replace the lower axial dimethylbenzimidazole ligand of AdoCbl with a protein-derived histidine (His) residue. Such a reduction of the basicity of the His ligand in the enzyme active site may be achieved through proton uptake by the catalytic triad of conserved residues, DXHXGXK, during Co-C bond homolysis.

  12. Mössbauer spectroscopic studies in ferroboron

    Science.gov (United States)

    Yadav, Ravi Kumar; Govindaraj, R.; Amarendra, G.

    2017-05-01

    Mössbauer spectroscopic studies have been carried out in detail on ferroboron in order to understand the local structure and magnetic properties of the system. The evolution of the local structure and magnetic properties of the amorphous and crystalline phases, and their thermal stability, are addressed. The role of bonding between Fe 4s and/or 4p electrons and the valence electrons of boron (2s, 2p) in influencing the stability and magnetic properties of the Fe-B system is elucidated.

  13. Association between aortic valve calcification measured on non-contrast computed tomography and aortic valve stenosis in the general population

    DEFF Research Database (Denmark)

    Paulsen, Niels Herluf; Bønløkke Carlsen, Bjarke; Dahl, Jordi Sanchez

    2016-01-01

    BACKGROUND: Aortic valve calcification (AVC) measured on non-contrast computed tomography (CT) has shown correlation to severity of aortic valve stenosis (AS) and mortality in patients with known AS. The aim of this study was to determine the association of CT verified AVC and subclinical...... AS in a general population undergoing CT. METHODS: CT scans from 566 randomly selected male participants (age 65-74) in the Danish cardiovascular screening study (DANCAVAS) were analyzed for AVC. All participants with a moderately or severely increased AVC score (≥300 arbitrary units (AU)) and a matched control...... ICD leads 16 individuals were excluded from the AVC scoring. Moderate or severe increased AVC was observed in 10.7% (95% CI: 8.4-13.7). Echocardiography was performed in 101 individuals; 32.7% (95% CI: 21.8 to 46.0) with moderate or high AVC score had moderate or severe AS, while none with no or low...

  14. Normal values of regional left ventricular myocardial thickness, mass and distribution-assessed by 320-detector computed tomography angiography in the Copenhagen General Population Study

    DEFF Research Database (Denmark)

    Hindsø, Louise; Fuchs, Andreas; Kühl, Jørgen Tobias

    2017-01-01

    regional normal reference values of the left ventricle. The aim of this study was to derive reference values of regional LV myocardial thickness (LVMT) and mass (LVMM) from a healthy study group of the general population using cardiac computed tomography angiography (CCTA). We wanted to introduce LV...... myocardial distribution (LVMD) as a measure of regional variation of the LVMT. Moreover, we wanted to determine whether these parameters varied between men and women. We studied 568 (181 men; 32%) adults, free of cardiovascular disease and risk factors, who underwent 320-detector CCTA. Mean age was 55 (range...... 40-84) years. Regional LVMT and LVMM were measured, according to the American Heart Association's 17 segment model, using semi-automatic software. Mean LVMT were 6.6 mm for men and 5.4 mm for women (p normal LV was thickest in the basal septum (segment 3; men = 8.3 mm; women = 7.2 mm...

  15. Assessment of health and economic effects by PM2.5 pollution in Beijing: a combined exposure-response and computable general equilibrium analysis.

    Science.gov (United States)

    Wang, Guizhi; Gu, SaiJu; Chen, Jibo; Wu, Xianhua; Yu, Jun

    2016-12-01

    Assessment of the health and economic impacts of PM2.5 pollution is of great importance for urban air pollution prevention and control. In this study, we evaluate the damage of PM2.5 pollution using Beijing as an example. First, we use exposure-response functions to estimate the adverse health effects due to PM2.5 pollution. Then, the corresponding labour loss and excess medical expenditure are computed as two conducting variables. Finally, different from the conventional valuation methods, this paper introduces the two conducting variables into the computable general equilibrium (CGE) model to assess the impacts on sectors and the whole economic system caused by PM2.5 pollution. The results show that, substantial health effects of the residents in Beijing from PM2.5 pollution occurred in 2013, including 20,043 premature deaths and about one million other related medical cases. Correspondingly, using the 2010 social accounting data, Beijing gross domestic product loss due to the health impact of PM2.5 pollution is estimated as 1286.97 (95% CI: 488.58-1936.33) million RMB. This demonstrates that PM2.5 pollution not only has adverse health effects, but also brings huge economic loss.
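
    The exposure-response step can be sketched with the log-linear functional form commonly used in this literature; the population size, baseline incidence, and coefficient below are hypothetical placeholders for illustration, not the study's actual inputs.

```python
import math

def excess_cases(population, baseline_incidence, beta, delta_c):
    """Excess health cases attributable to a PM2.5 increase of delta_c
    (ug/m^3), using the standard log-linear exposure-response function.
    beta is the exposure-response coefficient for the health endpoint."""
    return population * baseline_incidence * (1.0 - math.exp(-beta * delta_c))

# Hypothetical inputs for illustration only (not the paper's values):
# 20 million residents, 0.6% baseline incidence, beta = 0.0004 per ug/m^3,
# and a 60 ug/m^3 excess over the reference concentration.
cases = excess_cases(20_000_000, 0.006, 0.0004, 60.0)
print(round(cases))
```

    In a CGE study such as this one, the resulting case counts are then converted into labour loss and medical expenditure before entering the economic model.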

  16. A Fully Customized Baseline Removal Framework for Spectroscopic Applications.

    Science.gov (United States)

    Giguere, Stephen; Boucher, Thomas; Carey, C J; Mahadevan, Sridhar; Dyar, M Darby

    2017-07-01

    The task of proper baseline or continuum removal is common to nearly all types of spectroscopy. Its goal is to remove any portion of a signal that is irrelevant to features of interest while preserving any predictive information. Despite the importance of baseline removal, median or guessed default parameters are commonly employed, often using commercially available software supplied with instruments. Several published baseline removal algorithms have been shown to be useful for particular spectroscopic applications but their generalizability is ambiguous. The new Custom Baseline Removal (Custom BLR) method presented here generalizes the problem of baseline removal by combining operations from previously proposed methods to synthesize new correction algorithms. It creates novel methods for each technique, application, and training set, discovering new algorithms that maximize the predictive accuracy of the resulting spectroscopic models. In most cases, these learned methods either match or improve on the performance of the best alternative. Examples of these advantages are shown for three different scenarios: quantification of components in near-infrared spectra of corn and laser-induced breakdown spectroscopy data of rocks, and classification/matching of minerals using Raman spectroscopy. Software to implement this optimization is available from the authors. By removing subjectivity from this commonly encountered task, Custom BLR is a significant step toward completely automatic and general baseline removal in spectroscopic and other applications.
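
    As a concrete illustration of what a single baseline-removal operator does, the sketch below implements one generic building block, iterative polynomial clipping, on a synthetic spectrum. Custom BLR itself searches over combinations of such operators per application; that search is not reproduced here, and the peak shapes and baseline are invented.

```python
import numpy as np

def poly_baseline(x, y, degree, n_iter=20):
    """Iterative polynomial baseline estimate: repeatedly fit a polynomial
    and clip the signal down to the fit, so that peaks stop pulling the
    baseline upward. A generic operator in the spirit of the methods the
    paper's framework combines."""
    baseline = y.copy()
    for _ in range(n_iter):
        fit = np.polyval(np.polyfit(x, baseline, degree), x)
        baseline = np.minimum(baseline, fit)  # clip peaks above the fit
    return np.polyval(np.polyfit(x, baseline, degree), x)

# Synthetic spectrum: quadratic baseline plus two narrow Gaussian peaks.
x = np.linspace(0, 1, 500)
true_baseline = 2.0 + 0.5 * x - 0.8 * x**2
peaks = np.exp(-((x - 0.3) / 0.01)**2) + 0.7 * np.exp(-((x - 0.7) / 0.015)**2)
y = true_baseline + peaks

est = poly_baseline(x, y, degree=2)
print(float(np.max(np.abs(est - true_baseline))))  # residual baseline error
```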

  17. Synthesis, characterization and investigation of the spectroscopic properties of novel peripherally 2,3,5-trimethylphenoxy-substituted Cu and Co phthalocyanines, and computational and experimental studies of 4-(2,3,5-trimethylphenoxy)phthalonitrile

    Directory of Open Access Journals (Sweden)

    Nesuhi Akdemir

    2016-11-01

    Full Text Available 4-(2,3,5-Trimethylphenoxy)phthalonitrile (3) was first prepared via an aromatic nucleophilic substitution reaction and characterized by FT-IR, mass spectrometry, and 1H and 13C NMR techniques. The molecular structure of compound (3) was optimized in the ground state using the Density Functional Theory (DFT) B3LYP method with the 6-311G(d,p) basis set. The molecular geometric parameters obtained by the X-ray single-crystal diffraction method and the spectral results were compared with the computed bond lengths and angles, vibrational frequencies, and 1H and 13C NMR chemical shift values of compound (3). Also, Cu(II) and Co(II) phthalocyanines were synthesized by treatment of the dinitrile derivative with anhydrous CuCl2 or CoCl2 under an N2 atmosphere in dry n-pentanol at 140 °C. The new compounds were characterized by elemental analysis, FT-IR and electronic absorption spectroscopy. The UV-Vis spectra of the Cu(II) and Co(II) phthalocyanines were recorded at different concentrations in THF and also in different solvents (DMF, DMSO, DCM, CHCl3 and toluene).

  18. Raman Spectroscopic Studies of Methane Gas Hydrates

    DEFF Research Database (Denmark)

    Hansen, Susanne Brunsgaard; Berg, Rolf W.

    2009-01-01

    A brief review of the Raman spectroscopic studies of methane gas hydrates is given, supported by some new measurements done in our laboratory.

  19. SPECTROSCOPIC AND INTERFEROMETRIC MEASUREMENTS OF NINE K GIANT STARS

    Energy Technology Data Exchange (ETDEWEB)

    Baines, Ellyn K. [Remote Sensing Division, Naval Research Laboratory, 4555 Overlook Avenue SW, Washington, DC 20375 (United States); Döllinger, Michaela P. [Max-Planck-Institut für Astronomie, Königstuhl 17, D-69117 Heidelberg (Germany); Guenther, Eike W.; Hatzes, Artie P. [Thüringer Landessternwarte Tautenburg, Sternwarte 5, D-07778 Tautenburg (Germany); Hrudkovu, Marie [Isaac Newton Group of Telescopes, Apartado de Correos 321, E-387 00 Santa Cruz de la Palma, Canary Islands (Spain); Belle, Gerard T. van, E-mail: ellyn.baines@nrl.navy.mil [Lowell Observatory, Flagstaff, AZ 86001 (United States)

    2016-09-01

    We present spectroscopic and interferometric measurements for a sample of nine K giant stars. These targets are of particular interest because they are slated for stellar oscillation observations. Our improved parameters will directly translate into reduced errors in the final masses for these stars when interferometric radii and asteroseismic densities are combined. Here, we determine each star’s limb-darkened angular diameter, physical radius, luminosity, bolometric flux, effective temperature, surface gravity, metallicity, and mass. When we compare our interferometric and spectroscopic results, we find no systematic offsets in the diameters and the values generally agree within the errors. Our interferometric temperatures for seven of the nine stars are hotter than those determined from spectroscopy with an average difference of about 380 K.

  20. An improved synthesis, spectroscopic (FT-IR, NMR) study and DFT computational analysis (IR, NMR, UV-Vis, MEP diagrams, NBO, NLO, FMO) of the 1,5-methanoazocino[4,3-b]indole core structure

    Science.gov (United States)

    Uludağ, Nesimi; Serdaroğlu, Goncagül

    2018-03-01

    This study examines the synthesis of the azocino[4,3-b]indole structure, which constitutes the tetracyclic framework of uleine, dasycarpidone and tubifolidine, as well as the ABDE substructure of the strychnos alkaloid family. It has been synthesized by Fischer indolization of 2 and through the cyclization of 4 by 2,3-dichloro-5,6-dicyanobenzoquinone (DDQ). 1H and 13C NMR chemical shifts have been predicted with the GIAO approach, and the calculated chemical shifts show very good agreement with the observed shifts. FT-IR spectroscopy is important for the analysis of functional groups of synthesized compounds, and the FT-IR vibrational analysis was also supported by computational IR analysis. The vibrational spectral analysis was performed at the B3LYP level of theory in both the gas and water phases and compared with the observed IR values for the important functional groups. DFT calculations have been conducted to determine the most stable structure of 1,2,3,4,5,6,7-hexahydro-1,5-methanoazocino[4,3-b]indole (5). The frontier molecular orbital analysis, quantum chemical parameters and physicochemical properties have been predicted at the same level of theory in both the gas and water phases, with the 6-31+G** and 6-311++G** basis sets. TD-DFT calculations have been performed to predict the UV-Vis spectrum of the synthesized molecule. Natural bond orbital (NBO) analysis has been performed at the B3LYP level of theory to elucidate intramolecular interactions such as electron delocalization and conjugative interactions. NLO calculations were conducted to obtain the electric dipole moment and polarizability of the title compound.

  1. QUANTIFYING THE BIASES OF SPECTROSCOPICALLY SELECTED GRAVITATIONAL LENSES

    International Nuclear Information System (INIS)

    Arneson, Ryan A.; Brownstein, Joel R.; Bolton, Adam S.

    2012-01-01

    Spectroscopic selection has been the most productive technique for the selection of galaxy-scale strong gravitational lens systems with known redshifts. Statistically significant samples of strong lenses provide a powerful method for measuring the mass-density parameters of the lensing population, but results can only be generalized to the parent population if the lensing selection biases are sufficiently understood. We perform controlled Monte Carlo simulations of spectroscopic lens surveys in order to quantify the bias of lenses relative to parent galaxies in velocity dispersion, mass axis ratio, and mass-density profile. For parameters typical of the SLACS and BELLS surveys, we find (1) no significant mass axis ratio detection bias of lenses relative to parent galaxies; (2) a very small detection bias toward shallow mass-density profiles, which is likely negligible compared to other sources of uncertainty in this parameter; (3) a detection bias toward smaller Einstein radius for systems drawn from parent populations with group- and cluster-scale lensing masses; and (4) a lens-modeling bias toward larger velocity dispersions for systems drawn from parent samples with sub-arcsecond mean Einstein radii. This last finding indicates that the incorporation of velocity-dispersion upper limits of non-lenses is an important ingredient for unbiased analyses of spectroscopically selected lens samples. In general, we find that the completeness of spectroscopic lens surveys in the plane of Einstein radius and mass-density profile power-law index is quite uniform, up to a sharp drop in the region of large Einstein radius and steep mass-density profile, and hence that such surveys are ideally suited to the study of massive field galaxies.
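
    The kind of controlled Monte Carlo experiment described above can be illustrated with a deliberately minimal toy model: draw a parent population of velocity dispersions, "detect" only systems whose Einstein radius clears an angular threshold, and compare the detected and parent means. All numbers below (population parameters, distance ratio, threshold) are illustrative assumptions, not the SLACS/BELLS survey parameters.

```python
import math
import random

random.seed(1)

C_KM_S = 299792.458    # speed of light, km/s
RAD_TO_ARCSEC = 206265.0

def einstein_radius(sigma, dist_ratio=0.5):
    """Singular isothermal sphere lens: theta_E = 4*pi*(sigma/c)^2 * D_ls/D_s,
    returned in arcseconds. The distance ratio is fixed for simplicity."""
    return 4 * math.pi * (sigma / C_KM_S) ** 2 * dist_ratio * RAD_TO_ARCSEC

# Parent population of velocity dispersions (km/s); a lens is "detected"
# only if its Einstein radius exceeds an assumed angular threshold.
parent = [random.gauss(230, 40) for _ in range(100_000)]
threshold = 1.0  # arcsec, illustrative stand-in for survey selection
detected = [s for s in parent if einstein_radius(s) > threshold]

mean = lambda xs: sum(xs) / len(xs)
bias = mean(detected) - mean(parent)
print(f"velocity-dispersion detection bias: {bias:.1f} km/s")
```

    Because the Einstein radius grows with the square of the velocity dispersion, the detected sample is biased toward high-dispersion systems, which is the qualitative effect the paper quantifies.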

  2. Accurate spectroscopic characterization of protonated oxirane: a potential prebiotic species in Titan's atmosphere

    International Nuclear Information System (INIS)

    Puzzarini, Cristina; Ali, Ashraf; Biczysko, Malgorzata; Barone, Vincenzo

    2014-01-01

    An accurate spectroscopic characterization of protonated oxirane has been carried out by means of state-of-the-art computational methods and approaches. The calculated spectroscopic parameters from our recent computational investigation of oxirane together with the corresponding experimental data available were used to assess the accuracy of our predicted rotational and IR spectra of protonated oxirane. We found an accuracy of about 10 cm⁻¹ for vibrational transitions (fundamentals as well as overtones and combination bands) and, in relative terms, of 0.1% for rotational transitions. We are therefore confident that the spectroscopic data provided herein are a valuable support for the detection of protonated oxirane not only in Titan's atmosphere but also in the interstellar medium.

  3. Accurate spectroscopic characterization of protonated oxirane: a potential prebiotic species in Titan's atmosphere

    Energy Technology Data Exchange (ETDEWEB)

    Puzzarini, Cristina [Dipartimento di Chimica " Giacomo Ciamician," Università di Bologna, Via Selmi 2, I-40126 Bologna (Italy); Ali, Ashraf [NASA Goddard Space Flight Center, Greenbelt, MD 20771 (United States); Biczysko, Malgorzata; Barone, Vincenzo, E-mail: cristina.puzzarini@unibo.it [Scuola Normale Superiore, Piazza dei Cavalieri 7, I-56126 Pisa (Italy)

    2014-09-10

    An accurate spectroscopic characterization of protonated oxirane has been carried out by means of state-of-the-art computational methods and approaches. The calculated spectroscopic parameters from our recent computational investigation of oxirane together with the corresponding experimental data available were used to assess the accuracy of our predicted rotational and IR spectra of protonated oxirane. We found an accuracy of about 10 cm⁻¹ for vibrational transitions (fundamentals as well as overtones and combination bands) and, in relative terms, of 0.1% for rotational transitions. We are therefore confident that the spectroscopic data provided herein are a valuable support for the detection of protonated oxirane not only in Titan's atmosphere but also in the interstellar medium.

  4. Spectroscopic observations of AG Dra

    International Nuclear Information System (INIS)

    Chang-Chun, H.

    1982-01-01

    During summer 1981, spectroscopic observations of AG Dra were performed at the Haute-Provence Observatory using the Marly spectrograph, with a dispersion of 80 Å mm⁻¹, at the 120 cm telescope, and the Coudé spectrograph of the 193 cm telescope, with a dispersion of 40 Å mm⁻¹. The present appearance of the spectrum of AG Dra is very different from what it was in 1966, in the sense that only a few intense absorption lines remain, the heavy emission continuum masking the absorption spectrum, whereas about 140 absorption lines were measured on the 1966 plate. Numerous emission lines have been measured; most of those present in 1981 could also be detected in 1966. They are due to H, HeI and HeII. (Auth.)

  5. Association between aortic valve calcification measured on non-contrast computed tomography and aortic valve stenosis in the general population.

    Science.gov (United States)

    Paulsen, Niels Herluf; Carlsen, Bjarke Bønløkke; Dahl, Jordi Sanchez; Carter-Storch, Rasmus; Christensen, Nicolaj Lyhne; Khurrami, Lida; Møller, Jacob Eifer; Lindholt, Jes Sandal; Diederichsen, Axel Cosmus Pyndt

    2016-01-01

    Aortic valve calcification (AVC) measured on non-contrast computed tomography (CT) has shown correlation to severity of aortic valve stenosis (AS) and mortality in patients with known AS. The aim of this study was to determine the association of CT-verified AVC and subclinical AS in a general population undergoing CT. CT scans from 566 randomly selected male participants (age 65-74) in the Danish cardiovascular screening study (DANCAVAS) were analyzed for AVC. All participants with a moderately or severely increased AVC score (≥300 arbitrary units (AU)) and a matched control group were invited for a supplementary echocardiography. AS was graded by indexed aortic valve area (AVAi) on echocardiography as moderate (0.6-0.85 cm²/m²) or severe (<0.6 cm²/m²). Due to ICD leads, 16 individuals were excluded from the AVC scoring. Moderately or severely increased AVC was observed in 10.7% (95% CI: 8.4-13.7). Echocardiography was performed in 101 individuals; 32.7% (95% CI: 21.8 to 46.0) with a moderate or high AVC score had moderate or severe AS, while none with no or low AVC did. A ROC analysis defined an AVC score ≥588 AU to be suggestive of moderate or severe AS (AUC 0.89 ± 0.04, sensitivity 83% and specificity 87%). In the univariate analyses, AVC was the only variable significantly associated with AS. This study indicates an association between CT-verified AVC and subclinical AS. Copyright © 2016 Society of Cardiovascular Computed Tomography. Published by Elsevier Inc. All rights reserved.
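
    The ROC threshold selection reported here (an AVC cutoff suggestive of AS) can be illustrated with a toy version that scans candidate cutoffs and maximizes Youden's J; the scores and labels below are invented, not the study's data.

```python
def best_threshold(scores, labels):
    """Pick the cutoff maximizing Youden's J = sensitivity + specificity - 1,
    scanning every observed score as a candidate threshold (score >= t is
    called positive). A toy stand-in for a full ROC analysis."""
    best = (None, -1.0)
    for t in sorted(set(scores)):
        tp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 1)
        fn = sum(1 for s, y in zip(scores, labels) if s < t and y == 1)
        tn = sum(1 for s, y in zip(scores, labels) if s < t and y == 0)
        fp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 0)
        sens = tp / (tp + fn) if tp + fn else 0.0
        spec = tn / (tn + fp) if tn + fp else 0.0
        j = sens + spec - 1.0
        if j > best[1]:
            best = (t, j)
    return best

# Hypothetical AVC scores (AU) with AS status (1 = moderate/severe AS).
scores = [120, 250, 310, 450, 600, 640, 700, 900, 150, 580]
labels = [0,   0,   0,   0,   1,   1,   1,   1,   0,   1]
cutoff, j = best_threshold(scores, labels)
print(cutoff, round(j, 2))
```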

  6. Dunham spectroscopic constants for the ground and excited states of H2+

    International Nuclear Information System (INIS)

    Murai, Tomokazu

    1975-01-01

    The Dunham spectroscopic constants for 12 of the electronic states of H₂⁺ are computed theoretically from the adiabatic potentials, which were calculated by the author based on the method presented by Bates et al. in the Born-Oppenheimer approximation. (author)
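
    For reference, the Dunham constants are the coefficients Y_kl of the standard expansion of the rovibrational term values, with the leading coefficients approximating the conventional spectroscopic constants:

```latex
E(v, J) = \sum_{k,l} Y_{kl}\,\left(v + \tfrac{1}{2}\right)^{k}\,\bigl[J(J+1)\bigr]^{l},
\qquad
Y_{10} \approx \omega_e,\quad
Y_{01} \approx B_e,\quad
Y_{20} \approx -\omega_e x_e,\quad
Y_{11} \approx -\alpha_e .
```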

  7. 2-d spectroscopic imaging of brain tumours

    International Nuclear Information System (INIS)

    Ferris, N.J.; Brotchie, P.R.

    2002-01-01

    Full text: This poster illustrates the use of two-dimensional spectroscopic imaging (2-D SI) in the characterisation of brain tumours, and the monitoring of subsequent treatment. After conventional contrast-enhanced MR imaging of patients with known or suspected brain tumours, 2-D SI is performed at a single axial level. The level is chosen to include the maximum volume of abnormal enhancement or, in non-enhancing lesions, the most extensive T2 signal abnormality. Two different MR systems have been used (Marconi Edge and GE Signa LX); at each site, a PRESS localisation sequence is employed with TE 128-144 ms. Automated software is used to generate spectral arrays, metabolite maps, and metabolite ratio maps from the spectroscopic data. Colour overlays of the maps onto anatomical images are produced using manufacturer software or the Medex imaging data analysis package. High-grade gliomas showed choline levels higher than those in apparently normal brain, with decreases in NAA and creatine. Some lesions showed spectral abnormality extending into otherwise normal-appearing brain; this was also seen in a case of CNS lymphoma. Low-grade lesions showed choline levels similar to normal brain, but with decreased NAA. Only a small number of metastases have been studied, but to date no metastasis has shown spectral abnormality beyond the margins suggested by conventional imaging. Follow-up studies generally show spectral heterogeneity. Regions with choline levels higher than those in normal-appearing brain are considered to represent recurrent high-grade tumour. Some regions show choline to be the dominant metabolite, but its level is not greater than that seen in normal brain; these regions are considered suspicious for residual/recurrent tumour when the choline/creatine ratio exceeds 2 (lower ratios may represent treatment effect). 2-D SI improves the initial assessment of brain tumours, and has potential for influencing the radiotherapy treatment strategy.
2-D SI also

  8. Enhancing Classification Performance of Functional Near-Infrared Spectroscopy- Brain–Computer Interface Using Adaptive Estimation of General Linear Model Coefficients

    Directory of Open Access Journals (Sweden)

    Nauman Khalid Qureshi

    2017-07-01

    Full Text Available In this paper, a novel methodology for enhanced classification of functional near-infrared spectroscopy (fNIRS) signals utilizable in a two-class [motor imagery (MI) and rest; mental rotation (MR) and rest] brain–computer interface (BCI) is presented. First, fNIRS signals corresponding to MI and MR are acquired from the motor and prefrontal cortex, respectively, and then filtered to remove physiological noise. Then, the signals are modeled using the general linear model, the coefficients of which are adaptively estimated using the least squares technique. Subsequently, multiple feature combinations of the estimated coefficients were used for classification. The best classification accuracies achieved for five subjects, for MI versus rest, are 79.5, 83.7, 82.6, 81.4, and 84.1%, whereas those for MR versus rest are 85.5, 85.2, 87.8, 83.7, and 84.8%, respectively, using a support vector machine. These results are compared with the best classification accuracies obtained using the conventional hemodynamic response. By means of the proposed methodology, the average classification accuracy obtained was significantly higher (p < 0.05). These results serve to demonstrate the feasibility of developing a high-classification-performance fNIRS-BCI.
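
    The GLM estimation step can be illustrated in its simplest batch form: build a design matrix containing a hemodynamic-response-shaped regressor plus a constant, then estimate the coefficients by ordinary least squares. The regressor shape, noise level, and true coefficients below are invented for illustration; the paper's method estimates the coefficients adaptively rather than in one batch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Design matrix: a toy boxcar convolved with a decaying kernel, standing in
# for the hemodynamic response regressor used in fNIRS GLMs, plus a constant.
n = 200
box = np.zeros(n)
box[40:100] = 1.0
kernel = np.exp(-np.arange(30) / 8.0)
reg = np.convolve(box, kernel)[:n]
X = np.column_stack([reg, np.ones(n)])

# Simulated HbO signal: true activation coefficient 0.8, offset 0.1, noise.
y = 0.8 * reg + 0.1 + 0.05 * rng.standard_normal(n)

# Ordinary least-squares estimate of the GLM coefficients.
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta)  # beta[0] estimates the activation amplitude
```

    Features built from such coefficient estimates (per channel, per window) are what feed the SVM classifier in this kind of pipeline.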

  9. Common spatial pattern combined with kernel linear discriminate and generalized radial basis function for motor imagery-based brain computer interface applications

    Science.gov (United States)

    Hekmatmanesh, Amin; Jamaloo, Fatemeh; Wu, Huapeng; Handroos, Heikki; Kilpeläinen, Asko

    2018-04-01

    Brain Computer Interface (BCI) can be a challenge for the development of robotic, prosthetic and human-controlled systems. This work focuses on the implementation of a common spatial pattern (CSP)-based algorithm to detect event-related desynchronization patterns. Building on well-known previous work in this area, features are extracted by the filter bank with common spatial pattern (FBCSP) method, and then weighted by a sensitive learning vector quantization (SLVQ) algorithm. In the current work, application of the radial basis function (RBF) as a mapping kernel of linear discriminant analysis (KLDA) on the weighted features allows the transfer of data into a higher dimension for more discriminated data scattering by the RBF kernel. Afterwards, a support vector machine (SVM) with a generalized radial basis function (GRBF) kernel is employed to improve the efficiency and robustness of the classification. On average, 89.60% accuracy and 74.19% robustness are achieved. The BCI Competition III dataset IVa is used to evaluate the algorithm for detecting right-hand and foot imagery movement patterns. Results show that the combination of KLDA with the SVM-GRBF classifier makes 8.9% and 14.19% improvements in accuracy and robustness, respectively. For all the subjects, it is concluded that mapping the CSP features into a higher dimension by RBF and utilizing GRBF as the kernel of the SVM improve the accuracy and reliability of the proposed method.
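
    The CSP stage of such a pipeline can be sketched with the standard whitening-plus-diagonalization construction (a textbook two-class CSP, not the paper's full FBCSP/SLVQ/KLDA pipeline); the three-channel trial data below are synthetic.

```python
import numpy as np

def csp_filters(X1, X2):
    """Common spatial patterns for two-class data.
    X1, X2: arrays of shape (trials, channels, samples).
    Returns spatial filters W (rows), ordered so the first row maximizes
    variance for class 1 relative to class 2."""
    def avg_cov(X):
        covs = [x @ x.T / np.trace(x @ x.T) for x in X]
        return np.mean(covs, axis=0)
    C1, C2 = avg_cov(X1), avg_cov(X2)
    # Whiten the composite covariance, then diagonalize class 1 in that space.
    d, U = np.linalg.eigh(C1 + C2)
    P = np.diag(d ** -0.5) @ U.T
    w, V = np.linalg.eigh(P @ C1 @ P.T)
    order = np.argsort(w)[::-1]
    return V[:, order].T @ P

# Toy data: class 1 has extra variance on channel 0, class 2 on channel 2.
rng = np.random.default_rng(0)
X1 = rng.standard_normal((30, 3, 200)); X1[:, 0, :] *= 3.0
X2 = rng.standard_normal((30, 3, 200)); X2[:, 2, :] *= 3.0
W = csp_filters(X1, X2)

# Log-variance of the first CSP component separates the two classes.
f1 = [np.log(np.var(W[0] @ x)) for x in X1]
f2 = [np.log(np.var(W[0] @ x)) for x in X2]
print(np.mean(f1) > np.mean(f2))
```

    In the full method, such CSP features are then weighted, kernel-mapped, and classified by the SVM-GRBF stage.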

  10. Macroeconomic impact of a mild influenza pandemic and associated policies in Thailand, South Africa and Uganda: a computable general equilibrium analysis.

    Science.gov (United States)

    Smith, Richard D; Keogh-Brown, Marcus R

    2013-11-01

    Previous research has demonstrated the value of macroeconomic analysis of the impact of influenza pandemics. However, previous modelling applications focus on high-income countries and there is a lack of evidence concerning the potential impact of an influenza pandemic on lower- and middle-income countries. To estimate the macroeconomic impact of pandemic influenza in Thailand, South Africa and Uganda with particular reference to pandemic (H1N1) 2009. A single-country whole-economy computable general equilibrium (CGE) model was set up for each of the three countries in question and used to estimate the economic impact of declines in labour attributable to morbidity, mortality and school closure. Overall GDP impacts were less than 1% of GDP for all countries and scenarios. Uganda's losses were proportionally larger than those of Thailand and South Africa. Labour-intensive sectors suffer the largest losses. The economic cost of unavoidable absence in the event of an influenza pandemic could be proportionally larger for low-income countries. The cost of mild pandemics, such as pandemic (H1N1) 2009, appears to be small, but could increase for more severe pandemics and/or pandemics with greater behavioural change and avoidable absence. © 2013 John Wiley & Sons Ltd.

  11. Transformations for a generalized variable-coefficient Korteweg-de Vries model from blood vessels, Bose-Einstein condensates, rods and positons with symbolic computation

    International Nuclear Information System (INIS)

    Tian Bo; Wei Guangmei; Zhang Chunyi; Shan Wenrui; Gao Yitian

    2006-01-01

    The variable-coefficient Korteweg-de Vries (KdV)-type models, although often hard to study, are of current interest for describing various real situations. Under investigation here is a large class of generalized variable-coefficient KdV models with external-force and perturbed/dissipative terms. Recent examples of this class include those in blood vessels and the circulatory system, arterial dynamics, trapped Bose-Einstein condensates related to matter waves and nonlinear atom optics, a Bose gas of impenetrable bosons with longitudinal confinement, rods of compressible hyperelastic material, and semiconductor heterostructures with positonic phenomena. In this Letter, based on symbolic computation, four transformations are proposed from this class either to the cylindrical or the standard KdV equation when the respective constraint holds. The constraints have nothing to do with the external-force term. Under those transformations, such analytic solutions as those with the Airy, Hermite and Jacobian elliptic functions can be obtained, including the solitonic profiles. The roles played by the perturbed and external-force terms are observed and discussed. Investigations of this class can be performed through the properties of solutions of the cylindrical and standard KdV equations.

  12. Human Computation

    CERN Multimedia

    CERN. Geneva

    2008-01-01

    What if people could play computer games and accomplish work without even realizing it? What if billions of people collaborated to solve important problems for humanity or generate training data for computers? My work aims at a general paradigm for doing exactly that: utilizing human processing power to solve computational problems in a distributed manner. In particular, I focus on harnessing human time and energy for addressing problems that computers cannot yet solve. Although computers have advanced dramatically in many respects over the last 50 years, they still do not possess the basic conceptual intelligence or perceptual capabilities...

  13. Spectroscopic diagnostics of industrial plasmas

    International Nuclear Information System (INIS)

    Joshi, N.K.

    2004-01-01

    Plasmas play a key role in modern industry, being used for everything from the processing of microelectronic circuits to the destruction of toxic waste. Characterization of industrial plasmas, which include both 'thermal plasmas' and non-equilibrium or 'cold plasmas', in an industrial environment offers quite a challenge. Numerous diagnostic techniques have been developed for the measurement of these partially ionized plasma and/or particulate parameters. The 'simple' non-invasive spectroscopic methods for the characterization of industrial plasmas are discussed in detail in this paper. The excitation temperature in thermal (DC/RF) plasma jets has been determined using the atomic Boltzmann technique. The central-axis temperature of thermal plasma jets in a spray torch can be determined using a modified atomic Boltzmann technique without using Abel inversion. The Stark broadening of the Hβ and Ar-I (430 nm) lines has been used to determine the electron number density in thermal plasma jets. In low-pressure non-equilibrium argon plasma, the electron temperature has been measured using the corona model from the ratio of line intensities of atomic and ionic transitions. (author)
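
    The atomic Boltzmann technique mentioned here extracts the excitation temperature from the slope of ln(Iλ/gA) plotted against upper-level energy. The sketch below applies a plain least-squares fit to synthetic line data generated at an assumed temperature; the (g, A, E) values are illustrative placeholders, not actual Ar-I line constants.

```python
import math

K_B_EV = 8.617333e-5  # Boltzmann constant, eV/K

def boltzmann_temperature(lines):
    """Boltzmann-plot temperature from relative line intensities.
    lines: list of (intensity, wavelength_nm, g, A, E_upper_eV).
    Uses ln(I*lambda/(g*A)) = -E_k/(k_B*T) + const, so the slope of a
    linear fit against E_k gives the temperature."""
    xs = [e for *_, e in lines]
    ys = [math.log(i * lam / (g * a)) for i, lam, g, a, e in lines]
    n = len(lines)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return -1.0 / (slope * K_B_EV)

# Synthetic lines generated at an assumed T = 8000 K (illustrative values).
T_true = 8000.0
fake = []
for g, a, e, lam in [(5, 3e7, 13.1, 430.0), (3, 2e7, 13.3, 426.0),
                     (7, 1e7, 13.5, 420.0), (5, 4e7, 14.0, 415.0)]:
    i = g * a / lam * math.exp(-e / (K_B_EV * T_true))
    fake.append((i, lam, g, a, e))
print(round(boltzmann_temperature(fake)))
```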

  14. Spectroscopic studies of copper enzymes

    International Nuclear Information System (INIS)

    Dooley, D.M.; Moog, R.; Zumft, W.; Koenig, S.H.; Scott, R.A.; Cote, C.E.; McGuirl, M.

    1986-01-01

    Several spectroscopic methods, including absorption, circular dichroism (CD), magnetic CD (MCD), X-ray absorption, resonance Raman, EPR, NMR, and quasi-elastic light-scattering spectroscopy, have been used to probe the structures of copper-containing amine oxidases, nitrite reductase, and nitrous oxide reductase. The basic goals are to determine the copper site structure and electronic properties, and to generate structure-reactivity correlations. Collectively, the results on the amine oxidases permit a detailed model for the Cu(II) sites in these enzymes to be constructed that, in turn, rationalizes the ligand-binding chemistry. Resonance Raman spectra of the phenylhydrazine and 2,4-dinitrophenylhydrazine derivatives of bovine plasma amine oxidase and models for its organic cofactor, e.g. pyridoxal and methoxatin, are most consistent with methoxatin being the intrinsic cofactor. The structure of the Cu(I) forms of the amine oxidases has been investigated by X-ray absorption spectroscopy (XAS); the copper coordination geometry is significantly different in the oxidized and reduced forms. Some anomalous properties of the amine oxidases in solution are explicable in terms of their reversible aggregation, which the authors have characterized via light scattering. Nitrite and nitrous oxide reductases display several novel spectral properties. The data suggest that new types of copper sites are present.

  15. 2-Ethynylpyridine dimers: IR spectroscopic and computational study

    Science.gov (United States)

    Bakarić, Danijela; Spanget-Larsen, Jens

    2018-04-01

    2-Ethynylpyridine (2-EP) presents a multifunctional system capable of participating in hydrogen-bonded complexes, with hydrogen-bond-donating (≡C–H, aryl-H) and hydrogen-bond-accepting functions (the N atom, and the C≡C and pyridine π-systems). In this work, IR spectroscopy and theoretical calculations are used to study possible 2-EP dimer structures as well as their distribution in an inert solvent such as tetrachloroethene. Experimentally, the ≡C–H stretching vibration of the 2-EP monomer absorbs close to 3300 cm⁻¹, whereas a broad band with a maximum around 3215 cm⁻¹ emerges as the concentration rises, indicating the formation of hydrogen-bonded complexes involving the ≡C–H moiety. The C≡C stretching vibration of monomeric 2-EP close to 2120 cm⁻¹ is, using derivative spectroscopy, resolved from the signals of the dimer complexes with a maximum around 2112 cm⁻¹. Quantum chemical calculations using the B3LYP + D3 model with counterpoise correction predict that the two most stable dimers are of the π-stacked variety, closely followed by dimers with intermolecular ≡C–H⋯N hydrogen bonding; the predicted red shifts of the ≡C–H stretching wavenumbers due to hydrogen bonding are in the range 54-120 cm⁻¹. No species with obvious hydrogen bonding involving the C≡C or pyridine π-systems as acceptors are predicted. The dimerization constant at 25 °C is estimated to be K₂ = 0.13 ± 0.01 mol⁻¹ dm³.

  16. Computer programming and computer systems

    CERN Document Server

    Hassitt, Anthony

    1966-01-01

    Computer Programming and Computer Systems imparts a "reading knowledge" of computer systems. This book describes the aspects of machine-language programming, monitor systems, computer hardware, and advanced programming that every thorough programmer should be acquainted with. This text discusses the automatic electronic digital computers, symbolic language, Reverse Polish Notation, and the translation of Fortran into assembly language. The routine for reading blocked tapes, dimension statements in subroutines, a general-purpose input routine, and efficient use of memory are also elaborated. This publication is inten

  17. Quantum Computing

    Indian Academy of Sciences (India)

    Quantum Computing - Building Blocks of a Quantum Computer. C S Vijay and Vishal Gupta. General Article, Resonance – Journal of Science Education, Volume 5, Issue 9, September 2000, pp. 69-81.

  18. Cielo Computational Environment Usage Model With Mappings to ACE Requirements for the General Availability User Environment Capabilities Release Version 1.1

    Energy Technology Data Exchange (ETDEWEB)

    Vigil, Benny Manuel [Los Alamos National Laboratory]; Ballance, Robert [SNL]; Haskell, Karen [SNL]

    2012-08-09

    Cielo is a massively parallel supercomputer funded by the DOE/NNSA Advanced Simulation and Computing (ASC) program, and operated by the Alliance for Computing at Extreme Scale (ACES), a partnership between Los Alamos National Laboratory (LANL) and Sandia National Laboratories (SNL). The primary Cielo compute platform is physically located at Los Alamos National Laboratory. This Cielo Computational Environment Usage Model documents the capabilities and the environment to be provided for the Q1 FY12 Level 2 Cielo Capability Computing (CCC) Platform Production Readiness Milestone. This document describes specific capabilities, tools, and procedures to support both local and remote users. The model is focused on the needs of the ASC user working in the secure computing environments at Lawrence Livermore National Laboratory (LLNL), Los Alamos National Laboratory, or Sandia National Laboratories, but also addresses the needs of users working in the unclassified environment. The Cielo Computational Environment Usage Model maps the provided capabilities to the tri-Lab ASC Computing Environment (ACE) Version 8.0 requirements. The ACE requirements reflect the high performance computing requirements for the Production Readiness Milestone user environment capabilities of the ASC community. A description of ACE requirements met, and those requirements that are not met, are included in each section of this document. The Cielo Computing Environment, along with the ACE mappings, has been issued and reviewed throughout the tri-Lab community.

  19. Generalidades de un Sistema de Monitorización Informático para Unidades de Cuidados Intensivos (Generalities of a Computer Monitoring System for Intensive Care Units)

    Directory of Open Access Journals (Sweden)

    María del Carmen Tellería Prieto

    2012-02-01

    Full Text Available The use of information and communication technologies in the health sector becomes more important every day. This paper presents the general requirements from which a computer system for the monitoring of critically ill patients is being developed for the different critical-care services, although it is initially aimed at intensive care units. The work is part of a branch project run by the National Direction of Medical Emergencies of the Cuban Ministry of Public Health, with the participation of emergency and intensive-care physicians from across the country. The system is being implemented by health informatics staff in Pinar del Río, in compliance with the regulations established by the National Direction of Informatics and the Softel company. The monitoring system will facilitate the capture, management, processing and storage of the information generated for each patient, integrating all the information handled in the service. Emphasis is placed on medical and nursing progress notes, the prescription of treatments, and the clinical evaluation of patients, which will allow more effective therapeutic decision making. Among the general requirements from which the monitoring system will be developed, it is specified that the system be modular, simple and intuitive to use, and implemented with free software.

  20. Assessment of coronary calcification using calibrated mass score with two different multidetector computed tomography scanners in the Copenhagen General Population Study

    Energy Technology Data Exchange (ETDEWEB)

    Fuchs, Andreas [Department of Cardiology, The Heart Centre, Rigshospitalet, University of Copenhagen, Copenhagen (Denmark); Groen, Jaap M. [Department of Radiology, University of Groningen, University Medical Center Groningen (Netherlands); Department of Medical Physics, OLVG, Amsterdam (Netherlands); Arnold, Ben A. [Image Analysis, 1380 Burkesville Road, Columbia, KY (United States); Nikolovski, Sasho [Department of Radiology, University of Groningen, University Medical Center Groningen (Netherlands); Knudsen, Andreas D., E-mail: dehlbaek@gmail.com [Department of Cardiology, The Heart Centre, Rigshospitalet, University of Copenhagen, Copenhagen (Denmark); Kühl, J. Tobias [Department of Cardiology, The Heart Centre, Rigshospitalet, University of Copenhagen, Copenhagen (Denmark); Nordestgaard, Børge G. [Department of Clinical Biochemistry and the Copenhagen General Population Study, Herlev Hospital, University of Copenhagen (Denmark); Greuter, Marcel J.W. [Department of Radiology, University of Groningen, University Medical Center Groningen (Netherlands); Kofoed, Klaus F. [Department of Cardiology, The Heart Centre, Rigshospitalet, University of Copenhagen, Copenhagen (Denmark); Department of Radiology, The Diagnostic Centre, Rigshospitalet, University of Copenhagen, Copenhagen (Denmark)

    2017-03-15

    Objective: Population studies have shown coronary calcium score to improve risk stratification in subjects suspected of cardiovascular disease. The aim of this work was to assess the validity of multidetector computed tomography (MDCT) for measurement of calibrated mass scores (MS) in a phantom study, and to investigate inter-scanner variability for MS and Agatston score (AS) recorded in a population study on two different high-end MDCT scanners. Materials and methods: A calcium phantom was scanned by a first (A) and a second (B) generation 320-MDCT. MS was measured for each calcium deposit from repeated measurements in each scanner and compared to the known physical phantom mass. Random samples of human subjects from the Copenhagen General Population Study were scanned with scanner A (N = 254) and scanner B (N = 253), and the MS and AS distributions of these two groups were compared. Results: The mean total MS of the phantom was 32.9 ± 0.8 mg and 33.1 ± 0.9 mg (p = 0.43) assessed by scanners A and B respectively; the physical calcium mass was 34.0 mg. Correlation between measured MS and physical calcium mass was R² = 0.99 in both scanners. In the population study the median total MS was 16.8 mg (interquartile range (IQR): 3.5–81.1) and 15.8 mg (IQR: 3.8–63.4) in scanners A and B (p = 0.88). The corresponding median total AS values were 92 (IQR: 23–471) and 89 (IQR: 40–384) (p = 0.64). Conclusion: Calibrated calcium mass score may be assessed with very high accuracy in a calcium phantom by different generations of 320-MDCT scanners. In population studies, it appears acceptable to pool calcium scores acquired on different 320-MDCT scanners.
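
The two scores compared above follow standard published definitions: the Agatston score weights each lesion's area by a step function of its peak attenuation, while the calibrated mass score multiplies lesion volume and mean attenuation by a phantom-derived calibration factor. A minimal sketch of both (illustrative function names and values, not code from the study):

```python
def agatston_weight(peak_hu):
    """Standard Agatston density weight from a lesion's peak attenuation (HU)."""
    if peak_hu >= 400:
        return 4
    if peak_hu >= 300:
        return 3
    if peak_hu >= 200:
        return 2
    if peak_hu >= 130:
        return 1
    return 0  # below the 130 HU calcium detection threshold

def agatston_score(lesions):
    """Sum of lesion area (mm^2) times density weight over all slices.

    lesions: iterable of (area_mm2, peak_hu) pairs.
    """
    return sum(area_mm2 * agatston_weight(peak_hu) for area_mm2, peak_hu in lesions)

def mass_score(lesions, calibration):
    """Calibrated mass score: calibration factor x lesion volume x mean attenuation.

    lesions:     iterable of (volume_mm3, mean_hu) pairs
    calibration: phantom-derived factor in mg/(HU*mm^3) -- the phantom scans in
                 the abstract exist precisely to determine this factor.
    """
    return sum(calibration * vol_mm3 * mean_hu for vol_mm3, mean_hu in lesions)
```
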

  1. Top-down/bottom-up description of electricity sector for Switzerland using the GEM-E3 computable general equilibrium model

    International Nuclear Information System (INIS)

    Krakowski, R. A.

    2006-06-01

    Participation of the Paul Scherrer Institute (PSI) in the advancement and extension of the multi-region, Computable General Equilibrium (CGE) model GEM-E3 (CES/KUL, 2002) focused primarily on two top-level facets: a) extension of the model database and model calibration, particularly as related to the second component of this study, which is b) advancement of the dynamics of innovation and investment, primarily through the incorporation of Exogenous Technical Learning (ETL) into the Bottom-Up (BU, technology-based) part of the dynamic upgrade; this latter activity also included the completion of the dynamic coupling of the BU description of the electricity sector with the 'Top-Down' (TD, econometric) description of the economy inherent to the GEM-E3 CGE model. The results of this two-component study are described in two parts that have been combined in this single summary report: Part I describes the methodology and gives illustrative results from the BUTD integration, as well as describing the approach to and giving preliminary results from incorporating an ETL description into the BU component of the overall model; Part II reports on the calibration component of the task in terms of: a) formulating a BU technology database for Switzerland based on previous work; b) incorporating that database into the GEM-E3 model; and c) calibrating the BU database with the TD database embodied in the (Swiss) Social Accounting Matrix (SAM). The BUTD coupling along with the ETL incorporation described in Part I represent the major effort embodied in this investigation, but this effort could not be completed without the calibration preamble reported herein as Part II. A brief summary of the scope of each of these key study components is given. (author)

  2. Top-down/bottom-up description of electricity sector for Switzerland using the GEM-E3 computable general equilibrium model

    Energy Technology Data Exchange (ETDEWEB)

    Krakowski, R. A

    2006-06-15

    Participation of the Paul Scherrer Institute (PSI) in the advancement and extension of the multi-region, Computable General Equilibrium (CGE) model GEM-E3 (CES/KUL, 2002) focused primarily on two top-level facets: a) extension of the model database and model calibration, particularly as related to the second component of this study, which is b) advancement of the dynamics of innovation and investment, primarily through the incorporation of Exogenous Technical Learning (ETL) into the Bottom-Up (BU, technology-based) part of the dynamic upgrade; this latter activity also included the completion of the dynamic coupling of the BU description of the electricity sector with the 'Top-Down' (TD, econometric) description of the economy inherent to the GEM-E3 CGE model. The results of this two-component study are described in two parts that have been combined in this single summary report: Part I describes the methodology and gives illustrative results from the BUTD integration, as well as describing the approach to and giving preliminary results from incorporating an ETL description into the BU component of the overall model; Part II reports on the calibration component of the task in terms of: a) formulating a BU technology database for Switzerland based on previous work; b) incorporating that database into the GEM-E3 model; and c) calibrating the BU database with the TD database embodied in the (Swiss) Social Accounting Matrix (SAM). The BUTD coupling along with the ETL incorporation described in Part I represent the major effort embodied in this investigation, but this effort could not be completed without the calibration preamble reported herein as Part II. A brief summary of the scope of each of these key study components is given. (author)

  3. GPScheDVS: A New Paradigm of the Autonomous CPU Speed Control for Commodity-OS-based General-Purpose Mobile Computers with a DVS-friendly Task Scheduling

    OpenAIRE

    Kim, Sookyoung

    2008-01-01

    This dissertation studies the problem of increasing battery life-time and reducing CPU heat dissipation without degrading system performance in commodity-OS-based general-purpose (GP) mobile computers using the dynamic voltage scaling (DVS) function of modern CPUs. The dissertation especially focuses on the impact of task scheduling on the effectiveness of DVS in achieving this goal. The task scheduling mechanism used in most contemporary general-purpose operating systems (GPOS) prioritizes t...
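
The dissertation text above is truncated, but interval-based DVS policies of the kind it discusses are commonly described as scaling the CPU frequency to recent utilization plus a safety margin, clamped to the CPU's discrete frequency steps. A generic sketch under that assumption (all names and values hypothetical, not taken from GPScheDVS):

```python
def next_frequency(utilization, freq_steps, headroom=1.1):
    """Pick the lowest available CPU frequency that still covers the recent load.

    utilization: fraction of the last scheduling interval the CPU was busy (0.0-1.0)
    freq_steps:  sorted ascending list of the CPU's available frequencies (MHz)
    headroom:    safety margin so short bursts do not immediately degrade performance
    """
    target = utilization * headroom * freq_steps[-1]
    for f in freq_steps:
        if f >= target:          # lowest step that meets the demand
            return f
    return freq_steps[-1]        # demand exceeds all steps: run at maximum
```

Lowering the clock when utilization is low is what saves energy, since dynamic power scales roughly with voltage squared times frequency; the headroom trades some of that saving for responsiveness.
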

  4. High-Resolution Photoionization, Photoelectron and Photodissociation Studies. Determination of Accurate Energetic and Spectroscopic Database for Combustion Radicals and Molecules

    Energy Technology Data Exchange (ETDEWEB)

    Ng, Cheuk-Yiu [Univ. of California, Davis, CA (United States)

    2016-04-25

    The main goal of this research program was to obtain accurate thermochemical and spectroscopic data, such as ionization energies (IEs), 0 K bond dissociation energies, 0 K heats of formation, and spectroscopic constants for radicals and molecules and their ions of relevance to combustion chemistry. Two unique, generally applicable vacuum ultraviolet (VUV) laser photoion-photoelectron apparatuses have been developed in our group, which have been used for high-resolution photoionization, photoelectron, and photodissociation studies of many small molecules of combustion relevance.

  5. Optimization of energy usage in textile finishing operations. Part I. The simulation of batch dyehouse activities with a general purpose computer model

    Energy Technology Data Exchange (ETDEWEB)

    Beard, J.N. Jr.; Rice, W.T. Jr.

    1980-01-01

    A project to develop a mathematical model capable of simulating the activities in a typical batch dyeing process in the textile industry is described. The model could be used to study the effects of changes in dye-house operations, and to determine effective guidelines for optimal dyehouse performance. The computer model is of a hypothetical dyehouse. The appendices contain a listing of the computer program, sample computer inputs and outputs, and instructions for using the model. (MCW)

  6. General complex rotated finite-element method for predissociation studies of diatomic molecules: An application on the (1-6)1Σg+ states of H2

    International Nuclear Information System (INIS)

    Andersson, Stefan; Elander, Nils

    2004-01-01

    An exterior complex rotated finite element method was applied to the diabatic multichannel Schroedinger equation in order to compute and compare rovibronic energy structures, predissociation widths, and nonradiative lifetimes for levels in the (1-4), (1-5), and (1-6) 1Σg+ manifolds of H2. The rotationless (v, J=0) levels are found to be more or less shifted relative to each other when comparing the results for these three manifolds. The existence of homogeneous spectroscopic perturbations was investigated by studying the rovibronic (v, J=0-10) sequences for energies and level widths. Known experimental and theoretical radiative lifetimes were used to estimate which of the present levels might be spectroscopically measurable. The computed level widths for the EF, GK, and H electronic levels were generally found to be about two orders of magnitude larger than previously reported [P. Quadrelli, K. Dressler, and L. Wolniewicz, J. Chem. Phys. 93, 4958 (1990)], indicating a somewhat stronger predissociation.
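
In exterior-complex-rotation calculations of this kind, a predissociating level appears as a complex eigenvalue E = E_r - iΓ/2, so the level width Γ and the nonradiative lifetime τ = ħ/Γ quoted in the abstract follow directly from the eigenvalue. A small sketch of that conversion (the eigenvalue itself would come from the finite-element diagonalization, not shown):

```python
HBAR_EV_S = 6.582119569e-16  # reduced Planck constant in eV*s

def width_and_lifetime(complex_energy_ev):
    """From a complex-rotated eigenvalue E = E_r - i*Gamma/2 (in eV),
    return the predissociation width Gamma (eV) and the lifetime tau = hbar/Gamma (s)."""
    gamma = -2.0 * complex_energy_ev.imag  # imaginary part is -Gamma/2
    return gamma, HBAR_EV_S / gamma
```
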

  7. Atomic Data for Fusion: Volume 6, Spectroscopic data for titanium, chromium, and nickel

    International Nuclear Information System (INIS)

    Wiese, W.L.; Musgrove, A.

    1989-09-01

    Comprehensive spectroscopic data tables are presented for all ionization stages of chromium. Tables of ionization potentials, spectral lines, energy levels, and transition probabilities are presented. These tables contain data which have been excerpted from general critical compilations prepared under the sponsorship of the National Standard Reference Data System (NSRDS)

  8. Atomic data for controlled fusion research. Volume IV. Spectroscopic data for iron

    Energy Technology Data Exchange (ETDEWEB)

    Wiese, W.L. (ed.)

    1985-02-01

    Comprehensive spectroscopic data tables are presented for all ions of Fe. Tables of ionization potentials, wave lengths of spectral lines, atomic energy levels, and transition probabilities are given which were excerpted from general critical compilations. All utilized compilations are less than five years old and include data on electric dipole as well as magnetic dipole transitions.

  9. Atomic data for controlled fusion research. Volume IV. Spectroscopic data for iron

    International Nuclear Information System (INIS)

    Wiese, W.L.

    1985-02-01

    Comprehensive spectroscopic data tables are presented for all ions of Fe. Tables of ionization potentials, wave lengths of spectral lines, atomic energy levels, and transition probabilities are given which were excerpted from general critical compilations. All utilized compilations are less than five years old and include data on electric dipole as well as magnetic dipole transitions

  10. Atomic Data for Fusion: Volume 6, Spectroscopic data for titanium, chromium, and nickel

    Energy Technology Data Exchange (ETDEWEB)

    Wiese, W.L.; Musgrove, A. (eds.) (National Inst. of Standards and Technology, Gaithersburg, MD (USA))

    1989-09-01

    Comprehensive spectroscopic data tables are presented for all ionization stages of chromium. Tables of ionization potentials, spectral lines, energy levels, and transition probabilities are presented. These tables contain data which have been excerpted from general critical compilations prepared under the sponsorship of the National Standard Reference Data System (NSRDS).

  11. Computability and unsolvability

    CERN Document Server

    Davis, Martin

    1985-01-01

    ""A clearly written, well-presented survey of an intriguing subject."" - Scientific American. Classic text considers general theory of computability, computable functions, operations on computable functions, Turing machines self-applied, unsolvable decision problems, applications of general theory, mathematical logic, Kleene hierarchy, computable functionals, classification of unsolvable decision problems and more.

  12. An Investigation of the Use of Computer-Aided-Instruction in Teaching Students How to Solve Selected Multistep General Chemistry Problems.

    Science.gov (United States)

    Grandey, Robert C.

    The development of computer-assisted instructional lessons on the following three topics is discussed: 1) the mole concept and chemical formulas, 2) concentration of solutions and quantities from chemical equations, and 3) balancing equations for oxidation-reduction reactions. Emphasis was placed on developing computer routines which interpret…

  13. Generalized Free-Surface Effect and Random Vibration Theory: a new tool for computing moment magnitudes of small earthquakes using borehole data

    Science.gov (United States)

    Malagnini, Luca; Dreger, Douglas S.

    2016-07-01

    Although optimal, computing the moment tensor solution is not always a viable option for the calculation of the size of an earthquake, especially for small events (say, below Mw 2.0). Here we show an alternative approach to the calculation of the moment-rate spectra of small earthquakes, and thus of their scalar moments, that uses a network-based calibration of crustal wave propagation. The method works best when applied to a relatively small crustal volume containing both the seismic sources and the recording sites. In this study we present the calibration of the crustal volume monitored by the High-Resolution Seismic Network (HRSN), along the San Andreas Fault (SAF) at Parkfield. After the quantification of the attenuation parameters within the crustal volume under investigation, we proceed to the spectral correction of the observed Fourier amplitude spectra for the 100 largest events in our data set. Multiple estimates of seismic moment for all 1811 events are obtained by calculating the ratio of rms-averaged spectral quantities based on the peak values of the ground velocity in the time domain, as observed in narrowband-filtered time-series. The mathematical operations allowing the described spectral ratios are obtained from Random Vibration Theory (RVT). Due to the optimal signal-to-noise conditions of the HRSN, our network-based calibration allows the accurate calculation of seismic moments down to Mw < 0. However, because the HRSN is equipped only with borehole instruments, we define a frequency-dependent Generalized Free-Surface Effect (GFSE), to be used instead of the usual free-surface constant F = 2. Our spectral corrections at Parkfield need a different GFSE for each side of the SAF, which can be quantified by means of the analysis of synthetic seismograms. The importance of the GFSE of borehole instruments increases as earthquake size decreases, because for smaller earthquakes the bandwidth available
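
Once a scalar seismic moment M0 has been estimated from the corrected spectra, the moment magnitude follows from the standard IASPEI relation Mw = (2/3)(log10 M0 - 9.1), with M0 in N·m. A one-function sketch of that final step:

```python
import math

def moment_magnitude(m0_newton_meters):
    """IASPEI standard moment magnitude from the scalar seismic moment (N*m).

    Mw = (2/3) * (log10(M0) - 9.1); e.g. M0 = 10**12.1 N*m gives Mw = 2.0.
    """
    return (2.0 / 3.0) * (math.log10(m0_newton_meters) - 9.1)
```

The logarithmic form is why small events are hard: a factor-of-ten error in M0 shifts Mw by only 2/3 of a unit, but M0 itself spans many decades.
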

  14. A study of electricity planning in Thailand: An integrated top-down and bottom-up Computable General Equilibrium (CGE) modeling analysis

    Science.gov (United States)

    Srisamran, Supree

    This dissertation examines the potential impacts of three electricity policies on the economy of Thailand in terms of macroeconomic performance, income distribution, and unemployment rate. The three considered policies feature responses to potential disruption of imported natural gas used in electricity generation, alternative combinations (portfolios) of fuel feedstock for electricity generation, and increases in investment and local electricity consumption. The evaluation employs a Computable General Equilibrium (CGE) approach, extended with an electricity generation and transmission module, to simulate the counterfactual scenario for each policy. The dissertation consists of five chapters. Chapter one begins with a discussion of Thailand's economic condition, followed by a discussion of the current state of electricity generation and consumption and current issues in power generation. The security of imported natural gas in power generation is then briefly discussed: disruptions of imported natural gas have repeatedly caused problems for the country, yet their economic consequences have not been evaluated. The current portfolio of power generation and the concerns it raises are then presented; the portfolio is heavily reliant upon natural gas and so needs to be diversified. Lastly, the anticipated increase in investment and electricity consumption as a consequence of regional integration is discussed. Chapter two introduces the CGE model, its background and limitations. Chapter three reviews relevant literature on the CGE method and its application to electricity policies. In addition, the submodule characterizing the network of electricity generation and distribution and the method of its integration with the CGE model are explained. Chapter four presents the findings of the policy simulations. The first simulation illustrates the consequences of responses to disruptions in natural gas imports

  15. A Speckle survey of Southern Hipparcos Visual Doubles and Geneva-Copenhagen Spectroscopic Binaries

    Science.gov (United States)

    Mendez, R. A.; Tokovinin, A.; Horch, E.

    2018-01-01

    We present a speckle survey of Hipparcos visual doubles and spectroscopic binary stars identified by the Geneva-Copenhagen spectroscopic survey with the SOAR 4m telescope + HRCam. These systems represent our best chance to take advantage of Gaia parallaxes for the purpose of stellar mass determinations. Many of these systems already have mass fractions (although generally no spectroscopic orbit - an astrometric orbit will determine individual masses), metallicity information, and Hipparcos distances. They will be used to improve our knowledge of the mass-luminosity relation, particularly for lower-metallicity stars. Our survey will create the first all-sky, volume-limited, speckle archive for the two primary samples, complementing a similar effort that has recently been completed at the WIYN 3.5-m telescope in the Northern Hemisphere. This extension to the Southern Hemisphere will fill out the picture for a wider metallicity range.

  16. Spectroscopic databases - A tool for structure elucidation

    Energy Technology Data Exchange (ETDEWEB)

    Luksch, P [Fachinformationszentrum Karlsruhe, Gesellschaft fuer Wissenschaftlich-Technische Information mbH, Eggenstein-Leopoldshafen (Germany)

    1990-05-01

    Spectroscopic databases have developed into useful tools in the process of structure elucidation. Besides conventional library searches, new intelligent programs have been added that are able to predict structural features from measured spectra or to simulate spectra for a given structure. The example of the C13NMR/IR database developed at BASF and available on STN is used to illustrate the present capabilities of online databases. New developments in the field of spectrum simulation and methods for the prediction of complete structures from spectroscopic information are reviewed. (author). 10 refs, 5 figs.

  17. Genome-wide study of percent emphysema on computed tomography in the general population. The Multi-Ethnic Study of Atherosclerosis Lung/SNP Health Association Resource Study

    NARCIS (Netherlands)

    Manichaikul, Ani; Hoffman, Eric A.; Smolonska, Joanna; Gao, Wei; Cho, Michael H.; Baumhauer, Heather; Budoff, Matthew; Austin, John H. M.; Washko, George R.; Carr, J. Jeffrey; Kaufman, Joel D.; Pottinger, Tess; Powell, Charles A.; Wijmenga, Cisca; Zanen, Pieter; Groen, Harry J.M.; Postma, Dirkje S.; Wanner, Adam; Rouhani, Farshid N.; Brantly, Mark L.; Powell, Rhea; Smith, Benjamin M.; Rabinowitz, Dan; Raffel, Leslie J.; Stukovsky, Karen D. Hinckley; Crapo, James D.; Beaty, Terri H.; Hokanson, John E.; Silverman, Edwin K.; Dupuis, Josee; O'Connor, George T.; Boezen, Hendrika; Rich, Stephen S.; Barr, R. Graham

    2014-01-01

    Rationale: Pulmonary emphysema overlaps partially with spirometrically defined chronic obstructive pulmonary disease and is heritable, with moderately high familial clustering. Objectives: To complete a genome-wide association study (GWAS) for the percentage of emphysema-like lung on computed

  18. Spectroscopic and imaging diagnostics of pulsed laser deposition laser plasmas

    International Nuclear Information System (INIS)

    Thareja, Raj K.

    2002-01-01

    An overview of laser spectroscopic techniques used in the diagnostics of laser ablated plumes used for thin film deposition is given. An emerging laser spectroscopic imaging technique for the laser ablation material processing is discussed. (author)

  19. GPGPU COMPUTING

    Directory of Open Access Journals (Sweden)

    BOGDAN OANCEA

    2012-05-01

    Full Text Available Since the first idea of using GPUs for general-purpose computing, things have evolved over the years and now there are several approaches to GPU programming. GPU computing practically began with the introduction of CUDA (Compute Unified Device Architecture) by NVIDIA and Stream by AMD. These are APIs designed by the GPU vendors to be used together with the hardware that they provide. A new emerging standard, OpenCL (Open Computing Language), tries to unify different GPU general computing API implementations and provides a framework for writing programs executed across heterogeneous platforms consisting of both CPUs and GPUs. OpenCL provides parallel computing using task-based and data-based parallelism. In this paper we will focus on the CUDA parallel computing architecture and programming model introduced by NVIDIA. We will present the benefits of the CUDA programming model. We will also compare the two main approaches, CUDA and AMD APP (Stream), and the new framework, OpenCL, that tries to unify the GPGPU computing models.
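
The data-based parallelism mentioned above rests on CUDA's one-thread-per-element index mapping, i = blockIdx.x * blockDim.x + threadIdx.x. The following plain-Python emulation of the classic SAXPY kernel shows that mapping and its out-of-range guard; it is a sequential illustration of the model, not GPU code:

```python
def saxpy_emulated(a, x, y, block_dim=256):
    """Emulate CUDA's one-thread-per-element SAXPY: y[i] = a * x[i] + y[i].

    Each (block, thread) pair computes a global index
    i = block_idx * block_dim + thread_idx, exactly as a CUDA kernel would;
    threads whose index falls past the end of the array do nothing.
    """
    n = len(x)
    out = list(y)
    grid_dim = (n + block_dim - 1) // block_dim  # number of blocks, rounded up
    for block_idx in range(grid_dim):            # on a GPU, blocks run in parallel
        for thread_idx in range(block_dim):      # ... as do threads within a block
            i = block_idx * block_dim + thread_idx
            if i < n:                            # guard: grid may overshoot n
                out[i] = a * x[i] + out[i]
    return out
```

Because every element is independent, the two loops can be executed in any order or fully in parallel, which is precisely what makes the pattern GPU-friendly.
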

  20. Generalized Superconductivity. Generalized Levitation

    International Nuclear Information System (INIS)

    Ciobanu, B.; Agop, M.

    2004-01-01

    In recent papers, gravitational superconductivity has been described. We introduce the concept of generalized superconductivity, observing that any nongeodesic motion and, in particular, the motion in an electromagnetic field, can be transformed into a geodesic motion by a suitable choice of the connection. In the present paper, the gravitoelectromagnetic London equations have been obtained from the generalized Helmholtz vortex theorem using the generalized local equivalence principle. In this context, the gravitoelectromagnetic Meissner effect and, implicitly, gravitoelectromagnetic levitation are given. (authors)

  1. Computational Literacy

    DEFF Research Database (Denmark)

    Chongtay, Rocio; Robering, Klaus

    2016-01-01

    In recent years, there has been a growing interest in and recognition of the importance of Computational Literacy, a skill generally considered to be necessary for success in the 21st century. While much research has concentrated on requirements, tools, and teaching methodologies for the acquisition of Computational Literacy at basic educational levels, focus on higher levels of education has been much less prominent. The present paper considers the case of courses for higher education programs within the Humanities. A model is proposed which conceives of Computational Literacy as a layered...

  2. Spectroscopic, thermal and biological studies of coordination

    Indian Academy of Sciences (India)

    Spectroscopic, thermal and biological studies of coordination compounds of the sulfasalazine drug: Mn(II), Hg(II), Cr(III), ZrO(II), VO(II) and Y(III) transition metal ... The thermal decomposition of the complexes, as well as the thermodynamic activation parameters, were estimated using Coats–Redfern and ...

  3. 8th Czechoslovak spectroscopic conference. Abstracts

    International Nuclear Information System (INIS)

    1988-01-01

    Volume 3 of the conference proceedings contains abstracts of 17 invited papers, 101 poster presentations and 7 papers of instrument manufacturers, devoted to special spectroscopic techniques including X-ray microanalysis, X-ray spectral analysis, Moessbauer spectrometry, mass spectrometry, instrumental activation analysis and other instrumental radioanalytical methods, electron spectrometry, and techniques of environmental analysis. Sixty abstracts were inputted in INIS. (A.K.)

  4. Photoelectric Radial Velocities, Paper XIX Additional Spectroscopic ...

    Indian Academy of Sciences (India)

    ian velocity curve that does justice to the measurements, but it cannot be expected to have much predictive power. Key words. Stars: late-type—stars: radial velocities—spectroscopic binaries—orbits. 0. Preamble. The 'Redman K stars' are a lot of seventh-magnitude K stars whose radial velocities were first observed by ...

  5. The VANDELS ESO public spectroscopic survey

    Science.gov (United States)

    McLure, R. J.; Pentericci, L.; Cimatti, A.; Dunlop, J. S.; Elbaz, D.; Fontana, A.; Nandra, K.; Amorin, R.; Bolzonella, M.; Bongiorno, A.; Carnall, A. C.; Castellano, M.; Cirasuolo, M.; Cucciati, O.; Cullen, F.; De Barros, S.; Finkelstein, S. L.; Fontanot, F.; Franzetti, P.; Fumana, M.; Gargiulo, A.; Garilli, B.; Guaita, L.; Hartley, W. G.; Iovino, A.; Jarvis, M. J.; Juneau, S.; Karman, W.; Maccagni, D.; Marchi, F.; Mármol-Queraltó, E.; Pompei, E.; Pozzetti, L.; Scodeggio, M.; Sommariva, V.; Talia, M.; Almaini, O.; Balestra, I.; Bardelli, S.; Bell, E. F.; Bourne, N.; Bowler, R. A. A.; Brusa, M.; Buitrago, F.; Caputi, K. I.; Cassata, P.; Charlot, S.; Citro, A.; Cresci, G.; Cristiani, S.; Curtis-Lake, E.; Dickinson, M.; Fazio, G. G.; Ferguson, H. C.; Fiore, F.; Franco, M.; Fynbo, J. P. U.; Galametz, A.; Georgakakis, A.; Giavalisco, M.; Grazian, A.; Hathi, N. P.; Jung, I.; Kim, S.; Koekemoer, A. M.; Khusanova, Y.; Fèvre, O. Le; Lotz, J. M.; Mannucci, F.; Maltby, D. T.; Matsuoka, K.; McLeod, D. J.; Mendez-Hernandez, H.; Mendez-Abreu, J.; Mignoli, M.; Moresco, M.; Mortlock, A.; Nonino, M.; Pannella, M.; Papovich, C.; Popesso, P.; Rosario, D. P.; Salvato, M.; Santini, P.; Schaerer, D.; Schreiber, C.; Stark, D. P.; Tasca, L. A. M.; Thomas, R.; Treu, T.; Vanzella, E.; Wild, V.; Williams, C. C.; Zamorani, G.; Zucca, E.

    2018-05-01

    VANDELS is a uniquely-deep spectroscopic survey of high-redshift galaxies with the VIMOS spectrograph on ESO's Very Large Telescope (VLT). The survey has obtained ultra-deep optical (0.48 < λ < 1.0 μm) spectroscopy for high-redshift galaxy studies, using integration times calculated to produce an approximately constant signal-to-noise ratio. This paper describes the survey motivation, survey design and target selection.

  6. The Gaia-ESO Public Spectroscopic Survey

    DEFF Research Database (Denmark)

    Gilmore, G.; Randich, S.; Asplund, M.

    2012-01-01

    The Gaia-ESO Public Spectroscopic Survey has begun and will obtain high quality spectroscopy of some 100000 Milky Way stars, in the field and in open clusters, down to magnitude 19, systematically covering all the major components of the Milky Way. This survey will provide the first homogeneous o...

  7. Highlights of the Brazilian Solar Spectroscope

    Czech Academy of Sciences Publication Activity Database

    Sawant, H. S.; Cecatto, J.R.; Mészárosová, Hana; Faria, C.; Fernandes, F. C. R.; Karlický, Marian; de Andrade, M. C.

    2009-01-01

    Roč. 44, č. 1 (2009), s. 54-57 ISSN 0273-1177 R&D Projects: GA AV ČR IAA300030701 Institutional research plan: CEZ:AV0Z10030501 Keywords: Sun instrumentation * spectroscope * corona * radio radiation Subject RIV: BN - Astronomy, Celestial Mechanics, Astrophysics Impact factor: 1.079, year: 2009

  8. The DFBS Spectroscopic Database and the Armenian Virtual Observatory

    Directory of Open Access Journals (Sweden)

    Areg M Mickaelian

    2009-05-01

    Full Text Available The Digitized First Byurakan Survey (DFBS) is the digitized version of the famous Markarian Survey. It is the largest low-dispersion spectroscopic survey of the sky, covering 17,000 square degrees at galactic latitudes |b| > 15°. DFBS provides images and extracted spectra for all objects present in the FBS plates. Programs were developed to compute astrometric solutions, extract spectra, and apply wavelength and photometric calibration for objects. A DFBS database and catalog have been assembled containing data for nearly 20,000,000 objects. A classification scheme for the DFBS spectra is being developed. The Armenian Virtual Observatory is based on the DFBS database and other large-area surveys and catalogue data.
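
The wavelength-calibration step mentioned above amounts, in its simplest form, to fitting a dispersion relation λ = a + b·pixel to features of known wavelength. A least-squares sketch of that linear case (illustrative only; the actual DFBS pipeline is not shown here and may use a higher-order solution):

```python
def fit_wavelength_solution(pixels, wavelengths):
    """Least-squares linear dispersion solution: lambda = a + b * pixel.

    pixels:      pixel positions of identified reference features
    wavelengths: their known laboratory wavelengths (same length)
    Returns (a, b): zero-point (e.g. Angstrom) and dispersion (e.g. Angstrom/pixel).
    """
    n = len(pixels)
    mean_p = sum(pixels) / n
    mean_w = sum(wavelengths) / n
    # Closed-form simple linear regression (slope, then intercept).
    b = (sum((p - mean_p) * (w - mean_w) for p, w in zip(pixels, wavelengths))
         / sum((p - mean_p) ** 2 for p in pixels))
    a = mean_w - b * mean_p
    return a, b

def pixel_to_wavelength(pixel, a, b):
    """Apply the fitted solution to an arbitrary pixel position."""
    return a + b * pixel
```
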

  9. A spectroscopic transfer standard for accurate atmospheric CO measurements

    Science.gov (United States)

    Nwaboh, Javis A.; Li, Gang; Serdyukov, Anton; Werhahn, Olav; Ebert, Volker

    2016-04-01

    Atmospheric carbon monoxide (CO) is a precursor of essential climate variables and indirectly enhances global warming. Accurate and reliable measurements of atmospheric CO concentration are becoming indispensable. WMO-GAW reports state a compatibility goal of ±2 ppb for atmospheric CO concentration measurements. Therefore, the EMRP-HIGHGAS (European metrology research program - high-impact greenhouse gases) project aims at developing spectroscopic transfer standards for CO concentration measurements to meet this goal. A spectroscopic transfer standard would provide results that are directly traceable to the SI, can be very useful for calibration of devices operating in the field, and could complement classical gas standards in the field, where calibration gas mixtures in bottles are often not accurate, available or stable enough [1][2]. Here, we present our new direct tunable diode laser absorption spectroscopy (dTDLAS) sensor, capable of performing absolute ("calibration-free") CO concentration measurements and of being operated as a spectroscopic transfer standard. To achieve the compatibility goal stated by the WMO for CO concentration measurements and to ensure the traceability of the final concentration results, traceable spectral line data, especially line intensities with appropriate uncertainties, are needed. Therefore, we utilize our new high-resolution Fourier-transform infrared (FTIR) spectroscopy CO line data for the 2-0 band, with significantly reduced uncertainties, for the dTDLAS data evaluation. Further, we demonstrate the capability of our sensor for atmospheric CO measurements, discuss uncertainty calculation following the guide to the expression of uncertainty in measurement (GUM) principles, and show that CO concentrations derived using the sensor, based on the TILSAM (traceable infrared laser spectroscopic amount fraction measurement) method, are in excellent agreement with gravimetric values. Acknowledgement Parts of this work have been
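In its simplest isothermal, ideal-gas form, a TILSAM-style evaluation reduces to computing the amount fraction from the integrated absorbance of a single line via the Beer-Lambert law. A minimal sketch (function name, units and numeric values are illustrative assumptions, not taken from the record):

```python
# Hedged sketch of a TILSAM-style amount-fraction evaluation:
#   n_CO  = A / (S * L)             absorber number density [cm^-3]
#   n_tot = p / (k_B * T) * 1e-6    total number density [cm^-3] (ideal gas)
#   x     = n_CO / n_tot            amount fraction
K_B = 1.380649e-23  # Boltzmann constant [J/K]

def amount_fraction(area, line_intensity, path_cm, pressure_pa, temp_k):
    """Amount fraction from an integrated absorbance `area` (illustrative units:
    area in cm^-1, line intensity in cm^-1/(molecule cm^-2), path in cm)."""
    n_absorber = area / (line_intensity * path_cm)       # molecules per cm^3
    n_total = pressure_pa / (K_B * temp_k) * 1e-6        # molecules per cm^3
    return n_absorber / n_total
```

As expected for an absolute evaluation, the result scales linearly with the measured line area and requires no calibration gas.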

  10. Quantum chemical calculations and spectroscopic measurements of spectroscopic and thermodynamic properties of given uranyl complexes in aqueous solutions with possible environmental and industrial applications

    Directory of Open Access Journals (Sweden)

    Višňak Jakub

    2016-01-01

    Full Text Available A brief introduction into the computational methodology and preliminary results for spectroscopic (excitation energies, vibrational frequencies in ground and excited electronic states) and thermodynamic (stability constants, standard enthalpies and entropies of complexation reactions) properties of some 1:1, 1:2 and 1:3 uranyl sulphato- and selenato- complexes in aqueous solutions will be given. Relativistic effects are included via an Effective Core Potential (ECP), electron correlation via (TD)DFT/B3LYP (dispersion-interaction corrected), and solvation is described via explicit inclusion of one hydration sphere beyond the coordinated water molecules. We acknowledge the limits of this approximate description; more accurate calculations (ranging from semi-phenomenological two-component spin-orbit coupling up to the four-component Dirac-Coulomb-Breit Hamiltonian) and Molecular Dynamics simulations are in preparation. The computational results are compared with experimental results from Time-resolved Laser-induced Fluorescence Spectroscopy (TRLFS) and UV-VIS spectroscopic studies (including our original experimental research on this topic). In the case of the TRLFS and UV-VIS speciation studies, the problem of decomposing complex solution spectra into individual components is ill-conditioned, and hints from theoretical chemistry can be very important. Qualitative agreement between our quantum chemical calculations of the spectroscopic properties and the experimental data was achieved. Possible applications for geochemical modelling (e.g. safety studies of nuclear waste repositories, modelling of a future mining site) and analytical chemical studies (including natural samples) are discussed.

  11. Normal values of left ventricular mass and cardiac chamber volumes assessed by 320-detector computed tomography angiography in the Copenhagen General Population Study

    DEFF Research Database (Denmark)

    Fuchs, Andreas; Mejdahl, Mads Rams; Kühl, J Tobias

    2016-01-01

    Aims Normal values of left ventricular mass (LVM) and cardiac chamber sizes are prerequisites for the diagnosis of individuals with heart disease. LVM and cardiac chamber sizes may be recorded during cardiac computed tomography angiography (CCTA), and thus modality specific normal values are need...

  12. Computer Games : 5th Workshop on Computer Games, CGW 2016, and 5th Workshop on General Intelligence in Game-Playing Agents, GIGA 2016, held in conjunction with the 25th International Conference on Artificial Intelligence, IJCAI 2016, New York, USA, July 9-10, 2016, Revised selected papers

    NARCIS (Netherlands)

    Cazenave, Tristan; Winands, Mark H. M; Edelkamp, Stefan; Schiffel, Stephan; Thielscher, Michael; Togelius, Julian

    2017-01-01

    This book constitutes the refereed proceedings of the 5th Computer Games Workshop, CGW 2016, and the 5th Workshop on General Intelligence in Game-Playing Agents, GIGA 2016, held in conjunction with the 25th International Conference on Artificial Intelligence, IJCAI 2016, in New York, USA, in July

  13. Forces in General Relativity

    Science.gov (United States)

    Ridgely, Charles T.

    2010-01-01

    Many textbooks dealing with general relativity do not demonstrate the derivation of forces in enough detail. The analyses presented herein demonstrate straightforward methods for computing forces by way of general relativity. Covariant divergence of the stress-energy-momentum tensor is used to derive a general expression of the force experienced…

  14. Using three-channel video to evaluate the impact of the use of the computer on the patient-centredness of the general practice consultation

    Directory of Open Access Journals (Sweden)

    Alice Theadom

    2003-11-01

    Three-channel video proved to be a feasible and valuable technique for the analysis of primary care GP consultations, with advantages over single-channel video. Interesting differences in non-verbal and verbal behaviour became apparent with different types of computer use during the consultation. Implications for the three-channel video technique for training, monitoring GP competence and providing feedback are discussed.

  15. Short-distance expansion for the electromagnetic half-space Green's tensor: general results and an application to radiative lifetime computations

    International Nuclear Information System (INIS)

    Panasyuk, George Y; Schotland, John C; Markel, Vadim A

    2009-01-01

    We obtain a short-distance expansion for the half-space, frequency-domain electromagnetic Green's tensor. The small parameter of the theory is ωε1L/c, where ω is the frequency, ε1 is the permittivity of the upper half-space (in which both the source and the point of observation are located, and which is assumed to be transparent), c is the speed of light in vacuum, and L is a characteristic length, defined as the distance from the point of observation to the reflected (with respect to the planar interface) position of the source. In the case when the lower half-space (the substrate) is characterized by a complex permittivity ε2, we compute the expansion to third order. For the case when the substrate is a transparent dielectric, we compute the imaginary part of the Green's tensor to seventh order. The analytical calculations are verified numerically. The practical utility of the obtained expansion is demonstrated by computing the radiative lifetime of two electromagnetically interacting molecules in the vicinity of a transparent dielectric substrate. The computation is performed in the strong-interaction regime, where the quasi-particle pole approximation is inapplicable. In this regime, the integral representation for the half-space Green's tensor is difficult to use, while its electrostatic limiting expression is grossly inadequate. However, the analytical expansion derived in this paper can be used directly and efficiently. The results of this study are also relevant to nano-optics and near-field imaging, especially when tomographic image reconstruction is involved

  16. Computational Pathology

    Science.gov (United States)

    Louis, David N.; Feldman, Michael; Carter, Alexis B.; Dighe, Anand S.; Pfeifer, John D.; Bry, Lynn; Almeida, Jonas S.; Saltz, Joel; Braun, Jonathan; Tomaszewski, John E.; Gilbertson, John R.; Sinard, John H.; Gerber, Georg K.; Galli, Stephen J.; Golden, Jeffrey A.; Becich, Michael J.

    2016-01-01

    Context We define the scope and needs within the new discipline of computational pathology, a discipline critical to the future of both the practice of pathology and, more broadly, medical practice in general. Objective To define the scope and needs of computational pathology. Data Sources A meeting was convened in Boston, Massachusetts, in July 2014 prior to the annual Association of Pathology Chairs meeting, and it was attended by a variety of pathologists, including individuals highly invested in pathology informatics as well as chairs of pathology departments. Conclusions The meeting made recommendations to promote computational pathology, including clearly defining the field and articulating its value propositions; asserting that the value propositions for health care systems must include means to incorporate robust computational approaches to implement data-driven methods that aid in guiding individual and population health care; leveraging computational pathology as a center for data interpretation in modern health care systems; stating that realizing the value proposition will require working with institutional administrations, other departments, and pathology colleagues; declaring that a robust pipeline should be fostered that trains and develops future computational pathologists, for those with both pathology and non-pathology backgrounds; and deciding that computational pathology should serve as a hub for data-related research in health care systems. The dissemination of these recommendations to pathology and bioinformatics departments should help facilitate the development of computational pathology. PMID:26098131

  17. Quantum Computation

    Indian Academy of Sciences (India)

    Resonance – Journal of Science Education, Volume 16, Issue 9, September 2011, pp. 821-835. Quantum Computation - Particle and Wave Aspects of Algorithms. Apoorva Patel. General Article.

  18. Optical Computing

    Indian Academy of Sciences (India)

    Optical computing technology is, in general, developing in two directions. One approach is ... current support in many places, with private companies as well as governments in several countries encouraging such research work. For example, much ... which enables more information to be carried and data to be processed.

  19. Spectroscopic properties of a two-dimensional time-dependent Cepheid model. II. Determination of stellar parameters and abundances

    Science.gov (United States)

    Vasilyev, V.; Ludwig, H.-G.; Freytag, B.; Lemasle, B.; Marconi, M.

    2018-03-01

    Context. Standard spectroscopic analyses of variable stars are based on hydrostatic 1D model atmospheres. This quasi-static approach has not been theoretically validated. Aim. We aim at investigating the validity of the quasi-static approximation for Cepheid variables. We focus on the spectroscopic determination of the effective temperature Teff, surface gravity log g, microturbulent velocity ξt, and a generic metal abundance log A, here taken as iron. Methods: We calculated a grid of 1D hydrostatic plane-parallel models covering the ranges in effective temperature and gravity that are encountered during the evolution of a 2D time-dependent envelope model of a Cepheid computed with the radiation-hydrodynamics code CO5BOLD. We performed 1D spectral syntheses for artificial iron lines in local thermodynamic equilibrium by varying the microturbulent velocity and abundance. We fit the resulting equivalent widths to corresponding values obtained from our dynamical model for 150 instances in time, covering six pulsational cycles. In addition, we considered 99 instances during the initial non-pulsating stage of the temporal evolution of the 2D model. In the most general case, we treated Teff, log g, ξt, and log A as free parameters, and in two more limited cases, we fixed Teff and log g by independent constraints. We argue analytically that our approach of fitting equivalent widths is closely related to current standard procedures focusing on line-by-line abundances. Results: For the four-parametric case, the stellar parameters are typically underestimated and exhibit a bias in the iron abundance of ≈-0.2 dex. To avoid biases of this type, it is favorable to restrict the spectroscopic analysis to photometric phases ϕph ≈ 0.3…0.65 using additional information to fix the effective temperature and surface gravity. Conclusions: Hydrostatic 1D model atmospheres can provide unbiased estimates of stellar parameters and abundances of Cepheid variables for particular
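The fitting approach described above, matching equivalent widths from a hydrostatic 1D model grid to values measured from the dynamical model, amounts to a chi-square minimization over the grid. A schematic sketch (array shapes, names and values are illustrative assumptions, not the authors' code):

```python
import numpy as np

def best_grid_point(grid_params, grid_ews, observed_ews, sigma):
    """Return the grid parameters minimizing chi-square over a set of line EWs.

    grid_params  : (N, 4) array of (Teff, log g, xi_t, log A) per grid point
    grid_ews     : (N, M) equivalent widths of M artificial lines per grid point
    observed_ews : (M,) EWs measured from a snapshot of the dynamical model
    sigma        : (M,) adopted EW uncertainties
    """
    chi2 = np.sum(((grid_ews - observed_ews) / sigma) ** 2, axis=1)
    return grid_params[np.argmin(chi2)]
```

In the four-parametric case all of Teff, log g, ξt and log A vary across the grid; fixing Teff and log g by independent constraints simply restricts the rows of `grid_params` searched.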

  20. Spectroscopic follow up of Kepler planet candidates

    DEFF Research Database (Denmark)

    Latham, D. W.; Cochran, W. D.; Marcy, G. W.

    2010-01-01

    Spectroscopic follow-up observations play a crucial role in the confirmation and characterization of transiting planet candidates identified by Kepler. The most challenging part of this work is the determination of radial velocities with a precision approaching 1 m/s in order to derive masses from...... spectroscopic orbits. The most precious resource for this work is HIRES on Keck I, to be joined by HARPS-North on the William Herschel Telescope when that new spectrometer comes on line in two years. Because a large fraction of the planet candidates are in fact stellar systems involving eclipsing stars...... and not planets, our strategy is to start with reconnaissance spectroscopy using smaller telescopes, to sort out and reject as many of the false positives as possible before going to Keck. During the first Kepler observing season in 2009, more than 100 nights of telescope time were allocated for this work, using...

  1. Spectroscopic Chemical Analysis Methods and Apparatus

    Science.gov (United States)

    Hug, William F. (Inventor); Reid, Ray D. (Inventor); Bhartia, Rohit (Inventor); Lane, Arthur L. (Inventor)

    2018-01-01

    Spectroscopic chemical analysis methods and apparatus are disclosed which employ deep ultraviolet (e.g. in the 200 nm to 300 nm spectral range) electron beam pumped wide bandgap semiconductor lasers, incoherent wide bandgap semiconductor light emitting devices, and hollow cathode metal ion lasers to perform non-contact, non-invasive detection of unknown chemical analytes. These deep ultraviolet sources enable dramatic size, weight and power consumption reductions of chemical analysis instruments. In some embodiments, Raman spectroscopic detection methods and apparatus use ultra-narrow-band angle tuning filters, acousto-optic tuning filters, and temperature tuned filters to enable ultra-miniature analyzers for chemical identification. In some embodiments Raman analysis is conducted along with photoluminescence spectroscopy (i.e. fluorescence and/or phosphorescence spectroscopy) to provide high levels of sensitivity and specificity in the same instrument.

  2. Very large area multiwire spectroscopic proportional counters

    International Nuclear Information System (INIS)

    Ubertini, P.; Bazzano, A.; Boccaccini, L.; Mastropietro, M.; La Padula, C.D.; Patriarca, R.; Polcaro, V.F.

    1981-01-01

    As a result of a five-year development program, a final prototype of a Very Large Area Spectroscopic Proportional Counter (VLASPC), to be employed in space-borne payloads, was produced at the Istituto di Astrofisica Spaziale, Frascati. The instrument is the latest version of a new generation of Multiwire Spectroscopic Proportional Counters (MWSPC) successfully employed in many balloon-borne flights devoted to hard X-ray astronomy. The sensitive area of this standard unit is 2700 cm² with an efficiency higher than 10% in the range 15-180 keV (80% at 60 keV). The low cost and weight make this new type of VLASPC competitive with NaI arrays, phoswich and GSPC detectors in terms of achievable scientific results. (orig.)

  3. Very large area multiwire spectroscopic proportional counters

    Energy Technology Data Exchange (ETDEWEB)

    Ubertini, P.; Bazzano, A.; Boccaccini, L.; Mastropietro, M.; La Padula, C.D.; Patriarca, R.; Polcaro, V.F. (Istituto di Astrofisica Spaziale, Frascati (Italy))

    1981-07-01

    As a result of a five-year development program, a final prototype of a Very Large Area Spectroscopic Proportional Counter (VLASPC), to be employed in space-borne payloads, was produced at the Istituto di Astrofisica Spaziale, Frascati. The instrument is the latest version of a new generation of Multiwire Spectroscopic Proportional Counters (MWSPC) successfully employed in many balloon-borne flights devoted to hard X-ray astronomy. The sensitive area of this standard unit is 2700 cm² with an efficiency higher than 10% in the range 15-180 keV (80% at 60 keV). The low cost and weight make this new type of VLASPC competitive with NaI arrays, phoswich and GSPC detectors in terms of achievable scientific results.

  4. Spectroscopic diagnostics and measurements at JET

    International Nuclear Information System (INIS)

    Giannella, R.

    1994-01-01

    A concise review is presented of activity in the field of spectroscopic diagnostics at JET during the last few years. Together with a description of the instruments, examples are given of the measurements conducted with these systems, and some experimental results obtained through this activity are outlined. Emphasis is also given to the upgrading of existing apparatuses and the construction of new diagnostics ahead of the next experimental phase. 48 refs., 5 figs

  5. Spectroscopic studies of the transplutonium elements

    International Nuclear Information System (INIS)

    Carnall, W.T.; Conway, J.G.

    1983-01-01

    The challenging opportunity to develop insights into both atomic structure and the effects of bonding in compounds makes the study of actinide spectroscopy a particularly fruitful and exciting area of scientific endeavor. It is also the interpretation of f-element spectra that has stimulated the development of the most sophisticated theoretical modeling attempted for any elements in the periodic table. The unique nature of the spectra and the wealth of fine detail revealed make possible sensitive tests of both physical models and the results of Hartree-Fock type ab initio calculations. This paper focuses on the unique character of heavy actinide spectroscopy, discussing how it differs from that of the lighter members of the series and what special properties are manifested. Following the introduction, the paper covers: (1) the role of systematic studies and the relationship of heavy-actinide spectroscopy to ongoing spectroscopic investigations of the lighter members of the series; (2) atomic (free-ion) spectra, covering the present status of spectroscopic studies with transplutonium elements and future needs and directions in atomic spectroscopy; (3) the spectra of actinide compounds, covering the present status and future directions of spectroscopic studies with compounds of the transplutonium elements; and other spectroscopies. 1 figure, 2 tables

  6. Spectroscopic methods for characterization of nuclear fuels

    International Nuclear Information System (INIS)

    Sastry, M.D.

    1999-01-01

    Spectroscopic techniques have contributed immensely to the characterisation and speciation of materials relevant to a variety of applications. These techniques have time-tested credentials and continue to expand into newer areas. In the field of nuclear fuel fabrication, atomic spectroscopic methods are used for monitoring the trace metallic constituents in the starting materials and end product, and for monitoring process pick-up. The current status of atomic spectroscopic methods for the determination of trace metallic constituents in nuclear fuel materials will be briefly reviewed and new approaches will be described, with special emphasis on inductively coupled plasma techniques and ETV-ICP-AES hyphenated techniques. Special emphasis will also be given to highlighting the importance of chemical separation procedures for the optimum utilization of the potential of ICP. The presentation will also include newer techniques like Photo Acoustic Spectroscopy (PAS) and Electron Paramagnetic Resonance (EPR) Imaging. PAS results on uranium and plutonium oxides will be described with reference to the determination of the U4+/U6+ concentration in U3O8. EPR imaging techniques for speciation and spatial distribution in solids will be described, and their potential use for Gd3+ containing UO2 pellets (used for flux flattening) will be highlighted. (author)

  7. Quantum Computer Science

    Science.gov (United States)

    Mermin, N. David

    2007-08-01

    Preface; 1. Cbits and Qbits; 2. General features and some simple examples; 3. Breaking RSA encryption with a quantum computer; 4. Searching with a quantum computer; 5. Quantum error correction; 6. Protocols that use just a few Qbits; Appendices; Index.

  8. Cognitive Computing for Security.

    Energy Technology Data Exchange (ETDEWEB)

    Debenedictis, Erik [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Rothganger, Fredrick [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Aimone, James Bradley [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Marinella, Matthew [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Evans, Brian Robert [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Warrender, Christina E. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Mickel, Patrick [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-12-01

    Final report for Cognitive Computing for Security LDRD 165613. It reports on the development of a hybrid general-purpose/neuromorphic computer architecture, with an emphasis on potential implementation with memristors.

  9. CT colonography: effect of computer-aided detection of colonic polyps as a second and concurrent reader for general radiologists with moderate experience in CT colonography

    International Nuclear Information System (INIS)

    Mang, Thomas; Ringel, Helmut; Weber, Michael; Bogoni, Luca; Anand, Vikram X.; Hermosillo, Gerardo; Raykar, Vikas; Salganicoff, Marcos; Wolf, Matthias; Chandra, Dass; Curtin, Andrew J.; Lev-Toaff, Anna S.; Noah, Ralph; Shaw, Robert; Summerton, Susan; Tappouni, Rafel F.R.; Obuchowski, Nancy A.

    2014-01-01

    To assess the effectiveness of computer-aided detection (CAD) as a second reader or concurrent reader in helping radiologists who are moderately experienced in computed tomographic colonography (CTC) to detect colorectal polyps. Seventy CTC datasets (34 patients: 66 polyps ≥6 mm; 36 patients: no abnormalities) were retrospectively reviewed by seven radiologists with moderate CTC experience. After primary unassisted evaluation, a CAD second read and, after a time interval of ≥4 weeks, a CAD concurrent read were performed. Areas under the receiver operating characteristic (ROC) curve (AUC), along with per-segment, per-polyp and per-patient sensitivities, and also reading times, were calculated for each reader with and without CAD. Of seven readers, 86 % and 71 % achieved a higher accuracy (segment-level AUC) when using CAD as second and concurrent reader respectively. Average segment-level AUCs with second and concurrent CAD (0.853 and 0.864) were significantly greater (p < 0.0001) than average AUC in the unaided evaluation (0.781). Per-segment, per-polyp, and per-patient sensitivities for polyps ≥6 mm were significantly higher in both CAD reading paradigms compared with unaided evaluation. Second-read CAD reduced readers' average segment and patient specificity by 0.007 and 0.036 (p = 0.005 and 0.011), respectively. CAD significantly improves the sensitivities of radiologists moderately experienced in CTC for polyp detection, both as second reader and concurrent reader. (orig.)
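The per-segment AUC figures reported above are, in the nonparametric case, equivalent to the Mann-Whitney statistic: the probability that a reader scores a polyp-containing segment higher than a normal one. A hedged, minimal sketch of that equivalence (not the study's actual statistical software):

```python
def auc(pos_scores, neg_scores):
    """Nonparametric AUC: P(score_pos > score_neg), counting ties as 1/2.

    pos_scores: reader confidence scores for polyp-containing segments
    neg_scores: reader confidence scores for normal segments
    """
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos_scores for n in neg_scores)
    return wins / (len(pos_scores) * len(neg_scores))
```

An AUC of 0.5 corresponds to chance-level discrimination and 1.0 to perfect separation, so the reported rise from 0.781 unaided to 0.853-0.864 with CAD is a genuine gain in discrimination, not just in sensitivity.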

  10. A novel computer system for the evaluation of nasolabial morphology, symmetry and aesthetics after cleft lip and palate treatment. Part 1: General concept and validation.

    Science.gov (United States)

    Pietruski, Piotr; Majak, Marcin; Debski, Tomasz; Antoszewski, Boguslaw

    2017-04-01

    The need for a widely accepted method suitable for a multicentre quantitative evaluation of facial aesthetics after surgical treatment of cleft lip and palate (CLP) has been emphasized for years. The aim of this study was to validate a novel computer system, 'Analyse It Doc' (A.I.D.), as a tool for objective anthropometric analysis of the nasolabial region. An indirect anthropometric analysis of facial photographs was conducted with the A.I.D. system and Adobe Photoshop/ImageJ software. Intra-rater and inter-rater reliability and the time required for the analysis were estimated separately for each method and compared. Analysis with the A.I.D. system was nearly 10-fold faster than with the reference evaluation method. The A.I.D. system provided strong inter-rater and intra-rater correlations for linear, angular and area measurements of the nasolabial region, as well as significantly higher accuracy and reproducibility of angular measurements in the submental view. No statistically significant inter-method differences were found for the other measurements. The novel computer system presented here is suitable for simple, time-efficient and reliable multicentre photogrammetric analyses of the nasolabial region in CLP patients and healthy subjects.

  11. Generalized Anxiety Disorder and Social Anxiety Disorder, but Not Panic Anxiety Disorder, Are Associated with Higher Sensitivity to Learning from Negative Feedback: Behavioral and Computational Investigation

    OpenAIRE

    Khdour, Hussain Y.; Abushalbaq, Oday M.; Mughrabi, Ibrahim T.; Imam, Aya F.; Gluck, Mark A.; Herzallah, Mohammad M.; Moustafa, Ahmed A.

    2016-01-01

    Anxiety disorders, including generalized anxiety disorder (GAD), social anxiety disorder (SAD), and panic anxiety disorder (PAD), are a group of common psychiatric conditions. They are characterized by excessive worrying, uneasiness, and fear of future events, such that they affect social and occupational functioning. Anxiety disorders can alter behavior and cognition as well, yet little is known about the particular domains they affect. In this study, we tested the cognitive correlates of me...

  12. Detection of spectroscopic binaries in the Gaia-ESO Survey

    Science.gov (United States)

    Van der Swaelmen, M.; Merle, T.; Van Eck, S.; Jorissen, A.

    2017-12-01

    The Gaia-ESO survey (GES) is a ground-based spectroscopic survey, complementing the Gaia mission, designed to obtain high-accuracy radial velocities and chemical abundances for 10^5 stars. Thanks to the numerous spectra collected by the GES, the detection of spectroscopic multiple-system candidates (SBn, n ≥ 2) is one of the science cases that can be tackled. We developed at the IAA (Institut d'Astronomie et d'Astrophysique) an innovative automatic method to detect multiple components from the cross-correlation function (CCF) of spectra and applied it to the CCFs provided by the GES. Since the bulk of the Milky Way field targets has been observed in both the HR10 and HR21 GIRAFFE settings, we are also able to compare the efficiency of our SB detection tool depending on the wavelength range. In particular, we show that HR21 leads to less efficient detection than HR10. The presence of strong and/or saturated lines (Ca II triplet, Mg I line, Paschen lines) in the wavelength domain covered by HR21 hampers the computation of CCFs, which tend to be broadened compared with their HR10 counterparts. The main drawback is that the minimal detectable radial-velocity difference is ~60 km/s for HR21, while it is ~25 km/s for HR10. A careful design of CCF masks (especially masking the Ca II triplet lines) can substantially improve the detection rate of HR21. Since HR21 spectra are quite similar to those produced by the RVS spectrograph of the Gaia mission, analysis of RVS spectra in the context of spectroscopic binaries can take advantage of the lessons learned from the GES to maximize the detection rate.
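The detection principle, identifying multiple components in a cross-correlation function subject to a minimum radial-velocity separation, can be sketched as follows (the peak-finding logic, thresholds and names are illustrative assumptions, not the IAA pipeline):

```python
import numpy as np

def count_ccf_components(velocity, ccf, min_sep=25.0, min_height=0.2):
    """Count local maxima of a normalized CCF that exceed `min_height`
    and are separated by at least `min_sep` (km/s, cf. the HR10 limit)."""
    peaks = [i for i in range(1, len(ccf) - 1)
             if ccf[i] > ccf[i - 1] and ccf[i] >= ccf[i + 1]
             and ccf[i] >= min_height]
    kept = []
    for i in peaks:  # greedily enforce the minimum velocity separation
        if all(abs(velocity[i] - velocity[j]) >= min_sep for j in kept):
            kept.append(i)
    return len(kept)
```

A return value of 2 or more flags an SBn candidate; raising `min_sep` to ~60 km/s mimics the degraded resolving power of broadened HR21 CCFs.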

  13. Spectroscopic Parameters of Lumbar Intervertebral Disc Material

    Science.gov (United States)

    Terbetas, G.; Kozlovskaja, A.; Varanius, D.; Graziene, V.; Vaitkus, J.; Vaitkuviene, A.

    2009-06-01

    There are numerous methods of investigating the intervertebral disc. Visualization methods are widely used in clinical practice; histological, immunohistochemical and biochemical methods are more common in scientific research. We propose that a new spectroscopic investigation would be useful in determining intervertebral disc material, especially when no histological specimens are available. Purpose: to determine spectroscopic parameters of intervertebral disc material; to determine emission spectra common for all intervertebral discs; to create a background for further spectroscopic investigation where no histological specimen will be available. Material and Methods: in 20 patients, 68 frozen sections of 20 μm thickness from operatively removed intervertebral disc hernia were excited by Nd:YAG microlaser STA-01-TH third-harmonic 355 nm light through a 0.1 mm fiber. An OceanOptics USB2000 spectrophotometer was used for spectra collection. Mathematical analysis of spectra was performed by ORIGIN multiple Gaussian peak analysis. Results: in each specimen of disc hernia, distinct maximal spectral peaks of 4 types were found, supporting the histological evaluation of the mixed content of the hernia. Fluorescence in the spectral region 370-700 nm was detected in the disc hernias. The main spectral component was at 494 nm, and the contributions of the components with peak wavelengths at 388 nm, 412 nm and 435±5 nm varied across the different groups of samples. In comparison to the average spectrum of all cases, there are 4 groups of different spectral signatures in the region 400-500 nm in the patient groups, supporting clinical data on different clinical features of the patients. Discussion and Conclusion: besides the classical open discectomy, new minimally invasive techniques of treating the intervertebral disc emerge (PLDD). The intervertebral disc in these techniques is assessed by needle; no histological specimen is taken. Spectroscopic investigation via fiber optics through the
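Decomposing such emission spectra into a fixed set of Gaussian components is, when the peak wavelengths are known, a linear least-squares problem. A hedged sketch using the component centres quoted in the record (the widths, amplitudes and function names are illustrative assumptions):

```python
import numpy as np

PEAKS_NM = [388.0, 412.0, 435.0, 494.0]  # component centres quoted in the record

def gaussian(wl, centre, width):
    """Unit-amplitude Gaussian profile over wavelength array `wl` (nm)."""
    return np.exp(-0.5 * ((wl - centre) / width) ** 2)

def component_amplitudes(wl, spectrum, width=15.0):
    """Fit amplitudes of fixed-centre Gaussians by linear least squares."""
    basis = np.stack([gaussian(wl, c, width) for c in PEAKS_NM], axis=1)
    amps, *_ = np.linalg.lstsq(basis, spectrum, rcond=None)
    return amps
```

With the centres fixed, the fit is a single `lstsq` call per spectrum, so the relative weight of the 388/412/435/494 nm components can be tabulated quickly across patient groups.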

  14. General Ultrasound Imaging

    Medline Plus

    Full Text Available ... computer or television monitor. The image is created based on the amplitude (loudness), frequency (pitch) and time ...

  15. Generalized method for computation of true thickness and x-ray intensity information in highly blurred sub-millimeter bone features in clinical CT images.

    Science.gov (United States)

    Pakdel, Amirreza; Robert, Normand; Fialkov, Jeffrey; Maloul, Asmaa; Whyne, Cari

    2012-12-07

    In clinical computed tomography (CT) images, cortical bone features with sub-millimeter (sub-mm) thickness are substantially blurred, such that their thickness is overestimated and their intensity appears underestimated. Any inquiry into the geometry or density of such bones based on these images is therefore severely error prone. We present a model-based method for estimating the true thickness and intensity magnitude of cortical and trabecular bone layers at localized regions of complex shell bones down to 0.25 mm. The method also computes the width of the corresponding point spread function. This approach is applicable to any CT image data and does not rely on any scanner-specific parameter inputs beyond what is inherently available in the images themselves. Applied to CT intensity profiles of custom phantoms mimicking shell bones, the method produced average cortical thickness errors of 0.07 ± 0.04 mm versus an average error of 0.47 ± 0.29 mm in the untreated cases (t(55) = 10.92, p ≪ 0.001). Similarly, the average error of the method's intensity magnitude estimates was 22 ± 2.2 HU versus an error of 445 ± 137 HU in the untreated cases (t(55) = 26.48, p ≪ 0.001). The method was also used to correct the CT intensity profiles from a cadaveric specimen of the craniofacial skeleton (CFS) in 15 different regions. There was excellent agreement between the corrections and µCT intensity profiles of the same regions used as a 'gold standard' measure. These results set the groundwork towards restoring cortical bone geometry and intensity information in entire image data sets. This information is essential for the generation of finite element models of the CFS that can accurately describe the biomechanical behavior of its complex thin bone structures.
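    The core idea above (a thin, high-density layer blurred by the scanner PSF so that its apparent width grows and its apparent intensity drops, both recoverable by fitting a blur model) can be sketched numerically. The following is a hedged toy illustration, not the authors' algorithm: the cortical layer is modeled as a rectangular HU plateau convolved with a Gaussian PSF (a difference of error functions), and width and amplitude are recovered by brute-force least squares with the PSF width assumed known; all numbers are invented.

```python
import numpy as np
from math import erf

def blurred_plateau(x, center, width, amplitude, sigma):
    """Rectangular HU plateau of given width convolved with a Gaussian
    PSF of std-dev sigma: a difference of error functions."""
    a = (x - (center - width / 2.0)) / (np.sqrt(2.0) * sigma)
    b = (x - (center + width / 2.0)) / (np.sqrt(2.0) * sigma)
    e = np.vectorize(erf)
    return amplitude / 2.0 * (e(a) - e(b))

# Synthetic "observed" profile: a 0.5 mm cortical layer of 1200 HU,
# blurred by a PSF wider than the layer itself.
x = np.linspace(-5.0, 5.0, 501)            # mm
observed = blurred_plateau(x, 0.0, 0.5, 1200.0, 0.8)
print(observed.max())                      # apparent peak well below 1200 HU

# Brute-force least squares over (width, amplitude), PSF width assumed known.
widths = np.arange(0.1, 2.01, 0.05)
amplitudes = np.arange(200.0, 2001.0, 100.0)
best = min(
    (np.sum((blurred_plateau(x, 0.0, w, A, 0.8) - observed) ** 2), w, A)
    for w in widths for A in amplitudes
)
_, w_est, A_est = best
print(w_est, A_est)                        # recovers the true 0.5 mm / 1200 HU
```

A real implementation would fit the PSF width jointly with the layer parameters, as the method described above does.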

  16. Molecular docking, spectroscopic studies and quantum calculations on nootropic drug.

    Science.gov (United States)

    Uma Maheswari, J; Muthu, S; Sundius, Tom

    2014-04-05

    A systematic vibrational spectroscopic assignment and analysis of piracetam [(2-oxo-1-pyrrolidineacetamide)] have been carried out using FT-IR and FT-Raman spectral data. The vibrational analysis was aided by an electronic structure calculation based on the hybrid density functional method B3LYP with a 6-311++G(d,p) basis set. Molecular equilibrium geometries, electronic energies, IR and Raman intensities, and harmonic vibrational frequencies have been computed. The assignments are based on the experimental IR and Raman spectra, and a complete assignment of the observed spectra has been proposed. The UV-visible spectrum of the compound was recorded, and the electronic properties, such as the HOMO and LUMO energies and the maximum absorption wavelengths λmax, were determined by the time-dependent DFT (TD-DFT) method. The geometrical parameters, vibrational frequencies and absorption wavelengths were compared with the experimental data. The complete vibrational assignments are performed on the basis of the potential energy distributions (PED) of the vibrational modes in terms of natural internal coordinates. The simulated FT-IR, FT-Raman and UV spectra of the title compound have been constructed. Molecular docking studies of piracetam have been carried out using ArgusLab. In addition, the potential energy surface, HOMO and LUMO energies, first-order hyperpolarizability and the molecular electrostatic potential have been computed.
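    The harmonic frequencies mentioned above come from diagonalizing the mass-weighted Hessian of the electronic energy. As a minimal, hedged illustration of that mechanics (a 1-D diatomic toy with a literature-level CO stretching force constant, not piracetam or a B3LYP calculation):

```python
import numpy as np

# Harmonic frequencies are the eigenvalues of the mass-weighted Hessian.
k = 1857.0                       # N/m, approx. CO stretching force constant
amu = 1.66053907e-27             # kg
masses = np.array([12.000, 15.995]) * amu   # C and O

H = k * np.array([[1.0, -1.0],
                  [-1.0, 1.0]])             # 1-D Cartesian Hessian (N/m)
M = np.sqrt(np.outer(masses, masses))
eigvals = np.linalg.eigvalsh(H / M)         # mass-weighted -> rad^2/s^2

c_cm = 2.99792458e10                        # speed of light, cm/s
freqs = np.sqrt(np.clip(eigvals, 0.0, None)) / (2.0 * np.pi * c_cm)
print(freqs)   # one ~0 translational mode, one stretch near 2100-2200 cm^-1
```

For a polyatomic like piracetam the Hessian is 3N x 3N and six rotational/translational modes are projected out, but the diagonalization step is the same.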

  17. PL-1 program system for generalized Patterson superpositions. [PL1GEN, SYMPL1, and ALSPL1, in PL/1 for IBM 360/65 computer

    Energy Technology Data Exchange (ETDEWEB)

    Hubbard, C.R.; Babich, M.W.; Jacobson, R.A.

    1977-01-01

    A new system of three programs written in PL/1 can calculate symmetry and Patterson superposition maps for triclinic, monoclinic, and orthorhombic space groups, as well as any space group reducible to one of these three. These programs are based on a system of FORTRAN programs developed at Ames Laboratory, but are more general and have expanded utility, especially with regard to large unit cells. The program PL1GEN calculates a direct access data set, SYMPL1 calculates a direct access symmetry map, and ALSPL1 calculates a superposition map using one or multiple superpositions. A detailed description of the use of these programs, including symbolic program listings, is included. 2 tables.
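    The superposition step such programs implement is typically built on a minimum function: the Patterson map is overlaid with a copy of itself shifted by an interatomic vector, and only peaks present in both copies survive. A hedged 1-D numpy toy (not the PL/1 code; invented peak positions) shows the idea:

```python
import numpy as np

def superposition(patterson, shift):
    """Minimum-function superposition of a (periodic) Patterson map with
    a copy of itself shifted by one interatomic vector."""
    return np.minimum(patterson, np.roll(patterson, shift))

# Toy 1-D "Patterson" map on a 12-point cell: an origin peak plus the
# +/- u vectors between two atoms separated by u = 3 grid points.
P = np.zeros(12)
P[0] = 10.0          # origin peak
P[3] = 4.0           # u = +3
P[-3] = 4.0          # u = -3 (Patterson maps are centrosymmetric)

S = superposition(P, 3)
print(np.nonzero(S)[0])   # the surviving peaks mark a consistent atom set
```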

  18. Protonated Nitrous Oxide, NNOH(+): Fundamental Vibrational Frequencies and Spectroscopic Constants from Quartic Force Fields

    Science.gov (United States)

    Huang, Xinchuan; Fortenberry, Ryan C.; Lee, Timothy J.

    2013-01-01

    The interstellar presence of protonated nitrous oxide has been suspected for some time. Using established high-accuracy quantum chemical techniques, spectroscopic constants and fundamental vibrational frequencies are provided for the lower-energy O-protonated isomer of this cation and its deuterated isotopologue. The vibrationally averaged B0 and C0 rotational constants are within 6 MHz of their experimental values, and the D_J quartic distortion constants agree with experiment to within 3%. The known gas-phase O-H stretch of NNOH(+) is 3330.91 cm⁻¹, and the vibrational configuration interaction computed result is 3330.9 cm⁻¹. Other spectroscopic constants are also provided, as are the rest of the fundamental vibrational frequencies for NNOH(+) and its deuterated isotopologue. These high-accuracy data should serve to better inform future observational or experimental studies of the rovibrational bands of protonated nitrous oxide in the ISM and the laboratory.

  19. RADIAL VELOCITIES OF GALACTIC O-TYPE STARS. II. SINGLE-LINED SPECTROSCOPIC BINARIES

    International Nuclear Information System (INIS)

    Williams, S. J.; Gies, D. R.; Hillwig, T. C.; McSwain, M. V.; Huang, W.

    2013-01-01

    We report on new radial velocity measurements of massive stars that are either suspected binaries or lacking prior observations. This is part of a survey to identify and characterize spectroscopic binaries among O-type stars with the goal of comparing the binary fraction of field and runaway stars with those in clusters and associations. We present orbits for HDE 308813, HD 152147, HD 164536, BD–16°4826, and HDE 229232, Galactic O-type stars exhibiting single-lined spectroscopic variation. By fitting model spectra to our observed spectra, we obtain estimates for effective temperature, surface gravity, and rotational velocity. We compute orbital periods and velocity semiamplitudes for each system and note the lack of photometric variation for any system. These binaries probably appear single-lined because the companions are faint and because their orbital Doppler shifts are small compared to the width of the rotationally broadened lines of the primary.
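    The orbital elements reported in such studies parameterize a Keplerian radial-velocity curve. As a hedged sketch (illustrative parameters, not the orbits of the stars above), the model and its velocity semiamplitude can be written as:

```python
import numpy as np

def radial_velocity(t, period, K, e, omega, gamma, t_peri):
    """Single-lined Keplerian radial-velocity curve.
    K: semiamplitude, e: eccentricity, omega: argument of periastron (rad),
    gamma: systemic velocity, t_peri: time of periastron passage."""
    M = 2.0 * np.pi * (t - t_peri) / period          # mean anomaly
    E = M.copy()                                     # eccentric anomaly
    for _ in range(50):                              # Newton's method
        E -= (E - e * np.sin(E) - M) / (1.0 - e * np.cos(E))
    nu = 2.0 * np.arctan2(np.sqrt(1 + e) * np.sin(E / 2),
                          np.sqrt(1 - e) * np.cos(E / 2))  # true anomaly
    return gamma + K * (np.cos(nu + omega) + e * np.cos(omega))

t = np.linspace(0.0, 20.0, 2000)                     # two 10-day periods
v = radial_velocity(t, period=10.0, K=40.0, e=0.3, omega=1.0,
                    gamma=-15.0, t_peri=2.0)
half_amplitude = 0.5 * (v.max() - v.min())
print(half_amplitude)    # close to K = 40 km/s for a well-sampled orbit
```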

  20. COMPUTATIONAL THINKING

    Directory of Open Access Journals (Sweden)

    Evgeniy K. Khenner

    2016-01-01

    Full Text Available Abstract. The aim of the research is to draw the attention of the educational community to the phenomenon of computational thinking, which has been actively discussed over the last decade in the foreign scientific and educational literature, and to substantiate its importance, its practical utility and its claim to a place in Russian education. Methods. The research is based on an analysis of foreign studies of the phenomenon of computational thinking and of the ways it is formed in the process of education, and on a comparison of the notion of «computational thinking» with related concepts used in the Russian scientific and pedagogical literature. Results. The concept of «computational thinking» is analyzed from the points of view of intuitive understanding and of its scientific and applied aspects. It is shown how computational thinking has evolved along with the development of computer hardware and software. The practice-oriented interpretation of computational thinking that is dominant among educators is described, along with some ways of forming it. It is shown that computational thinking is a metasubject result of general education as well as one of its tools. In the author's view, the purposeful development of computational thinking should be one of the tasks of Russian education. Scientific novelty. The author gives a theoretical justification of the role of computational thinking schemes as metasubject results of learning. The dynamics of the development of this concept is described, a process connected with the evolution of computer and information technologies and with the growing number of tasks whose effective solution requires computational thinking. The author substantiates the claim that including «computational thinking» in the set of pedagogical concepts used in the national education system fills an existing gap. Practical significance. New metasubject result of education associated with

  1. Algorithmically specialized parallel computers

    CERN Document Server

    Snyder, Lawrence; Gannon, Dennis B

    1985-01-01

    Algorithmically Specialized Parallel Computers focuses on the concept and characteristics of an algorithmically specialized computer. This book discusses algorithmically specialized computers, algorithmic specialization using VLSI, and innovative architectures. The architectures and algorithms for digital signal, speech, and image processing and specialized architectures for numerical computations are also elaborated. Other topics include the model for analyzing generalized inter-processor, pipelined architecture for search tree maintenance, and specialized computer organization for raster

  2. Soliton-like solutions of a generalized variable-coefficient higher order nonlinear Schroedinger equation from inhomogeneous optical fibers with symbolic computation

    International Nuclear Information System (INIS)

    Li Juan; Zhang Haiqiang; Xu Tao; Zhang, Ya-Xing; Tian Bo

    2007-01-01

    For the long-distance communication and manufacturing problems of optical fibers, the propagation of subpicosecond or femtosecond optical pulses can be governed by the variable-coefficient nonlinear Schroedinger equation with higher-order effects, such as third-order dispersion, self-steepening and self-frequency shift. In this paper, we first determine the general conditions for this equation to be integrable by employing the Painlevé analysis. Based on the obtained 3 × 3 Lax pair, we construct the Darboux transformation for such a model under the corresponding constraints, and then derive the nth-iterated potential transformation formula by the iterative process of the Darboux transformation. Through the one- and two-soliton-like solutions, we graphically discuss the features of femtosecond solitons in inhomogeneous optical fibers
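    Although the soliton solutions above are obtained analytically via the Darboux transformation, their shape-preserving propagation is easy to verify numerically. A hedged sketch for the constant-coefficient focusing NLS (the simplest relative of the variable-coefficient higher-order model studied here) using the split-step Fourier method:

```python
import numpy as np

# Split-step Fourier propagation of the focusing NLS
#   i u_z + (1/2) u_tt + |u|^2 u = 0,
# whose fundamental soliton u(0,t) = sech(t) propagates without changing shape.
N, T = 1024, 40.0
t = np.linspace(-T / 2, T / 2, N, endpoint=False)
k = 2.0 * np.pi * np.fft.fftfreq(N, d=T / N)

u = 1.0 / np.cosh(t)                       # fundamental soliton profile
dz, steps = 0.005, 400                     # propagate to z = 2
half_linear = np.exp(-0.5j * k**2 * dz / 2.0)   # half-step dispersion factor

for _ in range(steps):                     # symmetric (Strang) splitting
    u = np.fft.ifft(half_linear * np.fft.fft(u))
    u *= np.exp(1j * np.abs(u)**2 * dz)    # nonlinear phase rotation
    u = np.fft.ifft(half_linear * np.fft.fft(u))

print(np.abs(u).max())    # stays ~1: the soliton is shape-preserving
```

For the variable-coefficient model of the paper, the dispersion and nonlinearity factors would simply acquire z-dependent coefficients.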

  3. Enhancing forensic science with spectroscopic imaging

    Science.gov (United States)

    Ricci, Camilla; Kazarian, Sergei G.

    2006-09-01

    This presentation outlines the research we are developing in the area of Fourier Transform Infrared (FTIR) spectroscopic imaging, with a focus on materials of forensic interest. FTIR spectroscopic imaging has recently emerged as a powerful tool for the characterisation of heterogeneous materials. FTIR imaging relies on the ability of the military-developed infrared array detector to simultaneously measure spectra from thousands of different locations in a sample. A recently developed application of FTIR imaging in ATR (attenuated total reflection) mode has demonstrated the ability of this method to achieve spatial resolution beyond the diffraction limit of infrared light in air. Chemical visualisation with enhanced spatial resolution in micro-ATR mode broadens the range of materials studied with FTIR imaging, with applications to pharmaceutical formulations and biological samples. Macro-ATR imaging has also been developed for chemical imaging analysis of large-surface-area samples and has been applied to analyse the surface of human skin (e.g. a finger), counterfeit tablets, textile materials (clothing), etc. This approach demonstrated the ability of the imaging method to detect trace materials attached to the surface of the skin. It may also prove to be a valuable tool for detecting traces of explosives left or trapped on the surfaces of different materials. This FTIR imaging method is substantially superior to many other imaging methods owing to the inherent chemical specificity of infrared spectroscopy and the fast acquisition times of the technique. Our preliminary data demonstrate that this methodology will provide a non-destructive detection method that could relate evidence to its source. This will be important in a wider crime prevention programme. In summary, the intrinsic chemical specificity and enhanced visualising capability of FTIR spectroscopic imaging open a window of opportunities for counter-terrorism and crime-fighting, with applications ranging

  4. Are your Spectroscopic Data Being Used?

    Science.gov (United States)

    Gordon, Iouli E.; Rothman, Laurence S.; Wilzewski, Jonas

    2014-06-01

    Spectroscopy is an established and indispensable tool in science, industry, agriculture, medicine, surveillance, etc. A potential user of spectral data that are not available in HITRAN or other databases searches the spectroscopy publications. After finding the desired publication, the user very often encounters the following problems: 1) They cannot find the data described in the paper. There can be many reasons for this: nothing is provided in the paper itself or its supplementary material; the authors do not respond to any requests; the web links provided in the paper have long been broken; etc. 2) The data are presented in a reduced form, for instance through fitted spectroscopic constants. While this is a long-standing practice among spectroscopists, it raises numerous serious problems, such as users getting different energy and intensity values because of different representations of the solution to the Hamiltonian, or even just despairing of trying to generate usable line lists from the published constants. Properly providing the data benefits not only users but also the authors of the spectroscopic research. We will show that this increases citations to the spectroscopy papers and the visibility of the research groups. We will also address the quite common situation in which researchers obtain data but do not feel that they have the time, interest or resources to write an article describing them. There are modern tools that would allow one to make these data available to potential users and still get credit for it. However, this is a worst-case-scenario recommendation, i.e., publishing the data in a peer-reviewed journal is still the preferred way. L. S. Rothman, I. E. Gordon, et al. "The HITRAN 2012 molecular spectroscopic database," JQSRT 130, 4-50 (2013).
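    The "generate usable line lists from the published constants" step that users struggle with can be made concrete. A hedged sketch for the simplest case, pure-rotation lines of a linear molecule rebuilt from fitted B and D constants (approximate, CO-like values, for illustration only):

```python
import numpy as np

# Rebuilding a pure-rotation line list from fitted constants, the step
# users of spectroscopy papers often have to carry out themselves.
# Linear-molecule term values: E(J) = B*J*(J+1) - D*(J*(J+1))**2
B = 1.92253       # cm^-1, approx. ground-state B0 of CO
D = 6.12e-6       # cm^-1, approx. centrifugal distortion constant

J = np.arange(0, 30)
E = B * J * (J + 1) - D * (J * (J + 1)) ** 2
lines = E[1:] - E[:-1]          # J -> J+1 transition frequencies

print(lines[:3])   # 2B(J+1) - 4D(J+1)^3 pattern: ~3.84, 7.69, 11.53 cm^-1
```

Even this simple case shows why reduced representations are fragile: a different effective Hamiltonian or constant definition changes the reconstructed list.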

  5. Vibrational spectroscopic study of fluticasone propionate

    Science.gov (United States)

    Ali, H. R. H.; Edwards, H. G. M.; Kendrick, J.; Scowen, I. J.

    2009-03-01

    Fluticasone propionate is a synthetic glucocorticoid with potent anti-inflammatory activity that has been used effectively in the treatment of chronic asthma. The present work reports a vibrational spectroscopic study of fluticasone propionate and gives proposed molecular assignments on the basis of ab initio calculations using BLYP density functional theory with a 6-31G* basis set and vibrational frequencies predicted within the quasi-harmonic approximation. Several spectral features and band intensities are explained. This study generated a library of information that can be employed to aid the process monitoring of fluticasone propionate.

  6. Nuclear data for geophysical spectroscopic logging

    International Nuclear Information System (INIS)

    Schweitzer, J.S.; Hertzog, R.C.; Soran, P.D.

    1987-01-01

    Nuclear geochemical analysis requires the quantitative measurement of the elemental concentrations of trace elements, as well as of major elements in widely varying concentrations. This requirement places extreme demands on the quality of the spectroscopic measurements and data rates, and on relating observed γ-ray intensities to the original elemental concentrations. The relationship between γ-ray intensities and elemental concentrations is critically dependent on the specific reaction cross sections and their uncertainties. The elements of highest priority for subsurface geochemical analysis are considered with respect to the importance of competing reactions and the neutron energy regions that are most significant. (author)

  7. Laser spectroscopic analysis in atmospheric pollution research

    CSIR Research Space (South Africa)

    Forbes, PBC

    2008-01-01

    Full Text Available Laser spectroscopic... Department and a CSIR National Laser Centre rental pool programme grant-holder, is involved in research into a novel method of monitoring atmospheric PAHs. The rental pool programme gives South African tertiary education institutions access to an array...

  8. Automated reliability assessment for spectroscopic redshift measurements

    Science.gov (United States)

    Jamal, S.; Le Brun, V.; Le Fèvre, O.; Vibert, D.; Schmitt, A.; Surace, C.; Copin, Y.; Garilli, B.; Moresco, M.; Pozzetti, L.

    2018-03-01

    Context. Future large-scale surveys, such as the ESA Euclid mission, will produce a large set of galaxy redshifts (≥106) that will require fully automated data-processing pipelines to analyze the data, extract crucial information and ensure that all requirements are met. A fundamental element in these pipelines is to associate to each galaxy redshift measurement a quality, or reliability, estimate. Aim. In this work, we introduce a new approach to automate the spectroscopic redshift reliability assessment based on machine learning (ML) and characteristics of the redshift probability density function. Methods: We propose to rephrase the spectroscopic redshift estimation into a Bayesian framework, in order to incorporate all sources of information and uncertainties related to the redshift estimation process and produce a redshift posterior probability density function (PDF). To automate the assessment of a reliability flag, we exploit key features in the redshift posterior PDF and machine learning algorithms. Results: As a working example, public data from the VIMOS VLT Deep Survey is exploited to present and test this new methodology. We first tried to reproduce the existing reliability flags using supervised classification in order to describe different types of redshift PDFs, but due to the subjective definition of these flags (classification accuracy 58%), we soon opted for a new homogeneous partitioning of the data into distinct clusters via unsupervised classification. After assessing the accuracy of the new clusters via resubstitution and test predictions (classification accuracy 98%), we projected unlabeled data from preliminary mock simulations for the Euclid space mission into this mapping to predict their redshift reliability labels. Conclusions: Through the development of a methodology in which a system can build its own experience to assess the quality of a parameter, we are able to set a preliminary basis of an automated reliability assessment for
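    The unsupervised partitioning described above can be sketched in miniature: summarize each redshift posterior PDF by a few descriptive features and cluster them. This is a hedged toy (a numpy-only 2-means on synthetic unimodal PDFs, not the paper's feature set or pipeline):

```python
import numpy as np

rng = np.random.default_rng(0)
z = np.linspace(0.0, 2.0, 400)

def gaussian_pdf(mu, sig):
    p = np.exp(-0.5 * ((z - mu) / sig) ** 2)
    return p / p.sum()

# Synthetic posteriors: sharp unimodal PDFs ("reliable") vs broad ones.
pdfs = [gaussian_pdf(rng.uniform(0.3, 1.7), 0.01) for _ in range(20)] + \
       [gaussian_pdf(rng.uniform(0.3, 1.7), 0.30) for _ in range(20)]

# Two descriptive features per PDF: peak height and Shannon entropy.
feats = np.array([[p.max(), -(p * np.log(p + 1e-12)).sum()] for p in pdfs])

# Minimal 2-means clustering (numpy only).
centroids = feats[[0, -1]].copy()          # one seed from each extreme
for _ in range(20):
    d = np.linalg.norm(feats[:, None, :] - centroids[None, :, :], axis=2)
    labels = d.argmin(axis=1)
    centroids = np.array([feats[labels == i].mean(axis=0) for i in range(2)])

print(labels)   # sharp and broad PDFs end up in different clusters
```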

  9. Optical properties of metals by spectroscopic ellipsometry

    International Nuclear Information System (INIS)

    Arakawa, E.T.; Inagaki, T.; Williams, M.W.

    1979-01-01

    The use of spectroscopic ellipsometry for the accurate determination of the optical properties of liquid and solid metals is discussed and illustrated with previously published data for Li and Na. New data on liquid Sn and Hg from 0.6 to 3.7 eV are presented. Liquid Sn is Drude-like. The optical properties of Hg deviate from the Drude expressions, but simultaneous measurements of reflectance and ellipsometric parameters yield consistent results with no evidence for vectorial surface effects
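    The Drude behavior referred to above follows from the free-electron dielectric function ε(E) = 1 − Ep²/(E(E + iγ)). A hedged sketch (generic plasma energy and damping, not fitted Sn or Hg values) of the resulting optical constants and normal-incidence reflectance over the quoted photon-energy range:

```python
import numpy as np

def drude_epsilon(E, Ep, gamma):
    """Free-electron (Drude) dielectric function at photon energy E (eV),
    with plasma energy Ep and damping gamma (both in eV)."""
    return 1.0 - Ep**2 / (E * (E + 1j * gamma))

# Illustrative parameters only (order of magnitude for a simple metal).
E = np.linspace(0.6, 3.7, 200)        # the photon-energy range quoted above
eps = drude_epsilon(E, Ep=8.0, gamma=0.1)
N = np.sqrt(eps)                      # complex refractive index n + ik
R = np.abs((N - 1) / (N + 1)) ** 2    # normal-incidence reflectance

print(R[0])    # a Drude metal is highly reflecting across this range
```

Ellipsometry measures the complex ratio of p- and s-polarized reflection coefficients, from which ε follows without a Kramers-Kronig analysis; the Drude curve above is what a free-electron liquid metal should reproduce.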

  10. Isospin asymmetry dependence of the α spectroscopic factor for heavy nuclei

    International Nuclear Information System (INIS)

    Seif, W. M.; Shalaby, M.; Alrakshy, M. F.

    2011-01-01

    Both the valence-nucleon (hole) and isospin-asymmetry dependencies of the preformation probability of an α cluster inside parent radioactive nuclei are investigated. The calculations are performed in the framework of the density-dependent cluster model of the α-decay process for even-even spherical parent nuclei with proton numbers around the closed shell Z0 = 82 and neutron numbers around the closed shells N0 = 82 and N0 = 126. The microscopic α-daughter nuclear interaction potential is calculated in the framework of the Hamiltonian energy-density approach based on the SLy4 Skyrme-like effective interaction. Calculations based on the realistic effective M3Y-Paris nucleon-nucleon force have also been used to confirm the results. The calculations then proceed to find the assault frequency and the α penetration probability within the WKB approximation. The half-lives of the mentioned α decays are then determined and used in turn to find the α spectroscopic factor. We found that the spectroscopic factor increases with increasing isospin asymmetry of the parent nuclei if they have valence protons and neutrons. When the parent nuclei have neutron or proton holes in addition to the valence protons or neutrons, the spectroscopic factor is found to decrease with increasing isospin asymmetry. The obtained results also show that the deduced spectroscopic factors follow individual linear behaviors as a function of the product of the valence proton (Np) and neutron (Nn) numbers. These linear dependencies are correlated with the closed-shell core (Z0, N0). The same individual linear behaviors are obtained as a function of the product of NpNn and the isospin asymmetry parameter, NpNnI. Moreover, the whole set of deduced spectroscopic factors is found to exhibit a nearly general linear trend as a function of NpNn/(Z0+N0).
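    The WKB penetration step mentioned above can be sketched numerically with a bare Coulomb barrier. This is a hedged toy: the paper itself uses microscopic SLy4/M3Y potentials, and the Po-212-like numbers here are only illustrative.

```python
import numpy as np

# WKB (Gamow) penetration probability through a pure Coulomb barrier.
hbar_c = 197.327                 # MeV fm
e2 = 1.43996                     # MeV fm  (e^2 / 4 pi eps0)
Q = 8.95                         # MeV, decay energy (Po-212-like)
Zd, Ad = 82, 208                 # daughter charge and mass numbers
mu = 4.0 * Ad / (4.0 + Ad) * 931.494   # alpha-daughter reduced mass, MeV/c^2

R_in = 1.2 * (Ad ** (1 / 3) + 4 ** (1 / 3))   # touching radius, fm
R_out = 2.0 * Zd * e2 / Q                     # outer turning point, fm

r = np.linspace(R_in, R_out, 20000)
V = 2.0 * Zd * e2 / r                         # Coulomb potential, MeV
integrand = np.sqrt(np.clip(2.0 * mu * (V - Q), 0.0, None)) / hbar_c

dr = r[1] - r[0]                              # trapezoidal Gamow integral
G = 2.0 * np.sum(0.5 * (integrand[:-1] + integrand[1:])) * dr
P = np.exp(-G)

print(G, P)   # a large Gamow factor: penetration is exponentially rare
```

Multiplying P by an assault frequency gives a decay width; comparing the resulting half-life with experiment is what yields the spectroscopic factor.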

  11. Evaluation of liver parenchyma and perfusion using dynamic contrast-enhanced computed tomography and contrast-enhanced ultrasonography in captive green iguanas (Iguana iguana) under general anesthesia.

    Science.gov (United States)

    Nardini, Giordano; Di Girolamo, Nicola; Leopardi, Stefania; Paganelli, Irene; Zaghini, Anna; Origgi, Francesco C; Vignoli, Massimo

    2014-05-13

    Contrast-enhanced diagnostic imaging techniques are considered useful in veterinary and human medicine to evaluate liver perfusion and focal hepatic lesions. Although hepatic diseases are a common occurrence in reptile medicine, there is no reference to the use of contrast-enhanced ultrasound (CEUS) and contrast-enhanced computed tomography (CECT) to evaluate the liver in lizards. Therefore, the aim of this study was to evaluate the pattern of change in echogenicity and attenuation of the liver in green iguanas (Iguana iguana) after administration of specific contrast media. An increase in liver echogenicity and density was evident during CEUS and CECT, respectively. In CEUS, the mean ± SD (median; range) peak enhancement was 19.9% ± 7.5 (18.3; 11.7-34.6). Time to peak enhancement was 134.0 ± 125.1 (68.4; 59.6-364.5) seconds. During CECT, first visualization of the contrast medium occurred at 3.6 ± 0.5 (4; 3-4) seconds in the aorta, 10.7 ± 2.2 (10.5; 7-14) seconds in the hepatic arteries, and 15 ± 4.5 (14.5; 10-24) seconds in the liver parenchyma. Time to peak was 14.1 ± 3.4 (13; 11-21) and 31 ± 9.6 (29; 23-45) seconds in the aorta and the liver parenchyma, respectively. CEUS and dynamic CECT are practical means of determining liver hemodynamics in green iguanas. The distribution of contrast medium in iguanas differed from that in mammals. Specific reference ranges of hepatic perfusion for diagnostic evaluation of the liver in iguanas are necessary, since the use of mammalian references may lead the clinician to formulate incorrect diagnostic suspicions.
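    Peak enhancement and time to peak, the quantities tabulated above, are read off a contrast time-intensity curve. A hedged toy using a gamma-variate bolus model with invented parameters (not the iguana data):

```python
import numpy as np

# Extracting peak enhancement (%) and time-to-peak (s) from a synthetic
# contrast time-intensity curve.
t = np.linspace(0.0, 120.0, 1201)            # s, 0.1 s sampling
baseline = 50.0                              # pre-contrast intensity

# Gamma-variate bolus model (illustrative parameters only):
alpha, beta, t0, A = 3.0, 8.0, 10.0, 0.01
tc = np.clip(t - t0, 0.0, None)              # time since bolus arrival
curve = baseline + A * tc**alpha * np.exp(-tc / beta)

peak_idx = curve.argmax()
time_to_peak = t[peak_idx]                                          # s
peak_enhancement = 100.0 * (curve[peak_idx] - baseline) / baseline  # %

print(time_to_peak, peak_enhancement)
```

The analytic peak of a gamma-variate lies at tc = alpha*beta (here 24 s after arrival, i.e. t = 34 s), which the discrete argmax recovers.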

  12. Computed tomography of the brain, hepatotoxic drugs and high alcohol consumption in male alcoholic patients and a random sample from the general male population

    Energy Technology Data Exchange (ETDEWEB)

    Muetzell, S. (Univ. Hospital of Uppsala (Sweden). Dept. of Family Medicine)

    1992-01-01

    Computed tomography (CT) of the brain was performed in a random sample of a total of 195 men and in 211 male alcoholic patients admitted for the first time during a period of two years from the same geographically limited area of Greater Stockholm as the sample. Laboratory tests were performed, including liver and pancreatic tests. Toxicological screening was performed, and the consumption of hepatotoxic drugs was also investigated. The groups were then subdivided with respect to alcohol consumption and use of hepatotoxic drugs: group IA, men from the random sample with low or moderate alcohol consumption and no use of hepatotoxic drugs; IB, men from the random sample with low or moderate alcohol consumption and use of hepatotoxic drugs; IIA, alcoholic inpatients with use of alcohol and no drugs; and IIB, alcoholic inpatients with use of alcohol and drugs. Group IIB was found to have a higher incidence of cortical and subcortical changes than group IA. Group IB had a higher incidence of subcortical changes than group IA, and the two differed only in drug use. Groups IIB and IIA also differed only in drug use, and IIB had a higher incidence of brain damage except for the anterior horn index and wide cerebellar sulci indicating vermian atrophy. Significantly higher serum levels of bilirubin, GGT, ASAT, ALAT, CK, LD, and amylase were found in IIB. The results indicate that drug use influences the incidence of cortical and subcortical aberrations, except for the anterior horn index. It is concluded that the groups with alcohol abuse who used hepatotoxic drugs showed a picture of cortical changes (wide transport sulci and clear-cut or high-grade cortical changes) and also of subcortical aberrations, expressed as an increased widening of the third ventricle.

  13. Computed tomography of the brain, hepatotoxic drugs and high alcohol consumption in male alcoholic patients and a random sample from the general male population

    International Nuclear Information System (INIS)

    Muetzell, S.

    1992-01-01

    Computed tomography (CT) of the brain was performed in a random sample of a total of 195 men and in 211 male alcoholic patients admitted for the first time during a period of two years from the same geographically limited area of Greater Stockholm as the sample. Laboratory tests were performed, including liver and pancreatic tests. Toxicological screening was performed, and the consumption of hepatotoxic drugs was also investigated. The groups were then subdivided with respect to alcohol consumption and use of hepatotoxic drugs: group IA, men from the random sample with low or moderate alcohol consumption and no use of hepatotoxic drugs; IB, men from the random sample with low or moderate alcohol consumption and use of hepatotoxic drugs; IIA, alcoholic inpatients with use of alcohol and no drugs; and IIB, alcoholic inpatients with use of alcohol and drugs. Group IIB was found to have a higher incidence of cortical and subcortical changes than group IA. Group IB had a higher incidence of subcortical changes than group IA, and the two differed only in drug use. Groups IIB and IIA also differed only in drug use, and IIB had a higher incidence of brain damage except for the anterior horn index and wide cerebellar sulci indicating vermian atrophy. Significantly higher serum levels of bilirubin, GGT, ASAT, ALAT, CK, LD, and amylase were found in IIB. The results indicate that drug use influences the incidence of cortical and subcortical aberrations, except for the anterior horn index. It is concluded that the groups with alcohol abuse who used hepatotoxic drugs showed a picture of cortical changes (wide transport sulci and clear-cut or high-grade cortical changes) and also of subcortical aberrations, expressed as an increased widening of the third ventricle

  14. Specialized computer architectures for computational aerodynamics

    Science.gov (United States)

    Stevenson, D. K.

    1978-01-01

    In recent years, computational fluid dynamics has made significant progress in modelling aerodynamic phenomena. Currently, one of the major barriers to future development lies in the compute-intensive nature of the numerical formulations and the relatively high cost of performing these computations on commercially available general-purpose computers, a cost that is high in terms of both dollar expenditure and elapsed time. Today's computing technology will support a program designed to create specialized computing facilities dedicated to the important problems of computational aerodynamics. One of the still unresolved questions is the organization of the computing components in such a facility. The characteristics of fluid dynamic problems which will have significant impact on the choice of computer architecture for a specialized facility are reviewed.

  15. TRAINING FUTURE TEACHERS OF COMPUTER SCIENCE FOR WORKING OUT TECHNOLOGICAL CARDS OF LESSONS IN THE CONDITIONS OF REALIZATION OF THE FEDERAL STATE EDUCATIONAL STANDARD FOR GENERAL EDUCATION

    Directory of Open Access Journals (Sweden)

    Екатерина Николаевна Кувшинова

    2017-12-01

    Full Text Available This article is devoted to the problem of the readiness of future informatics teachers to develop technological cards (flow charts) of lessons that reflect the main requirements of the Federal State Educational Standard (FGOS) for basic general education concerning the planning and organization of the educational process with regard to the system-activity approach in teaching. The content of the system-activity approach in teaching and of universal educational actions (UEA) is set out. The main units of the technological card of an informatics lesson are considered. The content block of the technological card of an informatics lesson is determined by the training material, which provides for the achievement of the planned subject results of instruction, as well as the formation and development of UEA, general educational skills, ICT competences, and competences in educational-research and project activities. Subject results of instruction are analyzed; these include the abilities specific to the subject, the kinds of activity for obtaining new knowledge within the subject and for its transformation and application in educational, educational-project and social-project situations, the formation of a scientific type of thinking, scientific ideas about key theories, types and kinds of relations, and command of scientific terminology, key concepts, methods and techniques [10]. Step-by-step preparation of future informatics teachers for the development of technological cards of lessons is discussed.

  16. Algebraic computing

    International Nuclear Information System (INIS)

    MacCallum, M.A.H.

    1990-01-01

    The implementation of a new computer algebra system is time consuming: designers of general purpose algebra systems usually say it takes about 50 man-years to create a mature and fully functional system. Hence the range of available systems and their capabilities changes little between one general relativity meeting and the next, despite which there have been significant changes in the period since the last report. The introductory remarks aim to give a brief survey of the capabilities of the principal available systems and to highlight one or two trends. A reference to the most recent full survey of computer algebra in relativity is given, together with brief descriptions of Maple, REDUCE, SHEEP and other applications. (author)

  17. Program MASTERCALC: an interactive computer program for radioanalytical computations. Description and operating instructions

    International Nuclear Information System (INIS)

    Goode, W.

    1980-10-01

    MASTERCALC is a computer program written to support radioanalytical computations in the Los Alamos Scientific Laboratory (LASL) Environmental Surveillance Group. Included in the program are routines for gross alpha and beta, ³H, gross gamma, ⁹⁰Sr and alpha spectroscopic determinations. A description of MASTERCALC is presented and its source listing is included. Operating instructions and example computing sessions are given for each type of analysis.
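Gross-count determinations of the kind MASTERCALC automates follow standard net-count-rate arithmetic. A minimal sketch of that calculation — this is an illustration of the generic method, not MASTERCALC's actual source, and the parameter values in the example are hypothetical:

```python
def net_activity_pci(gross_counts, bkg_counts, count_time_min, efficiency):
    """Convert gross counts to activity in picocuries.

    Assumes gross and background counts were accumulated over the same
    counting interval. Net cpm -> dpm via detector efficiency, then
    dpm -> pCi (1 pCi = 2.22 disintegrations per minute).
    """
    net_cpm = (gross_counts - bkg_counts) / count_time_min
    dpm = net_cpm / efficiency
    return dpm / 2.22
```

For example, 1220 gross counts against 110 background counts over 100 minutes at 25% efficiency corresponds to 20 pCi.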

  18. Genome-Wide Study of Percent Emphysema on Computed Tomography in the General Population. The Multi-Ethnic Study of Atherosclerosis Lung/SNP Health Association Resource Study

    Science.gov (United States)

    Manichaikul, Ani; Hoffman, Eric A.; Smolonska, Joanna; Gao, Wei; Cho, Michael H.; Baumhauer, Heather; Budoff, Matthew; Austin, John H. M.; Washko, George R.; Carr, J. Jeffrey; Kaufman, Joel D.; Pottinger, Tess; Powell, Charles A.; Wijmenga, Cisca; Zanen, Pieter; Groen, Harry J. M.; Postma, Dirkje S.; Wanner, Adam; Rouhani, Farshid N.; Brantly, Mark L.; Powell, Rhea; Smith, Benjamin M.; Rabinowitz, Dan; Raffel, Leslie J.; Hinckley Stukovsky, Karen D.; Crapo, James D.; Beaty, Terri H.; Hokanson, John E.; Silverman, Edwin K.; Dupuis, Josée; O’Connor, George T.; Boezen, H. Marike; Rich, Stephen S.

    2014-01-01

    Rationale: Pulmonary emphysema overlaps partially with spirometrically defined chronic obstructive pulmonary disease and is heritable, with moderately high familial clustering. Objectives: To complete a genome-wide association study (GWAS) for the percentage of emphysema-like lung on computed tomography in the Multi-Ethnic Study of Atherosclerosis (MESA) Lung/SNP Health Association Resource (SHARe) Study, a large, population-based cohort in the United States. Methods: We determined percent emphysema and upper-lower lobe ratio in emphysema, defined by lung regions less than −950 HU, on cardiac scans. Genetic analyses were reported combined across four race/ethnic groups: non-Hispanic white (n = 2,587), African American (n = 2,510), Hispanic (n = 2,113), and Chinese (n = 704), and stratified by race and ethnicity. Measurements and Main Results: Among 7,914 participants, we identified regions at genome-wide significance for percent emphysema in or near SNRPF (rs7957346; P = 2.2 × 10⁻⁸) and PPT2 (rs10947233; P = 3.2 × 10⁻⁸), both of which replicated in an additional 6,023 individuals of European ancestry. Both single-nucleotide polymorphisms were previously implicated as genes influencing lung function, and analyses including lung function revealed independent associations for percent emphysema. Among Hispanics, we identified a genetic locus for upper-lower lobe ratio near the α-mannosidase–related gene MAN2B1 (rs10411619; P = 1.1 × 10⁻⁹; minor allele frequency [MAF], 4.4%). Among Chinese, we identified single-nucleotide polymorphisms associated with upper-lower lobe ratio near DHX15 (rs7698250; P = 1.8 × 10⁻¹⁰; MAF, 2.7%) and MGAT5B (rs7221059; P = 2.7 × 10⁻⁸; MAF, 2.6%), which acts on α-linked mannose. Among African Americans, a locus near a third α-mannosidase–related gene, MAN1C1 (rs12130495; P = 9.9 × 10⁻⁶; MAF, 13.3%), was associated with percent emphysema. Conclusions: Our results suggest that some genes previously identified as
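The percent-emphysema metric used here is a simple voxel fraction: the percentage of lung voxels attenuating below −950 HU. A minimal sketch of the computation, assuming the lung voxels have already been segmented out of the scan:

```python
import numpy as np

def percent_emphysema(lung_hu, threshold=-950):
    """Percentage of lung voxels below the HU threshold (emphysema-like lung).

    `lung_hu` is any array of Hounsfield-unit values for segmented lung tissue.
    """
    hu = np.asarray(lung_hu)
    return 100.0 * np.count_nonzero(hu < threshold) / hu.size
```

For instance, if two of four voxels fall below −950 HU, the function returns 50.0.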

  19. ORCODE.77: a computer routine to control a nuclear physics experiment by a PDP-15 + CAMAC system, written in assembler language and including many new routines of general interest

    International Nuclear Information System (INIS)

    Dickens, J.K.; McConnell, J.W.

    1977-01-01

    ORCODE.77 is a versatile data-handling computer routine written in MACRO (assembler) language for a PDP-15 computer with EAE (extended arithmetic capability) connected to a CAMAC interface. The interrupt feature of the computer is utilized. Although the code is oriented toward a specific experimental problem, there are many routines of general interest, including a CAMAC scaler handler, an executive routine to interpret and act upon three-character teletype commands, concise routines to type out double-precision integers (both octal and decimal) and floating-point numbers and to read in integers and floating-point numbers, a routine to convert to and from PDP-15 FORTRAN-IV floating-point format, a routine to handle clock interrupts, and our own DECTAPE handling routine. Routines written for specific applications but readily adaptable to similar problems include a display routine using CAMAC instructions, control of external mechanical equipment using CAMAC instructions, storage of data from an analog-to-digital converter, analysis of stored data into time-dependent pulse-height spectra, and a routine to read the contents of a Nuclear Data 5050 analyzer and to prepare DECTAPE output of these data for subsequent analysis by a code written in PDP-15-compiled FORTRAN-IV.

  20. Development of laser atomic spectroscopic technology

    International Nuclear Information System (INIS)

    Lee, Jong Min; Ohr, Young Gie; Cha, Hyung Ki

    1990-06-01

    Some preliminary results on the resonant ionization spectroscopy of Na and Pb atoms are presented, both in theory and in experiment. A single-color multiphoton ionization process is theoretically analysed in detail for the resonant and non-resonant cases, and several parameters determining the overall ionization rate are summarized. In particular, the AC Stark shift, the line width and the non-linear coefficient of the ionization rate are recalculated using perturbation theory in the resolvent approach. On the experimental side, the fundamental equipment for spectroscopic experiments has been designed and manufactured, including a Nd:YAG laser, a GIM-type dye laser, a vacuum system, ionization cells, a heat pipe oven, and an ion-current measuring system. The characteristics of this equipment have also been examined. Using the available spectroscopic data, several ionization schemes are considered and their relative merits for ionization are discussed. Moreover, the effects of buffer gas pressure, laser intensity, vapor density and electrode voltage have been investigated in detail. The experiments will be extended to multi-color processes with several resonances; the ultimate goal is to develop an ultrasensitive analytical method for polluting heavy-metal atoms using resonant ionization spectroscopy. (author)
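The non-linear coefficient of the ionization rate mentioned above is, in practice, the slope of ion signal versus laser intensity on a log-log scale: for an n-photon process the signal scales as S ∝ Iⁿ. A minimal, illustrative estimator of that order (not the authors' analysis code):

```python
import numpy as np

def photon_order(intensities, ion_signals):
    """Effective photon number n in S ∝ I**n, from the slope of
    log(signal) versus log(intensity) in a least-squares linear fit."""
    slope, _intercept = np.polyfit(np.log(intensities),
                                   np.log(ion_signals), 1)
    return slope
```

A perfectly cubic signal (three-photon ionization) yields a slope of 3.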

  1. EPSILON AURIGAE: AN IMPROVED SPECTROSCOPIC ORBITAL SOLUTION

    International Nuclear Information System (INIS)

    Stefanik, Robert P.; Torres, Guillermo; Lovegrove, Justin; Latham, David W.; Zajac, Joseph; Pera, Vivian E.; Mazeh, Tsevi

    2010-01-01

    A rare eclipse of the mysterious object ε Aurigae will occur in 2009-2011. We report an updated single-lined spectroscopic solution for the orbit of the primary star based on 20 years of monitoring at the CfA, combined with historical velocity observations dating back to 1897. There are 518 new CfA observations obtained between 1989 and 2009. Two solutions are presented. One uses the velocities outside the eclipse phases together with mid-times of previous eclipses, from photometry dating back to 1842, which provide the strongest constraint on the ephemeris. This yields a period of 9896.0 ± 1.6 days (27.0938 ± 0.0044 years) with a velocity semi-amplitude of 13.84 ± 0.23 km s⁻¹ and an eccentricity of 0.227 ± 0.011. The middle of the current ongoing eclipse predicted by this combined fit is JD 2,455,413.8 ± 4.8, corresponding to 2010 August 5. If we use only the radial velocities, we find that the predicted middle of the current eclipse is nine months earlier. This would imply that the gravitating companion is not the same as the eclipsing object. Alternatively, the purely spectroscopic solution may be biased by perturbations in the velocities due to the short-period oscillations of the supergiant.
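The quoted elements (P = 9896.0 d, K = 13.84 km/s, e = 0.227) define a standard Keplerian radial-velocity curve, V(t) = γ + K[cos(ν + ω) + e cos ω]. A sketch of evaluating such a curve is below; the periastron time T0, longitude of periastron ω, and systemic velocity γ used in the example are placeholders, not the published values:

```python
import math

def kepler_E(M, e, tol=1e-12):
    """Solve Kepler's equation E - e*sin(E) = M by Newton iteration."""
    E = M
    for _ in range(100):
        dE = (E - e * math.sin(E) - M) / (1.0 - e * math.cos(E))
        E -= dE
        if abs(dE) < tol:
            break
    return E

def radial_velocity(t, P, T0, e, omega, K, gamma):
    """Keplerian radial velocity at time t (same velocity units as K)."""
    M = 2.0 * math.pi * ((t - T0) % P) / P      # mean anomaly
    E = kepler_E(M, e)                          # eccentric anomaly
    # true anomaly from eccentric anomaly
    nu = 2.0 * math.atan2(math.sqrt(1 + e) * math.sin(E / 2),
                          math.sqrt(1 - e) * math.cos(E / 2))
    return gamma + K * (math.cos(nu + omega) + e * math.cos(omega))
```

At periastron with ω = 0 and γ = 0 the velocity reaches its extremum K(1 + e); Newton's iteration converges in a handful of steps at this modest eccentricity.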

  2. Spectroscopic studies of pulsed-power plasmas

    International Nuclear Information System (INIS)

    Maron, Y.; Arad, R.; Dadusc, G.; Davara, G.; Duvall, R.E.; Fisher, V.; Foord, M.E.; Fruchtman, A.; Gregorian, L.; Krasik, Ya.

    1993-01-01

    Recently developed spectroscopic diagnostic techniques are used to investigate the plasma behavior in a Magnetically Insulated Ion Diode, a Plasma Opening Switch, and a gas-puffed Z-pinch. Measurements with relatively high spectral, temporal, and spatial resolutions are performed. The particle velocity and density distributions within a few tens of microns from the dielectric-anode surface are observed using laser spectroscopy. Collective fluctuating electric fields in the plasma are inferred from anisotropic Stark broadening. For the Plasma Opening Switch experiment, a novel gaseous plasma source was developed which is mounted inside the high-voltage inner conductor. The properties of this source, together with spectroscopic observations of the electron density and particle velocities of the injected plasma, are described. Emission line intensities and spectral profiles give the electron kinetic energies during the switch operation and the ion velocity distributions. Secondary plasma ejection from the electrodes is also studied. In the Z-pinch experiment, spectral emission-line profiles are studied during the implosion phase. Doppler line shifts and widths yield the radial velocity distributions for various charge states in various regions of the plasma. Effects of plasma ejection from the cathode are also studied.

  3. Spectroscopic enhancement in nanoparticles embedded glasses

    Energy Technology Data Exchange (ETDEWEB)

    Sahar, M. R., E-mail: mrahim057@gmail.com; Ghoshal, S. K., E-mail: mrahim057@gmail.com [Advanced Optical Material Research Group, Department of Physics, Faculty of Science, Universiti Teknologi Malaysia, 81310, Skudai, Johor Bahru, Johor (Malaysia)

    2014-09-25

    This presentation provides an overview of recent progress in enhancing the spectroscopic characteristics of glasses embedded with nanoparticles (NPs). Some of our research activities, together with a few significant new results, are highlighted and analyzed. The science and technology of manipulating the physical properties of rare-earth-doped inorganic glasses by embedding metallic NPs or nanoclusters produces the so-called 'nanoglass'. Spectroscopic enhancement here refers to the increase in luminescence intensity measured at a given transition. The enhancement is attributed to the 'plasmonic wave' (the coherent coupling of photons to free-electron oscillations, called a plasmon) that occurs at the interface between a conductor and a dielectric. Plasmonics, an emerging concept in advanced optical materials and nanophotonics, gives this material the ability to exploit the optical response at the nanoscale and has opened up a new avenue in metal-based glass optics. A vast array of plasmonic NP concepts is yet to be explored, with applications spanning solar cells, (bio)sensing, communications, lasers, solid-state lighting, waveguides, imaging, optical data transfer, displays and even biomedicine. Localized surface plasmon resonance (LSPR) can enhance the optical response of nanoglass by orders of magnitude, as observed. Luminescence enhancement and surface-enhanced Raman scattering (SERS) are a new paradigm of research. The enhancement of luminescence due to the influence of metallic NPs is the recurring theme of this paper.

  4. Obtaining the Electron Angular Momentum Coupling Spectroscopic Terms, jj

    Science.gov (United States)

    Orofino, Hugo; Faria, Roberto B.

    2010-01-01

    A systematic procedure is developed to obtain the electron angular momentum coupling (jj) spectroscopic terms, which is based on building microstates in which each individual electron is placed in a different m_j "orbital". This approach is similar to that used to obtain the spectroscopic terms under the Russell-Saunders (LS) coupling…
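The microstate-building procedure described can be automated directly: enumerate the Pauli-allowed sets of distinct m_j values, tally the total M_J of each microstate, and peel off complete multiplets from the largest M_J downward. A sketch for n equivalent electrons of angular momentum j, working in doubled integers so half-integer m_j values stay exact (this is our own illustration, not the authors' procedure verbatim):

```python
from collections import Counter
from itertools import combinations

def jj_terms_equivalent(two_j, n):
    """Allowed total J for n equivalent electrons with j = two_j / 2.

    Returns a sorted list of doubled J values (2J), so half-integer
    results remain exact integers. Pauli exclusion for equivalent
    electrons is enforced by choosing *distinct* m_j values.
    """
    two_mjs = range(-two_j, two_j + 1, 2)                   # doubled m_j values
    tally = Counter(sum(c) for c in combinations(two_mjs, n))  # M_J tally
    terms = []
    while tally:
        two_J = max(tally)                                  # top surviving M_J
        terms.append(two_J)
        for two_M in range(-two_J, two_J + 1, 2):           # remove one multiplet
            tally[two_M] -= 1
            if tally[two_M] == 0:
                del tally[two_M]
    return sorted(terms)
```

For two equivalent j = 3/2 electrons this yields J = 0, 2 (returned as [0, 4]), and for two equivalent j = 5/2 electrons J = 0, 2, 4, matching the standard jj-coupling results.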

  5. Iterative estimation of the background in noisy spectroscopic data

    International Nuclear Information System (INIS)

    Zhu, M.H.; Liu, L.G.; Cheng, Y.S.; Dong, T.K.; You, Z.; Xu, A.A.

    2009-01-01

    In this paper, we present an iterative filtering method to estimate the background of noisy spectroscopic data. The proposed method avoids the calculation of the average full width at half maximum (FWHM) of the whole spectrum and the peak regions, and it can estimate the background efficiently, especially for spectroscopic data with the Compton continuum.
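The idea of iteratively filtering a spectrum toward its smooth background can be illustrated with a minimal SNIP-style peak-clipping loop. This is a generic sketch of the family of methods, not the authors' specific filter:

```python
import numpy as np

def iterative_background(spectrum, iterations=30):
    """Estimate a smooth background under spectral peaks.

    At each iteration p, every channel is replaced by the smaller of its
    current value and the mean of its neighbours at distance p, so peaks
    are progressively clipped while the baseline is preserved.
    """
    bkg = np.asarray(spectrum, dtype=float).copy()
    for p in range(1, iterations + 1):
        avg = 0.5 * (np.roll(bkg, p) + np.roll(bkg, -p))
        avg[:p] = bkg[:p]          # leave the edges untouched where the
        avg[-p:] = bkg[-p:]        # comparison window runs off the array
        bkg = np.minimum(bkg, avg)
    return bkg
```

On a flat baseline with a superimposed peak, the estimate recovers the baseline under the peak while leaving peak-free channels unchanged.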

  6. Optical constants of graphene measured by spectroscopic ellipsometry

    NARCIS (Netherlands)

    Weber, J.W.; Calado, V.E.; Van de Sanden, M.C.M.

    2010-01-01

    A mechanically exfoliated graphene flake (~150×380 µm2) on a silicon wafer with 98 nm silicon dioxide on top was scanned with a spectroscopic ellipsometer with a focused spot (~100×55 µm2) at an angle of 55°. The spectroscopic ellipsometric data were analyzed with an optical model in which the

  7. Optical constants of graphene measured by spectroscopic ellipsometry

    NARCIS (Netherlands)

    Weber, J.W.; Calado, V.E.; Sanden, van de M.C.M.

    2010-01-01

    A mechanically exfoliated graphene flake ( ~ 150×380 µm2) on a silicon wafer with 98 nm silicon dioxide on top was scanned with a spectroscopic ellipsometer with a focused spot ( ~ 100×55 µm2) at an angle of 55°. The spectroscopic ellipsometric data were analyzed with an optical model in which the

  8. Fundamental spectroscopic studies of carbenes and hydrocarbon radicals

    Energy Technology Data Exchange (ETDEWEB)

    Gottlieb, C.A.; Thaddeus, P. [Harvard Univ., Cambridge, MA (United States)

    1993-12-01

    Highly reactive carbenes and carbon-chain radicals are studied at millimeter wavelengths by observing their rotational spectra. The purpose is to provide definitive spectroscopic identification, accurate spectroscopic constants in the lowest vibrational states, and reliable structures of the key intermediates in reactions leading to aromatic hydrocarbons and soot particles in combustion.

  9. Computer algebra applications

    International Nuclear Information System (INIS)

    Calmet, J.

    1982-01-01

    A survey of applications based either on fundamental algorithms in computer algebra or on the use of a computer algebra system is presented. Recent work in biology, chemistry, physics, mathematics and computer science is discussed. In particular, applications in high energy physics (quantum electrodynamics), celestial mechanics and general relativity are reviewed. (Auth.)

  10. Volunteer Computing for Science Gateways

    OpenAIRE

    Anderson, David

    2017-01-01

    This poster offers information about volunteer computing for science gateways that provide high-throughput computing services. Volunteer computing can be used to obtain computing power at little cost while increasing the visibility of the gateway to the general public.

  11. Theory of computation

    CERN Document Server

    Tourlakis, George

    2012-01-01

    Learn the skills and acquire the intuition to assess the theoretical limitations of computer programming Offering an accessible approach to the topic, Theory of Computation focuses on the metatheory of computing and the theoretical boundaries between what various computational models can do and not do—from the most general model, the URM (Unbounded Register Machines), to the finite automaton. A wealth of programming-like examples and easy-to-follow explanations build the general theory gradually, which guides readers through the modeling and mathematical analysis of computational pheno

  12. Spectroscopic Needs for Imaging Dark Energy Experiments

    International Nuclear Information System (INIS)

    Newman, Jeffrey A.; Abate, Alexandra; Abdalla, Filipe B.; Allam, Sahar; Allen, Steven W.; Ansari, Reza; Bailey, Stephen; Barkhouse, Wayne A.; Beers, Timothy C.; Blanton, Michael R.; Brodwin, Mark; Brownstein, Joel R.; Brunner, Robert J.; Carrasco-Kind, Matias; Cervantes-Cota, Jorge; Chisari, Nora Elisa; Colless, Matthew; Coupon, Jean; Cunha, Carlos E.; Frye, Brenda L.; Gawiser, Eric J.; Gehrels, Neil; Grady, Kevin; Hagen, Alex; Hall, Patrick B.; Hearin, Andrew P.; Hildebrandt, Hendrik; Hirata, Christopher M.; Ho, Shirley; Huterer, Dragan; Ivezic, Zeljko; Kneib, Jean-Paul; Kruk, Jeffrey W.; Lahav, Ofer; Mandelbaum, Rachel; Matthews, Daniel J.; Miquel, Ramon; Moniez, Marc; Moos, H. W.; Moustakas, John; Papovich, Casey; Peacock, John A.; Rhodes, Jason; Ricol, Jean-Stepane; Sadeh, Iftach; Schmidt, Samuel J.; Stern, Daniel K.; Tyson, J. Anthony; Von der Linden, Anja; Wechsler, Risa H.; Wood-Vasey, W. M.; Zentner, A.

    2015-01-01

    Ongoing and near-future imaging-based dark energy experiments are critically dependent upon photometric redshifts (a.k.a. photo-z's): i.e., estimates of the redshifts of objects based only on flux information obtained through broad filters. Higher-quality, lower-scatter photo-z's will result in smaller random errors on cosmological parameters, while systematic errors in photometric redshift estimates, if not constrained, may dominate all other uncertainties from these experiments. The desired optimization and calibration is dependent upon spectroscopic measurements for secure redshift information; this is the key application of galaxy spectroscopy for imaging-based dark energy experiments. Hence, to achieve their full potential, imaging-based experiments will require large sets of objects with spectroscopically determined redshifts, for two purposes. Training: Objects with known redshift are needed to map out the relationship between object color and z (or, equivalently, to determine empirically calibrated templates describing the rest-frame spectra of the full range of galaxies, which may be used to predict the color-z relation). The ultimate goal of training is to minimize each moment of the distribution of differences between photometric redshift estimates and the true redshifts of objects, making the relationship between them as tight as possible. The larger and more complete our "training set" of spectroscopic redshifts is, the smaller the RMS photo-z errors should be, increasing the constraining power of imaging experiments. Requirements: Spectroscopic redshift measurements for ∼30,000 objects over >∼15 widely separated regions, each at least ∼20 arcmin in diameter, and reaching the faintest objects used in a given experiment, will likely be necessary if photometric redshifts are to be trained and calibrated with conventional techniques. Larger, more complete samples (i.e., with longer exposure times) can improve photo
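The "training" role of spectroscopic redshifts can be illustrated with a deliberately simplified empirical estimator: predict a galaxy's redshift as the mean spectroscopic z of its nearest neighbours in colour space. This toy nearest-neighbour scheme stands in for the template-fitting and machine-learning methods actually used by such experiments:

```python
import numpy as np

def knn_photoz(train_colors, train_z, query_colors, k=5):
    """Photo-z estimate: mean spectroscopic z of the k nearest
    training-set galaxies in colour space (Euclidean distance)."""
    d2 = ((query_colors[:, None, :] - train_colors[None, :, :]) ** 2).sum(axis=-1)
    nearest = np.argsort(d2, axis=1)[:, :k]
    return train_z[nearest].mean(axis=1)
```

With a monotone synthetic colour-redshift relation, a query galaxy's estimate lands at its true redshift; with real data, the scatter of such estimates is exactly the RMS photo-z error that larger training sets shrink.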

  13. Thermal, spectroscopic, and ab initio structural characterization of carprofen polymorphs.

    Science.gov (United States)

    Bruni, Giovanna; Gozzo, Fabia; Capsoni, Doretta; Bini, Marcella; Macchi, Piero; Simoncic, Petra; Berbenni, Vittorio; Milanese, Chiara; Girella, Alessandro; Ferrari, Stefania; Marini, Amedeo

    2011-06-01

    Commercial and recrystallized polycrystalline samples of carprofen, a nonsteroidal anti-inflammatory drug, were studied by thermal, spectroscopic, and structural techniques. Our investigations demonstrated that the recrystallized sample, stable at room temperature (RT), is a single polymorphic form of carprofen (polymorph I) that undergoes an isostructural polymorphic transformation on heating (polymorph II). Polymorph II then remains metastable at ambient conditions. The commercial sample is instead a mixture of polymorphs I and II. The thermodynamic relationships between the two polymorphs were determined through the construction of an energy/temperature diagram. The ab initio structural determination performed on synchrotron X-ray powder diffraction patterns recorded at RT on both polymorphs allowed us to elucidate, for the first time, their crystal structures. Both crystallize in the monoclinic space group P2(1)/c, and the unit-cell similarity index and the volumetric isostructurality index indicate that the temperature-induced polymorphic transformation I → II is isostructural. Polymorphs I and II are conformational polymorphs, sharing a very similar hydrogen-bond network but with different conformations of the propanoic skeleton, which produce two different packings. The small conformational change agrees with the low transition enthalpy obtained by differential scanning calorimetry measurements and the small internal-energy difference computed with density functional methods. Copyright © 2011 Wiley-Liss, Inc.

  14. Nonplanar property study of antifungal agent tolnaftate-spectroscopic approach

    Science.gov (United States)

    Arul Dhas, D.; Hubert Joe, I.; Roy, S. D. D.; Balachandran, S.

    2011-09-01

    Vibrational analysis of the thionocarbamate fungicide tolnaftate, an antidermatophytic, antitrichophytic and antimycotic agent that primarily inhibits ergosterol biosynthesis in the fungus, was carried out using NIR FT-Raman and FTIR spectroscopic techniques. The equilibrium geometry, various bonding features, harmonic vibrational wavenumbers and torsional potential energy surface (PES) scans have been computed using the density functional theory method. The detailed interpretation of the vibrational spectra has been carried out with the aid of the VEDA.4 program. Vibrational spectra, natural bond orbital (NBO) analysis and the optimized molecular structure show clear evidence for electronic interaction of the thionocarbamate group with the aromatic ring. The electronic absorption spectrum predicted by TD-DFT calculation has been compared with the UV-vis spectrum. Mulliken population analysis of atomic charges and the HOMO-LUMO energies were also calculated. Vibrational analysis reveals that the simultaneous IR and Raman activation of the C-C stretching mode in the phenyl and naphthalene rings provides evidence for the charge-transfer interaction between the donor and acceptor groups and is responsible for its bioactivity as a fungicide.

  15. Vibrational Spectroscopic Studies of Tenofovir Using Density Functional Theory Method

    Directory of Open Access Journals (Sweden)

    G. R. Ramkumaar

    2013-01-01

    Full Text Available A systematic vibrational spectroscopic assignment and analysis of tenofovir has been carried out by using FTIR and FT-Raman spectral data. The vibrational analysis was aided by electronic structure calculations with hybrid density functional methods (B3LYP/6-311++G(d,p), B3LYP/6-31G(d,p), and B3PW91/6-31G(d,p)). Molecular equilibrium geometries, electronic energies, IR intensities, and harmonic vibrational frequencies have been computed. The assignments proposed on the basis of the experimental IR and Raman spectra have been reviewed, and a complete assignment of the observed spectra is proposed. The UV-visible spectrum of the compound was also recorded, and electronic properties such as the HOMO and LUMO energies were determined by the time-dependent DFT (TD-DFT) method. The geometrical and thermodynamical parameters and absorption wavelengths were compared with the experimental data. NMR calculations at the B3LYP/6-311++G(d,p), B3LYP/6-31G(d,p), and B3PW91/6-31G(d,p) levels were also performed and used to assign the 13C and 1H NMR chemical shifts of tenofovir.

  16. The limit of detection for explosives in spectroscopic differential reflectometry

    Science.gov (United States)

    Dubroca, Thierry; Vishwanathan, Karthik; Hummel, Rolf E.

    2011-05-01

    In the wake of recent terrorist attacks, such as the 2008 Mumbai hotel explosion or the December 25th, 2009 "underwear bomber", our group has developed a technique (US patent #7368292) that applies differential reflection spectroscopy to detect traces of explosives. Briefly, light (200-500 nm) is shone on a surface such as a piece of luggage at an airport. Upon reflection, the light is collected with a spectrometer combined with a CCD camera. A computer processes the data and produces a differential reflection spectrum involving two adjacent areas of the surface. This differential technique is highly sensitive and provides spectroscopic data of explosives. As an example, 2,4,6-trinitrotoluene (TNT) displays strong and distinct features in differential reflectograms near 420 nm. Similar, but distinctly different, features are observed for other explosives. One of the most important criteria for explosive detection techniques is the limit of detection, defined as the amount of explosive material necessary to produce a signal-to-noise ratio of three. We present here a method to evaluate the limit of detection of our technique, together with the sample preparation method and experimental set-up specifically developed to measure it. The resulting limit ranges from 100 nanograms to 50 micrograms depending on the method and the set-up parameters used, such as the detector-sample distance.
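The differential reflectogram the computer produces is, in the usual formulation of differential reflection spectroscopy, the reflectance difference of the two adjacent areas normalized by their mean, ΔR/R̄. A minimal sketch of that step (the normalization convention is the standard one for the technique, assumed rather than quoted from the patent):

```python
import numpy as np

def differential_reflectogram(r_area1, r_area2):
    """Per-wavelength normalized differential reflectance:
    ΔR/R̄ = (R1 - R2) / ((R1 + R2) / 2)."""
    r1 = np.asarray(r_area1, dtype=float)
    r2 = np.asarray(r_area2, dtype=float)
    return (r1 - r2) / (0.5 * (r1 + r2))
```

Because the two areas are adjacent, broadband illumination and detector response cancel in the ratio, leaving only the spectral signature of whatever differs between the two spots — e.g. a TNT feature near 420 nm.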

  17. A global fitting code for multichordal neutral beam spectroscopic data

    International Nuclear Information System (INIS)

    Seraydarian, R.P.; Burrell, K.H.; Groebner, R.J.

    1992-05-01

    Knowledge of the heat deposition profile is crucial to all transport analysis of beam-heated discharges. The heat deposition profile can be inferred from the fast-ion birth profile which, in turn, is directly related to the loss of neutral atoms from the beam. This loss can be measured spectroscopically by the decrease in amplitude of spectral emissions from the beam as it penetrates the plasma. The spectra are complicated by the motional Stark effect, which produces a manifold of nine bright peaks for each of the three beam energy components. A code has been written to analyze this kind of data. In the first phase of this work, spectra from tokamak shots are fit with a Stark-splitting and Doppler-shift model that ties together the geometry of several spatial positions when they are fit simultaneously. In the second phase, a relative position-to-position intensity calibration will be applied to these results to obtain the spectral amplitudes from which beam atom loss can be estimated. This paper reports on the computer code for the first phase. Sample fits to real tokamak spectral data are shown.

  18. THE SPECTROSCOPIC DIVERSITY OF TYPE Ia SUPERNOVAE

    International Nuclear Information System (INIS)

    Blondin, S.; Matheson, T.; Kirshner, R. P.; Mandel, K. S.; Challis, P.; Berlind, P.; Calkins, M.; Garnavich, P. M.; Jha, S. W.; Modjaz, M.; Riess, A. G.; Schmidt, B. P.

    2012-01-01

    We present 2603 spectra of 462 nearby Type Ia supernovae (SNe Ia), including 2065 previously unpublished spectra, obtained during 1993-2008 through the Center for Astrophysics Supernova Program. There are on average eight spectra for each of the 313 SNe Ia with at least two spectra. Most of the spectra were obtained with the FAST spectrograph at the Fred Lawrence Whipple Observatory 1.5 m telescope and reduced in a consistent manner, making this data set well suited for studies of SN Ia spectroscopic diversity. Using additional data from the literature, we study the spectroscopic and photometric properties of SNe Ia as a function of spectroscopic class using the classification schemes of Branch et al. and Wang et al. The width-luminosity relation appears to be steeper for SNe Ia with broader lines, although the result is not statistically significant with the present sample. Based on the evolution of the characteristic Si II λ6355 line, we propose improved methods for measuring velocity gradients, revealing a larger range than previously suspected, from ∼0 to ∼400 km s⁻¹ day⁻¹ considering the instantaneous velocity decline rate at maximum light. We find a weaker and less significant correlation between Si II velocity and intrinsic B−V color at maximum light than reported by Foley et al., owing to a more comprehensive treatment of uncertainties and host galaxy dust. We study the extent of nuclear burning and the presence of unburnt carbon in the outermost layers of the ejecta and report new detections of C II λ6580 in 23 early-time SN Ia spectra. The frequency of C II detections is not higher in SNe Ia with bluer colors or narrower light curves, in conflict with the recent results of Thomas et al. Based on nebular spectra of 27 SNe Ia, we find no relation between the FWHM of the iron emission feature at ∼4700 Å and Δm₁₅(B) after removing the two low-luminosity events SN 1986G and SN 1991bg, suggesting that the peak luminosity is not strongly dependent
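The Si II λ6355 velocities behind these gradient measurements come from the Doppler blueshift of the absorption minimum. A small helper using the relativistic Doppler formula — illustrative only, not the authors' pipeline:

```python
C_KM_S = 299_792.458  # speed of light in km/s

def line_velocity(lambda_obs, lambda_rest=6355.0):
    """Relativistic Doppler velocity of a line feature in km/s.

    Negative values indicate blueshift, as for SN Ia absorption minima
    formed in the expanding ejecta.
    """
    r = (lambda_obs / lambda_rest) ** 2
    return C_KM_S * (r - 1.0) / (r + 1.0)
```

An absorption minimum observed at 6100 Å corresponds to roughly −12,000 km/s, typical of Si II near maximum light; differencing such velocities across epochs gives the velocity decline rate in km s⁻¹ day⁻¹.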

  19. Thirty New Low-mass Spectroscopic Binaries

    Science.gov (United States)

    Shkolnik, Evgenya L.; Hebb, Leslie; Liu, Michael C.; Reid, I. Neill; Collier Cameron, Andrew

    2010-06-01

    As part of our search for young M dwarfs within 25 pc, we acquired high-resolution spectra of 185 low-mass stars compiled by the NStars project that have strong X-ray emission. By cross-correlating these spectra with radial velocity standard stars, we are sensitive to finding multi-lined spectroscopic binaries. We find a low-mass spectroscopic binary fraction of 16% consisting of 27 SB2s, 2 SB3s, and 1 SB4, increasing the number of known low-mass spectroscopic binaries (SBs) by 50% and proving that strong X-ray emission is an extremely efficient way to find M-dwarf SBs. WASP photometry of 23 of these systems revealed two low-mass eclipsing binaries (EBs), bringing the count of known M-dwarf EBs to 15. BD-22 5866, the ESB4, was fully described in 2008 by Shkolnik et al., and CCDM J04404+3127 B consists of two mid-M stars orbiting each other every 2.048 days. WASP also provided rotation periods for 12 systems, and in the cases where the synchronization time scales are short, we used P_rot to determine the true orbital parameters. For those with no P_rot, we used differential radial velocities to set upper limits on orbital periods and semimajor axes. More than half of our sample has near-equal-mass components (q > 0.8). This is expected since our sample is biased toward tight orbits where saturated X-ray emission is due to tidal spin-up rather than stellar youth. Increasing the samples of M-dwarf SBs and EBs is extremely valuable in setting constraints on current theories of stellar multiplicity and evolution scenarios for low-mass multiple systems. Based on observations collected at the W. M. Keck Observatory, the Canada-France-Hawaii Telescope and by the WASP Consortium. The Keck Observatory is operated as a scientific partnership between the California Institute of Technology, the University of California, and NASA, and was made possible by the generous financial support of the W. M. Keck Foundation. The CFHT is operated by the National Research Council of Canada
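Radial velocities for such surveys come from cross-correlating each spectrum against a standard-star template. A minimal lag-finding sketch, assuming both spectra are resampled onto a common uniform log-wavelength (i.e. constant velocity-per-pixel) grid — an illustration of the technique, not the survey's actual pipeline:

```python
import numpy as np

def rv_from_ccf(spectrum, template, dv_per_pixel):
    """Radial velocity from the peak of the cross-correlation function.

    Both inputs must share a uniform log-wavelength grid so that one
    pixel of lag corresponds to a fixed velocity step `dv_per_pixel`.
    """
    s = spectrum - spectrum.mean()   # remove the continuum level
    t = template - template.mean()
    ccf = np.correlate(s, t, mode="full")
    lag = int(np.argmax(ccf)) - (len(t) - 1)
    return lag * dv_per_pixel
```

A multi-lined (SB2) system shows two correlation peaks instead of one, which is exactly how these double-lined binaries are detected; sub-pixel precision in practice requires interpolating the peak.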

  20. Generalized connectivity of graphs

    CERN Document Server

    Li, Xueliang

    2016-01-01

    Noteworthy results, proof techniques, open problems and conjectures in generalized (edge-) connectivity are discussed in this book. Both theoretical and practical analyses for generalized (edge-) connectivity of graphs are provided. Topics covered in this book include: generalized (edge-) connectivity of graph classes, algorithms, computational complexity, sharp bounds, Nordhaus-Gaddum-type results, maximum generalized local connectivity, extremal problems, random graphs, multigraphs, relations with the Steiner tree packing problem and generalizations of connectivity. This book enables graduate students to understand and master a segment of graph theory and combinatorial optimization. Researchers in graph theory, combinatorics, combinatorial optimization, probability, computer science, discrete algorithms, complexity analysis, network design, and the information transferring models will find this book useful in their studies.