WorldWideScience

Sample records for fundamental constants part

  1. Variation of Fundamental Constants

    Science.gov (United States)

    Flambaum, V. V.

    2006-11-01

    Theories unifying gravity with other interactions suggest temporal and spatial variation of the fundamental ``constants'' in the expanding Universe. The spatial variation can explain the fine tuning of the fundamental constants which allows humans (and any life) to appear. We appeared in the area of the Universe where the values of the fundamental constants are consistent with our existence. We present a review of recent works devoted to the variation of the fine structure constant α, the strong interaction and fundamental masses. There are some hints for the variation in quasar absorption spectra, Big Bang nucleosynthesis, and Oklo natural nuclear reactor data. A very promising method to search for the variation of the fundamental constants consists in the comparison of different atomic clocks. Huge enhancement of the variation effects happens in transitions between accidentally degenerate atomic and molecular energy levels. A new idea is to build a ``nuclear'' clock based on the ultraviolet transition between a very low-lying excited state and the ground state in the Thorium nucleus. This may allow the sensitivity to the variation to be improved by up to 10 orders of magnitude! Huge enhancement of the variation effects is also possible in cold atomic and molecular collisions near a Feshbach resonance.

  2. Variation of fundamental constants

    CERN Document Server

    Flambaum, V V

    2006-01-01

    We present a review of recent works devoted to the variation of the fine structure constant alpha, the strong interaction and fundamental masses. There are some hints for the variation in quasar absorption spectra, Big Bang nucleosynthesis, and Oklo natural nuclear reactor data. A very promising method to search for the variation of the fundamental constants consists in the comparison of different atomic clocks. Huge enhancement of the variation effects happens in transitions between accidentally degenerate atomic and molecular energy levels. A new idea is to build a ``nuclear'' clock based on the ultraviolet transition between a very low-lying excited state and the ground state in the Thorium nucleus. This may allow the sensitivity to the variation to be improved by up to 10 orders of magnitude! Huge enhancement of the variation effects is also possible in cold atomic and molecular collisions near a Feshbach resonance.

  3. Variation of fundamental constants: theory

    Science.gov (United States)

    Flambaum, Victor

    2008-05-01

    Theories unifying gravity with other interactions suggest temporal and spatial variation of the fundamental ``constants'' in the expanding Universe. There are some hints for the variation of different fundamental constants in quasar absorption spectra and Big Bang nucleosynthesis data. A large number of publications (including atomic clocks) report limits on the variations. We want to study the variation of the main dimensionless parameters of the Standard Model: 1. The fine structure constant alpha (a combination of the speed of light, the electron charge and the Planck constant). 2. The ratio of the strong interaction scale (Lambda_QCD) to a fundamental mass like the electron mass or quark mass, which are proportional to the Higgs vacuum expectation value. The proton mass is proportional to Lambda_QCD; therefore, the proton-to-electron mass ratio falls into this second category. We performed the atomic, nuclear and QCD calculations needed to study variation of the fundamental constants using Big Bang nucleosynthesis, quasar spectra, Oklo natural nuclear reactor and atomic clock data. The relative effects of the variation may be enhanced in transitions between narrow close levels in atoms, molecules and nuclei. If one studies an enhanced effect, the relative size of systematic effects (which are not enhanced) may be much smaller. Note also that the absolute magnitude of the variation effects in nuclei (e.g. in the very narrow 7 eV transition in 229Th) may be 5 orders of magnitude larger than in atoms. A different possibility of enhancement comes from inversion transitions in molecules, where the splitting between the levels is due to a quantum tunneling amplitude which has a strong, exponential dependence on the electron-to-proton mass ratio. Our study of NH3 quasar spectra has already given the best limit on the variation of the electron-to-proton mass ratio.

  4. Measurable values, numbers and fundamental physical constants: Is the Boltzmann constant Kb a fundamental physical constant?

    Directory of Open Access Journals (Sweden)

    Bormashenko Edward

    2009-01-01

    Full Text Available The status of fundamental physical constants is discussed. The nature of fundamental physical constants is clarified through an analysis of the Boltzmann constant. A new definition of measurable values, 'mathematical' and 'physical' numbers and fundamental physical constants is proposed. Mathematical numbers are defined as values insensitive to the choice of both units and frames of reference, whereas 'physical numbers' are dimensionless values insensitive to transformations of units but sensitive to transformations of the frames of reference. Fundamental constants are classified as values sensitive to transformations of the units and insensitive to transformations of the frames of reference. It is supposed that a fundamental physical constant necessarily allows the number of independent etalons in a system of units to be reduced.

  5. Natural Nuclear Reactor Oklo and Variation of Fundamental Constants Part 1: Computation of Neutronic of Fresh Core

    CERN Document Server

    Petrov, Yu V; Onegin, M S; Petrov, V Yu; Sakhnovskii, E G

    2006-01-01

    Using modern methods of reactor physics we have performed full-scale calculations of the natural reactor Oklo. For reliability we have used recent versions of two Monte Carlo codes: the Russian code MCU REA and the widely used code MCNP (USA). Both codes produce close results. We constructed a computer model of zone RZ2 of the Oklo reactor which takes into account all details of its design and composition. The calculations were performed for three fresh cores with different uranium contents. Multiplication factors, reactivities and neutron fluxes were calculated. We also estimated the temperature and void effects for the fresh core. As expected, we found for the fresh core a large difference between the reactor spectra and the Maxwellian one, which had previously been used for averaging cross sections in the Oklo reactor. The averaged cross section of Sm and its dependence on the shift of the resonance position (due to variation of fundamental constants) are significantly different from previous results. Contrary...

  6. Spatial Variations of Fundamental Constants

    CERN Document Server

    Barrow, John D.; O'Toole, Chris

    1999-01-01

    We show that observational limits on the possible time variation of constants of Nature are significantly affected by allowing for both space and time variation. Bekenstein's generalisation of Maxwell's equations to allow for cosmological variation of $\alpha$ is investigated in a universe containing spherically symmetric inhomogeneities. The time variation of $\alpha$ is determined by the local matter density and hence limits obtained in high-density geophysical environments are far more constraining than those obtained at high redshift. This new feature is expected to be a property of a wide class of theories for the variation of constants.

  7. The fundamental constants and quantum electrodynamics

    CERN Document Server

    Taylor, Barry N; Langenberg, D N

    1969-01-01

    Introduction ; review of experimental data ; least-squares adjustment to obtain values of the constants without QED theory ; implications for quantum electrodynamics ; final recommended set of fundamental constants ; summary and conclusions.

  8. Time-Varying Fundamental Constants

    Science.gov (United States)

    Olive, Keith

    2003-04-01

    Recent data from quasar absorption systems can be interpreted as arising from a time variation in the fine-structure constant. However, there are numerous cosmological, astrophysical, and terrestrial bounds on any such variation. These include bounds from Big Bang Nucleosynthesis (from the ^4He abundance), the Oklo reactor (from the resonant neutron capture cross-section of Sm), and from meteoritic lifetimes of heavy radioactive isotopes. The bounds on the variation of the fine-structure constant are significantly strengthened in models where all gauge and Yukawa couplings vary in a dependent manner, as would be expected in unified theories. Models which are consistent with all data are severely challenged when Equivalence Principle constraints are imposed.

  9. The fundamental constants a mystery of physics

    CERN Document Server

    Fritzsch, Harald

    2009-01-01

    The speed of light, the fine structure constant, and Newton's constant of gravity — these are just three among the many physical constants that define our picture of the world. Where do they come from? Are they constant in time and across space? In this book, physicist and author Harald Fritzsch invites the reader to explore the mystery of the fundamental constants of physics in the company of Isaac Newton, Albert Einstein, and a modern-day physicist.

  10. New Quasar Studies Keep Fundamental Physical Constant Constant

    Science.gov (United States)

    2004-03-01

    fundamental constant at play here, alpha. However, the observed distribution of the elements is consistent with calculations assuming that the value of alpha at that time was precisely the same as the value today. Over those 2 billion years, the change in alpha must therefore have been smaller than about 2 parts per 100 million. If present at all, this is a rather small change indeed. But what about changes much earlier in the history of the Universe? To measure this we must find means to probe still further into the past. And this is where astronomy can help. Because, even though astronomers can't generally do experiments, the Universe itself is a huge atomic physics laboratory. By studying very remote objects, astronomers can look back over a long time span. In this way it becomes possible to test the values of the physical constants when the Universe had only 25% of its present age, that is, about 10,000 million years ago. Very far beacons To do so, astronomers rely on spectroscopy - the measurement of the properties of light emitted or absorbed by matter. When the light from a flame is observed through a prism, a rainbow is visible. When sprinkling salt on the flame, distinct yellow lines are superimposed on the usual colours of the rainbow, so-called emission lines. Putting a gas cell between the flame and the prism, one instead sees dark lines in the rainbow: these are absorption lines. The wavelengths of these emission and absorption lines are directly related to the energy levels of the atoms in the salt or in the gas. Spectroscopy thus allows us to study atomic structure. The fine structure of atoms can be observed spectroscopically as the splitting of certain energy levels in those atoms. So if alpha were to change over time, the emission and absorption spectra of these atoms would change as well. One way to look for any changes in the value of alpha over the history of the Universe is therefore to measure the spectra of distant quasars, and compare the wavelengths of
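
    A standard way to quantify the last point (this parametrization is common in the quasar-absorption literature but is not spelled out in this record, so it is offered only as orientation): each atomic transition frequency can be written, to lowest order in the change of alpha, with a calculated sensitivity coefficient q, so that lines with different q shift by different amounts,

    $$ \omega \simeq \omega_0 + q\,x, \qquad x \equiv \left(\frac{\alpha}{\alpha_0}\right)^2 - 1 \approx \frac{2\,\Delta\alpha}{\alpha_0}, $$

    and comparing the measured wavelengths of several such lines against laboratory values yields a bound on Δα/α.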

  11. Trialogue on the number of fundamental constants

    CERN Document Server

    Duff, Michael J; Veneziano, Gabriele

    2002-01-01

    This paper consists of three separate articles on the number of fundamental dimensionful constants in physics. We started our debate in summer 1992 on the terrace of the famous CERN cafeteria. In the summer of 2001 we returned to the subject to find that our views still diverged and decided to explain our current positions. LBO develops the traditional approach with three constants, GV argues in favor of just two, while MJD advocates zero.

  12. Environment-Dependent Fundamental Physical Constants

    CERN Document Server

    Terazawa, Hidezumi

    2012-01-01

    A theory of special inconstancy, in which some fundamental physical constants such as the fine-structure and gravitational constants may vary, is proposed in pregeometry. In the special theory of inconstancy, the α-G relation $\alpha = 3\pi/[16\ln(4\pi/5GM_W^2)]$ between the varying fine-structure and gravitational constants (where $M_W$ is the charged weak boson mass) is derived from the hypothesis that both of these constants are related to the same fundamental length scale in nature. Furthermore, it leads to the prediction of $\dot{\alpha}/\alpha = (-0.8\pm2.5)\times10^{-14}\,{\rm yr}^{-1}$ from the most precise limit of $\dot{G}/G = (-0.6\pm2.0)\times10^{-12}\,{\rm yr}^{-1}$ by Thorsett, which is not only consistent with the recent observation of $\dot{\alpha}/\alpha = (0.5\pm0.5)\times10^{-14}\,{\rm yr}^{-1}$ by Webb et al. but also feasible for future experimental tests. Also a theory of general inconstancy, in which any fundamental physical constants may vary, is proposed in "more general relativity", by assuming that the space-time is ...
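
    A rough numerical check can clarify what the quoted α-G relation asserts. The sketch below assumes (an interpretation, not stated explicitly in the abstract) that $GM_W^2$ is expressed in natural units, i.e. as $(M_W/M_{\rm Planck})^2$, and uses present-day values of $M_W$ and the Planck mass; it is only meant to show that the formula lands near the observed $\alpha \approx 1/137$.

        import math

        # Assumed illustrative inputs, in GeV (not values taken from the paper).
        M_W = 80.4          # charged weak boson mass
        M_Planck = 1.22e19  # Planck mass, so G*M_W^2 in natural units = (M_W/M_Planck)**2

        G_MW2 = (M_W / M_Planck) ** 2  # dimensionless G*M_W^2
        alpha = 3 * math.pi / (16 * math.log(4 * math.pi / (5 * G_MW2)))

        # Prints roughly 0.00736, i.e. 1/alpha ~ 136, close to the measured 1/137.04.
        print(f"alpha ~ {alpha:.5f}  (1/alpha ~ {1/alpha:.1f})")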

  13. Do the fundamental constants change with time ?

    CERN Document Server

    Kanekar, Nissim

    2008-01-01

    Comparisons between the redshifts of spectral lines from cosmologically-distant galaxies can be used to probe temporal changes in low-energy fundamental constants like the fine structure constant and the proton-electron mass ratio. In this article, I review the results from, and the advantages and disadvantages of, the best techniques using this approach, before focussing on a new method, based on conjugate satellite OH lines, that appears to be less affected by systematic effects and hence holds much promise for the future.

  14. Variation of fundamental constants: theory and observations

    CERN Document Server

    Flambaum, V V

    2007-01-01

    Review of recent works devoted to the variation of the fundamental constants is presented, including atomic clocks, quasar absorption spectra, and Oklo natural nuclear reactor data. Assuming linear variation with time we can compare different results. From the quasar absorption spectra: $\dot{\mu}/\mu=(1 \pm 3) \times 10^{-16}$ yr$^{-1}$. A combination of this result and the atomic clock results gives the best limit on variation of $\alpha$: $\dot{\alpha}/\alpha=(-0.8 \pm 0.8) \times 10^{-16}$ yr$^{-1}$. The Oklo natural reactor gives the best limit on the variation of $m_s/\Lambda_{QCD}$ where $m_s$ is the strange quark mass. Huge enhancement of the relative variation effects happens in transitions between close atomic, molecular and nuclear energy levels. We suggest several new cases where the levels are very narrow. Large enhancement of the variation effects is also possible in cold atomic and molecular collisions near a Feshbach resonance. Massive bodies (stars or galaxies) can also affect physical constants....

  15. Search for a Variation of Fundamental Constants

    Science.gov (United States)

    Ubachs, W.

    2013-06-01

    Since the days of Dirac, scientists have speculated about the possibility that the laws of nature, and the fundamental constants appearing in those laws, are not rock-solid and eternal but may be subject to change in time or space. Such a scenario of evolving constants might provide an answer to the deepest puzzle of contemporary science, namely why the conditions in our local Universe allow for extreme complexity: the fine-tuning problem. In the past decade it has been established that spectral lines of atoms and molecules, which can currently be measured at ever-higher accuracies, form an ideal test ground for probing drifting constants. This has brought the subject from the realm of metaphysics to that of experimental science. In particular the spectra of molecules are sensitive probes of a variation of the proton-electron mass ratio μ, either on a cosmological time scale or on a laboratory time scale. A comparison can be made between spectra of molecular hydrogen observed in the laboratory and at high redshift (z=2-3), using the Very Large Telescope (Paranal, Chile) and the Keck telescope (Hawaii). This puts a constraint on a varying mass ratio Δμ/μ at the 10^{-5} level. The optical work can also be extended to include CO molecules. Further, a novel direction will be discussed: it was discovered that molecules exhibiting hindered internal rotation have spectral lines in the radio spectrum that are extremely sensitive to a varying proton-electron mass ratio. Such lines in the spectrum of methanol were recently observed with the radio telescope in Effelsberg (Germany). F. van Weerdenburg, M.T. Murphy, A.L. Malec, L. Kaper, W. Ubachs, Phys. Rev. Lett. 106, 180802 (2011). A. Malec, R. Buning, M.T. Murphy, N. Milutinovic, S.L. Ellison, J.X. Prochaska, L. Kaper, J. Tumlinson, R.F. Carswell, W. Ubachs, Mon. Not. Roy. Astron. Soc. 403, 1541 (2010). E.J. Salumbides, M.L. Niu, J. Bagdonaite, N. de Oliveira, D. Joyeux, L. Nahon, W. Ubachs, Phys. Rev. A 86, 022510

  16. Inflationary Phase with Time Varying Fundamental Constants

    CERN Document Server

    Berman, M S; Berman, Marcelo S.; Trevisan, Luis A.

    2002-01-01

    Following Barrow, and Barrow and collaborators, we find a cosmological JBD model, with varying speed of light and varying fine structure constant, where the deceleration parameter is -1, causing acceleration of the Universe. Indeed, we have an exponential inflationary phase. Planck's time, energy, length, etc., might have had different numerical values in the past than those available in the literature, due to the varying values of the speed of light and the gravitational constant.
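
    The step from a deceleration parameter of -1 to an exponential inflationary phase is standard cosmology and can be made explicit (this derivation is supplied here for clarity; it is not part of the abstract):

    $$ q \equiv -\frac{\ddot a\,a}{\dot a^2} = -1 \;\Longrightarrow\; \ddot a\,a = \dot a^2 \;\Longrightarrow\; \frac{d}{dt}\!\left(\frac{\dot a}{a}\right) = \frac{\ddot a}{a} - \frac{\dot a^2}{a^2} = 0, $$

    so the Hubble rate H = ȧ/a is constant and a(t) ∝ e^{Ht}.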

  17. Variations of fundamental constants and multidimensional gravity

    Science.gov (United States)

    Bronnikov, K. A.; Skvortsova, M. V.

    We try to explain the recently reported large-scale spatial variations of the fine structure constant α, in agreement with other cosmological observations, in the framework of curvature-nonlinear multidimensional gravity. The original theory is reduced to a scalar-tensor theory in four dimensions, and the corresponding isotropic cosmologies are considered in both Einstein and Jordan conformal frames. In the Jordan frame one obtains simultaneous variations of α and the gravitational constant G, equal in magnitude. Long-wave small inhomogeneous perturbations of isotropic models allow for explaining spatial variations of α.

  18. Quantum electrodynamics and the fundamental constants

    Directory of Open Access Journals (Sweden)

    Peter J. Mohr

    2000-07-01

    Full Text Available the results of critical experiments and the theoretical expressions for these results written in terms of the constants. Many of the theoretical expressions are based on quantum electrodynamics (QED), so the consistency of the comparison provides a critical test of the validity of the theory.

  19. Fundamental constants: The teamwork of precision

    Science.gov (United States)

    Myers, Edmund G.

    2014-02-01

    A new value for the atomic mass of the electron is a link in a chain of measurements that will enable a test of the standard model of particle physics with better than part-per-trillion precision. See Letter p.467

  20. Planck intermediate results XXIV. Constraints on variations in fundamental constants

    DEFF Research Database (Denmark)

    Ade, P. A. R.; Aghanim, N.; Arnaud, M.;

    2015-01-01

    Any variation in the fundamental physical constants, more particularly in the fine structure constant, α, or in the mass of the electron, m_e, affects the recombination history of the Universe and causes an imprint on the cosmic microwave background angular power spectra. We show that the Planck data...... of the electron, m_e, and in the simultaneous variation of the two constants. We examine in detail the degeneracies between fundamental constants and the cosmological parameters, in order to compare the limits obtained from Planck and WMAP and to determine the constraining power gained by including other

  1. Systematic harmonic power laws inter-relating multiple fundamental constants

    Science.gov (United States)

    Chakeres, Donald; Buckhanan, Wayne; Andrianarijaona, Vola

    2017-01-01

    Power laws and harmonic systems are ubiquitous in physics. We hypothesize that 2, π, the electron, Bohr radius, Rydberg constant, neutron, fine structure constant, Higgs boson, top quark, kaons, pions, muon, tau, W, and Z, when scaled in a common single unit, are all inter-related by systematic harmonic power laws. This implies that if the power law is known it is possible to derive a fundamental constant's scale in the absence of any direct experimental data for that constant. This is true for the case of the hydrogen constants. We created a power law search engine computer program that randomly generated possible positive or negative powers, searching for cases in which the product of logical groups of constants equals 1, confirming that they are physically valid. For 2, π, and the hydrogen constants the search engine found Planck's constant, Coulomb's energy law, and the kinetic energy law. The product of ratios defined by two constants each was the standard general format. The search engine found systematic resonant power laws based on partial harmonic fraction powers of the neutron for all of the constants, with products near 1 within their known experimental precision, when utilized with appropriate hydrogen constants. We conclude that multiple fundamental constants are inter-related within a harmonic power law system.
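
    The abstract describes a brute-force search for exponents that make a product of scaled constants equal 1. A minimal sketch of that kind of search is given below; the constants, their "common single unit" scalings, the candidate exponent set and the tolerance are placeholders chosen for illustration, not the authors' actual data or code.

        import itertools
        import math

        # Placeholder dimensionless values, each assumed already scaled into a common unit
        # (illustrative numbers only; the paper's actual scalings are not reproduced here).
        constants = {
            "A": 2.0,
            "B": math.pi,
            "C": 137.036,   # e.g. an inverse fine-structure-like ratio
            "D": 1836.15,   # e.g. a mass-ratio-like quantity
        }

        exponents = [-2, -1, -0.5, 0, 0.5, 1, 2]  # candidate "harmonic" powers
        tolerance = 0.05                           # loose tolerance, for illustration only

        names = list(constants)
        for combo in itertools.product(exponents, repeat=len(names)):
            if all(p == 0 for p in combo):
                continue
            log_product = sum(p * math.log(constants[n]) for n, p in zip(names, combo))
            if abs(log_product) < tolerance:       # product of constants**powers ~ 1
                terms = " * ".join(f"{n}^{p}" for n, p in zip(names, combo) if p != 0)
                print(f"candidate relation: {terms} ~ 1  (log residual {log_product:.2e})")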

  2. High precision fundamental constants at the TeV scale

    CERN Document Server

    Moch, S; Alekhin, S; Blumlein, J; de la Cruz, L; Dittmaier, S; Dowling, M; Erler, J; Espinosa, J R; Fuster, J; Tormo, X Garcia i; Hoang, A H; Huss, A; Kluth, S; Mulders, M; Papanastasiou, A S; Piclum, J; Rabbertz, K; Schwinn, C; Schulze, M; Shintani, E; Uwer, P; Zerf, N

    2014-01-01

    This report summarizes the proceedings of the 2014 Mainz Institute for Theoretical Physics (MITP) scientific program on "High precision fundamental constants at the TeV scale". The two outstanding parameters in the Standard Model dealt with during the MITP scientific program are the strong coupling constant $\alpha_s$ and the top-quark mass $m_t$. Limited knowledge of the values of these fundamental constants is often the limiting factor in the accuracy of theoretical predictions. The current status of $\alpha_s$ and $m_t$ has been reviewed and directions for future research have been identified.

  3. From the Rydberg constant to the fundamental constants metrology; De la constante de Rydberg a la metrologie des constantes fondamentales

    Energy Technology Data Exchange (ETDEWEB)

    Nez, F

    2005-06-15

    This document reviews the theoretical and experimental achievements of the author since the beginning of his scientific career. This document is dedicated to the spectroscopy of hydrogen, deuterium and helium atoms. The first part is divided into 6 sub-sections: 1) the principles of hydrogen spectroscopy, 2) the measurement of the 2S-nS/nD transitions, 3) other optical frequency measurements, 4) our contribution to the determination of the Rydberg constant, 5) our current experiment on the 1S-3S transition, 6) the spectroscopy of the muonic hydrogen. Our experiments have improved the accuracy of the Rydberg Constant by a factor 25 in 15 years and we have achieved the first absolute optical frequency measurement of a transition in hydrogen. The second part is dedicated to the measurement of the fine structure constant and the last part deals with helium spectroscopy and the search for optical references in the near infrared range. (A.C.)
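
    For orientation (these are the standard textbook relations underlying such measurements, not specific results of the thesis): hydrogen optical frequencies determine the Rydberg constant through

    $$ R_\infty = \frac{m_e e^4}{8\varepsilon_0^2 h^3 c}, \qquad \nu_{n_1\to n_2} \simeq R_\infty c\left(\frac{1}{n_1^2}-\frac{1}{n_2^2}\right), $$

    with QED, recoil and nuclear-size corrections added on top, which is why transitions such as 1S-3S and 2S-nS/nD pin down $R_\infty$ and, combined with muonic hydrogen, the proton radius.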

  4. Fundamental constants and cosmic vacuum: the micro and macro connection

    CERN Document Server

    Fritzsch, Harald

    2015-01-01

    The idea that the vacuum energy density $\\rho_{\\Lambda}$ could be time dependent is a most reasonable one in the expanding Universe; in fact, much more reasonable than just a rigid cosmological constant for the entire cosmic history. Being $\\rho_{\\Lambda}=\\rho_{\\Lambda}(t)$ dynamical, it offers a possibility to tackle the cosmological constant problem in its various facets. Furthermore, for a long time (most prominently since Dirac's first proposal on a time variable gravitational coupling) the possibility that the fundamental "constants" of Nature are slowly drifting with the cosmic expansion has been continuously investigated. In the last two decades, and specially in recent times, mounting experimental evidence attests that this could be the case. In this paper, we consider the possibility that these two groups of facts might be intimately connected, namely that the observed acceleration of the Universe and the possible time variation of the fundamental constants are two manifestations of the same underlyi...

  5. Confronting Cosmology and New Physics with Fundamental Constants

    CERN Document Server

    Thompson, Rodger I

    2013-01-01

    The values of the fundamental constants such as $\\mu = m_P/m_e$, the proton to electron mass ratio and $\\alpha$, the fine structure constant, are sensitive to the product $\\sqrt{\\zeta_x^2(w+1)}$ where $\\zeta_x$ is a coupling constant between a rolling scalar field responsible for the acceleration of the expansion of the universe and the electromagnetic field with x standing for either $\\mu$ or $\\alpha$. The dark energy equation of state $w$ can assume values different than $-1$ in cosmologies where the acceleration of the expansion is due to a scalar field. In this case the value of both $\\mu$ and $\\alpha$ changes with time. The values of the fundamental constants, therefore, monitor the equation of state and are a valuable tool for determining $w$ as a function of redshift. In fact the rolling of the fundamental constants is one of the few definitive discriminators between acceleration due to a cosmological constant and acceleration due to a quintessence rolling scalar field. $w$ is often given in parameteri...

  6. Planck intermediate results. XXIV. Constraints on variation of fundamental constants

    CERN Document Server

    Ade, P A R; Arnaud, M.; Ashdown, M.; Aumont, J.; Baccigalupi, C.; Banday, A.J.; Barreiro, R.B.; Battaner, E.; Benabed, K.; Benoit-Levy, A.; Bernard, J.P.; Bersanelli, M.; Bielewicz, P.; Bond, J.R.; Borrill, J.; Bouchet, F.R.; Burigana, C.; Butler, R.C.; Calabrese, E.; Chamballu, A.; Chiang, H.C.; Christensen, P.R.; Clements, D.L.; Colombo, L.P.L.; Couchot, F.; Curto, A.; Cuttaia, F.; Danese, L.; Davies, R.D.; Davis, R.J.; de Bernardis, P.; de Rosa, A.; de Zotti, G.; Delabrouille, J.; Diego, J.M.; Dole, H.; Dore, O.; Dupac, X.; Ensslin, T.A.; Eriksen, H.K.; Fabre, O.; Finelli, F.; Forni, O.; Frailis, M.; Franceschi, E.; Galeotta, S.; Galli, S.; Ganga, K.; Giard, M.; Gonzalez-Nuevo, J.; Gorski, K.M.; Gregorio, A.; Gruppuso, A.; Hansen, F.K.; Hanson, D.; Harrison, D.L.; Henrot-Versille, S.; Hernandez-Monteagudo, C.; Herranz, D.; Hildebrandt, S.R.; Hivon, E.; Hobson, M.; Holmes, W.A.; Hornstrup, A.; Hovest, W.; Huffenberger, K.M.; Jaffe, A.H.; Jones, W.C.; Keihanen, E.; Keskitalo, R.; Kneissl, R.; Knoche, J.; Kunz, M.; Kurki-Suonio, H.; Lamarre, J.M.; Lasenby, A.; Lawrence, C.R.; Leonardi, R.; Lesgourgues, J.; Liguori, M.; Lilje, P.B.; Linden-Vornle, M.; Lopez-Caniego, M.; Lubin, P.M.; Macias-Perez, J.F.; Mandolesi, N.; Maris, M.; Martin, P.G.; Martinez-Gonzalez, E.; Masi, S.; Matarrese, S.; Mazzotta, P.; Meinhold, P.R.; Melchiorri, A.; Mendes, L.; Menegoni, E.; Mennella, A.; Migliaccio, M.; Miville-Deschenes, M.A.; Moneti, A.; Montier, L.; Morgante, G.; Moss, A.; Munshi, D.; Murphy, J.A.; Naselsky, P.; Nati, F.; Natoli, P.; Norgaard-Nielsen, H.U.; Noviello, F.; Novikov, D.; Novikov, I.; Oxborrow, C.A.; Pagano, L.; Pajot, F.; Paoletti, D.; Pasian, F.; Patanchon, G.; Perdereau, O.; Perotto, L.; Perrotta, F.; Piacentini, F.; Piat, M.; Pierpaoli, E.; Pietrobon, D.; Plaszczynski, S.; Pointecouteau, E.; Polenta, G.; Ponthieu, N.; Popa, L.; Pratt, G.W.; Prunet, S.; Rachen, J.P.; Rebolo, R.; Reinecke, M.; Remazeilles, M.; Renault, C.; Ricciardi, S.; Ristorcelli, I.; Rocha, G.; Roudier, G.; Rusholme, B.; Sandri, M.; Savini, G.; Scott, D.; Spencer, L.D.; Stolyarov, V.; Sudiwala, R.; Sutton, D.; Suur-Uski, A.S.; Sygnet, J.F.; Tauber, J.A.; Tavagnacco, D.; Terenzi, L.; Toffolatti, L.; Tomasi, M.; Tristram, M.; Tucci, M.; Uzan, J.P.; Valenziano, L.; Valiviita, J.; Van Tent, B.; Vielva, P.; Villa, F.; Wade, L.A.; Yvon, D.; Zacchei, A.; Zonca, A.

    2015-01-01

    Any variation of the fundamental physical constants, and more particularly of the fine structure constant, $\\alpha$, or of the mass of the electron, $m_e$, would affect the recombination history of the Universe and cause an imprint on the cosmic microwave background angular power spectra. We show that the Planck data allow one to improve the constraint on the time variation of the fine structure constant at redshift $z\\sim 10^3$ by about a factor of 5 compared to WMAP data, as well as to break the degeneracy with the Hubble constant, $H_0$. In addition to $\\alpha$, we can set a constraint on the variation of the mass of the electron, $m_{\\rm e}$, and on the simultaneous variation of the two constants. We examine in detail the degeneracies between fundamental constants and the cosmological parameters, in order to compare the limits obtained from Planck and WMAP and to determine the constraining power gained by including other cosmological probes. We conclude that independent time variations of the fine structu...

  7. Early Universe Constraints on Time Variation of Fundamental Constants

    CERN Document Server

    Landau, Susana J; Scoccola, Claudia G; Vucetich, Hector

    2008-01-01

    We study the time variation of fundamental constants in the early Universe. Using data from primordial light nuclei abundances, CMB and the 2dFGRS power spectrum, we put constraints on the time variation of the fine structure constant $\alpha$, and the Higgs vacuum expectation value $\langle v \rangle$ without assuming any theoretical framework. A variation in $\langle v \rangle$ leads to a variation in the electron mass, among other effects. Along the same line, we study the variation of $\alpha$ and the electron mass $m_e$. In a purely phenomenological fashion, we derive a relationship between both variations.

  8. Hydrogen molecular ions for improved determination of fundamental constants

    CERN Document Server

    Karr, J -Ph; Koelemeij, Jeroen; Korobov, Vladimir

    2016-01-01

    The possible use of high-resolution rovibrational spectroscopy of the hydrogen molecular ions H$_2^+$ and HD$^+$ for an independent determination of several fundamental constants is analyzed. While these molecules had been proposed for metrology of nuclear-to-electron mass ratios, we show that they are also sensitive to the radii of the proton and deuteron and to the Rydberg constant at the level of the current discrepancies colloquially known as the proton size puzzle. The required level of accuracy, in the $10^{-12}$ range, can be reached both by experiments, using Doppler-free two-photon spectroscopy schemes, and by theoretical predictions. It is shown how the measurement of several well-chosen rovibrational transitions may shed new light on the proton-radius puzzle, provide an alternative accurate determination of the Rydberg constant, and yield new values of the proton-to-electron and deuteron-to-proton mass ratios with one order of magnitude higher precision.

  9. Constraining fundamental constants of physics with quasar absorption line systems

    CERN Document Server

    Petitjean, Patrick; Chand, Hum; Ivanchik, Alexander; Noterdaeme, Pasquier; Gupta, Neeraj

    2009-01-01

    We summarize the attempts by our group and others to derive constraints on variations of fundamental constants over cosmic time using quasar absorption lines. Most upper limits reside in the range 0.5-1.5×10^{-5} at the 3σ level over a redshift range of approximately 0.5-2.5 for the fine-structure constant, α, the proton-to-electron mass ratio, μ, and a combination of the proton gyromagnetic factor and the two previous constants, g_p(α²/μ)^ν, with only one claimed variation of α. It is therefore very important to perform new measurements to improve the sensitivity of the numerous methods to at least <0.1×10^{-5}, which should be possible in the next few years. Future instrumentation on ELTs in the optical and/or ALMA, EVLA and SKA pathfinders in the radio will undoubtedly boost this field by allowing much better signal-to-noise ratios to be reached at higher spectral resolution and by enabling measurements of molecules in the ISM of high-redshift galaxies.

  10. Base units of the SI, fundamental constants and modern quantum physics.

    Science.gov (United States)

    Bordé, Christian J

    2005-09-15

    Over the past 40 years, a number of discoveries in quantum physics have completely transformed our vision of fundamental metrology. This revolution starts with the frequency stabilization of lasers using saturation spectroscopy and the redefinition of the metre by fixing the velocity of light c. Today, the trend is to redefine all SI base units from fundamental constants and we discuss strategies to achieve this goal. We first consider a kinematical frame, in which fundamental constants with a dimension, such as the speed of light c, the Planck constant h, the Boltzmann constant k_B or the electron mass m_e can be used to connect and redefine base units. The various interaction forces of nature are then introduced in a dynamical frame, where they are completely characterized by dimensionless coupling constants such as the fine structure constant alpha or its gravitational analogue alpha_G. This point is discussed by rewriting the Maxwell and Dirac equations with new force fields and these coupling constants. We describe and stress the importance of various quantum effects leading to the advent of this new quantum metrology. In the second part of the paper, we present the status of the seven base units and the prospects of their possible redefinitions from fundamental constants in an experimental perspective. The two parts can be read independently and they point to the same conclusions concerning the redefinitions of base units. The concept of rest mass is directly related to the Compton frequency of a body, which is precisely what is measured by the watt balance. The conversion factor between mass and frequency is the Planck constant, which could therefore be fixed in a realistic and consistent new definition of the kilogram based on its Compton frequency. We discuss also how the Boltzmann constant could be better determined and fixed to replace the present definition of the kelvin.
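
    A minimal numerical sketch of the mass-frequency link mentioned above: the Compton frequency of a body of mass m is ν_C = mc²/h, which is the quantity the abstract says a watt balance effectively ties to mass. The constants are taken from SciPy's bundled CODATA table, and the 1 kg example is only illustrative.

        from scipy.constants import c, h  # CODATA values bundled with SciPy

        def compton_frequency(mass_kg: float) -> float:
            """Compton frequency nu_C = m c^2 / h of a body of the given mass."""
            return mass_kg * c**2 / h

        # The Compton frequency of a 1 kg body is about 1.356e50 Hz.
        print(f"nu_C(1 kg) = {compton_frequency(1.0):.4e} Hz")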

  11. Violation of fundamental symmetries and variation of fundamental constants in atomic phenomena

    Science.gov (United States)

    Flambaum, V. V.

    2007-06-01

    We present a review of recent works on variation of fundamental constants and violation of parity in atoms and nuclei. Theories unifying gravity with other interactions suggest temporal and spatial variation of the fundamental ``constants'' in the expanding Universe. The spatial variation can explain the fine tuning of the fundamental constants which allows humans (and any life) to appear. We appeared in the area of the Universe where the values of the fundamental constants are consistent with our existence. We describe recent works devoted to the variation of the fine structure constant α, the strong interaction and fundamental masses (Higgs vacuum). There are some hints for the variation in quasar absorption spectra, Big Bang nucleosynthesis, and Oklo natural nuclear reactor data. A very promising method to search for the variation consists in the comparison of different atomic clocks. Huge enhancement of the variation effects happens in transitions between very close atomic and molecular energy levels. A new idea is to build a ``nuclear'' clock based on a UV transition in the Thorium nucleus. This may allow the sensitivity to the variation to be improved by up to 10 orders of magnitude! Measurements of violation of the fundamental symmetries, parity (P) and time reversal (T), in atoms allow one to test unification theories in atomic experiments. We have developed an accurate method of many-body calculations - all-orders summation of dominating diagrams in the residual e-e interaction. To calculate QED radiative corrections to energy levels and electromagnetic amplitudes in many-electron atoms and molecules we derived the ``radiative potential'' and the low-energy theorem. This method is simple and can be easily incorporated into any many-body theory approach. Using the radiative correction and many-body calculations we obtained the PNC amplitude $E_{PNC} = -0.898(1 \pm 0.5\%) \times 10^{-11} i e a_B (-Q_W/N)$. From the measurements of the PNC amplitude we extracted the Cs weak charge $Q_W = -72.66(29)_{\rm exp}(36)_{\rm theor}$. The

  12. Variation of the Fundamental Constants:. Theory and Observations

    Science.gov (United States)

    Flambaum, V. V.

    2007-10-01

    Review of recent works devoted to the variation of the fine structure constant α, the strong interaction and fundamental masses (Higgs vacuum) is presented. The results from Big Bang nucleosynthesis, quasar absorption spectra, and Oklo natural nuclear reactor data give us the space-time variation on the Universe lifetime scale. Comparison of different atomic clocks gives us the present-time variation. Assuming linear variation with time we can compare different results. The best limit on the variation of the electron-to-proton mass ratio $\mu = m_e/M_p$ and $X_e = m_e/\Lambda_{QCD}$ follows from the quasar absorption spectra [1]: $\dot{\mu}/\mu = \dot{X}_e/X_e = (1 \pm 3) \times 10^{-16}$ yr$^{-1}$. A combination of this result and the atomic clock results [2,3] gives the best limit on the variation of α: $\dot{\alpha}/\alpha = (-0.8 \pm 0.8) \times 10^{-16}$ yr$^{-1}$. The Oklo natural reactor gives the best limit on the variation of $X_s = m_s/\Lambda_{QCD}$, where $m_s$ is the strange quark mass [4,5]: $|\dot{X}_s/X_s| < 10^{-18}$ yr$^{-1}$. Note that the Oklo data cannot give us any limit on the variation of α, since the effect of α there is much smaller than the effect of $X_s$ and should be neglected. Huge enhancement of the relative variation effects happens in transitions between close atomic, molecular and nuclear energy levels. We suggest several new cases where the levels are very narrow. Large enhancement of the variation effects is also possible in cold atomic and molecular collisions near a Feshbach resonance. How may changing physical constants and violation of local position invariance occur? Light scalar fields very naturally appear in modern cosmological models, affecting parameters of the Standard Model (e.g. α). Cosmological variations of these scalar fields should occur because of drastic changes in the matter composition of the Universe: the latest such event is rather recent (about 5 billion years ago), the transition from matter to dark energy domination. Massive bodies (stars or galaxies) can also affect physical constants. They have large scalar charge S

  13. Orbital effects of spatial variations of fundamental coupling constants

    CERN Document Server

    Iorio, Lorenzo

    2011-01-01

    We deal with the effects induced on the orbit of a test particle revolving around a central body by putative spatial variations of fundamental coupling constants $\zeta$. In particular, we assume a dipole gradient for $\zeta(\mathbf{r})/\bar{\zeta}$ along a generic direction $\hat{\mathbf{k}}$ in space. We analytically work out the long-term variations of all the six standard Keplerian orbital elements parameterizing the orbit of a test particle in a gravitationally bound two-body system. It turns out that, apart from the semi-major axis $a$, the eccentricity $e$, the inclination $I$, the longitude of the ascending node $\Omega$, the longitude of pericenter $\pi$ and the mean anomaly $\mathcal{M}$ undergo non-zero long-term changes. By using the usual decomposition along the radial ($R$), transverse ($T$) and normal ($N$) directions, we also analytically work out the long-term changes $\Delta R, \Delta T, \Delta N$ and $\Delta v_R, \Delta v_T, \Delta v_N$ experienced by the position and the velocity vectors $\mathbf{r}$ and...

  14. Plasma Astrophysics, Part I Fundamentals and Practice

    CERN Document Server

    Somov, Boris V

    2012-01-01

    This two-part book is devoted to classic fundamentals and current practices and perspectives of modern plasma astrophysics. This first part uniquely covers all the basic principles and practical tools required for understanding and work in plasma astrophysics. More than 25% of the text is updated from the first edition, including new figures, equations and entire sections on topics such as magnetic reconnection and the Grad-Shafranov equation. The book is aimed at professional researchers in astrophysics, but it will also be useful to graduate students in space sciences, geophysics, applied physics and mathematics, especially those seeking a unified view of plasma physics and fluid mechanics.

  15. CODATA Recommended Values of the Fundamental Physical Constants: 2014*

    Science.gov (United States)

    Mohr, Peter J.; Newell, David B.; Taylor, Barry N.

    2016-12-01

    This paper gives the 2014 self-consistent set of values of the constants and conversion factors of physics and chemistry recommended by the Committee on Data for Science and Technology (CODATA). These values are based on a least-squares adjustment that takes into account all data available up to 31 December 2014. Details of the data selection and methodology of the adjustment are described. The recommended values may also be found at http://physics.nist.gov/constants.
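
    For readers who want the recommended values programmatically: SciPy ships a CODATA table (its vintage depends on the installed SciPy version, so it may be the 2014 set described here or a later adjustment); the NIST page cited above remains the authoritative source. A minimal sketch:

        from scipy.constants import physical_constants

        # Each entry maps a CODATA constant name to (value, unit, one-standard-deviation uncertainty).
        for name in ("fine-structure constant",
                     "Rydberg constant",
                     "electron mass"):
            value, unit, uncertainty = physical_constants[name]
            print(f"{name}: {value!r} {unit} (u = {uncertainty!r})")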

  16. CODATA Recommended Values of the Fundamental Physical Constants: 2014

    CERN Document Server

    Mohr, Peter J; Taylor, Barry N

    2015-01-01

    This report gives the 2014 self-consistent set of values of the constants and conversion factors of physics and chemistry recommended by the Committee on Data for Science and Technology (CODATA). These values are based on a least-squares adjustment that takes into account all data available up to 31 December 2014. The recommended values may also be found on the World Wide Web at physics.nist.gov/constants.

  17. Plasma Astrophysics, Part I Fundamentals and Practice

    CERN Document Server

    Somov, Boris V

    2006-01-01

    This well-illustrated monograph is devoted to classic fundamentals, current practice, and perspectives of modern plasma astrophysics. The first part is unique in covering all the basic principles and practical tools required for understanding and working in plasma astrophysics. The second part presents the physics of magnetic reconnection and flares of electromagnetic origin in space plasmas within the solar system; single and double stars, relativistic objects, accretion disks, and their coronae are also covered. This book is designed mainly for professional researchers in astrophysics. However, it will also be interesting and useful to graduate students in space sciences, geophysics, as well as advanced students in applied physics and mathematics seeking a unified view of plasma physics and fluid mechanics.

  18. Search for variation of fundamental constants and violations of fundamental symmetries using isotope comparisons

    CERN Document Server

    Berengut, J C; Kava, E M

    2011-01-01

    Atomic microwave clocks based on hyperfine transitions, such as the caesium standard, tick with a frequency that is proportional to the magnetic moment of the nucleus. This magnetic moment varies strongly between isotopes of the same atom, while all atomic electron parameters remain the same. Therefore the comparison of two microwave clocks based on different isotopes of the same atom can be used to constrain variation of fundamental constants. In this paper we calculate the neutron and proton contributions to the nuclear magnetic moments, as well as their sensitivity to any potential quark mass variation, in a number of isotopes of experimental interest including 201,199Hg and 87,85Rb, where experiments are underway. We also include a brief treatment of the dependence of the hyperfine transitions to variation in nuclear radius, which in turn is proportional to any change in quark mass. Our calculations of expectation-values of proton and neutron spin in nuclei are also needed to interpret measurements of vio...

  19. CODATA Recommended Values of the Fundamental Physical Constants: 2010

    CERN Document Server

    Mohr, Peter J; Newell, David B

    2012-01-01

    This paper gives the 2010 self-consistent set of values of the basic constants and conversion factors of physics and chemistry recommended by the Committee on Data for Science and Technology (CODATA) for international use. The 2010 adjustment takes into account the data considered in the 2006 adjustment as well as the data that became available from 1 January 2007, after the closing date of that adjustment, until 31 December 2010, the closing date of the new adjustment. Further, it describes in detail the adjustment of the values of the constants, including the selection of the final set of input data based on the results of least-squares analyses. The 2010 set replaces the previously recommended 2006 CODATA set and may also be found on the World Wide Web at physics.nist.gov/constants.

  20. Constraints on Alternate Universes: Stars and habitable planets with different fundamental constants

    CERN Document Server

    Adams, Fred C

    2015-01-01

    This paper develops constraints on the values of the fundamental constants that allow universes to be habitable. We focus on the fine structure constant $\\alpha$ and the gravitational structure constant $\\alpha_G$, and find the region in the $\\alpha$-$\\alpha_G$ plane that supports working stars and habitable planets. This work is motivated, in part, by the possibility that different versions of the laws of physics could be realized within other universes. The following constraints are enforced: [A] long-lived stable nuclear burning stars exist, [B] planetary surface temperatures are hot enough to support chemical reactions, [C] stellar lifetimes are long enough to allow biological evolution, [D] planets are massive enough to maintain atmospheres, [E] planets are small enough in mass to remain non-degenerate, [F] planets are massive enough to support sufficiently complex biospheres, [G] planets are smaller in mass than their host stars, and [H] stars are smaller in mass than their host galaxies. This paper del...

  1. Current Status of the Problem of Cosmological Variability of Fundamental Physical Constants

    Science.gov (United States)

    Varshalovich, D.A.; Ivanchik, A.V.; Orlov, A.V.; Potekhin, A.Y.; Petitjean, P.

    We review the current status of the problem of cosmological variability of fundamental physical constants, as constrained by modern laboratory experiments, analysis of the Oklo phenomenon, and especially astronomical observations.

  2. Planck intermediate results XXIV. Constraints on variations in fundamental constants

    DEFF Research Database (Denmark)

    Ade, P. A. R.; Aghanim, N.; Arnaud, M.

    2015-01-01

    cosmological probes. We conclude that independent time variations of the fine structure constant and of the mass of the electron are constrained by Planck to Δα/α = (3.6 ± 3.7) × 10^{-3} and Δm_e/m_e = (4 ± 11) × 10^{-3} at the 68% confidence level. We also investigate the possibility of a spatial variation of the fine...

  3. Lorentz violation in brane cosmology, accelerated expansion and fundamental constants

    CERN Document Server

    Ahmadi, F; Sepangi, H R

    2006-01-01

    The notion of Lorentz violation in four dimensions is extended to a 5-dimensional brane-world scenario by utilizing a dynamical vector field assumed to point in the bulk direction, with Lorentz invariance holding on the brane. The cosmological consequences of this theory consisting of the time variation in the gravitational coupling $G$ and cosmological term $\\Lambda_4$ are explored. The brane evolution is addressed by studying the generalized Friedmann and Raychaudhuri equations. The behavior of the expansion scale factor is then considered for different possible scenarios where the bulk cosmological constant is zero, positive or negative.

  4. Can we test Dark Energy with Running Fundamental Constants ?

    CERN Document Server

    Doran, M

    2004-01-01

    We investigate a link between the running of the fine structure constant $\alpha$ and a time evolving scalar dark energy field. Employing a versatile parameterization for the equation of state, we exhaustively cover the space of dark energy models. Under the assumption that the change in $\alpha$ is to first order given by the evolution of the Quintessence field, we show that current Oklo, Quasi Stellar Object and Equivalence Principle observations restrict the model parameters considerably more strongly than observations of the Cosmic Microwave Background, Large Scale Structure and Supernovae Ia combined.

  5. Can we test dark energy with running fundamental constants?

    Science.gov (United States)

    Doran, Michael

    2005-04-01

    We investigate a link between the running of the fine structure constant α and a time evolving scalar dark energy field. Employing a versatile parametrization for the equation of state, we exhaustively cover the space of dark energy models. Under the assumption that the change in α is to first order given by the evolution of the quintessence field, we show that current Oklo, quasi-stellar object and equivalence principle observations restrict the model parameters considerably more strongly than observations of the cosmic microwave background, large scale structure and supernovae Ia combined.

  6. Limits on the space-time variations of fundamental constants

    CERN Document Server

    Levshakov, S A; Reimers, D; Molaro, P

    2013-01-01

    We report on new tests that improve our previous (2009-2010) estimates of the electron-to-proton mass ratio variation, mu = m_e/m_p. Subsequent observations (2011-2013) at the Effelsberg 100-m telescope of a sample of eight molecular cores from the Milky Way disk reveal systematic errors in the measured sky frequencies varying with an amplitude of +/-0.01 km/s during the exposure time. The averaged offset between the radial velocities of the NH3(1,1), HC3N(2-1), HC5N(9-8), HC7N(16-15), HC7N(21-20), and HC7N(23-22) transitions gives Delta V = 0.002 +/- 0.015 km/s (3 sigma C.L.). This value, when interpreted in terms of Delta mu/mu = (mu_obs - mu_lab)/mu_lab, constrains the mu-variation at the level of Delta mu/mu < 2x10^{-8} (3 sigma C.L.), which is the most stringent limit on fractional changes in mu based on radio astronomical observations. If variation of the fine-structure constant alpha is coupled with mu, then within the grand unification model one may expect locally the spatial changes |Delta alpha/...
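
    The step from the velocity offset Delta V to the Delta mu/mu limit quoted above can be sketched as follows. The sketch assumes the ammonia-method relation Delta V/c ≈ Delta K · Delta mu/mu with a sensitivity-coefficient difference Delta K ≈ 3.5 between the NH3 inversion line and the rotational lines; the exact coefficient used by the authors is not given in this record, so the numbers below are only an order-of-magnitude check.

        # Order-of-magnitude check: velocity offset -> fractional mu variation (assumed Delta K ~ 3.5).
        c_kms = 299792.458   # speed of light, km/s
        delta_V = 0.015      # 3-sigma bound on the velocity offset, km/s (from the record above)
        delta_K = 3.5        # assumed sensitivity difference, inversion vs. rotational lines

        delta_mu_over_mu = delta_V / (c_kms * delta_K)
        print(f"|Delta mu / mu| < {delta_mu_over_mu:.1e}")  # ~1.4e-8, consistent with the quoted 2e-8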

  7. Mineral scale management. Part II, Fundamental chemistry

    Science.gov (United States)

    Alan W. Rudie; Peter W. Hart

    2006-01-01

    The mineral scale that deposits in digesters and bleach plants is formed by a chemical precipitation process. As such, it is accurately modeled using the solubility product equilibrium constant. Although the solubility product identifies the primary conditions that must be met for a scale problem to exist, the acid-base equilibria of the scaling anions often control where...
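
    As a hedged illustration of the solubility-product reasoning described above (the salt, its Ksp value and the ion concentrations are placeholders, not data from the article): scale is predicted to form when the ion product exceeds Ksp.

        # Illustrative check of the solubility-product criterion for scale formation.
        # Placeholder example: calcium carbonate with an approximate Ksp near 25 C.
        K_sp = 3.3e-9   # assumed solubility product for CaCO3 (illustrative value)
        Ca = 2.0e-3     # mol/L calcium ion (placeholder process value)
        CO3 = 5.0e-5    # mol/L carbonate ion (placeholder; set by acid-base equilibria and pH)

        ion_product = Ca * CO3
        if ion_product > K_sp:
            print(f"Q = {ion_product:.1e} > Ksp = {K_sp:.1e}: supersaturated, scale can deposit")
        else:
            print(f"Q = {ion_product:.1e} <= Ksp = {K_sp:.1e}: undersaturated, no scale expected")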

  8. Manifestations of a spatial variation of fundamental constants on atomic clocks, Oklo, meteorites, and cosmological phenomena

    CERN Document Server

    Berengut, J C

    2010-01-01

    The remarkable detection of a spatial variation in the fine-structure constant, alpha, from quasar absorption systems must be independently confirmed by complementary searches. In this letter, we discuss how terrestrial measurements of time-variation of the fundamental constants in the laboratory, meteorite data, and analysis of the Oklo nuclear reactor can be used to corroborate the spatial variation seen by astronomers. Furthermore, we show that spatial variation of the fundamental constants may be observable as spatial anisotropy in the cosmic microwave background, the accelerated expansion (dark energy), and large-scale structure of the Universe.

  9. Time-variability of the coupling constants of fundamental particles and Oklo phenomena

    Energy Technology Data Exchange (ETDEWEB)

    Fujii, Yasunori [Nihon Fukushi Univ., Handa, Aichi (Japan); Iwamoto, Akira; Hidaka, Hiroshi

    2000-09-01

    About 60 years ago, P.A.M. Dirac suggested that the gravitational constant is not truly constant but varies with time as the Universe evolves. Since no decisive experimental proof has been obtained, this question has unsettled the basic concept of physical constants ever since, and it carries over into present attempts at unified theories. In particular, interesting research continues on the so-called coupling constants of fundamental particles, for example on whether the fundamental charge of the electron actually changes. Although no such change has been established, the observational and experimental upper limits contain some important suggestions. The most stringent upper limit was obtained from the investigation of a surprising fact: that uranium naturally reached criticality at a site on Earth (Oklo) two billion years ago, forming a natural reactor. Some of these recent studies are introduced here. (G.K.)

  10. [Aerosinusitis: part 1: Fundamentals, pathophysiology and prophylaxis].

    Science.gov (United States)

    Weber, R; Kühnel, T; Graf, J; Hosemann, W

    2014-01-01

    The relevance of aerosinusitis stems from the high number of flight passengers and the impaired fitness for work of the flight personnel. The frontal sinus is more frequently affected than the maxillary sinus and the condition generally occurs during descent. Sinonasal diseases and anatomic variations leading to obstruction of paranasal sinus ventilation favor the development of aerosinusitis. This Continuing Medical Education (CME) article is based on selective literature searches of the PubMed database (search terms: "aerosinusitis", "barosinusitis", "barotrauma" AND "sinus", "barotrauma" AND "sinusitis", "sinusitis" AND "flying" OR "aviator"). Additionally, currently available monographs and further articles that could be identified based on the publication reviews were also included. Part 1 presents the pathophysiology, symptoms, risk factors, epidemiology and prophylaxis of aerosinusitis. In part 2, diagnosis, conservative and surgical treatment will be discussed.

  11. Can dark matter induce cosmological evolution of the fundamental constants of Nature?

    CERN Document Server

    Stadnik, Y V

    2015-01-01

    Traditional theories, which predict the cosmological evolution of the fundamental constants of Nature, assume that the underlying fields, which give rise to this evolution, are unnaturally light. We demonstrate that massive fields, such as dark matter, also directly produce a cosmological evolution of the fundamental constants. We consider the specific model of a scalar dark matter field $\phi$, which interacts with Standard Model particles via quadratic couplings in $\phi$. In this particular model, cosmological evolution of the fundamental constants arises due to changes in $\langle \phi^2 \rangle$ in time and space. The most stringent constraints on the physical parameters of the present model come from measurements of the neutron-proton mass difference at the time of the weak interaction freeze-out.
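
    To make the mechanism concrete (a schematic form consistent with the quadratic couplings described above; the normalization scale $\Lambda_\gamma$ is a model parameter, and this sketch is supplied for illustration rather than taken from the paper): a coupling quadratic in $\phi$ shifts, for example, the fine-structure constant as

    $$ \alpha(\phi) \simeq \alpha\left(1 + \frac{\phi^2}{\Lambda_\gamma^2}\right) \;\Longrightarrow\; \frac{\delta\alpha}{\alpha} \simeq \frac{\langle \phi^2 \rangle}{\Lambda_\gamma^2}, $$

    so the cosmological evolution of $\langle \phi^2 \rangle$ translates directly into a drift of the "constants".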

  12. Variation of fundamental constants in space and time: theory and observations

    CERN Document Server

    Flambaum, V V

    2008-01-01

    Review of recent works devoted to the temporal and spatial variation of the fundamental constants and dependence of the fundamental constants on the gravitational potential (violation of local position invariance) is presented. We discuss the variation of the fine structure constant $\\alpha=e^2/\\hbar c$, strong interaction and fundamental masses (Higgs vacuum), e.g. the electron-to-proton mass ratio $\\mu=m_e/M_p$ or $X_e=m_e/\\Lambda_{QCD}$ and $X_q=m_q/\\Lambda_{QCD}$. We also present new results from Big Bang nucleosynthesis and Oklo natural nuclear reactor data and propose new measurements of enhanced effects in atoms, nuclei and molecules, both in quasar and laboratory spectra.

  13. Effects of variation of fundamental constants from Big Bang to atomic clocks

    Science.gov (United States)

    Flambaum, Victor

    2004-05-01

    Theories unifying gravity with other interactions suggest temporal and spatial variation of the fundamental "constants" in expanding Universe. I discuss effects of variation of the fine structure constant, strong interaction, quark mass and gravitational constant. The measurements of these variations cover the lifespan of the Universe from few minutes after Big Bang to the present time and give controversial results. There are some hints for the variations in Big Bang nucleosynthesis, quasar absorption spectra and Oklo natural nuclear reactor data. A very promising method to search for the variation of the fundamental constants consists in comparison of different atomic clocks. A billion times enhancement of the variation effects happens in transitions between accidentally degenerate atomic energy levels.

  14. Running vacuum in the Universe and the time variation of the fundamental constants of Nature

    Energy Technology Data Exchange (ETDEWEB)

    Fritzsch, Harald [Nanyang Technological University, Institute for Advanced Study, Singapore (Singapore); Universitaet Muenchen, Physik-Department, Munich (Germany); Sola, Joan [Nanyang Technological University, Institute for Advanced Study, Singapore (Singapore); Universitat de Barcelona, Departament de Fisica Quantica i Astrofisica, Barcelona, Catalonia (Spain); Universitat de Barcelona (ICCUB), Institute of Cosmos Sciences, Barcelona, Catalonia (Spain); Nunes, Rafael C. [Universidade Federal de Juiz de Fora, Dept. de Fisica, Juiz de Fora, MG (Brazil)

    2017-03-15

    We compute the time variation of the fundamental constants (such as the ratio of the proton mass to the electron mass, the strong coupling constant, the fine-structure constant and Newton's constant) within the context of the so-called running vacuum models (RVMs) of the cosmic evolution. Recently, compelling evidence has been provided that these models are able to fit the main cosmological data (SNIa+BAO+H(z)+LSS+BBN+CMB) significantly better than the concordance ΛCDM model. Specifically, the vacuum parameters of the RVM (i.e. those responsible for the dynamics of the vacuum energy) prove to be nonzero at a confidence level ≳ 3σ. Here we use such remarkable status of the RVMs to make definite predictions on the cosmic time variation of the fundamental constants. It turns out that the predicted variations are close to the present observational limits. Furthermore, we find that the time evolution of the dark matter particle masses should be crucially involved in the total mass variation of our Universe. A positive measurement of this kind of effect could be interpreted as strong support for the ''micro-macro connection'' (viz. the dynamical feedback between the evolution of the cosmological parameters and the time variation of the fundamental constants of the microscopic world), previously proposed by two of us (HF and JS). (orig.)

  15. Direct test of the time-independence of fundamental nuclear constants using the Oklo natural reactor

    CERN Document Server

    Shlyakhter, A I

    The positions of neutron resonances have been shown to be highly sensitive to the variation of fundamental nuclear constants. The analysis of the measured isotopic shifts in the natural fossil reactor at Oklo gives the following restrictions on the possible rates of variation of the interaction constants: strong ~2x10^-19 yr^-1, electromagnetic ~5x10^-18 yr^-1, weak ~10^-12 yr^-1. These limits allow one to exclude all versions of a contemporary variation of nuclear constants discussed in the literature. URL: http://alexonline.info. For more recent analyses see hep-ph/9606486, hep-ph/0205206 and astro-ph/0204069.
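
    These rate limits correspond, under the assumption of a linear drift over the roughly two billion years since the Oklo reactor operated, to bounds on the total fractional change of each coupling over that interval. The short Python sketch below makes this conversion explicit; the 2x10^9 yr time span and the treatment of the quoted rates as simple linear drifts are assumptions made here for illustration, not part of the original analysis.

        OKLO_AGE_YR = 2.0e9  # approximate time since the Oklo reactor operated (assumed)

        # Rate limits quoted in the abstract above, in yr^-1 (order-of-magnitude values)
        rate_limits = {
            "strong": 2e-19,
            "electromagnetic": 5e-18,
            "weak": 1e-12,
        }

        for name, rate in rate_limits.items():
            # Linear drift: |Delta g / g| over the Oklo age = rate * time span
            total_change = rate * OKLO_AGE_YR
            print(f"{name:>15}: rate ~ {rate:.0e} /yr  ->  |Delta g/g| over 2 Gyr ~ {total_change:.0e}")

    Running it gives roughly 4x10^-10, 1x10^-8 and 2x10^-3 for the strong, electromagnetic and weak couplings respectively.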

  16. Modernizing the SI: implications of recent progress with the fundamental constants

    CERN Document Server

    Fletcher, Nick; Stock, Michael; Milton, Martin JT

    2015-01-01

    Recent proposals to re-define some of the base units of the SI make use of definitions that refer to fixed numerical values of certain constants. We review these proposals in the context of the latest results of the least-squares adjustment of the fundamental constants and against the background of the difficulty experienced with communicating the changes. We show that the benefit of a definition of the kilogram made with respect to the atomic mass constant (m_u) may now be significantly stronger than when the choice was first considered 10 years ago.

  17. Exploring variations in the fundamental constants with ELTs: the CODEX spectrograph on OWL

    Science.gov (United States)

    Molaro, Paolo; Murphy, Michael T.; Levshakov, Sergei A.

    Cosmological variations in the fine structure constant, α, can be probed through precise velocity measurements of metallic absorption lines from intervening gas clouds seen in spectra of distant quasars. Data from the Keck/HIRES instrument support a variation in α of 6 parts per million. Such a variation would have profound implications, possibly providing a window into the extra spatial dimensions required by unified theories such as string/M-theory. However, recent results from VLT/UVES suggest no variation in α. The COsmic Dynamics EXperiment (CODEX) spectrograph currently being designed for the ESO OWL telescope (Pasquini et al. 2005) with a resolution high enough to properly resolve even the narrowest of metallic absorption lines, R > 150000, will achieve a 2-to-3 order-of-magnitude precision increase in Δα/α. This will rival the precision available from the Oklo natural fission reactor and upcoming satellite-borne atomic clock experiments. Given the vital constraints on fundamental physics possible, the ELT community must consider such a high-resolution optical spectrograph like CODEX.

  18. ACADEMIC TRAINING: Low Energy Experiments that Measure Fundamental Constants and Test Basic Symmetries

    CERN Multimedia

    Françoise Benz

    2002-01-01

    17, 18, 19, 21 June LECTURE SERIES from 11.00 to 12.00 hrs - Auditorium, bldg. 500 Low Energy Experiments that Measure Fundamental Constants and Test Basic Symmetries by G. GABRIELSE / Professor of Physics and Chair of the Harvard Physics Department, Spokesperson for the ATRAP Collaboration Lecture 1: Particle Traps: the World's Tiniest Accelerators A single elementary particle, or a single ion, can be confined in a tiny accelerator called a particle trap. A single electron was held this way for more than ten months, and antiprotons for months. Mass spectroscopy of exquisite precision is possible with such systems. CERN's TRAP Collaboration thereby compared the charge-to-mass ratios of the antiproton and proton to a precision of 90 parts per trillion, by far the most stringent CPT test done with a baryon system. The important ratio of the masses of the electron and proton has been similarly measured, as have a variety of ion masses, and the neutron mass is most accurately known from such measurements. An i...

  19. Comments on redefinition of SI units based on fundamental physical constants with fixed values

    CERN Document Server

    Khruschov, V V

    2011-01-01

    Advantages and disadvantages of fixing the values of fundamental physical constants for the definition of SI units are considered. The case of a new definition of the mass unit on the basis of a fixed value of the Avogadro constant is studied in detail. Criteria for choosing an optimum FPC set with fixed values for the redefinition of the SI units are suggested. The minimal optimum FPC set that is consistent with the criteria is presented. The set comprises the speed of light, the constant of the ground state hyperfine transition of the caesium-133 atom, the Avogadro constant, the mass of the carbon-12 atom and the absolute magnitude of the electron charge. A comment on the redefinition of the kelvin is also made.

  20. Running vacuum in the Universe and the time variation of the fundamental constants of Nature

    CERN Document Server

    Fritzsch, Harald; Sola, Joan

    2016-01-01

    We compute the time variation of the fundamental constants (such as the ratio of the proton mass to the electron mass, the strong coupling constant, the fine structure constant and Newton's constant) within the context of the so-called running vacuum models (RVM's) of the cosmic evolution. Recently, compelling evidence has been provided showing that these models are able to fit the main cosmological data (SNIa+BAO+H(z)+LSS+BBN+CMB) significantly better than the concordance $\\Lambda$CDM model. Specifically, the vacuum parameters of the RVM (i.e. those responsible for the dynamics of the vacuum energy) prove to be nonzero at a confidence level of $\\gtrsim3\\sigma$. Here we use such remarkable status of the RVM's to make definite predictions on the cosmic time variation of the fundamental constants. It turns out that the predicted variations are close to the present observational limits. Furthermore, we find that the time variation of the dark matter particles should be necessarily involved in the total mass vari...

  1. Variation of fundamental constants in space and time: Theory and observations

    Science.gov (United States)

    Flambaum, V. V.

    2008-10-01

    Review of recent works devoted to the temporal and spatial variation of the fundamental constants and dependence of the fundamental constants on the gravitational potential (violation of local position invariance) is presented. We discuss the variation of the fine structure constant α = e²/ħc, strong interaction and fundamental masses (Higgs vacuum), e.g. the electron-to-proton mass ratio μ = m_e/M_p or X_e = m_e/Λ_QCD and X_q = m_q/Λ_QCD. We also present new results from Big Bang nucleosynthesis and Oklo natural nuclear reactor data and propose new measurements of enhanced effects in atoms, nuclei and molecules, both in quasar and laboratory spectra.

  2. Can Dark Matter Induce Cosmological Evolution of the Fundamental Constants of Nature?

    Science.gov (United States)

    Stadnik, Y V; Flambaum, V V

    2015-11-13

    We demonstrate that massive fields, such as dark matter, can directly produce a cosmological evolution of the fundamental constants of nature. We show that a scalar or pseudoscalar (axionlike) dark matter field ϕ, which forms a coherently oscillating classical field and interacts with standard model particles via quadratic couplings in ϕ, produces "slow" cosmological evolution and oscillating variations of the fundamental constants. We derive limits on the quadratic interactions of ϕ with the photon, electron, and light quarks from measurements of the primordial (4)He abundance produced during big bang nucleosynthesis and recent atomic dysprosium spectroscopy measurements. These limits improve on existing constraints by up to 15 orders of magnitude. We also derive limits on the previously unconstrained linear and quadratic interactions of ϕ with the massive vector bosons from measurements of the primordial (4)He abundance.
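
    The coexistence of a 'slow' drift and an oscillating variation follows directly from the quadratic coupling: the square of a coherently oscillating classical field has both a constant part and a part oscillating at twice the field's frequency. Schematically, and as a sketch consistent with the abstract rather than an excerpt from the paper (Λ denotes a generic new-physics coupling scale),

        \phi(t) \simeq \phi_0 \cos(m_\phi t)
        \quad\Longrightarrow\quad
        \frac{\delta\alpha}{\alpha} \;\propto\; \frac{\phi^2}{\Lambda^2}
        \;=\; \frac{\phi_0^2}{2\Lambda^2}\,\bigl[\,1 + \cos(2 m_\phi t)\,\bigr],

    so a quadratic interaction shifts α by a non-oscillating amount set by the field amplitude (and hence by the local dark-matter density) plus a component oscillating at twice the Compton frequency of the field.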

  3. The Oklo Natural Reactor and the Time Variability of the Fundamental Constants of Nature

    Energy Technology Data Exchange (ETDEWEB)

    Lamoreaux, Steve (LANL)

    2005-11-07

    Natural nuclear reactors? Changes in the speed of light? If either of these concepts seems implausible to you now, it certainly won't once Dr. Steve Lamoreaux (LANL) delivers his SLAC Colloquium lecture in the Panofsky Auditorium on November 7th at 4:15 pm, entitled The Oklo Natural Reactor and the Time Variability of the Fundamental Constants of Nature. This lecture is a rare opportunity to learn not only about Oklo's incredible natural nuclear reactors but also about how the present-day study of these sites may alter our understanding of fundamental constants such as the speed of light. This event is a must-see for the curious!

  4. Searching for dark matter and variation of fundamental constants with laser and maser interferometry.

    Science.gov (United States)

    Stadnik, Y V; Flambaum, V V

    2015-04-24

    Any slight variations in the fundamental constants of nature, which may be induced by dark matter or some yet-to-be-discovered cosmic field, would characteristically alter the phase of a light beam inside an interferometer, which can be measured extremely precisely. Laser and maser interferometry may be applied to searches for the linear-in-time drift of the fundamental constants, detection of topological defect dark matter through transient-in-time effects, and for a relic, coherently oscillating condensate, which consists of scalar dark matter fields, through oscillating effects. Our proposed experiments require either minor or no modifications of existing apparatus, and offer extensive reach into important and unconstrained spaces of physical parameters.

  5. Spectroscopy of antiprotonic helium atoms and its contribution to the fundamental physical constants

    CERN Document Server

    Hayano, R S

    2010-01-01

    Proceedings of the Japan Academy, Series B, Vol. 86 (2010), No. 1, pp. 1-10. http://dx.doi.org/10.2183/pjab.86.1 The antiprotonic helium atom, a metastable neutral system consisting of an antiproton, an electron and a helium nucleus, was serendipitously discovered, and has been studied at CERN's antiproton decelerator facility. Its transition frequencies have recently been measured to nine digits of precision by laser spectroscopy. By comparing these experimental results with three-body QED calculations, the antiproton-to-electron mass ratio was determined as 1836.152674(5). This result contributed to the CODATA recommended val...

  6. The New SI and fundamental constants: different meanings assigned to the same data

    CERN Document Server

    Pavese, Franco

    2016-01-01

    This note discusses the role of fundamental constants in the proposed New SI formulation of the definition of the International System of Units, namely in the present official documents and in some relevant literature. The meaning assigned to their use is found to be substantially different even among the advocates of the proposal. Reasons are discussed why it is urgent that this basic issue be clarified.

  7. Sensitivity of rotational transitions in CH and CD to a possible variation of fundamental constants

    CERN Document Server

    de Nijs, Adrian J; Bethlem, Hendrick L

    2012-01-01

    The sensitivity of rotational transitions in CH and CD to a possible variation of fundamental constants has been investigated. Largely enhanced sensitivity coefficients are found for specific transitions which are due to accidental degeneracies between the different fine-structure manifolds. These degeneracies occur when the spin-orbit coupling constant is close to four times the rotational constant. CH and particularly CD match this condition closely. Unfortunately, an analysis of the transition strengths shows that the same condition that leads to an enhanced sensitivity suppresses the transition strength, making these transitions too weak to be of relevance for testing the variation of fundamental constants over cosmological time scales. We propose a test in CH based on the comparison between the rotational transitions between the e and f components of the Omega'=1/2,J=1/2 and Omega'=3/2,J=3/2 levels at 532 and 536 GHz and other rotational or Lambda-doublet transitions in CH involving the same absorbing gr...

  8. Observational Constraints on $f(T)$ gravity from varying fundamental constants

    CERN Document Server

    Nunes, Rafael C; Pan, Supriya; Saridakis, Emmanuel N

    2016-01-01

    We use observations related to the variation of fundamental constants, in order to impose constraints on the viable and most used $f(T)$ gravity models. In particular, for the fine-structure constant we use direct measurements obtained by different spectrographic methods, while for the effective Newton's constant we use a model-dependent reconstruction, using direct observational Hubble parameter data, in order to investigate its temporal evolution. We consider two $f(T)$ models and we quantify their deviation from $\\Lambda$CDM cosmology through a sole parameter. Our analysis reveals that this parameter can be slightly different from its $\\Lambda$CDM value, however the best-fit value is very close to the $\\Lambda$CDM one. Hence, $f(T)$ gravity is consistent with observations, nevertheless, as every modified gravity, it may exhibit only small deviations from $\\Lambda$CDM cosmology, a feature that must be taken into account in any $f(T)$ model-building.

  9. Searching for Scalar Dark Matter in Atoms and Astrophysical Phenomena: Variation of Fundamental Constants

    CERN Document Server

    Stadnik, Yevgeny V; Flambaum, Victor V; Dzuba, Vladimir A

    2015-01-01

    We propose to search for scalar dark matter via its effects on the electromagnetic fine-structure constant and particle masses. Scalar dark matter that forms an oscillating classical field produces `slow' linear-in-time drifts and oscillating variations of the fundamental constants, while scalar dark matter that forms topological defects produces transient-in-time variations of the constants of Nature. These variations can be sought for with atomic clock, laser interferometer and pulsar timing measurements. Atomic spectroscopy and Big Bang nucleosynthesis measurements already give improved bounds on the quadratic interaction parameters of scalar dark matter with the photon, electron, and light quarks by up to 15 orders of magnitude, while Big Bang nucleosynthesis measurements provide the first such constraints on the interaction parameters of scalar dark matter with the massive vector bosons.

  10. Competing bounds on the present-day time variation of fundamental constants

    CERN Document Server

    Dent, Thomas; Wetterich, Christof

    2008-01-01

    We compare the sensitivity of a recent bound on time variation of the fine structure constant from optical clocks with bounds on time varying fundamental constants from atomic clocks sensitive to the electron-to-proton mass ratio, from radioactive decay rates in meteorites, and from the Oklo natural reactor. Tests of the Weak Equivalence Principle also lead to comparable bounds on present variations of constants. The "winner in sensitivity" depends on what relations exist between the variations of different couplings in the standard model of particle physics, which may arise from the unification of gauge interactions. WEP tests are currently the most sensitive within unified scenarios. A detection of time variation in atomic clocks would favour dynamical dark energy and put strong constraints on the dynamics of a cosmological scalar field.

  11. Competing bounds on the present-day time variation of fundamental constants

    Science.gov (United States)

    Dent, Thomas; Stern, Steffen; Wetterich, Christof

    2009-04-01

    We compare the sensitivity of a recent bound on time variation of the fine structure constant from optical clocks with bounds on time-varying fundamental constants from atomic clocks sensitive to the electron-to-proton mass ratio, from radioactive decay rates in meteorites, and from the Oklo natural reactor. Tests of the weak equivalence principle also lead to comparable bounds on present variations of constants. The “winner in sensitivity” depends on what relations exist between the variations of different couplings in the standard model of particle physics, which may arise from the unification of gauge interactions. Weak equivalence principle tests are currently the most sensitive within unified scenarios. A detection of time variation in atomic clocks would favor dynamical dark energy and put strong constraints on the dynamics of a cosmological scalar field.

  12. Observational constraints on f(T) gravity from varying fundamental constants

    Energy Technology Data Exchange (ETDEWEB)

    Nunes, Rafael C.; Bonilla, Alexander [Universidade Federal de Juiz de Fora, Departamento de Fisica, Juiz de Fora, MG (Brazil); Pan, Supriya [Indian Institute of Science Education and Research, Kolkata, Department of Physical Sciences, Mohanpur, West Bengal (India); Saridakis, Emmanuel N. [Pontificia Universidad de Catolica de Valparaiso, Instituto de Fisica, Valparaiso (Chile); National Technical University of Athens, Physics Division, Athens (Greece); Baylor University, CASPER, Physics Department, Waco, TX (United States)

    2017-04-15

    We use observations related to the variation of fundamental constants, in order to impose constraints on the viable and most used f(T) gravity models. In particular, for the fine-structure constant we use direct measurements obtained by different spectrographic methods, while for the effective Newton constant we use a model-dependent reconstruction, using direct observational Hubble parameter data, in order to investigate its temporal evolution. We consider two f(T) models and we quantify their deviation from Λ CDM cosmology through a sole parameter. Our analysis reveals that this parameter can be slightly different from its Λ CDM value, however, the best-fit value is very close to the Λ CDM one. Hence, f(T) gravity is consistent with observations, nevertheless, as every modified gravity, it may exhibit only small deviations from Λ CDM cosmology, a feature that must be taken into account in any f(T) model-building. (orig.)

  13. A Different Look at Dark Energy and the Time Variation of Fundamental Constants

    Energy Technology Data Exchange (ETDEWEB)

    Weinstein, Marvin; /SLAC

    2011-02-07

    This paper makes the simple observation that a fundamental length, or cutoff, in the context of Friedmann-Lemaitre-Robertson-Walker (FRW) cosmology implies very different things than for a static universe. It is argued that it is reasonable to assume that this cutoff is implemented by fixing the number of quantum degrees of freedom per co-moving volume (as opposed to a Planck volume) and the relationship of the vacuum-energy of all of the fields in the theory to the cosmological constant (or dark energy) is re-examined. The restrictions that need to be satisfied by a generic theory to avoid conflicts with current experiments are discussed, and it is shown that in any theory satisfying these constraints knowing the difference between w and minus one allows one to predict dw/dt. It is argued that this is a robust result and if this prediction fails the idea of a fundamental cutoff of the type being discussed can be ruled out. Finally, it is observed that, within the context of a specific theory, a co-moving cutoff implies a predictable time variation of fundamental constants. This is accompanied by a general discussion of why this is so, what are the strongest phenomenological limits upon this predicted variation, and which limits are in tension with the idea of a co-moving cutoff. It is pointed out, however, that a careful comparison of the predicted time variation of fundamental constants is not possible without restricting to a particular model field-theory and that is not done in this paper.

  14. Testing the variation of fundamental constants by astrophysical methods: overview and prospects

    CERN Document Server

    Levshakov, S A

    2016-01-01

    By measuring the fundamental constants in astrophysical objects one can test basic physical principles as space-time invariance of physical laws along with probing the applicability limits of the standard model of particle physics. The latest constraints on the fine structure constant alpha and the electron-to-proton mass ratio mu obtained from observations at high redshifts and in the Milky Way disk are reviewed. In optical range, the most accurate measurements have already reached the sensitivity limit of available instruments, and further improvements will be possible only with next generation of telescopes and receivers. New methods of the wavelength calibration should be realized to control systematic errors at the sub-pixel level. In radio sector, the main tasks are the search for galactic and extragalactic objects suitable for precise molecular spectroscopy as well as high resolution laboratory measurements of molecular lines to provide accurate frequency standards. The expected progress in the optical...

  15. Fundamental constant observational bounds on the variability of the QCD scale

    Science.gov (United States)

    Thompson, Rodger I.

    2017-06-01

    Many physical theories beyond the Standard Model predict time variations of basic physics parameters. Direct measurement of the time variations of these parameters is very difficult or impossible to achieve. By contrast, measurements of fundamental constants are relatively easy to achieve, both in the laboratory and by astronomical spectra of atoms and molecules in the early universe. In this work, measurements of the proton to electron mass ratio μ and the fine structure constant α are combined to place mildly model-dependent limits on the fractional variation of the quantum chromodynamic scale and the sum of the fractional variations of the Higgs vacuum expectation value (VEV) and the Yukawa couplings on time-scales of more than half the age of the universe. The addition of another model parameter allows the fractional variation of the Higgs VEV and the Yukawa couplings to be computed separately. Limits on their variation are found at the level of less than 5 × 10⁻⁵ over the past 7 Gyr. A model-dependent relation between the expected fractional variation of α relative to μ tightens the limits to 10⁻⁷ over the same time span. Limits on the present day rate of change of the constants and parameters are then calculated using slow roll quintessence. A primary result of this work is that studies of the dimensionless fundamental constants such as α and μ, whose values depend on the values of the physics parameters, are excellent monitors of the limits on the time variation of these parameters.

  16. Dependence of the triple-alpha process on the fundamental constants of nature

    CERN Document Server

    Epelbaum, Evgeny; Lähde, Timo A; Lee, Dean; Meißner, Ulf-G

    2013-01-01

    We present an ab initio calculation of the quark mass dependence of the ground state energies of ^4He, ^8Be and ^{12}C, and of the energy of the Hoyle state in ^{12}C. These investigations are performed within the framework of lattice chiral Effective Field Theory. We address the sensitivity of the production rate of carbon and oxygen in red giant stars to the fundamental constants of nature by considering the impact of variations in the light quark masses and the electromagnetic fine-structure constant on the reaction rate of the triple-alpha process. As carbon and oxygen are essential to life as we know it, we also discuss the implications of our findings for an anthropic view of the Universe. We find strong evidence that the physics of the triple-alpha process is driven by alpha clustering, and that shifts in the fundamental parameters at the \\simeq 2 - 3 % level are unlikely to be detrimental to the development of life. Tolerance against much larger changes cannot be ruled out at present, given the relati...

  17. Dependence of the triple-alpha process on the fundamental constants of nature

    Energy Technology Data Exchange (ETDEWEB)

    Epelbaum, Evgeny; Krebs, Hermann [Ruhr-Universitaet Bochum, Institut fuer Theoretische Physik II, Bochum (Germany); Laehde, Timo A. [Forschungszentrum Juelich, Institut fuer Kernphysik, Institute for Advanced Simulation, Juelich (Germany); Lee, Dean [North Carolina State University, Department of Physics, Raleigh, NC (United States); Meissner, Ulf G. [Forschungszentrum Juelich, Institut fuer Kernphysik, Institute for Advanced Simulation, Juelich (Germany); Forschungszentrum Juelich, JARA - High Performance Computing, Juelich (Germany); Universitaet Bonn, Helmholtz-Institut fuer Strahlen- und Kernphysik and Bethe Center for Theoretical Physics, Bonn (Germany)

    2013-07-15

    We present an ab initio calculation of the quark mass dependence of the ground state energies of {sup 4}He, {sup 8}Be and {sup 12}C, and of the energy of the Hoyle state in {sup 12}C. These investigations are performed within the framework of lattice chiral Effective Field Theory. We address the sensitivity of the production rate of carbon and oxygen in red giant stars to the fundamental constants of nature by considering the impact of variations in the light quark masses and the electromagnetic fine-structure constant on the reaction rate of the triple-alpha process. As carbon and oxygen are essential to life as we know it, we also discuss the implications of our findings for an anthropic view of the Universe. We find strong evidence that the physics of the triple-alpha process is driven by alpha clustering, and that shifts in the fundamental parameters at the {approx_equal} 2-3% level are unlikely to be detrimental to the development of life. Tolerance against much larger changes cannot be ruled out at present, given the relatively limited knowledge of the quark mass dependence of the two-nucleon S -wave scattering parameters. Lattice QCD is expected to provide refined estimates of the scattering parameters in the future. (orig.)

  18. Manifestations of Dark matter and variation of the fundamental constants in atomic and astrophysical phenomena

    Science.gov (United States)

    Flambaum, Victor

    2016-05-01

    Low-mass bosonic dark matter particles produced after the Big Bang form a classical field and/or topological defects. In contrast to traditional dark matter searches, effects produced by the interaction of ordinary matter with this field and these defects may be first power in the underlying interaction strength rather than the second or fourth power (which appears in a traditional search for the dark matter). This may give a huge advantage, since the dark matter interaction constant is extremely small. Interaction between the density of the dark matter particles and ordinary matter produces both `slow' cosmological evolution and oscillating variations of the fundamental constants, including the fine structure constant alpha and particle masses. Recent atomic dysprosium spectroscopy measurements and the primordial helium abundance data allowed us to improve on existing constraints on the quadratic interactions of the scalar dark matter with the photon, electron and light quarks by up to 15 orders of magnitude. Limits on the linear and quadratic interactions of the dark matter with W and Z bosons have been obtained for the first time. In addition to traditional methods to search for the variation of the fundamental constants (atomic clocks, quasar spectra, Big Bang Nucleosynthesis, etc.) we discuss variations in phase shifts produced in laser/maser interferometers (such as giant LIGO, Virgo, GEO600 and TAMA300, and the table-top silicon cavity and sapphire interferometers), changes in pulsar rotational frequencies (which may have been observed already in pulsar glitches), non-gravitational lensing of cosmic radiation and the time-delay of pulsar signals. Other effects of dark matter and dark energy include apparent violation of the fundamental symmetries: oscillating or transient atomic electric dipole moments, precession of electron and nuclear spins about the direction of Earth's motion through an axion condensate, and axion-mediated spin-gravity couplings, violation of Lorentz

  19. An upper limit to the variation in the fundamental constants at redshift z = 5.2

    CERN Document Server

    Levshakov, S A; Boone, F; Agafonova, I I; Reimers, D; Kozlov, M G

    2012-01-01

    Aims. We constrain a hypothetical variation in the fundamental physical constants over the course of cosmic time. Methods. We use unique observations of the CO(7-6) rotational line and the [CI] 3P_2 - 3P_1 fine-structure line towards a lensed galaxy at redshift z = 5.2 to constrain temporal variations in the constant F = alpha^2/mu, where mu is the electron-to-proton mass ratio and alpha is the fine-structure constant. The relative change in F between z = 0 and z = 5.2, dF/F = (F_obs - F_lab)/F_lab, is estimated from the radial velocity offset, dV = V_rot - V_fs, between the rotational transitions in carbon monoxide and the fine-structure transition in atomic carbon. Results. We find a conservative value dV = 1 +/- 5 km/s (1sigma C.L.), which when interpreted in terms of dF/F gives dF/F < 2x10^-5. Independent methods restrict the mu-variations at the level of dmu/mu < 1x10^-7 at z = 0.7 (look-back time t_z0.7 = 6.4 Gyr). Assuming that temporal variations in mu, if any, are linear, this leads to an upper limit...
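
    The step from the velocity offset to the limit on F is essentially a Doppler argument: a relative shift between the two line frequencies of size dF/F masquerades as a velocity offset dV ≈ c · dF/F. A minimal Python sketch of that conversion, using the numbers quoted above, is given below; the choice of |central value| + 1 sigma as the conservative bound is an assumption made here for illustration, not necessarily the authors' exact procedure.

        C_KM_S = 299_792.458          # speed of light in km/s

        dV_central_km_s = 1.0         # measured radial velocity offset (km/s)
        dV_sigma_km_s = 5.0           # 1-sigma uncertainty (km/s)

        # Conservative bound on the offset (assumed: |central| + 1 sigma)
        dV_limit_km_s = abs(dV_central_km_s) + dV_sigma_km_s

        dF_over_F = dV_limit_km_s / C_KM_S
        print(f"|dF/F| < {dF_over_F:.0e}")   # ~2e-05, matching the quoted limit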

  20. Quantum gravity, Clifford algebras, fuzzy set theory and the fundamental constants of nature

    Energy Technology Data Exchange (ETDEWEB)

    El Naschie, M.S

    2004-05-01

    In a recent paper entitled 'Quantum gravity from descriptive set theory', published in Chaos, Solitons and Fractals, we considered, following the P-adic quantum theory, the possibility of abandoning the Archimedean axiom and introducing a fundamental physical limitation on the smallest length in quantum spacetime. Proceeding that way we arrived at the conclusion that maximising the Hawking-Bekenstein informational content of spacetime makes the existence of a transfinite geometry for physical 'spacetime' plausible or even inevitable. Subsequently we introduced a mathematical description of a transfinite, non-Archimedean geometry using descriptive set theory where a similar conclusion regarding the transfiniteness of quantum spacetime may be drawn from the existence of the Unruh temperature. In particular we introduced a straightforward logarithmic gauge transformation linking, as far as we are aware for the first time, classical gravity with the electroweak via a version of informational entropy. That way we found using {epsilon}{sup ({infinity})} and complexity theory that {alpha}-bar{sub G} = 2{sup {alpha}-bar{sub ew}-1} = 1.7x10{sup 38}, where {alpha}-bar{sub G} is the dimensionless Newton gravity constant and {alpha}-bar{sub ew} = 128 is the fine structure constant at the electroweak unification scale. The present work is concerned with more or less the same category of fundamental questions pertinent to quantum gravity. However we switch our mathematical apparatus to a combination of Clifford algebras and set theory. In doing that, the central and vital role of the work of D. Finkelstein becomes much more tangible and clearer than in most of our previous works.

  1. Fundamental Limits of Wideband Localization - Part II: Cooperative Networks

    CERN Document Server

    Shen, Yuan; Win, Moe Z

    2010-01-01

    The availability of positional information is of great importance in many commercial, governmental, and military applications. Localization is commonly accomplished through the use of radio communication between mobile devices (agents) and fixed infrastructure (anchors). However, precise determination of agent positions is a challenging task, especially in harsh environments due to radio blockage or limited anchor deployment. In these situations, cooperation among agents can significantly improve localization accuracy and reduce localization outage probabilities. A general framework for analyzing the fundamental limits of wideband localization has been developed in Part I of the paper. Here, we build on this framework and establish the fundamental limits of wideband cooperative location-aware networks. Our analysis is based on the waveforms received at the nodes, in conjunction with the Fisher information inequality. We provide a geometrical interpretation of equivalent Fisher information for cooperative networks....

  2. A Different Look at Dark Energy and the Time Variation of Fundamental Constants

    CERN Document Server

    Weinstein, Marvin

    2011-01-01

    This paper makes the simple observation that a fundamental length, or cutoff, in the context of Friedmann-Lema\\^itre-Robertson-Walker (FRW) cosmology implies very different things than for a static universe. It is argued that it is reasonable to assume that this cutoff is implemented by fixing the number of quantum degrees of freedom per co-moving volume (as opposed to a Planck volume) and the relationship of the vacuum-energy of all of the fields in the theory to the cosmological constant (or dark energy) is re-examined. The restrictions that need to be satisfied by a generic theory to avoid conflicts with current experiments are discussed, and it is shown that in any theory satisfying these constraints knowing the difference between $w$ and minus one allows one to predict $\\dot{w}$. It is argued that this is a robust result and if this prediction fails the idea of a fundamental cutoff of the type being discussed can be ruled out. Finally, it is observed that, within the context of a specific theory, a co-mo...

  3. Fundamental Limits of Wideband Localization - Part I: A General Framework

    CERN Document Server

    Shen, Yuan

    2010-01-01

    The availability of positional information is of great importance in many commercial, public safety, and military applications. The coming years will see the emergence of location-aware networks with sub-meter accuracy, relying on accurate range measurements provided by wide bandwidth transmissions. In this two-part paper, we determine the fundamental limits of localization accuracy of wideband wireless networks in harsh multipath environments. We first develop a general framework to characterize the localization accuracy of a given node here and then extend our analysis to cooperative location-aware networks in Part II. In this paper, we characterize localization accuracy in terms of a performance measure called the squared position error bound (SPEB), and introduce the notion of equivalent Fisher information to derive the SPEB in a succinct expression. This methodology provides insights into the essence of the localization problem by unifying localization information from individual anchors and information ...
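
    The squared position error bound (SPEB) referred to above is the trace of the inverse of the position-related Fisher information matrix, and the equivalent-Fisher-information formalism reduces each anchor's contribution to a rank-one term along the agent-anchor direction. The Python sketch below illustrates that structure for an idealized range-only, Gaussian-noise model with hypothetical anchor positions and noise levels; it is a simplified illustration of the concept, not the waveform-level derivation developed in the paper.

        import numpy as np

        def speb(agent, anchors, range_std):
            """Squared position error bound for idealized range-only localization.

            Each anchor contributes (1/sigma_i^2) * u_i u_i^T to the Fisher
            information matrix J, where u_i is the unit vector from the agent
            towards anchor i. The SPEB is tr(J^{-1}).
            """
            J = np.zeros((2, 2))
            for anchor, sigma in zip(anchors, range_std):
                diff = np.asarray(anchor, dtype=float) - agent
                u = diff / np.linalg.norm(diff)      # direction to the anchor
                J += np.outer(u, u) / sigma**2       # rank-one information term
            return np.trace(np.linalg.inv(J))

        # Hypothetical geometry: agent at the origin, three anchors, 10 cm ranging noise
        agent = np.array([0.0, 0.0])
        anchors = [(10.0, 0.0), (0.0, 10.0), (-7.0, -7.0)]
        print("SPEB [m^2]:", speb(agent, anchors, range_std=[0.1, 0.1, 0.1]))

    With a well-spread anchor geometry the SPEB is of the order of the ranging variance, while removing anchors or placing them along a single line makes J ill-conditioned and the bound blows up; that is the regime in which cooperation among agents, treated in Part II, becomes most valuable.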

  4. Fundamental U-Theory of Time. Part 1

    Directory of Open Access Journals (Sweden)

    Yuvraj J. Gopaul

    2016-02-01

    The Fundamental U-Theory of Time (Part 1) is an original theory that aims to unravel the mystery of what exactly ‘time’ is. To date, very few explanations from the branches of physics or cosmology have succeeded in providing an accurate and comprehensive depiction of time. Most explanations have only managed to provide partial understanding or, at best, glimpses of its true nature. The U-Theory uses ‘Thought Experiments’ to uncover the determining characteristics of time. In Part 1 of this theory, the focus is not so much on the mathematics as on the accuracy of the depiction of time. Moreover, it challenges current views in theoretical physics, particularly on the idea of ‘time travel’. Notably, it is a theory seeking to present a fresh approach for reviewing Einstein’s Theory of Relativity, while unlocking new pathways for upcoming research in the field of physics and cosmology.

  5. Microwave and submillimeter molecular transitions and their dependence on fundamental constants

    Energy Technology Data Exchange (ETDEWEB)

    Kozlov, Mikhail G. [Petersburg Nuclear Physics Institute, Gatchina (Russian Federation); St. Petersburg Electrotechnical University ' ' LETI' ' , St. Petersburg (Russian Federation); Levshakov, Sergei A. [St. Petersburg Electrotechnical University ' ' LETI' ' , St. Petersburg (Russian Federation); Ioffe Physical-Technical Institute, St. Petersburg (Russian Federation)

    2013-07-15

    Microwave and submillimeter molecular transition frequencies between nearly degenerate rotational levels, tunneling transitions, and mixed tunneling-rotational transitions show an extremely high sensitivity to the values of the fine-structure constant, {alpha}, and the electron-to-proton mass ratio, {mu}. This review summarizes the theoretical background on quantum-mechanical calculations of the sensitivity coefficients of such transitions to tiny changes in {alpha} and {mu} for a number of molecules which are usually observed in Galactic and extragalactic sources, and discusses the possibility of testing the space- and time-invariance of fundamental constants through comparison between precise laboratory measurements of the molecular rest frequencies and their astronomical counterparts. In particular, diatomic radicals CH, OH, NH{sup +}, and a linear polyatomic radical C{sub 3}H in {Pi} electronic ground state, polyatomic molecules NH{sub 3}, ND{sub 3}, NH{sub 2}D, NHD{sub 2}, H{sub 2}O{sub 2}, H{sub 3}O{sup +}, CH{sub 3}OH, and CH{sub 3}NH{sub 2} in their tunneling and tunneling-rotational modes are considered. It is shown that the sensitivity coefficients strongly depend on the quantum numbers of the corresponding transitions. This can be used for astrophysical tests of Einstein's Equivalence Principle all over the Universe at an unprecedented level of sensitivity of 10{sup -9}, a limit two to three orders of magnitude below the current constraints on cosmological variations of {alpha} and {mu}: {Delta}{alpha}/{alpha} < 10{sup -6}, {Delta}{mu}/{mu} < 10{sup -7}. (copyright 2013 by WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim)
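
    In this formalism each transition frequency is assigned sensitivity coefficients K_alpha and K_mu, so that small fractional changes in alpha and mu appear as an apparent Doppler velocity offset between two lines whose coefficients differ. The Python sketch below illustrates the bookkeeping; the coefficient values and fractional variations used are purely hypothetical placeholders (the real K values are molecule- and transition-specific and come from the quantum-mechanical calculations reviewed in the paper), and the sign convention is chosen for illustration only.

        C_M_S = 299_792_458.0   # speed of light in m/s

        def apparent_velocity_offset(delta_K_alpha, delta_K_mu,
                                     dalpha_over_alpha, dmu_over_mu):
            """Apparent velocity offset (m/s) between two lines whose sensitivity
            coefficients to alpha and mu differ by delta_K_alpha and delta_K_mu."""
            return C_M_S * (delta_K_alpha * dalpha_over_alpha
                            + delta_K_mu * dmu_over_mu)

        # Hypothetical example: a tunneling line compared with a rotational line,
        # with a mu-sensitivity difference of order 10 (made-up value), probed at
        # the 10^-9 level of fractional variation in mu.
        dv = apparent_velocity_offset(delta_K_alpha=0.0, delta_K_mu=10.0,
                                      dalpha_over_alpha=0.0, dmu_over_mu=1e-9)
        print(f"apparent velocity offset ~ {dv:.1f} m/s")   # ~3.0 m/s

    Offsets at the level of a few metres per second are why the review stresses precise laboratory rest frequencies as a prerequisite for such astrophysical tests.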

  6. Probing QED and fundamental constants through laser spectroscopy of vibrational transitions in HD+

    CERN Document Server

    Biesheuvel, J; Hilico, L; Eikema, K S E; Ubachs, W; Koelemeij, J C J

    2016-01-01

    The simplest molecules in nature, molecular hydrogen ions in the form of H2+ and HD+, provide an important benchmark system for tests of quantum electrodynamics in complex forms of matter. Here, we report on such a test based on a frequency measurement of a vibrational overtone transition in HD+ by laser spectroscopy. We find that the theoretical and experimental frequencies are equal to within 0.6(1.1) parts per billion, which represents the most stringent test of molecular theory so far. Our measurement not only confirms the validity of high-order quantum electrodynamics in molecules, but also enables the long predicted determination of the proton-to-electron mass ratio from a molecular system, as well as improved constraints on hypothetical fifth forces and compactified higher dimensions at the molecular scale. With the perspective of comparisons between theory and experiment at the 0.01 part-per-billion level, our work demonstrates the potential of molecular hydrogen ions as a probe of fundamental physica...

  7. Diagnostic and interventional musculoskeletal ultrasound: part 1. Fundamentals.

    Science.gov (United States)

    Smith, Jay; Finnoff, Jonathan T

    2009-01-01

    Musculoskeletal ultrasound involves the use of high-frequency sound waves to image soft tissues and bony structures in the body for the purposes of diagnosing pathology or guiding real-time interventional procedures. Recently, an increasing number of physicians have integrated musculoskeletal ultrasound into their practices to facilitate patient care. Technological advancements, improved portability, and reduced costs continue to drive the proliferation of ultrasound in clinical medicine. This increased interest creates a need for education pertaining to all aspects of musculoskeletal ultrasound. The primary purpose of this article is to review diagnostic ultrasound technology and its potential clinical applications in the evaluation and treatment of patients with neurologic and musculoskeletal disorders. After reviewing this article, physicians should be able to (1) list the advantages and disadvantages of ultrasound compared with other available imaging modalities, (2) describe how ultrasound machines produce images using sound waves, (3) discuss the steps necessary to acquire and optimize an ultrasound image, (4) understand the different ultrasound appearances of tendons, nerves, muscles, ligaments, blood vessels, and bones, and (5) identify multiple applications for diagnostic and interventional musculoskeletal ultrasound in musculoskeletal practice. Part 1 of this 2-part article reviews the fundamentals of clinical ultrasonographic imaging, including relevant physics, equipment, training, image optimization, and scanning principles for diagnostic and interventional purposes.

  8. Exploring variations in the fundamental constants with ELTs: The CODEX spectrograph on OWL

    CERN Document Server

    Molaro, P; Levshakov, S; Molaro, Paolo; Murphy, Michael T.; Levshakov, Sergei

    2006-01-01

    Cosmological variations in the fine structure constant, alpha, can be probed through precise velocity measurements of metallic absorption lines from intervening gas clouds seen in spectra of distant quasars. Data from the Keck/HIRES instrument support a variation in alpha of 6 parts per million. Such a variation would have profound implications, possibly providing a window into the extra spatial dimensions required by unified theories such as string/M-theory. However, recent results from VLT/UVES suggest no variation in alpha. The COsmic Dynamics EXperiment (CODEX) spectrograph currently being designed for the ESO OWL telescope (Pasquini et al 2005) with a resolution high enough to properly resolve even the narrowest of metallic absorption lines, R>150,000, will achieve a 2-to-3 order-of-magnitude precision increase in Delta\\alpha/alpha. This will rival the precision available from the Oklo natural fission reactor and upcoming satellite-borne atomic clock experiments. Given the vital constraints on fundamenta...

  9. Data stewardship - a fundamental part of the scientific method (Invited)

    Science.gov (United States)

    Foster, C.; Ross, J.; Wyborn, L. A.

    2013-12-01

    This paper emphasises the importance of data stewardship as a fundamental part of the scientific method, and the need to effect cultural change to ensure engagement by earth scientists. It is differentiated from the science of data stewardship per se. Earth System science generates vast quantities of data, and in the past, data analysis has been constrained by compute power, such that sub-sampling of data often provided the only way to reach an outcome. This is analogous to Kahneman's System 1 heuristic, with its simplistic and often erroneous outcomes. The development of HPC has liberated earth sciences such that the complexity and heterogeneity of natural systems can be utilised in modelling at any scale, global, or regional, or local; for example, movement of crustal fluids. Paradoxically, now that compute power is available, it is the stewardship of the data that is presenting the main challenges. There is a wide spectrum of issues: from effectively handling and accessing acquired data volumes [e.g. satellite feeds per day/hour]; through agreed taxonomy to effect machine to machine analyses; to idiosyncratic approaches by individual scientists. Except for the latter, most agree that data stewardship is essential. Indeed it is an essential part of the science workflow. As science struggles to engage and inform on issues of community importance, such as shale gas and fraccing, all parties must have equal access to data used for decision making; without that, there will be no social licence to operate or indeed access to additional science funding (Heidorn, 2008). The stewardship of scientific data is an essential part of the science process; but often it is regarded, wrongly, as entirely in the domain of data custodians or stewards. Geoscience Australia has developed a set of six principles that apply to all science activities within the agency: Relevance to Government Collaborative science Quality science Transparent science Communicated science Sustained

  10. Kinetic performance limits of constant pressure versus constant flow rate gradient elution separations. Part I: theory.

    Science.gov (United States)

    Broeckhoven, K; Verstraeten, M; Choikhet, K; Dittmann, M; Witt, K; Desmet, G

    2011-02-25

    We report on a general theoretical assessment of the potential kinetic advantages of running LC gradient elution separations in the constant-pressure mode instead of in the customarily used constant-flow rate mode. Analytical calculations as well as numerical simulation results are presented. It is shown that, provided both modes are run with the same volume-based gradient program, the constant-pressure mode can potentially offer an identical separation selectivity (except for some small differences induced by the difference in pressure and viscous heating trajectory), but in a significantly shorter time. For a gradient running between 5 and 95% of organic modifier, the decrease in analysis time can be expected to be of the order of some 20% for both water-methanol and water-acetonitrile gradients, and to depend only weakly on the value of V(G)/V₀ (or equivalently t(G)/t₀). Obviously, the gain will be smaller when the start and end composition lie closer to the viscosity maximum of the considered water-organic modifier system. The assumptions underlying the obtained results (no effects of pressure and temperature on the viscosity or retention coefficient) are critically reviewed, and can be inferred to only have a small effect on the general conclusions. It is also shown that, under the adopted assumptions, the kinetic plot theory also holds for operations where the flow rate varies with time, as is the case for constant-pressure operation. Comparing both operation modes in a kinetic plot representing the maximal peak capacity versus time, it is theoretically predicted here that both modes can be expected to perform equally well in the fully C-term dominated regime (where H varies linearly with the flow rate), while the constant pressure mode is advantageous for all lower flow rates. Near the optimal flow rate, and for linear gradients running from 5 to 95% organic modifier, time gains of the order of some 20% can be expected (or 25-30% when accounting for

  11. Atomic spectroscopy and highly accurate measurement: determination of fundamental constants; Spectroscopie atomique et mesures de grande precision: determination de constantes fonfamentales

    Energy Technology Data Exchange (ETDEWEB)

    Schwob, C

    2006-12-15

    This document reviews the theoretical and experimental achievements of the author concerning highly accurate atomic spectroscopy applied to the determination of fundamental constants. A purely optical frequency measurement of the 2S-12D two-photon transitions in atomic hydrogen and deuterium has been performed. The experimental setup is described, as well as the data analysis. Optimized values for the Rydberg constant and Lamb shifts have been deduced (R = 109737.31568516 (84) cm{sup -1}). An experiment devoted to the determination of the fine structure constant with a target relative uncertainty of 10{sup -9} began in 1999. This experiment is based on the fact that Bloch oscillations in a frequency-chirped optical lattice are a powerful tool to coherently transfer many photon momenta to the atoms. We have used this method to measure accurately the ratio h/m(Rb). The measured value of the fine structure constant is {alpha}{sup -1} = 137.03599884 (91), with a relative uncertainty of 6.7*10{sup -9}. The future prospects of this experiment are presented. This document, presented before an academic board, will allow its author to supervise research work and, in particular, to supervise thesis students. (A.C.)
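
    The link between the recoil measurement and the fine structure constant is the standard relation alpha^2 = (2 R_inf / c) x (m_Rb / m_e) x (h / m_Rb), which involves only the Rydberg constant, a mass ratio and the measured h/m(Rb). The Python snippet below evaluates it with rounded CODATA-level inputs; these numbers are illustrative approximations, not the values used in the thesis, but they show that the relation indeed reproduces 1/alpha close to 137.036.

        import math

        # Rounded CODATA-level inputs (illustrative values only)
        R_INF = 1.0973731568e7                    # Rydberg constant, m^-1
        C = 2.99792458e8                          # speed of light, m/s
        H = 6.62607015e-34                        # Planck constant, J s
        M_E = 9.1093837015e-31                    # electron mass, kg
        M_RB87 = 86.909180 * 1.66053906660e-27    # 87Rb mass, kg

        h_over_m_rb = H / M_RB87                  # quantity measured via Bloch oscillations
        alpha_sq = (2 * R_INF / C) * (M_RB87 / M_E) * h_over_m_rb
        alpha = math.sqrt(alpha_sq)
        print(f"1/alpha ~ {1 / alpha:.3f}")       # ~137.036

    This route to alpha is attractive because h/m(Rb) and the mass ratio are measured quantities that do not rely on quantum-electrodynamics calculations, so the result cross-checks determinations of alpha derived from the electron g-2.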

  12. Type Ia Supernovae Progenitor Problem and the Variation of Fundamental Constants

    Directory of Open Access Journals (Sweden)

    Rybicki M.

    2016-01-01

    Cosmological observations strongly suggest our universe is the interior of an expanding black hole. If a constant mass of the universe is assumed, then from the equation for the Schwarzschild radius, r_S = 2Gm/c^2, it follows that the proportionality factor G/c^2 depends linearly on the universe's radius R_u, identified with r_S, i.e. G/c^2 ∝ R_u. Because the Chandrasekhar limit M_Ch relates to the speed of light and to Newton's constant as M_Ch ∝ (c/G)^{3/2}, expansion involves a gradual decrease of M_Ch. As a result, a single white dwarf can alone become the Type Ia supernova progenitor, which provides a complementary solution to the single-degenerate and double-degenerate models for SNe Ia. Both alternative scenarios, G ∝ R_u (constant c) and c ∝ R_u^{-1/2} (constant G), are analyzed with regard to their consistency with observations and their consequences for cosmology.
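
    The scaling argument can be written out in two short steps; the relations below are a sketch using standard textbook formulas, with hbar, the stellar composition and the total mass of the universe held fixed, rather than an excerpt from the paper.

        r_S = \frac{2Gm}{c^2}
        \;\Longrightarrow\;
        \frac{G}{c^2} = \frac{r_S}{2m} \propto R_u \quad (R_u \equiv r_S,\; m = \text{const}),
        \qquad
        M_{\mathrm{Ch}} \sim \frac{1}{m_p^2}\left(\frac{\hbar c}{G}\right)^{3/2} \propto \left(\frac{c}{G}\right)^{3/2}.

    In either scenario, G growing at fixed c or c decreasing at fixed G, the ratio c/G falls as R_u grows, so the Chandrasekhar mass decreases as the universe expands.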

  13. Rovibrational spectroscopic constants and fundamental vibrational frequencies for isotopologues of cyclic and bent singlet HC{sub 2}N isomers

    Energy Technology Data Exchange (ETDEWEB)

    Inostroza, Natalia; Fortenberry, Ryan C.; Lee, Timothy J. [NASA Ames Research Center, Moffett Field, CA 94035-1000 (United States); Huang, Xinchuan, E-mail: Timothy.J.Lee@nasa.gov [SETI Institute, 189 Bernardo Avenue, Suite 100, Mountain View, CA 94043 (United States)

    2013-12-01

    Through established, highly accurate ab initio quartic force fields, a complete set of fundamental vibrational frequencies, rotational constants, and rovibrational coupling and centrifugal distortion constants have been determined for both the cyclic 1 {sup 1} A' and bent 2 {sup 1} A' DCCN, H{sup 13}CCN, HC{sup 13}CN, and HCC{sup 15}N isotopologues of HCCN. Spectroscopic constants are computed for all isotopologues using second-order vibrational perturbation theory (VPT2), and the fundamental vibrational frequencies are computed with VPT2 and vibrational configuration interaction (VCI) theory. Agreement between VPT2 and VCI results is quite good, with the fundamental vibrational frequencies of the bent isomer isotopologues in accord to within a 0.1-3.2 cm{sup –1} range. Similar accuracies are present for the cyclic isomer isotopologues. The data generated here serve as a reference for astronomical observations of these closed-shell, highly dipolar molecules using new, high-resolution telescopes and as reference for laboratory studies where isotopic labeling may lead to elucidation of the formation mechanism for the known interstellar molecule: X {sup 3} A' HCCN.

  14. Fundamentals of Physics, Part 2 (Chapters 12-20)

    Science.gov (United States)

    Halliday, David; Resnick, Robert; Walker, Jearl

    2003-12-01

    Table of contents (excerpt): ... Engines. 20-8 A Statistical View of Entropy. Review & Summary. Questions. Problems. Appendices: A. The International System of Units (SI); B. Some Fundamental Constants of Physics; C. Some Astronomical Data; D. Conversion Factors; E. Mathematical Formulas; F. Properties of the Elements; G. Periodic Table of the Elements. Answers to Checkpoints and Odd-Numbered Questions and Problems. Index.

  15. Supersymmetry, Cosmological Constant and Inflation: Towards a fundamental cosmic picture via "running vacuum"

    Science.gov (United States)

    Mavromatos, Nick E.

    2016-11-01

    On the occasion of a century from the proposal of General relativity by Einstein, I attempt to tackle some open issues in modern cosmology, via a toy but non-trivial model. Specifically, I would like to link together: (i) the smallness of the cosmological constant today, (ii) the evolution of the universe from an inflationary era after the big-bang till now, and (iii) local supersymmetry in the gravitational sector (supergravity) with a broken spectrum at early eras, by making use of the concept of the "running vacuum" in the context of a simple toy model of four-dimensional N = 1 supergravity. The model is characterised by dynamically broken local supersymmetry, induced by the formation of gravitino condensates in the early universe. As I will argue, there is a Starobinsky-type inflationary era characterising the broken supersymmetry phase in this model, which is compatible with the current cosmological data, provided a given constraint is satisfied among some tree-level parameters of the model and the renormalised cosmological constant of the de Sitter background used in the analysis. Applying the "running vacuum" concept, then, to the effective field theory at the exit of inflation, makes a smooth connection (in cosmic time) with the radiation dominance epoch and subsequently with the current era of the Universe, characterised by a small (but dominant) cosmological-constant contribution to the cosmic energy density. In this approach, the smallness of the cosmological constant today is attributed to the failure (due to quantum gravity non-perturbative effects) of the aforementioned constraint.

  16. Nuclear Data in Oklo and Time-Variability of Fundamental Coupling Constants

    CERN Document Server

    Fujii, Y; Fukahori, T; Ohnuki, T; Nakagawa, M; Hidaka, H; Oura, Y; Møller, P; Fujii, Yasunori; Iwamoto, Akira; Fukahori, Tokio; Ohnuki, Toshihiko; Nakagawa, Masayuki; Hidaka, Hiroshi; Oura, Yasuji; Moller, Peter

    2001-01-01

    We re-examined Shlyakhter's analysis of the Sm data from Oklo. Taking special care to minimize contamination due to the inflow of the isotope after the end of the reactor activity, we confirmed that his result on the time-variability of the fine-structure constant, $|\dot{\alpha}/\alpha |\lesssim 10^{-17}\,{\rm yr}^{-1}$, was basically correct. In addition to this upper bound, however, we obtained another result that indicates a different value of $\alpha$ 2 billion years ago. We add comments on the recent result from QSOs.

  17. Investigation of the Fundamental Constants Stability Based on the Reactor Oklo Burn-Up Analysis

    Science.gov (United States)

    Onegin, M. S.; Yudkevich, M. S.; Gomin, E. A.

    2012-12-01

    The burn-up of a few samples from zones 3 and 5 of the natural Oklo reactor was calculated using a modern Monte Carlo code. We reconstructed the neutron spectrum in the core by means of the isotope ratios ¹⁴⁷Sm/¹⁴⁸Sm and ¹⁷⁶Lu/¹⁷⁵Lu. These ratios unambiguously determine the water content and core temperature. The isotope ratio of ¹⁴⁹Sm in the sample, calculated using this spectrum, was compared with the experimental one. The disagreement between these two values allows one to limit a possible shift of the low-lying resonance of ¹⁴⁹Sm. These limits were then converted into limits on the change of the fine structure constant α. We have found that for the rate of α change the inequality |α̇/α| ≤ 5×10⁻¹⁸ yr⁻¹ is fulfilled, which is an order of magnitude more stringent than our previous limit.

  18. Investigation of the fundamental constants stability based on the reactor Oklo burn-up analysis

    CERN Document Server

    Onegin, M S

    2010-01-01

    The burn-up for the SC56-1472 sample of the natural Oklo reactor zone 3 was calculated using modern Monte Carlo codes. We reconstructed the neutron spectrum in the core by means of the isotope ratios $^{147}$Sm/$^{148}$Sm and $^{176}$Lu/$^{175}$Lu. These ratios unambiguously determine the spectrum index and core temperature. The effective neutron absorption cross section of $^{149}$Sm calculated using this spectrum was compared with the experimental one. The disagreement between these two values allows one to limit a possible shift of the low-lying resonance of $^{149}$Sm even further. These limits were then converted into limits on the change of the fine structure constant $\alpha$. We found that for the rate of $\alpha$ change the inequality $|\delta \dot{\alpha}/\alpha| \le 5\cdot 10^{-18}$ is fulfilled, which is an order of magnitude more stringent than our previous limit.

  19. Supersymmetry, Cosmological Constant and Inflation: Towards a fundamental cosmic picture via "running vacuum"

    CERN Document Server

    Mavromatos, Nick E

    2016-01-01

    On the occasion of a century from the proposal of General relativity by Einstein, I attempt to tackle some open issues in modern cosmology, via a toy but non-trivial model. Specifically, I would like to link together: (i) the smallness of the cosmological constant today, (ii) the evolution of the universe from an inflationary era after the big-bang till now, and (iii) local supersymmetry in the gravitational sector (supergravity) with a broken spectrum at early eras, by making use of the concept of the "running vacuum" in the context of a simple toy model of four-dimensional N=1 supergravity. The model is characterised by dynamically broken local supersymmetry, induced by the formation of gravitino condensates in the early universe. As I will argue, there is a Starobinsky-type inflationary era characterising the broken supersymmetry phase in this model, which is compatible with the current cosmological data, provided a given constraint is satisfied among some tree-level parameters of the model and the renorma...

  20. Investigation of the fundamental constants stability based on the reactor Oklo burn-up analysis

    CERN Document Server

    Onegin, M S

    2014-01-01

    New severe constraints on the variation of the fine structure constant have been obtained from the reactor Oklo analysis in our previous work. We investigate here how these constraints confine the parameters of the BSBM model of varying $\alpha$. Integrating the coupled system of equations from the Big Bang up to the present time and taking into account the Oklo limits, we have obtained the following bound on the combination of the parameters of the BSBM model: $$ |\zeta_m (\frac{l}{l_{pl}})^2|<6\cdot 10^{-7}, $$ where $l_{pl}=(\frac{G\hbar}{c^3})^{\frac{1}{2}} \approx 1.6 \cdot 10^{-33}$ cm is the Planck length and $l$ is the characteristic length of the BSBM model. The natural value of the parameter $\zeta_m$, the fraction of electromagnetic energy in matter, is about $10^{-4}$. It therefore follows from our analysis that the characteristic length $l$ of the BSBM theory should be considerably smaller than the Planck length in order to fulfil the Oklo constraints on the $\alpha$ variation.
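
    As a quick sanity check on the quoted numbers (a minimal sketch of my own, using only the ζ_m ≈ 10⁻⁴ estimate and the 6×10⁻⁷ bound stated above), the implied ceiling on l/l_pl follows from elementary arithmetic:

        import math

        zeta_m = 1e-4   # assumed fraction of electromagnetic energy in matter (from the abstract)
        bound = 6e-7    # Oklo-derived bound on |zeta_m * (l / l_pl)^2| (from the abstract)

        l_over_lpl = math.sqrt(bound / zeta_m)   # maximum allowed l / l_pl
        print(f"l / l_pl < {l_over_lpl:.3f}")    # ~0.08, i.e. l well below the Planck length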

  1. Measuring Changes in the Fundamental Constants with Redshifted Radio Absorption Lines

    CERN Document Server

    Curran, S J; Darling, J K

    2004-01-01

    Strong evidence has recently emerged for a variation in the fine structure constant, $\alpha\equiv e^2/\hbar c$, over the history of the Universe. This was concluded from a detailed study of the relative positions of redshifted optical quasar absorption spectra. However, radio absorption lines at high redshift offer a much higher sensitivity to a cosmological change in $\alpha$ than optical lines. Furthermore, through the comparison of various radio transitions, H I, OH and millimetre molecular (e.g. CO) lines, any variations in the proton g-factor, $g_p$, and the ratio of electron/proton masses, $\mu\equiv m_e/m_p$, may also be constrained. Presently, however, systems exhibiting redshifted radio lines are rare, with the bias being towards those associated with optically selected QSOs. With its unprecedented sensitivity, large bandwidth and wide field of view, the SKA will prove paramount in surveying the sky for absorbers unbiased by dust extinction. This is expected to yield whole new samples of H I a...

  2. Fundamental and overtone vibrational spectroscopy, enthalpy of hydrogen bond formation and equilibrium constant determination of the methanol-dimethylamine complex.

    Science.gov (United States)

    Du, Lin; Mackeprang, Kasper; Kjaergaard, Henrik G

    2013-07-07

    We have measured gas phase vibrational spectra of the bimolecular complex formed between methanol (MeOH) and dimethylamine (DMA) up to about 9800 cm⁻¹. In addition to the strong fundamental OH-stretching transition we have also detected the weak second overtone NH-stretching transition. The spectra of the complex are obtained by spectral subtraction of the monomer spectra from spectra recorded for the mixture. For comparison, we also measured the fundamental OH-stretching transition in the bimolecular complex between MeOH and trimethylamine (TMA). The enthalpies of hydrogen bond formation (ΔH) for the MeOH-DMA and MeOH-TMA complexes have been determined by measurements of the fundamental OH-stretching transition in the temperature range from 298 to 358 K. The enthalpy of formation is found to be -35.8 ± 3.9 and -38.2 ± 3.3 kJ mol⁻¹ for MeOH-DMA and MeOH-TMA, respectively, in the 298 to 358 K region. The equilibrium constant (Kp) for the formation of the MeOH-DMA complex has been determined from the measured and calculated transition intensities of the OH-stretching fundamental transition and the NH-stretching second overtone transition. The transition intensities were calculated using an anharmonic oscillator local mode model with dipole moment and potential energy curves calculated using explicitly correlated coupled cluster methods. The equilibrium constant for formation of the MeOH-DMA complex was determined to be 0.2 ± 0.1 atm⁻¹, corresponding to a ΔG value of about 4.0 kJ mol⁻¹.
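
    The quoted ΔG follows from the standard relation ΔG = -RT ln Kp; a minimal check (my own sketch, assuming T ≈ 298 K and the 1 atm standard state implied by the units of Kp above) reproduces the ~4 kJ mol⁻¹ figure:

        import math

        R = 8.314    # J mol^-1 K^-1
        T = 298.0    # K (assumed room temperature)
        Kp = 0.2     # atm^-1, equilibrium constant quoted above

        dG = -R * T * math.log(Kp) / 1000.0   # kJ mol^-1
        print(f"Delta G = {dG:.1f} kJ/mol")   # ~ +4.0 kJ/mol, consistent with the abstract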

  3. Identification of Parts Failures. FOS: Fundamentals of Service.

    Science.gov (United States)

    John Deere Co., Moline, IL.

    This parts failures identification manual is one of a series of power mechanics texts and visual aids covering theory of operation, diagnosis of trouble problems, and repair of automotive and off-the-road construction and agricultural equipment. Materials provide basic information with many illustrations for use by vocational students and teachers…

  4. The New SI and the CODATA recommended values of the fundamental constants 2014 (arXiv:1507.07956)

    CERN Document Server

    Pavese, Franco

    2015-01-01

    The aim of this note is to point out some standing features of the present CODATA method in the light of the recent CODATA table of the 2014 recommended values for the fundamental constants, published as arXiv:1507.07956. A comprehensive discussion of this and related issues is becoming very important in view of the foreseen revision of the SI, presently planned for 2018. These features may raise doubts about a possible mixing of physical reasons of general validity in science with some needs specific to metrology concerning the base units of the International System of units (SI).

  5. Enhanced effects of variation of the fundamental constants in laser interferometers and application to dark matter detection

    CERN Document Server

    Stadnik, Y V

    2015-01-01

    We outline new laser interferometer measurements to search for variation of the electromagnetic fine-structure constant $\\alpha$ and particle masses (including a non-zero photon mass). We propose a strontium optical lattice clock -- silicon single-crystal cavity interferometer as a novel small-scale platform for these new measurements. Multiple passages of a light beam inside an interferometer enhance the effects due to variation of the fundamental constants by the mean number of passages ($N_{\\textrm{eff}} \\sim 10^2$ for a large-scale gravitational-wave detector, such as LIGO, Virgo, GEO600 or TAMA300, while $N_{\\textrm{eff}} \\sim 10^5$ for a strontium clock -- silicon cavity interferometer). Our proposed laser interferometer measurements may be implemented as an extremely precise tool in the direct detection of scalar dark matter that forms an oscillating classical field or topological defects.

  6. Writing biomedical manuscripts part I: fundamentals and general rules.

    Science.gov (United States)

    Ohwovoriole, A E

    2011-01-01

    It is a professional obligation for health researchers to investigate and communicate their findings to the medical community. Writing a publishable scientific manuscript can be a daunting task for the beginner and even for some established researchers. Many manuscripts fail to get off the ground and/or are rejected. The writing task can be made easier, and its quality improved, by following simple rules and leads that apply to general scientific writing. The manuscript should follow a standard structure: e.g. (Abstract) plus Introduction, Methods, Results, and Discussion/Conclusion (the IMRAD model). The authors must also follow well-established fundamentals of good communication in science and be systematic in their approach. The manuscript must move from what is currently known to what was unknown and was investigated using a hypothesis, research question or problem statement. Each section has its own style of structure and language of presentation. Writing a good manuscript begins with a good study design and attention to detail at every stage. Many manuscripts are rejected because of errors that can be avoided if the authors follow simple guidelines and rules. One good way to avoid disappointment in manuscript writing is to follow the established general rules along with those of the journal in which the paper is to be published. An important injunction is to make the writing precise, clear, parsimonious, and comprehensible to the intended audience. The purpose of this article is to arm and encourage potential biomedical authors with tools and rules that will enable them to write contemporary manuscripts which can stand the rigorous peer-review process. The expectations of standard journals, common pitfalls, and the major elements of a manuscript are covered.

  7. Higgs potential from extended Brans–Dicke theory and the time-evolution of the fundamental constants

    Science.gov (United States)

    Solà, Joan; Karimkhani, Elahe; Khodam-Mohammadi, A.

    2017-01-01

    Despite the enormous significance of the Higgs potential in the context of the standard model of electroweak interactions and in grand unified theories, its ultimate origin is fundamentally unknown, and it must be introduced by hand in accordance with the underlying gauge symmetry and the requirement of renormalizability. Here we propose a more physical motivation for the structure of the Higgs potential, which we derive from a generalized Brans–Dicke (BD) theory containing two interacting scalar fields. One of these fields is coupled to curvature as in the BD formulation, whereas the other is coupled to gravity both derivatively and non-derivatively through the curvature scalar and the Ricci tensor. By requiring that the cosmological solutions of the model are consistent with observations, we show that the effective scalar field potential adopts the Higgs potential form with a mildly time-evolving vacuum expectation value. This residual vacuum dynamics could be responsible for the possible time variation of the fundamental constants, and is reminiscent of Bjorken's earlier ideas on the cosmological constant problem.

  8. Best Constants for Moser-Trudinger Inequalities, Fundamental Solutions and One-Parameter Representation Formulas on Groups of Heisenberg Type

    Institute of Scientific and Technical Information of China (English)

    COHN William S.; LU Guo Zhen

    2002-01-01

    We derive the explicit fundamental solutions for a class of degenerate (or singular) one-parameter subelliptic differential operators on groups of Heisenberg (H) type. This extends the result of Kaplan for the sub-Laplacian on H-type groups, which in turn generalizes Folland's result on the Heisenberg group. As an application, we obtain a one-parameter representation formula for Sobolev functions of compact support on H-type groups. By choosing the parameter equal to the homogeneous dimension Q and using the Moser-Trudinger inequality for the convolution-type operator on stratified groups obtained in [18], we get the following theorem, which gives the best constant for the Moser-Trudinger inequality for Sobolev functions on H-type groups. Let G be any group of Heisenberg type whose Lie algebra is generated by m left-invariant vector fields and with a q-dimensional center. Let Q = m + 2q, Q′ = Q/(Q-1) and A_Q = [(1/4)^{q-1/2} π^{q+m/2} Γ((Q+m)/4) / (Q Γ(m/2) Γ(Q/2))]^{1/(Q-1)}. Then sup_{F ∈ C_0^∞(Ω)} { (1/|Ω|) ∫_Ω exp( A_Q (F(u)/‖∇_G F‖_Q)^{Q′} ) du } < ∞, with A_Q as the sharp constant, where ∇_G denotes the subelliptic gradient on G. This continues the research originated in our earlier study of the best constants in Moser-Trudinger inequalities and fundamental solutions for one-parameter subelliptic operators on the Heisenberg group [18].

  9. Natural nuclear reactor at Oklo and variation of fundamental constants: Computation of neutronics of a fresh core

    Science.gov (United States)

    Petrov, Yu. V.; Nazarov, A. I.; Onegin, M. S.; Petrov, V. Yu.; Sakhnovsky, E. G.

    2006-12-01

    Using modern methods of reactor physics, we performed full-scale calculations of the Oklo natural reactor. For reliability, we used recent versions of two Monte Carlo codes: the Russian code MCU-REA and the well-known international code MCNP. Both codes produced similar results. We constructed a computer model of the Oklo reactor zone RZ2 which takes into account all details of design and composition. The calculations were performed for three fresh cores with different uranium contents. Multiplication factors, reactivities, and neutron fluxes were calculated. We also estimated the temperature and void effects for the fresh core. As would be expected, we found for the fresh core a significant difference between reactor and Maxwell spectra, which had been used before for averaging cross sections in the Oklo reactor. The averaged cross section of ¹⁴⁹Sm and its dependence on the shift of a resonance position Er (due to variation of fundamental constants) are significantly different from previous results. Contrary to the results of previous papers, we found no evidence of a change of the samarium cross section: a possible shift of the resonance energy is given by the limits -73 ⩽ ΔEr ⩽ 62 meV. Following tradition, we have used formulas of Damour and Dyson to estimate the rate of change of the fine structure constant α. We obtain new, more accurate limits of -4×10⁻¹⁷ yr⁻¹ ⩽ α̇/α ⩽ 3×10⁻¹⁷ yr⁻¹. Further improvement of the accuracy of the limits can be achieved by taking account of the core burn-up. These calculations are in progress.
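
    To put the quoted rate limits in perspective, they can be translated into an integrated change of α since the reactor operated roughly 2 billion years ago (a back-of-the-envelope sketch of my own, assuming a constant drift over that period; it is not a calculation from the paper):

        rate_low, rate_high = -4e-17, 3e-17   # yr^-1, limits on alpha_dot / alpha from the abstract
        t_oklo = 2e9                          # yr, approximate age of the Oklo reactor (assumed)

        print(f"{rate_low * t_oklo:.0e} <= delta_alpha/alpha <= {rate_high * t_oklo:.0e}")
        # roughly -8e-8 .. +6e-8, i.e. |delta alpha / alpha| below about 1e-7 over 2 Gyr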

  10. The IAU 2009 System of Astronomical Constants: The Report of the IAU Working Group on Numerical Standards for Fundamental Astronomy

    Science.gov (United States)

    2011-01-01

    ... of an observer located on the rotating geoid. It is specified in IAU 2000 Resolution B1.9 as a defining constant. The value for LG is taken from ... -compatible, TT-compatible, and TDB-compatible values are provided. Potential of the geoid, W0: the potential of the geoid, W0, is taken from the ...

  11. Possible determination of the physical parameters of the first living cells based on the fundamental physical constants

    Science.gov (United States)

    Atanasov, Atanas Todorov

    2016-12-01

    The hypothesis is developed here that the cell parameters of unicellular organisms (prokaryotes and eukaryotes) are determined by the gravitational constant (G, N·m^2/kg^2), the Planck constant (h, J·s) and the growth rate of cells. Scaling analysis shows that the growth rate v_gr (m/s) of unicellular bacteria and protozoa is a relatively constant parameter, lying in a narrow window of 10^-12 - 10^-10 m/s, in contrast to cell mass, which ranges over 10 orders of magnitude from 10^-17 kg in bacteria to 10^-7 kg in amoebas. Dimensional analysis shows that combining the growth rate of cells with the gravitational and Planck constants gives expressions with the dimensions of mass, M(v_gr) = (h·v_gr/G)^(1/2) in kg; length, L(v_gr) = (h·G/v_gr^3)^(1/2) in m; time, T(v_gr) = (h·G/v_gr^5)^(1/2) in s; and density, ρ(v_gr) = v_gr^3.5/(h·G^2) in kg/m^3. For growth rates v_gr in the range 1×10^-11 - 1×10^-9.5 m/s, the calculated values for mass (3×10^-18 - 1×10^-16 kg), length (5×10^-8 - 1×10^-5 m), time (1×10^2 - 1×10^6 s) and density (1×10^-1 - 1×10^4 kg/m^3) overlap with the ranges of experimentally measured values for cell mass (3×10^-18 - 1×10^-15 kg), volume-to-surface ratio (1×10^-7 - 1×10^-4 m), doubling time (1×10^3 - 1×10^7 s) and density (1050 - 1300 kg/m^3) in bacteria and protozoa. These equations suggest that the appearance of the first living cells may be connected to the fundamental physical constants.
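
    The mass, length and time scalings above are straightforward to evaluate; the sketch below (my own check, using CODATA values of h and G and the growth-rate window quoted in the abstract) reproduces the stated orders of magnitude:

        import math

        h = 6.626e-34   # J s, Planck constant
        G = 6.674e-11   # N m^2 kg^-2, gravitational constant

        for v in (1e-11, 10**-9.5):            # m/s, growth-rate window from the abstract
            M = math.sqrt(h * v / G)           # kg
            L = math.sqrt(h * G / v**3)        # m
            T = math.sqrt(h * G / v**5)        # s
            print(f"v={v:.1e} m/s -> M={M:.1e} kg, L={L:.1e} m, T={T:.1e} s")
        # v=1e-11 m/s   -> M ~ 1e-17 kg, L ~ 7e-6 m, T ~ 7e5 s
        # v=3.2e-10 m/s -> M ~ 6e-17 kg, L ~ 4e-8 m, T ~ 1e2 s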

  12. Enhanced effects of variation of the fundamental constants in laser interferometers and application to dark-matter detection

    Science.gov (United States)

    Stadnik, Y. V.; Flambaum, V. V.

    2016-06-01

    We outline laser interferometer measurements to search for variation of the electromagnetic fine-structure constant α and particle masses (including a nonzero photon mass). We propose a strontium optical lattice clock—silicon single-crystal cavity interferometer as a small-scale platform for these measurements. Our proposed laser interferometer measurements, which may also be performed with large-scale gravitational-wave detectors, such as LIGO, Virgo, GEO600, or TAMA300, may be implemented as an extremely precise tool in the direct detection of scalar dark matter that forms an oscillating classical field or topological defects.

  13. Higgs potential from extended Brans-Dicke theory and the time-evolution of the fundamental constants

    CERN Document Server

    Sola, Joan; Khodam-Mohammadi, A

    2016-01-01

    Despite the enormous significance of the Higgs potential in the context of the Standard Model of electroweak interactions and in Grand Unified Theories, its ultimate origin is fundamentally unknown and must be introduced by hand in accordance with the underlying gauge symmetry and the requirement of renormalizability. Here we propose a more physical motivation for the structure of the Higgs potential, which we link to gravity, and more specifically to an extended Brans-Dicke (BD) theory containing two interacting scalar fields. One of these fields is coupled to curvature as in the BD formulation, whereas the other is coupled to gravity both derivatively and non-derivatively through the curvature scalar and the Ricci tensor. By requiring that the cosmological solutions of the model are consistent with observations, we show that the effective scalar field potential adopts the Higgs potential form with a mildly time-evolving vacuum expectation value. Such residual vacuum dynamics could be responsible for the pos...

  14. Design of a rotary reactor for chemical-looping combustion. Part 1: Fundamentals and design methodology

    KAUST Repository

    Zhao, Zhenlong

    2014-04-01

    Chemical-looping combustion (CLC) is a novel and promising option for several applications including carbon capture (CC), fuel reforming, H2 generation, etc. Previous studies demonstrated the feasibility of performing CLC in a novel rotary design with micro-channel structures. In the reactor, a solid wheel rotates between the fuel and air streams at the reactor inlet, and depleted air and product streams at exit. The rotary wheel consists of a large number of micro-channels with oxygen carriers (OC) coated on the inner surface of the channel walls. In the CC application, the OC oxidizes the fuel while the channel is in the fuel zone to generate undiluted CO2, and is regenerated while the channel is in the air zone. In this two-part series, the effect of the reactor design parameters is evaluated and its performance with different OCs is compared. In Part 1, the design objectives and criteria are specified and the key parameters controlling the reactor performance are identified. The fundamental effects of the OC characteristics, the design parameters, and the operating conditions are studied. The design procedures are presented on the basis of the relative importance of each parameter, enabling a systematic methodology of selecting the design parameters and the operating conditions with different OCs. Part 2 presents the application of the methodology to the designs with the three commonly used OCs, i.e., nickel, copper, and iron, and compares the simulated performances of the designs. © 2013 Elsevier Ltd. All rights reserved.

  15. Recent developments in modeling of hot rolling processes: Part I - Fundamentals

    Science.gov (United States)

    Hirt, Gerhard; Bambach, Markus; Seuren, Simon; Henke, Thomas; Lohmar, Johannes

    2013-05-01

    The numerical simulation of industrial rolling processes has gained substantial relevance over the past decades. A large variety of models have been put forward to simulate single and multiple rolling passes taking various interactions between the process, the microstructure evolution and the rolling mill into account. On the one hand, these include sophisticated approaches which couple models on all scales from the product's microstructure level up to the elastic behavior of the roll stand. On the other hand, simplified but fast models are used for on-line process control and automatic pass schedule optimization. This publication gives a short overview of the fundamental equations used in modeling of hot rolling of metals. Part II of this paper will present selected applications of hot rolling simulations.

  16. 40 CFR Appendix VI to Part 265 - Compounds With Henry's Law Constant Less Than 0.1 Y/X

    Science.gov (United States)

    2010-07-01

    Appendix VI to 40 CFR Part 265 (Protection of Environment, Environmental Protection Agency) tabulates compounds with a Henry's Law constant less than 0.1 Y/X by compound name and CAS number, e.g. Acetaldol (107-89-1), Acetamide (60-35-5), ...

  17. Finite element modeling of borehole heat exchanger systems. Part 1. Fundamentals

    Science.gov (United States)

    Diersch, H.-J. G.; Bauer, D.; Heidemann, W.; Rühaak, W.; Schätzl, P.

    2011-08-01

    Single borehole heat exchanger (BHE) and arrays of BHE are modeled by using the finite element method. The first part of the paper derives the fundamental equations for BHE systems and their finite element representations, where the thermal exchange between the borehole components is modeled via thermal transfer relations. For this purpose improved relationships for thermal resistances and capacities of BHE are introduced. Pipe-to-grout thermal transfer possesses multiple grout points for double U-shape and single U-shape BHE to attain a more accurate modeling. The numerical solution of the final 3D problems is performed via a widely non-sequential (essentially non-iterative) coupling strategy for the BHE and porous medium discretization. Four types of vertical BHE are supported: double U-shape (2U) pipe, single U-shape (1U) pipe, coaxial pipe with annular (CXA) and centred (CXC) inlet. Two computational strategies are used: (1) The analytical BHE method based on Eskilson and Claesson's (1988) solution, (2) numerical BHE method based on Al-Khoury et al.'s (2005) solution. The second part of the paper focusses on BHE meshing aspects, the validation of BHE solutions and practical applications for borehole thermal energy store systems.

  18. High resolution infrared synchrotron study of CH2D81Br: ground state constants and analysis of the ν5, ν6 and ν9 fundamentals

    Science.gov (United States)

    Baldacci, A.; Stoppa, P.; Visinoni, R.; Wugt Larsen, R.

    2012-09-01

    The high resolution infrared absorption spectrum of CH2D81Br has been recorded by Fourier transform spectroscopy in the range 550-1075 cm⁻¹, with an unapodized resolution of 0.0025 cm⁻¹, employing a synchrotron radiation source. This spectral region is characterized by the ν6 (593.872 cm⁻¹), ν5 (768.710 cm⁻¹) and ν9 (930.295 cm⁻¹) fundamental bands. The ground state constants up to sextic centrifugal distortion terms have been obtained for the first time by ground-state combination differences from the three bands and subsequently employed for the evaluation of the excited state parameters. Watson's A-reduced Hamiltonian in the Ir representation has been used in the calculations. The ν6 = 1 level is essentially free from perturbation, whereas the ν5 = 1 and ν9 = 1 states are mutually interacting through a-type Coriolis coupling. Accurate spectroscopic parameters of the three excited vibrational states and a high-order coupling constant which takes into account the interaction between ν5 and ν9 have been determined.

  19. 40 CFR Appendix III to Part 86 - Constant Volume Sampler Flow Calibration

    Science.gov (United States)

    2010-07-01

    ... monoxide is poisonous!). Critical flow orifice devices can also be used for constant flow metering. ... fittings on the intake side of sample transfer pumps on both the CVS and analyzer console. ...

  20. Chairside CAD/CAM materials. Part 1: Measurement of elastic constants and microstructural characterization.

    Science.gov (United States)

    Belli, Renan; Wendler, Michael; de Ligny, Dominique; Cicconi, Maria Rita; Petschelt, Anselm; Peterlik, Herwig; Lohbauer, Ulrich

    2017-01-01

    A deeper understanding of the mechanical behavior of dental restorative materials requires insight into the materials' elastic constants and microstructure. Here we aim to use complementary methodologies to thoroughly characterize chairside CAD/CAM materials and discuss the benefits and limitations of different analytical strategies. Eight commercial CAD/CAM materials, ranging from polycrystalline zirconia (e.max ZirCAD, Ivoclar-Vivadent), reinforced glasses (Vitablocs Mark II, VITA; Empress CAD, Ivoclar-Vivadent) and glass-ceramics (e.max CAD, Ivoclar-Vivadent; Suprinity, VITA; Celtra Duo, Dentsply) to hybrid materials (Enamic, VITA; Lava Ultimate, 3M ESPE), were selected. Elastic constants were evaluated using three methods: Resonant Ultrasound Spectroscopy (RUS), the Resonant Beam Technique (RBT) and the Ultrasonic Pulse-Echo (PE) method. The microstructures were characterized using Scanning Electron Microscopy (SEM), Energy-Dispersive X-ray Spectroscopy (EDX), Raman Spectroscopy and X-ray Diffraction (XRD). Young's modulus (E), shear modulus (G), bulk modulus (B) and Poisson's ratio (ν) were obtained for each material. E and ν ranged from 10.9 GPa (Lava Ultimate) to 201.4 GPa (e.max ZirCAD) and from 0.173 (Empress CAD) to 0.47 (Lava Ultimate), respectively. RUS proved to be the most complex but most reliable method, while the PE method was the easiest to perform but the least reliable. All dynamic methods showed limitations in measuring the elastic constants of materials with high damping behavior (hybrid materials). SEM images, Raman spectra and XRD patterns were made available for each material, proving to be complementary tools in the characterization of their crystal phases. Here different methodologies are compared for the measurement of elastic constants and microstructural characterization of CAD/CAM restorative materials; the elastic properties and crystal phases of eight materials are fully characterized. Copyright © 2016 The Academy of Dental Materials
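
    For an isotropic solid the four reported constants (E, G, B, ν) are not independent: any two determine the other two. A minimal sketch of the standard conversions (the numerical values below are illustrative placeholders, not figures from the paper's tables):

        def shear_modulus(E, nu):
            """Isotropic relation G = E / (2 (1 + nu))."""
            return E / (2.0 * (1.0 + nu))

        def bulk_modulus(E, nu):
            """Isotropic relation B = E / (3 (1 - 2 nu))."""
            return E / (3.0 * (1.0 - 2.0 * nu))

        # Hypothetical example: a stiff ceramic with E = 200 GPa and nu = 0.31
        E, nu = 200.0, 0.31
        print(f"G = {shear_modulus(E, nu):.1f} GPa, B = {bulk_modulus(E, nu):.1f} GPa")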

  1. Fundamentals of Manufacturing Technologies for Aircraft Engine Parts Made of TiAl Based Alloys

    Directory of Open Access Journals (Sweden)

    Szkliniarz W.

    2016-09-01

    The study presents fundamentals of manufacturing technologies for aircraft engine construction elements made of a light, intermetallic TiAl-based alloy, which is characterized by high relative strength and good creep and oxidation resistance. For melting the alloy, vacuum metallurgy methods were used, including an induction furnace equipped with special crucibles made of isostatically pressed, high-density graphite. To produce a good-quality structural element for an aircraft engine, such as a low-pressure turbine blade, gravity casting from a very high temperature into preheated shell moulds was applied.

  2. Attack Detection and Identification in Cyber-Physical Systems -- Part I: Models and Fundamental Limitations

    CERN Document Server

    Pasqualetti, Fabio; Bullo, Francesco

    2012-01-01

    Cyber-physical systems integrate computation, communication, and physical capabilities to interact with the physical world and humans. Besides failures of components, cyber-physical systems are prone to malignant attacks, and specific analysis tools as well as monitoring mechanisms need to be developed to enforce system security and reliability. This paper proposes a unified framework to analyze the resilience of cyber-physical systems against attacks cast by an omniscient adversary. We model cyber-physical systems as linear descriptor systems, and attacks as exogenous unknown inputs. Despite its simplicity, our model captures various real-world cyber-physical systems, and it includes and generalizes many prototypical attacks, including stealth, (dynamic) false-data injection and replay attacks. First, we characterize fundamental limitations of static, dynamic, and active monitors for attack detection and identification. Second, we provide constructive algebraic conditions to cast undetectable and unidentifia...
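
    For readers unfamiliar with the terminology, the modeling setup sketched above corresponds to a linear descriptor system with the attack acting as an unknown exogenous input (schematic notation only, not necessarily the paper's exact symbols):

        $E\,\dot{x}(t) = A\,x(t) + B\,u(t), \qquad y(t) = C\,x(t) + D\,u(t),$

    where $x$ collects the physical and cyber states, $u$ is the unknown attack input, $y$ gathers the measurements available to a monitor, and a possibly singular matrix $E$ accommodates algebraic constraints (e.g. power-flow balance equations).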

  3. Fundamentals in Biostatistics for Research in Pediatric Dentistry: Part I - Basic Concepts.

    Science.gov (United States)

    Garrocho-Rangel, J A; Ruiz-Rodríguez, M S; Pozos-Guillén, A J

    The purpose of this report was to provide the reader with some basic concepts in order to better understand the significance and reliability of the results of any article on Pediatric Dentistry. Currently, Pediatric Dentists need the best evidence available in the literature on which to base their diagnoses and treatment decisions for the children's oral care. Basic understanding of Biostatistics plays an important role during the entire Evidence-Based Dentistry (EBD) process. This report describes Biostatistics fundamentals in order to introduce the basic concepts used in statistics, such as summary measures, estimation, hypothesis testing, effect size, level of significance, p value, confidence intervals, etc., which are available to Pediatric Dentists interested in reading or designing original clinical or epidemiological studies.

  4. Fundamentally excited flow past a surface-mounted rib. Part I: Turbulent structure characterisation

    Indian Academy of Sciences (India)

    P K Panigrahi

    2001-10-01

    Different data analysis techniques for characterisation of the turbulent flow past a surface-mounted rib are reviewed. Deficiencies of the existing techniques are explained and modified techniques for determination of coherent structure magnitude and phase jitter are suggested. The effect of fundamental excitation on the flow is studied by using these turbulent signal analysis techniques. The appropriate length scale for characterizing the large-scale structures present in the reattaching shear layer of the surface-mounted rib is found to be the momentum thickness at the downstream edge of the rib, and the corresponding Strouhal number is 0.013. This is in contrast to a rib in the free stream, where the rib height is the correct scaling parameter. The post reattachment region is observed to be dominated by large-scale structures contrary to the traditional belief that large eddies break into small scales at the reattachment location. Low magnitude of phase jitter in the near field region is observed, indicating coherence of the flow structures. Phase decorrelation begins to occur beyond three rib heights from the downstream edge of the rib. From the quadrant analysis results, the outer edge of the shear layer is observed to be dominated by large-scale ejection motions.

  5. Estimation of brittleness index using dynamic and static elastic constants in the Haenam Basin, Southwestern Part of Korean Peninsula

    Science.gov (United States)

    Hwang, Seho; Shin, Jehyun; Kim, Jongman; Won, Byeongho; Song, Wonkyoung; Kim, Changryol; Ki, Jungseok

    2014-05-01

    One of the most important physical properties in the evaluation of shale gas is the set of elastic constants of the formation. Normally, elastic constants obtained from geophysical well logging and laboratory tests are used in the design of hydraulic fracturing. A three-inch-diameter borehole, 505 m deep, was drilled for the evaluation of shale gas and fully cored in the Haenam Basin, southwestern part of the Korean Peninsula. We performed various laboratory tests and geophysical well logging using a slim-hole logging system. The geophysical well logs include radioactive logs such as the natural gamma log, density log and neutron log, monopole and dipole sonic logs, and image logs. The laboratory tests comprised axial compression tests, elastic wave velocity and density measurements, and static elastic constant measurements for 21 shale and sandstone cores. We analyzed the relationships between the physical properties from well logs and laboratory tests, as well as the static elastic constants from laboratory tests. With a sonic log using a monopole source with a main frequency of 23 kHz, the P-wave velocity was measured reliably. When using low-frequency dipole excitation, the signal-to-noise ratio of the measured shear wave was very low, but when measuring in time mode at a fixed depth, the signal-to-noise ratio improved enough to discriminate the shear wave. P-wave velocities from the laboratory tests and sonic logging agreed well overall, but S-wave velocities did not. The reason for the discrepancy between the laboratory tests and the sonic log is mainly the low signal-to-noise ratio of the sonic log data from the low-frequency dipole source; measuring S-waves in a small-diameter borehole remains a challenge. The relationship between the P-wave velocity and the two dynamic elastic constants, Young's modulus and Poisson's ratio, shows a good correlation, as does the relationship between the static and dynamic elastic constants.
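
    Dynamic elastic constants of the kind discussed here are conventionally derived from the measured P- and S-wave velocities and bulk density; a minimal sketch of the standard isotropic formulas (the input numbers below are placeholders for a shale-like rock, not values from this study):

        def dynamic_moduli(vp, vs, rho):
            """Dynamic Poisson's ratio and Young's modulus from P/S velocities (m/s) and density (kg/m^3)."""
            nu = (vp**2 - 2.0 * vs**2) / (2.0 * (vp**2 - vs**2))
            E = rho * vs**2 * (3.0 * vp**2 - 4.0 * vs**2) / (vp**2 - vs**2)
            return nu, E

        nu, E = dynamic_moduli(vp=3800.0, vs=2200.0, rho=2550.0)
        print(f"nu = {nu:.2f}, E = {E/1e9:.1f} GPa")   # ~0.25 and ~31 GPa for these placeholder inputs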

  6. Circularly-symmetric complex normal ratio distribution for scalar transmissibility functions. Part I: Fundamentals

    Science.gov (United States)

    Yan, Wang-Ji; Ren, Wei-Xin

    2016-12-01

    Recent advances in signal processing and structural dynamics have spurred the adoption of transmissibility functions in academia and industry alike. Due to the inherent randomness of measurement and variability of environmental conditions, uncertainty impacts its applications. This study is focused on statistical inference for raw scalar transmissibility functions modeled as complex ratio random variables. The goal is achieved through companion papers. This paper (Part I) is dedicated to dealing with a formal mathematical proof. New theorems on multivariate circularly-symmetric complex normal ratio distribution are proved on the basis of principle of probabilistic transformation of continuous random vectors. The closed-form distributional formulas for multivariate ratios of correlated circularly-symmetric complex normal random variables are analytically derived. Afterwards, several properties are deduced as corollaries and lemmas to the new theorems. Monte Carlo simulation (MCS) is utilized to verify the accuracy of some representative cases. This work lays the mathematical groundwork to find probabilistic models for raw scalar transmissibility functions, which are to be expounded in detail in Part II of this study.
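
    A raw scalar transmissibility is modeled above as a ratio of two correlated circularly-symmetric complex normal variables; a small Monte Carlo sketch of that object (my own illustration, with an arbitrary assumed correlation, not the paper's derivation) shows how the empirical samples against which closed-form results are checked can be generated:

        import numpy as np

        rng = np.random.default_rng(0)
        n = 100_000
        rho = 0.6 + 0.2j   # assumed complex correlation between the two spectral quantities

        # Correlated circularly-symmetric complex normal pair (x_i, x_j), unit variance
        z1 = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
        z2 = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
        x_j = z1
        x_i = rho * z1 + np.sqrt(1 - abs(rho)**2) * z2

        t = x_i / x_j   # samples of the raw scalar transmissibility (a complex ratio variable)
        print(np.median(t.real), np.median(t.imag))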

  7. Optimizing drug delivery systems using systematic "design of experiments." Part I: fundamental aspects.

    Science.gov (United States)

    Singh, Bhupinder; Kumar, Rajiv; Ahuja, Naveen

    2005-01-01

    , postulation of mathematical models for various chosen response characteristics, fitting experimental data into these model(s), mapping and generating graphic outcomes, and design validation using model-based response surface methodology. The broad topic of DoE optimization methodology is covered in two parts. Part I of the review attempts to provide thought-through and thorough information on diverse DoE aspects organized in a seven-step sequence. Besides dealing with basic DoE terminology for the novice, the article covers the niceties of several important experimental designs, mathematical models, and optimum search techniques using numeric and graphical methods, with special emphasis on computer-based approaches, artificial neural networks, and judicious selection of designs and models.

  8. Design, construction and tests of a crystal bender which provides constant position of the central part of the crystal

    CERN Document Server

    Artemiev, A I; Busetto, E; Franc, F; Hrdy, J; Mrazek, D; Savoia, A

    2001-01-01

    We propose a new scheme for a crystal bender. This scheme keeps the central point of a rectangular crystal at a constant position during the bending process. The measurements show that the full scatter of the position of the central part of a sample during the bending process is slightly less than 100 microns. The measured profile of a sample bent to a radius of about 50 cm was compared with a circle fitted by the least-squares method. The relative difference between the measured and fitted radii turned out to be about 10⁻⁵.
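
    Comparing a measured bending profile with a least-squares circle, as done above, can be reproduced with the standard algebraic (Kasa) fit; a minimal sketch with synthetic data (the radius and noise level are assumptions for illustration, not the experiment's):

        import numpy as np

        def fit_circle(x, y):
            """Algebraic (Kasa) least-squares circle fit: returns center (a, b) and radius r."""
            A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
            b_vec = x**2 + y**2
            (a, b, c), *_ = np.linalg.lstsq(A, b_vec, rcond=None)
            return a, b, np.sqrt(c + a**2 + b**2)

        # Synthetic "bent crystal" profile: arc of a 0.5 m radius circle with micron-level noise
        theta = np.linspace(-0.05, 0.05, 200)
        x = 0.5 * np.sin(theta)
        y = 0.5 * (1 - np.cos(theta)) + np.random.default_rng(1).normal(0, 1e-6, theta.size)
        a, b, r = fit_circle(x, y)
        print(f"fitted radius = {r:.6f} m")   # close to 0.5 m; residuals gauge the relative deviation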

  9. Fundamentals in Biostatistics for Investigation in Pediatric Dentistry: Part II -Biostatistical Methods.

    Science.gov (United States)

    Pozos-Guillén, Amaury; Ruiz-Rodríguez, Socorro; Garrocho-Rangel, Arturo

    The main purpose of the second part of this series was to provide the reader with some basic aspects of the most common biostatistical methods employed in the health sciences, in order to better understand the validity, significance and reliability of the results from any article on Pediatric Dentistry. Currently, as mentioned in the first paper, Pediatric Dentists need basic biostatistical knowledge to be able to apply it when critically appraising a dental article during the Evidence-Based Dentistry (EBD) process, or when participating in the development of a clinical study with pediatric dental patients. The EBD process provides a systematic approach to collecting, reviewing and analyzing current and relevant published evidence about oral health care in order to answer a particular clinical question; this evidence should then be applied in everyday practice. This second report describes the most commonly used statistical methods for analyzing and interpreting collected data, and the methodological criteria to be considered when choosing the most appropriate tests for a specific study. These are available to Pediatric Dentistry practitioners interested in reading or designing original clinical or epidemiological studies.

  10. On understanding the very different science premises meaningful to CAM versus orthodox medicine: Part II--applications of Part I fundamentals to five different space-time examples.

    Science.gov (United States)

    Tiller, William A

    2010-04-01

    In Part I of this pair of articles, the fundamental experimental observations and theoretical perspectives were provided for one to understand the key differences between our normal, uncoupled state of physical reality and the human consciousness-induced coupled state of physical reality. Here in Part II, the thermodynamics of complementary and alternative medicine, which deals with the partially coupled state of physical reality, is explored via the use of five different foci of relevance to today's science and medicine: (1) homeopathy; (2) the placebo effect; (3) long-range, room temperature, macroscopic size-scale, information entanglement; (4) an explanation for dark matter/energy plus human levitation possibility; and (5) electrodermal diagnostic devices. The purpose of this pair of articles is to clearly differentiate the use and limitations of uncoupled state physics in both nature and today's orthodox medicine from coupled state physics in tomorrow's complementary and alternative medicine.

  11. Identification of the predicted 5s-4f level crossing optical lines with applications to metrology and searches for the variation of fundamental constants.

    Science.gov (United States)

    Windberger, A; Crespo López-Urrutia, J R; Bekker, H; Oreshkina, N S; Berengut, J C; Bock, V; Borschevsky, A; Dzuba, V A; Eliav, E; Harman, Z; Kaldor, U; Kaul, S; Safronova, U I; Flambaum, V V; Keitel, C H; Schmidt, P O; Ullrich, J; Versolato, O O

    2015-04-17

    We measure optical spectra of Nd-like W, Re, Os, Ir, and Pt ions of particular interest for studies of a possibly varying fine-structure constant. Exploiting characteristic energy scalings we identify the strongest lines, confirm the predicted 5s-4f level crossing, and benchmark advanced calculations. We infer two possible values for optical M2/E3 and E1 transitions in Ir^{17+} that have the highest predicted sensitivity to a variation of the fine-structure constant among stable atomic systems. Furthermore, we determine the energies of proposed frequency standards in Hf^{12+} and W^{14+}.

  12. The organization of biological sequences into constrained and unconstrained parts determines fundamental properties of genotype-phenotype maps.

    Science.gov (United States)

    Greenbury, S F; Ahnert, S E

    2015-12-01

    Biological information is stored in DNA, RNA and protein sequences, which can be understood as genotypes that are translated into phenotypes. The properties of genotype-phenotype (GP) maps have been studied in great detail for RNA secondary structure. These include a highly biased distribution of genotypes per phenotype, negative correlation of genotypic robustness and evolvability, positive correlation of phenotypic robustness and evolvability, shape-space covering, and a roughly logarithmic scaling of phenotypic robustness with phenotypic frequency. More recently similar properties have been discovered in other GP maps, suggesting that they may be fundamental to biological GP maps, in general, rather than specific to the RNA secondary structure map. Here we propose that the above properties arise from the fundamental organization of biological information into 'constrained' and 'unconstrained' sequences, in the broadest possible sense. As 'constrained' we describe sequences that affect the phenotype more immediately, and are therefore more sensitive to mutations, such as, e.g. protein-coding DNA or the stems in RNA secondary structure. 'Unconstrained' sequences, on the other hand, can mutate more freely without affecting the phenotype, such as, e.g. intronic or intergenic DNA or the loops in RNA secondary structure. To test our hypothesis we consider a highly simplified GP map that has genotypes with 'coding' and 'non-coding' parts. We term this the Fibonacci GP map, as it is equivalent to the Fibonacci code in information theory. Despite its simplicity the Fibonacci GP map exhibits all the above properties of much more complex and biologically realistic GP maps. These properties are therefore likely to be fundamental to many biological GP maps.

  13. The organization of biological sequences into constrained and unconstrained parts determines fundamental properties of genotype–phenotype maps

    Science.gov (United States)

    Greenbury, S. F.; Ahnert, S. E.

    2015-01-01

    Biological information is stored in DNA, RNA and protein sequences, which can be understood as genotypes that are translated into phenotypes. The properties of genotype–phenotype (GP) maps have been studied in great detail for RNA secondary structure. These include a highly biased distribution of genotypes per phenotype, negative correlation of genotypic robustness and evolvability, positive correlation of phenotypic robustness and evolvability, shape-space covering, and a roughly logarithmic scaling of phenotypic robustness with phenotypic frequency. More recently similar properties have been discovered in other GP maps, suggesting that they may be fundamental to biological GP maps, in general, rather than specific to the RNA secondary structure map. Here we propose that the above properties arise from the fundamental organization of biological information into ‘constrained' and ‘unconstrained' sequences, in the broadest possible sense. As ‘constrained' we describe sequences that affect the phenotype more immediately, and are therefore more sensitive to mutations, such as, e.g. protein-coding DNA or the stems in RNA secondary structure. ‘Unconstrained' sequences, on the other hand, can mutate more freely without affecting the phenotype, such as, e.g. intronic or intergenic DNA or the loops in RNA secondary structure. To test our hypothesis we consider a highly simplified GP map that has genotypes with ‘coding' and ‘non-coding' parts. We term this the Fibonacci GP map, as it is equivalent to the Fibonacci code in information theory. Despite its simplicity the Fibonacci GP map exhibits all the above properties of much more complex and biologically realistic GP maps. These properties are therefore likely to be fundamental to many biological GP maps. PMID:26609063

  14. Measurement of the Positive Muon Lifetime and Determination of the Fermi Constant to Part-per-Million Precision

    CERN Document Server

    Webber, D M; Peng, Q; Battu, S; Carey, R M; Chitwood, D B; Crnkovic, J; Debevec, P T; Dhamija, S; Earle, W; Gafarov, A; Giovanetti, K; Gorringe, T P; Gray, F E; Hartwig, Z; Hertzog, D W; Johnson, B; Kammel, P; Kiburg, B; Kizilgul, S; Kunkle, J; Lauss, B; Logashenko, I; Lynch, K R; McNabb, R; Miller, J P; Mulhauser, F; Onderwater, C J G; Phillips, J; Rath, S; Roberts, B L; Winter, P; Wolfe, B

    2010-01-01

    We report a measurement of the positive muon lifetime to a precision of 1.0 parts per million (ppm); it is the most precise particle lifetime ever measured. The experiment used a time-structured, low-energy muon beam and a segmented plastic scintillator array to record more than 2 x 10^{12} decays. Two different stopping target configurations were employed in independent data-taking periods. The combined results give tau_{mu^+}(MuLan) = 2196980.3(2.2) ps, more than 15 times as precise as any previous experiment. The muon lifetime gives the most precise value for the Fermi constant: G_F(MuLan) = 1.1663788(7) x 10^-5 GeV^-2 (0.6 ppm). It is also used to extract the mu^-p singlet capture rate, which determines the proton's weak induced pseudoscalar coupling g_P.
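
    At tree level the lifetime and the Fermi constant are related by 1/tau_mu = G_F^2 m_mu^5 / (192 pi^3) in natural units; the sketch below (my own consistency check, ignoring the per-mille-level QED and mass corrections applied in the actual extraction) shows the quoted numbers hang together:

        import math

        hbar = 6.582119e-25   # GeV s
        m_mu = 0.1056584      # GeV, muon mass
        G_F = 1.1663788e-5    # GeV^-2, Fermi constant quoted above

        gamma = G_F**2 * m_mu**5 / (192.0 * math.pi**3)   # tree-level decay width in GeV
        tau = hbar / gamma                                # lifetime in seconds
        print(f"tau ~ {tau * 1e6:.3f} us")  # ~2.19 us; radiative corrections close the ~0.4% gap to 2.197 us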

  15. Critical evaluation of equilibrium constants involving 8-hydroxyquinoline and its metal chelates critical evaluation of equilibrium constants in solution : part b : equilibrium constants of liquid-liquid distribution systems

    CERN Document Server

    Stary, J

    1979-01-01

    Critical Evaluation of Equilibrium Constants Involving 8-Hydroxyquinoline and Its Metal Chelates presents and evaluates the published data on the solubility, dissociation, and liquid-liquid distribution of oxine and its metal chelates to recommend the most reliable numerical data. This book explores the dissociation constants of oxine in aqueous solutions.Organized into four chapters, this book begins with an overview of the characteristics of 8-hydroxyquinoline (oxine). This text then examines the total solubility of oxine in aqueous solution at different pH values. Other chapters consider th

  16. Membrane filtration studies of aquatic humic substances and their metal species: a concise overview. Part 2. Evaluation of conditional stability constants by using ultrafiltration.

    Science.gov (United States)

    Nifant'eva, T I; Shkinev, V M; Spivakov, B Y; Burba, P

    1999-02-01

    The assessment of conditional stability constants of aquatic humic substance (HS) metal complexes is reviewed, with special emphasis on the application of ultrafiltration methods. Fundamentals and limitations of stability functions in the case of macromolecular and polydisperse metal-HS species in aquatic environments are critically discussed. The review summarizes the advantages and applications of ultrafiltration for metal-HS complexation studies and discusses the comparability and reliability of stability constants. The potential of ultrafiltration procedures for characterizing the lability of metal-HS species is also stressed.

  17. Fundamental ecology is fundamental.

    Science.gov (United States)

    Courchamp, Franck; Dunne, Jennifer A; Le Maho, Yvon; May, Robert M; Thébaud, Christophe; Hochberg, Michael E

    2015-01-01

    The primary reasons for conducting fundamental research are satisfying curiosity, acquiring knowledge, and achieving understanding. Here we develop why we believe it is essential to promote basic ecological research, despite increased impetus for ecologists to conduct and present their research in the light of potential applications. This includes the understanding of our environment, for intellectual, economical, social, and political reasons, and as a major source of innovation. We contend that we should focus less on short-term, objective-driven research and more on creativity and exploratory analyses, quantitatively estimate the benefits of fundamental research for society, and better explain the nature and importance of fundamental ecology to students, politicians, decision makers, and the general public. Our perspective and underlying arguments should also apply to evolutionary biology and to many of the other biological and physical sciences. Copyright © 2014 The Authors. Published by Elsevier Ltd.. All rights reserved.

  18. Nucleosynthesis and the variation of fundamental couplings

    OpenAIRE

    2004-01-01

    We determine the influence of a variation of the fundamental ``constants'' on the predicted helium abundance in Big Bang Nucleosynthesis. The analytic estimate is performed in two parts: the first step determines the dependence of the helium abundance on the nuclear physics parameters, while the second step relates those parameters to the fundamental couplings of particle physics. This procedure can incorporate in a flexible way the time variation of several couplings within a grand unified t...

  19. Fundamental Astronomy

    CERN Document Server

    Karttunen, Hannu; Oja, Heikki; Poutanen, Markku; Donner, Karl Johan

    2007-01-01

    Fundamental Astronomy gives a well-balanced and comprehensive introduction to the topics of classical and modern astronomy. While emphasizing both the astronomical concepts and the underlying physical principles, the text provides a sound basis for more profound studies in the astronomical sciences. The fifth edition of this successful undergraduate textbook has been extensively modernized and extended in the parts dealing with the Milky Way, extragalactic astronomy and cosmology as well as with extrasolar planets and the solar system (as a consequence of recent results from satellite missions and the new definition by the International Astronomical Union of planets, dwarf planets and small solar-system bodies). Furthermore a new chapter on astrobiology has been added. Long considered a standard text for physical science majors, Fundamental Astronomy is also an excellent reference and entrée for dedicated amateur astronomers.

  20. Fundamentally updating fundamentals.

    Science.gov (United States)

    Armstrong, Gail; Barton, Amy

    2013-01-01

    Recent educational research indicates that the six competencies of the Quality and Safety Education for Nurses initiative are best introduced in early prelicensure clinical courses. Content specific to quality and safety has traditionally been covered in senior level courses. This article illustrates an effective approach to using quality and safety as an organizing framework for any prelicensure fundamentals of nursing course. Providing prelicensure students a strong foundation in quality and safety in an introductory clinical course facilitates early adoption of quality and safety competencies as core practice values.

  1. Transfer efficiency of angular momentum in sum-frequency generation and control of its spin and orbital parts by varying polarization and frequency of fundamental beams

    Science.gov (United States)

    Perezhogin, I. A.; Grigoriev, K. S.; Potravkin, N. N.; Cherepetskaya, E. B.; Makarov, V. A.

    2017-08-01

    Considering sum-frequency generation in an isotropic chiral nonlinear medium, we analyze the transfer of the spin angular momentum of fundamental elliptically polarized Gaussian light beams to the signal beam, which appears as the superposition of two Laguerre-Gaussian modes with both spin and orbital angular momentum. Only for the circular polarization of the fundamental radiation is its angular momentum fully transferred to the sum-frequency beam; otherwise, part of it can be transferred to the medium. Its value, as well as the ratio of spin and orbital contributions in the signal beam, depends on the fundamental frequency ratio and the polarization of the incident beams. Higher energy conversion efficiency in sum-frequency generation does not always correspond to higher angular momentum conversion efficiency.

  2. Determination of the gravitational constant G

    Institute of Scientific and Technical Information of China (English)

    HU Zhong-kun; LIU Qi; LUO Jun

    2006-01-01

    A precise knowledge of the Newtonian gravitational constant G has an important role in physics and is of considerable metrological interest. Although G was the first physical constant to be introduced and measured in the history of science, it is still the least precisely determined of all the fundamental constants of nature. The 2002 CODATA recommended value for G, G = (6.6742 ± 0.0010)×10⁻¹¹ m³·kg⁻¹·s⁻², has an uncertainty of 150 parts per million (ppm), much larger than that of all other fundamental constants. Reviewed here are the status of our knowledge of the absolute value of G, methods for determining G, and recent high-precision experiments for determining G.
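
    The 150 ppm figure is simply the relative standard uncertainty of the quoted value; a one-line check using only the numbers given above:

        G, uG = 6.6742e-11, 0.0010e-11   # m^3 kg^-1 s^-2, CODATA 2002 value and uncertainty
        print(f"relative uncertainty = {uG / G * 1e6:.0f} ppm")   # ~150 ppm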

  3. Slow Crack Growth of Brittle Materials With Exponential Crack-Velocity Formulation. Part 3; Constant Stress and Cyclic Stress Experiments

    Science.gov (United States)

    Choi, Sung R.; Nemeth, Noel N.; Gyekenyesi, John P.

    2002-01-01

    The previously determined life prediction analysis based on an exponential crack-velocity formulation was examined using a variety of experimental data on advanced structural ceramics tested under constant stress and cyclic stress loading at ambient and elevated temperatures. The data fit to the relation between the time to failure and applied stress (or maximum applied stress in cyclic loading) was very reasonable for most of the materials studied. It was also found that life prediction for cyclic stress loading from data of constant stress loading in the exponential formulation was in good agreement with the experimental data, resulting in a similar degree of accuracy as compared with the power-law formulation. The major limitation in the exponential crack-velocity formulation, however, was that the inert strength of a material must be known a priori to evaluate the important slow-crack-growth (SCG) parameter n, a significant drawback as compared with the conventional power-law crack-velocity formulation.
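    For orientation, the two crack-velocity laws compared in this series are commonly written in the forms below (a sketch of the standard formulations; A, v_0 and the SCG parameter n are empirical constants and K_I/K_{IC} is the stress intensity factor normalized by the fracture toughness; the notation is assumed for illustration, not taken from the paper itself):

        v = A (K_I/K_{IC})^{n}              (power-law formulation)
        v = v_0 \exp(n K_I/K_{IC})          (exponential formulation)

    The normalization of K_I in the exponential form is one way to see why an independently known inert strength is needed to extract n in that case.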

  4. How to increase treatment effectiveness and efficiency in psychiatry: creative psychopharmacotherapy - part 1: definition, fundamental principles and higher effectiveness polypharmacy.

    Science.gov (United States)

    Jakovljević, Miro

    2013-09-01

    Psychopharmacotherapy is a fascinating field that can be understood in many different ways. It is both a science and an art of communication, with a heavily subjective dimension. The advent of a significant number of effective and well-tolerated mental health medicines during and after the 1990s, the "decade of the brain", has increased our possibilities to treat major mental disorders more successfully, with much better treatment outcomes including full recovery. However, there is a huge gap between what could be achieved in terms of high treatment effectiveness and the unsatisfying results of day-to-day clinical practice. A creative approach to psychopharmacotherapy could advance everyday clinical practice and bridge this gap. Creative psychopharmacotherapy is a concept that incorporates creativity as its fundamental tool. Creativity involves the intention and ability to transcend limiting traditional ideas, rules, patterns and relationships and to create meaningful new ideas, interpretations, contexts and methods in clinical psychopharmacology.

  5. Slow Crack Growth of Brittle Materials With Exponential Crack-Velocity Formulation. Part 2; Constant Stress Rate Experiments

    Science.gov (United States)

    Choi, Sung R.; Nemeth, Noel N.; Gyekenyesi, John P.

    2002-01-01

    The previously determined life prediction analysis based on an exponential crack-velocity formulation was examined using a variety of experimental data on glass and advanced structural ceramics in constant stress rate and preload testing at ambient and elevated temperatures. The data fit to the relation of strength versus the log of the stress rate was very reasonable for most of the materials. Also, the preloading technique was determined equally applicable to the case of slow-crack-growth (SCG) parameter n greater than 30 for both the power-law and exponential formulations. The major limitation in the exponential crack-velocity formulation, however, was that the inert strength of a material must be known a priori to evaluate the important SCG parameter n, a significant drawback as compared with the conventional power-law crack-velocity formulation.

  6. Design, construction, and measurements of a bender which provides a constant position of the central part of the crystal monochromator during the bending process (abstract)

    Science.gov (United States)

    Artemev, A.; Artemiev, N.; Busetto, E.; Franc, F.; Hrdý, J.; Mrázek, D.; Savoia, A.

    2002-03-01

    The bender described is part of the double-crystal monochromator for the QUICKEXAFS spectrometer, which is under development for ELETTRA. The technical specifications for the bender include the following: a constant position of the central point of the crystal during the bending process, a full size of the device that is as small as possible during the dynamic bending process, and high vacuum compatibility. To satisfy all of these demands we propose a new scheme for a bender of a rectangular crystal with four living supports. The bender has two inner wings, which bear two inner crystal supports, and two outer wings, which bear two outer supports. These wings are moved by two cones placed on the same rod. The cones have different profiles. In order to keep the central part of the crystal in a constant position during the bending process, these cone profiles are designed and manufactured in a very special way. Measurements of the bender characteristics were made with the help of a test bench. Instead of a Si single crystal we used a bronze plate of the same size, which had very similar elastic constants. The total spread of the position of the central part of the bronze plate during the bending process is less than 100 μm. The measured profile of the plate, bent to a radius of about 506 mm, was compared with a circle fitted by the least-squares method. The relative difference between the measured and fitted radius appeared to be about 10^-5.

  7. On decay constants and orbital distance to the Sun—part III: beta plus and electron capture decay

    Science.gov (United States)

    Pommé, S.; Stroh, H.; Paepen, J.; Van Ammel, R.; Marouli, M.; Altzitzoglou, T.; Hult, M.; Kossert, K.; Nähle, O.; Schrader, H.; Juget, F.; Bailat, C.; Nedjadi, Y.; Bochud, F.; Buchillier, T.; Michotte, C.; Courte, S.; van Rooy, M. W.; van Staden, M. J.; Lubbe, J.; Simpson, B. R. S.; Fazio, A.; De Felice, P.; Jackson, T. W.; Van Wyngaardt, W. M.; Reinhard, M. I.; Golya, J.; Bourke, S.; Roy, T.; Galea, R.; Keightley, J. D.; Ferreira, K. M.; Collins, S. M.; Ceccatelli, A.; Verheyen, L.; Bruggeman, M.; Vodenik, B.; Korun, M.; Chisté, V.; Amiot, M.-N.

    2017-02-01

    The hypothesis that seasonal changes in proximity to the Sun cause variation of decay constants at the permille level has been tested for radionuclides disintegrating through electron capture and beta plus decay. Activity measurements of 22Na, 54Mn, 55Fe, 57Co, 65Zn, 82+85Sr, 90Sr, 109Cd, 124Sb, 133Ba, 152Eu, and 207Bi sources were repeated over periods from 200 d up to more than four decades at 14 laboratories across the globe. Residuals from the exponential nuclear decay curves were inspected for annual oscillations. Systematic deviations from a purely exponential decay curve differ from one data set to another and appear attributable to instabilities in the instrumentation and measurement conditions. Oscillations in phase with Earth's orbital distance to the Sun could not be observed within the attained precision of 10^-4 to 10^-5. The most stable activity measurements of β+ and EC decaying sources set an upper limit of 0.006% or less on the amplitude of annual oscillations in the decay rate. There are no apparent indications for systematic oscillations on time scales of weeks or months.
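    The analysis described above, fitting a pure exponential decay and then inspecting the residuals for an annual modulation, can be illustrated with a minimal sketch; the synthetic data, the half-life and the function names below are assumptions for illustration only, not the laboratories' actual pipelines.

        import numpy as np
        from scipy.optimize import curve_fit

        # t in days; activity measured repeatedly over several years (placeholder data).
        t = np.linspace(0.0, 4 * 365.25, 400)
        rng = np.random.default_rng(0)
        A_true = 1.0e5 * np.exp(-np.log(2) / 950.6 * t)        # a 22Na-like half-life, in days
        A_meas = A_true * (1 + 1e-4 * rng.standard_normal(t.size))

        # Step 1: fit a pure exponential decay curve.
        def expo(t, A0, lam):
            return A0 * np.exp(-lam * t)

        (A0, lam), _ = curve_fit(expo, t, A_meas, p0=(1.0e5, 7e-4))
        resid = A_meas / expo(t, A0, lam) - 1.0                 # relative residuals

        # Step 2: fit an annual oscillation to the residuals and bound its amplitude.
        def annual(t, amp, phase, offset):
            return amp * np.cos(2 * np.pi * t / 365.25 + phase) + offset

        (amp, phase, offset), cov = curve_fit(annual, t, resid, p0=(1e-5, 0.0, 0.0))
        print(f"annual amplitude = {amp:.2e} +/- {np.sqrt(cov[0, 0]):.2e}")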

  8. Fundamental Phenomena on Fuel Decomposition and Boundary-Layer Combustion Processes with Applications to Hybrid Rocket Motors. Part 1; Experimental Investigation

    Science.gov (United States)

    Kuo, Kenneth K.; Lu, Yeu-Cherng; Chiaverini, Martin J.; Johnson, David K.; Serin, Nadir; Risha, Grant A.; Merkle, Charles L.; Venkateswaran, Sankaran

    1996-01-01

    This final report summarizes the major findings on the subject of 'Fundamental Phenomena on Fuel Decomposition and Boundary-Layer Combustion Processes with Applications to Hybrid Rocket Motors', performed from 1 April 1994 to 30 June 1996. Both experimental results from Task 1 and theoretical/numerical results from Task 2 are reported here in two parts. Part 1 covers the experimental work performed and describes the test facility setup, data reduction techniques employed, and results of the test firings, including effects of operating conditions and fuel additives on solid fuel regression rate and thermal profiles of the condensed phase. Part 2 concerns the theoretical/numerical work. It covers physical modeling of the combustion processes including gas/surface coupling, and radiation effect on regression rate. The numerical solution of the flowfield structure and condensed phase regression behavior are presented. Experimental data from the test firings were used for numerical model validation.

  9. Selectivity and delignification kinetics for oxidative short-term lime pretreatment of poplar wood, Part I: Constant-pressure.

    Science.gov (United States)

    Sierra-Ramírez, Rocío; Garcia, Laura A; Holtzapple, Mark Thomas

    2011-07-01

    Kinetic models applied to oxygen bleaching of paper pulp focus on the degradation of polymers, either lignin or carbohydrates. Traditionally, they separately model different moieties that degrade at three different rates: rapid, medium, and slow. These models were successfully applied to lignin and carbohydrate degradation of poplar wood subjected to oxidative pretreatment with lime at the following conditions: temperature 110-180 °C, total pressure 7.9-21.7 bar, and excess lime loading of 0.5 g Ca(OH)2 per gram dry biomass. These conditions were held constant for 1-6 h. The models properly fit the experimental data and were used to determine pretreatment selectivity in two fashions: differential and integral. By assessing selectivity, the detrimental effect of pretreatment on carbohydrates at high temperatures and at low lignin content was determined. The models can be used to identify pretreatment conditions that selectively remove lignin while preserving carbohydrates. Lignin removal ≥ 50% with glucan preservation ≥ 90% was observed for differential glucan selectivities between ∼10 and ∼30 g lignin degraded per gram glucan degraded. Pretreatment conditions complying with these reference values were mostly observed at 140 °C, total pressure ≥ 14.7 bar, and pretreatment times between 2 and 6 h depending on the total pressure (the higher the pressure, the less time). They were also observed at 160 °C, total pressures of 14.7 and 21.7 bar, and a pretreatment time of 2 h. Generally, at 110 °C lignin removal is insufficient and at 180 °C carbohydrates are not well preserved.
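    The two selectivity measures mentioned above can be written compactly as follows (a sketch of the usual definitions, with W_L and W_G denoting the masses of lignin and glucan degraded; this notation is assumed, not quoted from the paper), both expressed in g lignin degraded per g glucan degraded:

        S_{diff} = dW_L / dW_G          (differential selectivity)
        S_{int}  = \Delta W_L / \Delta W_G          (integral selectivity)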

  10. Sinterização de cerâmicas em microondas. Parte I: aspectos fundamentais Microwave sintering of ceramics. Part I: fundamental aspects

    Directory of Open Access Journals (Sweden)

    R. R. Menezes

    2007-03-01

    Processing of materials based on heating by microwave energy has gained increasing importance in many industrial applications due to its potential advantages over conventional heating methods. In the sintering of ceramic materials, the use of microwave energy decreases the processing time, saves energy and improves the microstructural homogeneity of ceramic bodies. These advantages have motivated various research groups around the world to study microwave sintering. However, the benefits deriving from the use of microwaves depend on the control and scientific understanding of the parameters and aspects of the process. This paper offers a review of the fundamental scientific aspects of microwave sintering of ceramics, focusing on the interaction between materials and microwaves, particular aspects of the process, and the hybrid sintering technique.

  11. On understanding the very different science premises meaningful to CAM versus orthodox medicine: part I--the fundamentals.

    Science.gov (United States)

    Tiller, William A

    2010-03-01

    In previous articles by this author and his colleagues in the Journal of Alternative and Complementary Medicine, it has been shown that physical reality consists of two uniquely different categories of substance, one being electric charge-based while the other appears to be magnetic charge-based. Normally, only the electric atom/molecule type of substance is accessible by our traditional measurement instruments. We label this condition as the uncoupled state of physical reality that is our long-studied, electric atom/molecule level of nature. The second level of physical reality is invisible to traditional measurement instruments when the system is in the uncoupled state but is accessible to these same instruments when the system is in the coupled state of physical reality. The coupling of these two unique levels has been shown to occur via the application of a sufficient intensity of human consciousness in the form of specific intentions. Part II of this article (in a forthcoming issue) explores the thermodynamics of complementary and alternative medicine (CAM) through five different space-time applications involving coupled state physics to show their relevance to today's medicine: (1) homeopathy; (2) the placebo effect; (3) long-range, room temperature, macroscopic-size-scale, information entanglement; (4) explanation for dark matter/energy plus possible human levitation; and (5) electrodermal diagnostic devices. The purpose is to clearly differentiate the use and limitations of uncoupled state physics in nature and today's traditional medicine from coupled state physics in tomorrow's CAM. Existing orthodox science provides the technical underpinnings and mindset for today's orthodox medicine. Psycho-energetic science will provide the technical underpinnings and mindset for CAM.

  12. Quantum Theory without Planck's Constant

    CERN Document Server

    Ralston, John P

    2012-01-01

    Planck's constant was introduced as a fundamental scale in the early history of quantum mechanics. We find a modern approach where Planck's constant is absent: it is unobservable except as a constant of human convention. Despite long reference to experiment, review shows that Planck's constant cannot be obtained from the data of Rydberg, Davisson and Germer, Compton, or that used by Planck himself. In the new approach Planck's constant is tied to macroscopic conventions of Newtonian origin, which are dispensable. The precision of other fundamental constants is substantially improved by eliminating Planck's constant. The electron mass is determined about 67 times more precisely, and the unit of electric charge determined 139 times more precisely. Improvement in the experimental value of the fine structure constant allows new types of experiment to be compared towards finding "new physics." The long-standing goal of eliminating reliance on the artifact known as the International Prototype Kilogram can be accomplished...

  13. O ensino de parte da geometria do ensino fundamental: análise de dificuldades e sugestão de sequência didática

    OpenAIRE

    Leite, Rondineli Schulthais

    2013-01-01

    The aim of this work is to build systematic and effective learning of some of the most important geometric concepts of the ninth year of ensino fundamental (Brazilian basic education), using the GeoGebra software as an innovative tool in the construction of a didactic sequence that contributes significantly to the understanding of the geometric content: similarity of triangles, Thales' theorem, metric relations in the right triangle, the Pythagorean theorem, and trigonometry. It is hoped that these ...

  14. Varying Constants

    CERN Document Server

    Damour, Thibault Marie Alban Guillaume

    2003-01-01

    We review some string-inspired theoretical models which incorporate a correlated spacetime variation of coupling constants while remaining naturally compatible both with phenomenological constraints coming from geochemical data (Oklo; Rhenium decay) and with present equivalence principle tests. Barring unnatural fine-tunings of parameters, a variation of the fine-structure constant as large as that recently ``observed'' by Webb et al. in quasar absorption spectra appears to be incompatible with these phenomenological constraints. Independently of any model, it is emphasized that the best experimental probes of varying constants are high-precision tests of the universality of free fall, such as MICROSCOPE and STEP. Recent claims by Bekenstein that fine-structure-constant variability does not imply detectable violations of the equivalence principle are shown to be untenable.

  15. Radiology fundamentals

    CERN Document Server

    Singh, Harjit

    2011-01-01

    "Radiology Fundamentals" is a concise introduction to the dynamic field of radiology for medical students, non-radiology house staff, physician assistants, nurse practitioners, radiology assistants, and other allied health professionals. The goal of the book is to provide readers with general examples and brief discussions of basic radiographic principles and to serve as a curriculum guide, supplementing a radiology education and providing a solid foundation for further learning. Introductory chapters provide readers with the fundamental scientific concepts underlying the medical use of imaging.

  16. Fundamentals of the route theory for satellite constellation design for Earth discontinuous coverage. Part 1: Analytic emulation of the Earth coverage

    Science.gov (United States)

    Razoumny, Yury N.

    2016-11-01

    This paper opens a series of articles expounding the fundamentals of the route theory for satellite constellation design for Earth discontinuous coverage. In Part 1 of the series, an analytical model for Earth coverage by the satellites' swaths, conforming to the essentials of discontinuous coverage as opposed to continuous coverage, is presented. Analytic relations are consecutively derived for the calculation of single- and multi-satellite coverage of Earth-surface latitudes, as well as for generating the full set of typical satellite visibility-zone time streams realized in the repeating latitude coverage pattern for a given arbitrary satellite constellation. These analytic relations are used to develop a method for the analysis of discontinuous coverage of a fixed arbitrary Earth region by a given satellite constellation using both deterministic and stochastic approaches. The method provides analysis of the revisit time for a given satellite constellation through fast (fractions of a second to seconds) computer calculations over a wide range of possible revisit time variations for different practical purposes, with an accuracy that is at least on par with, and in a number of cases superior to, that of known numerical simulation methods based on direct modeling of the satellite observation mission.

  17. New perspectives in the PAW/GIPAW approach: J(P-O-Si) coupling constants, antisymmetric parts of shift tensors and NQR predictions.

    Science.gov (United States)

    Bonhomme, Christian; Gervais, Christel; Coelho, Cristina; Pourpoint, Frédérique; Azaïs, Thierry; Bonhomme-Coury, Laure; Babonneau, Florence; Jacob, Guy; Ferrari, Maude; Canet, Daniel; Yates, Jonathan R; Pickard, Chris J; Joyce, Siân A; Mauri, Francesco; Massiot, Dominique

    2010-12-01

    In 2001, Pickard and Mauri implemented the gauge including projected augmented wave (GIPAW) protocol for first-principles calculations of NMR parameters using periodic boundary conditions (chemical shift anisotropy and electric field gradient tensors). In this paper, three potentially interesting perspectives in connection with PAW/GIPAW in solid-state NMR and pure nuclear quadrupole resonance (NQR) are presented: (i) the calculation of J coupling tensors in inorganic solids; (ii) the calculation of the antisymmetric part of chemical shift tensors and (iii) the prediction of (14)N and (35)Cl pure NQR resonances including dynamics. We believe that these topics should offer new insights into the combination of GIPAW, NMR/NQR crystallography, temperature effects and dynamics. Points (i), (ii) and (iii) will be illustrated by selected examples: (i) chemical shift tensors and heteronuclear (2)J(P-O-Si) coupling constants in the case of silicophosphates and calcium phosphates [Si(5)O(PO(4))(6), SiP(2)O(7) polymorphs and α-Ca(PO(3))(2)]; (ii) antisymmetric chemical shift tensors in cyclopropene derivatives, C(3)X(4) (X = H, Cl, F) and (iii) (14)N and (35)Cl NQR predictions in the case of RDX (C(3)H(6)N(6)O(6)), β-HMX (C(4)H(8)N(8)O(8)), α-NTO (C(2)H(2)N(4)O(3)) and AlOPCl(6). RDX, β-HMX and α-NTO are explosive compounds.

  18. Probing fundamental film parameters of immobilized enzymes--towards enhanced biosensor performance. Part II-Electroanalytical estimation of immobilized enzyme performance.

    Science.gov (United States)

    Fogel, R; Limson, J L

    2011-07-10

    The method of immobilization of a protein has a great influence on the overall conformation, and hence the functioning, of the protein. Thus, a greater understanding of the events undergone by the protein during immobilization is key to manipulating the immobilization method so as to exploit the advantages of immobilization while minimizing its disadvantages in biosensor design. In this, the second paper of a two-part series, we have assessed the kinetic parameters of thin-film laccase monolayers covalently attached to SAMs differing in spacer-arm length and lateral density of spacer arms. This was achieved using chronoamperometry and an electroactive product (p-benzoquinone), which was modeled by non-linear regression to extract the relevant parameters. Finally, comparisons between the kinetic parameters presented in this paper and the rheological parameters of laccase monolayers immobilized in the same manner (Part I of this two-part series) were performed. Improvements in the maximal enzyme-catalysed current, i_max, the apparent Michaelis-Menten constant, K_m, and the apparent biosensor sensitivity were noted for most of the surfaces with increasing linker length. Decreasing the lateral density of the spacer arms brought about a general improvement in these parameters, which is attributed to the decrease in multiple points of immobilization undergone by functional proteins. Finally, comparisons between rheological data and kinetics data showed that the degree of viscosity exhibited by protein films has a negative influence on attached protein layers, while enhanced protein hydration levels (assessed piezoelectrically from data obtained in Paper 1) have a positive effect on those surfaces comprising rigidly bound protein layers.
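    The non-linear regression step described above, extracting i_max and the apparent K_m from amperometric currents, might look like the sketch below; the electrochemical Michaelis-Menten form, the placeholder data and the variable names are illustrative assumptions, not the authors' code or data.

        import numpy as np
        from scipy.optimize import curve_fit

        # Substrate concentrations (mM) and steady-state catalytic currents (uA) -- placeholder data.
        S = np.array([0.05, 0.1, 0.2, 0.5, 1.0, 2.0, 5.0])
        i = np.array([0.8, 1.4, 2.3, 3.8, 4.9, 5.7, 6.3])

        # Electrochemical Michaelis-Menten relation: i = i_max * S / (K_m + S)
        def mm(S, i_max, K_m):
            return i_max * S / (K_m + S)

        (i_max, K_m), cov = curve_fit(mm, S, i, p0=(6.0, 0.5))
        sens = i_max / K_m          # apparent sensitivity ~ slope of i vs S at low S
        print(f"i_max = {i_max:.2f} uA, K_m = {K_m:.2f} mM, sensitivity ~ {sens:.2f} uA/mM")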

  19. Temporal variation of coupling constants and nucleosynthesis

    CERN Document Server

    Oberhummer, Heinz; Fairbairn, M; Schlattl, H; Sharma, M M

    2003-01-01

    We investigate the triple-alpha process and the Oklo phenomenon to obtain constraints on possible cosmological time variations of fundamental constants. Specifically we study cosmological temporal constraints for the fine structure constant and nucleon and meson masses.

  20. Temporal variation of coupling constants and nucleosynthesis

    Science.gov (United States)

    Oberhummer, H.; Csótó, A.; Fairbairn, M.; Schlattl, H.; Sharma, M. M.

    2003-05-01

    We investigate the triple-alpha process and the Oklo phenomenon to obtain constraints on possible cosmological time variations of fundamental constants. Specifically we study cosmological temporal constraints for the fine structure constant and nucleon and meson masses.

  1. MATHEMATICAL CONSTANTS.

    Energy Technology Data Exchange (ETDEWEB)

    Robinson, H.P.; Potter, Elinor

    1971-03-01

    This collection of mathematical data consists of two tables of decimal constants arranged according to size rather than function, a third table of integers from 1 to 1000, giving some of their properties, and a fourth table listing some infinite series arranged according to increasing size of the coefficients of the terms. The decimal values of Tables I and II are given to 20 decimal places.

  2. Hemaka's constant

    CERN Document Server

    Sparavigna, Amelia Carolina

    2012-01-01

    As proposed in a previous paper, the decorations of ancient objects can provide some information on the approximate evaluations of the constant π, the ratio of circumference to diameter. Here we discuss some disks found in the tomb of Hemaka, the chancellor of a king of the First Dynasty of Egypt, about 3000 BC. The discussion is based on measurements of the dimensionless ratio of lengths.

  3. Marketing fundamentals.

    Science.gov (United States)

    Redmond, W H

    2001-01-01

    This chapter outlines current marketing practice from a managerial perspective. The role of marketing within an organization is discussed in relation to efficiency and adaptation to changing environments. Fundamental terms and concepts are presented in an applied context. The implementation of marketing plans is organized around the four P's of marketing: product (or service), promotion (including advertising), place of delivery, and pricing. These are the tools with which marketers seek to better serve their clients and form the basis for competing with other organizations. Basic concepts of strategic relationship management are outlined. Lastly, alternate viewpoints on the role of advertising in healthcare markets are examined.

  4. Field Theory of Fundamental Interactions

    Science.gov (United States)

    Wang, Shouhong; Ma, Tian

    2017-01-01

    First, we present two basic principles, the principle of interaction dynamics (PID) and the principle of representation invariance (PRI). Intuitively, PID takes the variation of the action under the energy-momentum conservation constraint. We show that PID is the requirement for the presence of dark matter and dark energy, the Higgs field and quark confinement. PRI requires that the SU(N) gauge theory be independent of representations of SU(N). It is clear that PRI is a logical requirement of any gauge theory. With PRI, we demonstrate that the coupling constants for the strong and the weak interactions are the main sources of these two interactions, reminiscent of the electric charge. Second, we emphasize that symmetry principles (the principle of general relativity, the principle of Lorentz invariance, and gauge invariance), together with the simplicity of laws of nature, dictate the actions for the four fundamental interactions. Finally, we show that PID and PRI, together with the symmetry principles, give rise to a unified field model for the fundamental interactions, which is consistent with current experimental observations and offers some new physical predictions. The research is supported in part by the National Science Foundation (NSF) grant DMS-1515024, and by the Office of Naval Research (ONR) grant N00014-15-1-2662.

  5. Fundamentals of Business Economics

    OpenAIRE

    2013-01-01

    PowerPoint presentations of the 9 theoretical units of the subject Fundamentals of Business Economics (Business Administration Degree, Faculty of Economics, University of Alicante). Prepared within the framework of grants for the preparation of teaching materials in English awarded by the Servei de Política Lingüística of the University of Alicante.

  6. Investigation of Relative Time Constant Influence of Inertial Part of Superheater on Quality of Steam Temperature Control Behind Boiler in Broad Band of Loading Variations

    Directory of Open Access Journals (Sweden)

    G. T. Kulakov

    2008-01-01

    The paper presents a computational investigation of how the relative time constant of the controlled object, which varies over a broad band of load variations, influences the quality of steam temperature control behind a boiler, taking into account the magnitude of the regulating action in systems with PI and PID regulators. The simulation is based on a single-loop automatic control system (ACS). It was found that, when an external disturbance acts on the ACS, a smaller relative time constant of the object leads to a larger integral control error in the system with a PID regulator. A decrease in the numerical value of the relative time constant of the object under external disturbance also decreases the relative time at which the maximum dynamic control error appears, expressed as a fraction of the overall relative control time.
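    A minimal single-loop simulation in the spirit of the study, comparing a PI and a PID regulator on a first-order plant whose relative time constant is varied under an external step disturbance, is sketched below; the plant model, tuning values and disturbance size are assumptions for illustration only.

        import numpy as np

        def simulate(time_constant, Kp=2.0, Ki=1.0, Kd=0.0, dt=0.01, t_end=50.0):
            """Single-loop ACS: first-order plant with a PI/PID regulator and a step load disturbance."""
            n = int(t_end / dt)
            y = 0.0          # controlled variable (e.g. steam temperature deviation)
            integ = 0.0
            prev_err = 0.0
            iae = 0.0        # integral of absolute error, used here as the control-quality measure
            for k in range(n):
                disturbance = 1.0 if k * dt > 1.0 else 0.0      # external step perturbation
                err = 0.0 - y
                integ += err * dt
                deriv = (err - prev_err) / dt
                u = Kp * err + Ki * integ + Kd * deriv
                prev_err = err
                # First-order plant: T * dy/dt = -y + u + disturbance
                y += dt * (-y + u + disturbance) / time_constant
                iae += abs(err) * dt
            return iae

        for T in (0.5, 1.0, 2.0, 5.0):                          # relative time constants of the object
            print(T, "PI :", round(simulate(T), 3),
                     "PID:", round(simulate(T, Kd=0.5), 3))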

  7. Fundamentals of piping design

    CERN Document Server

    Smith, Peter

    2013-01-01

    Written for the piping engineer and designer in the field, this two-part series helps to fill a void in piping literature, since the Rip Weaver books of the '90s were taken out of print at the advent of the Computer Aided Design (CAD) era. Technology may have changed; however, the fundamentals of piping rules still apply in the digital representation of process piping systems. The Fundamentals of Piping Design is an introduction to the design of piping systems, various processes and the layout of pipe work connecting the major items of equipment for the new hire, the engineering student and the veteran

  8. Pragmatic electrical engineering fundamentals

    CERN Document Server

    Eccles, William

    2011-01-01

    Pragmatic Electrical Engineering: Fundamentals introduces the fundamentals of the energy-delivery part of electrical systems. It begins with a study of basic electrical circuits and then focuses on electrical power. Three-phase power systems, transformers, induction motors, and magnetics are the major topics. All of the material in the text is illustrated with completely-worked examples to guide the student to a better understanding of the topics. This short lecture book will be of use at any level of engineering, not just electrical. Its goal is to provide the practicing engineer with a practi

  9. Fundamentals of continuum mechanics

    CERN Document Server

    Rudnicki, John W

    2014-01-01

    A concise introductory course text on continuum mechanics, Fundamentals of Continuum Mechanics focuses on the fundamentals of the subject and provides the background for formulation of numerical methods for large deformations and a wide range of material behaviours. It aims to provide the foundations for further study, not just of these subjects, but also the formulations for much more complex material behaviour and their implementation computationally. This book is divided into 5 parts, covering mathematical preliminaries, stress, motion and deformation, and balance of mass, momentum and energy.

  10. Fundamentals of engineering electromagnetism

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Nam; Yoon, Youngro; Jun, Sukhee; Jun, Hoin

    2004-08-15

    This book presents the fundamentals of engineering electromagnetism. It covers the electromagnetic field model, the International System of Units and universal constants; vector analysis and orthogonal coordinate systems; the electrostatic field, Coulomb's law and Gauss's law; electrostatic energy and forces; steady-state currents, Ohm's law, Joule's law and the calculation of resistance; the magnetostatic field, the vector magnetic potential, the Biot-Savart law and its applications, and the magnetic dipole; time-varying fields and Maxwell's equations, potential functions and Faraday's law of electromagnetic induction; plane electromagnetic waves; transmission lines; waveguides and cavity resonators; and antenna arrays.

  11. Fundamentals of semiconductor devices

    CERN Document Server

    Lindmayer, Joseph

    1965-01-01

    Semiconductor properties; semiconductor junctions or diodes; transistor fundamentals; inhomogeneous impurity distributions, drift or graded-base transistors; high-frequency properties of transistors; band structure of semiconductors; high current densities and mechanisms of carrier transport; transistor transient response and recombination processes; surfaces, field-effect transistors, and composite junctions; additional semiconductor characteristics; additional semiconductor devices and microcircuits; more metal, insulator, and semiconductor combinations for devices; four-pole parameters and configuration rotation; four-poles of combined networks and devices; equivalent circuits; the error function and its properties; Fermi-Dirac statistics; useful physical constants.

  12. Application of the Bahe's pseudolattice-theory to water-1-butyl-3-methylimidazolium tetrafluoroborate (bmimBF(4)) mixtures at 298.15K Part I. Autoprotolysis constants.

    Science.gov (United States)

    Bou Malham, I; Letellier, P; Turmine, M

    2007-04-15

    The autoprotolysis constants (K_s) of water - 1-butyl-3-methylimidazolium tetrafluoroborate (bmimBF4) mixtures were determined at 298 K over the composition range 0 to 77.43 vol.% bmimBF4 using a potentiometric method with a glass electrode. A slight increase in the autoprotolysis constant was observed when the salt was added to the water. The value of the ionic product of the medium then decreases as the bmimBF4 content increases beyond about 20 vol.%. The acid-base properties of these media were well described by Bahe's approach, as extended by Varela et al., for structured electrolyte solutions with strong short-range interactions.

  13. Elastic constants of the layered compounds GaS, GaSe, InSe, and their pressure dependence. 2. Theoretical part

    Energy Technology Data Exchange (ETDEWEB)

    Gatulle, M.; Fischer, M.

    1984-01-01

    The block diagonalization of the dynamical matrix of β-GaS is reported in the case of propagation perpendicular to the layers. A linear chain model that includes intralayer interactions between any atoms is introduced, and the principal frequencies and the elastic constants C33 and C44 are calculated. Within the model, the existence of real coupling parameters is discussed, which leads to an evaluation of the contribution of the intralayer forces to the elastic constants. The evolution of the interlayer interactions with pressure is studied, using previously published experimental results. Finally, as far as possible, the theoretical formulas are applied to the similar structures of ε-GaSe and γ-InSe.

  14. Elastic constants of the layered compounds GaS, GaSe, InSe, and their pressure dependence. 1. Experimental part

    Energy Technology Data Exchange (ETDEWEB)

    Gatulle, M.; Fischer, M.; Chevy, A. (Paris-6 Univ., 75 (France))

    1983-09-01

    The elastic constants of the lamellar compounds GaS, GaSe, and InSe are measured on several samples using an ultrasonic method, and the results are compared with previous publications. The variations of C33, C11, and C66 with hydrostatic pressure are measured up to 3 kbar. In the range of pressure studied, the variations of these constants are perfectly linear. The results on the three compounds are very similar, and typical of the lamellar structure: a large variation of C33 caused by the weakness of the interlayer bond; on the other hand, C66 is affected very little by pressure.

  15. Fundamental astronomy

    CERN Document Server

    Kröger, Pekka; Oja, Heikki; Poutanen, Markku; Donner, Karl

    2017-01-01

    Now in its sixth edition this successful undergraduate textbook gives a well-balanced and comprehensive introduction to the topics of classical and modern astronomy. While emphasizing both the astronomical concepts and the underlying physical principles, the text provides a sound basis for more profound studies in the astronomical sciences. The chapters on galactic and extragalactic astronomy as well as cosmology were extensively modernized in the previous edition. In this new edition they have been further revised to include more recent results. The long chapter on the solar system has been split into two parts: the first one deals with the general properties, and the other one describes individual objects. A new chapter on exoplanets has been added to the end of the book next to the chapter on astrobiology. In response to the fact that astronomy has evolved enormously over the last few years, only a few chapters of this book have been left unmodified. Long considered a standard text for physical science maj...

  16. Fundamentals of Biomechanics

    OpenAIRE

    Duane Knudson

    2007-01-01

    DESCRIPTION This book provides a broad and in-depth theoretical and practical description of the fundamental concepts in understanding biomechanics in the qualitative analysis of human movement. PURPOSE The aim is to bring together up-to-date biomechanical knowledge with expert application knowledge. Extensive referencing for students is also provided. FEATURES This textbook is divided into 12 chapters within four parts, including a lab activities section at the end. The division is as follow...

  17. The Transfer of Atoms, Ions and Molecular Groups in Solution. Part 3. Monte Carlo Methods for the Evaluation of Rate Constants.

    Science.gov (United States)

    1983-10-31

    The Transfer of Atoms, Ions and Molecular Groups in Solution. III. Monte Carlo Methods for the Evaluation of Rate Constants, by P. P. Schmidt; only OCR fragments of the report's cover page and reference list are legible in this record.

  18. Fundamental units: physics and metrology

    CERN Document Server

    Okun, Lev Borisovich

    2003-01-01

    The problem of fundamental units is discussed in the context of achievements of both theoretical physics and modern metrology. On one hand, due to the fascinating accuracy of atomic clocks, the traditional macroscopic standards of metrology (second, metre, kilogram) are giving way to standards based on fundamental units of nature: the velocity of light $c$ and the quantum of action $h$. On the other hand, the poor precision of the gravitational constant $G$, which is widely believed to define the ``cube of theories'' and the units of the future ``theory of everything'', does not allow $G$ to be used as a fundamental dimensional constant in metrology. The electromagnetic units in SI are actually based on concepts of prerelativistic classical electrodynamics such as ether, electric permittivity and magnetic permeability of vacuum. Concluding remarks are devoted to the terminological confusion that accompanies the progress in basic physics and metrology.

  19. FUNDAMENTALS OF BIOMECHANICS

    Directory of Open Access Journals (Sweden)

    Duane Knudson

    2007-09-01

    DESCRIPTION This book provides a broad and in-depth theoretical and practical description of the fundamental concepts in understanding biomechanics in the qualitative analysis of human movement. PURPOSE The aim is to bring together up-to-date biomechanical knowledge with expert application knowledge. Extensive referencing for students is also provided. FEATURES This textbook is divided into 12 chapters within four parts, including a lab activities section at the end. The division is as follows: Part 1 Introduction: 1. Introduction to biomechanics of human movement; 2. Fundamentals of biomechanics and qualitative analysis; Part 2 Biological/Structural Bases: 3. Anatomical description and its limitations; 4. Mechanics of the musculoskeletal system; Part 3 Mechanical Bases: 5. Linear and angular kinematics; 6. Linear kinetics; 7. Angular kinetics; 8. Fluid mechanics; Part 4 Application of Biomechanics in Qualitative Analysis: 9. Applying biomechanics in physical education; 10. Applying biomechanics in coaching; 11. Applying biomechanics in strength and conditioning; 12. Applying biomechanics in sports medicine and rehabilitation. AUDIENCE This is important reading for both students and educators in the medicine, sport and exercise-related fields. For the researcher and lecturer it would be a helpful guide for planning and preparing more detailed experimental designs or lecture and/or laboratory classes in exercise and sport biomechanics. ASSESSMENT The text provides a constructive fundamental resource for biomechanics, exercise and sport-related students, teachers and researchers, as well as anyone interested in understanding motion. It is also very useful, being clearly written and presenting many examples of the application of biomechanics that help in teaching and applying biomechanical variables and concepts, including sport-related ones.

  20. Fundamental Physics and Precision Measurements

    Science.gov (United States)

    Hänsch, T. W.

    2006-11-01

    "Very high precision physics has always appealed to me. The steady improvement in technologies that afford higher and higher precision has been a regular source of excitement and challenge during my career. In science, as in most things, whenever one looks at something more closely, new aspects almost always come into play …" With these words from the book "How the Laser Happened", Charles H. Townes expresses a passion for precision that is now shared by many scientists. Masers and lasers have become indispensable tools for precision measurements. During the past few years, the advent of femtosecond laser frequency comb synthesizers has revolutionized the art of directly comparing optical and microwave frequencies. Inspired by the needs of precision laser spectroscopy of the simple hydrogen atom, such frequency combs are now enabling ultra-precise spectroscopy over wide spectral ranges. Recent laboratory experiments are already setting stringent limits for possible slow variations of fundamental constants. Laser frequency combs also provide the long missing clockwork for optical atomic clocks that may ultimately reach a precision of parts in 10^18 and beyond. Such tools will open intriguing new opportunities for fundamental experiments including new tests of special and general relativity. In the future, frequency comb techniques may be extended into the extreme ultraviolet and soft X-ray regime, opening a vast new spectral territory to precision measurements. Frequency combs have also become a key tool for the emerging new field of attosecond science, since they can control the electric field of ultrashort laser pulses on an unprecedented time scale. The biggest surprise in these endeavours would be if we found no surprise.

  1. Twin Concept of Fine Structure Constant as the ‘Self Number-Archetype’ in Perspective of the Pauli-Jung Correspondence; Part I: Observation, Identification and Interpretation

    Directory of Open Access Journals (Sweden)

    Péter Várlaki

    2009-07-01

    The paper – similarly to our earlier publications since 1993 – is trying to 'synchronize' early quantum physics, the Kalmanian representation theory, Jungian analytic psychology, and certain aesthetical categories. The number '137', the so-called inverse Fine Structure Constant (IFSC), is placed at the centre of this heuristic and epistemological experiment, along with the scientific cooperation of Pauli and Jung. A new possibilistic twin concept of "controlling-observing equations" is proposed for the reinterpretation of the FSC and other Number Archetypes on the basis of the Hermeneutic and symbolic languages found in the W. Pauli and C. G. Jung "Correspondence". The first part of the paper deals primarily with the introduction of the possibilistic twin concept of FSC together with its interpretation according to the hermeneutical "tradition" of the Pauli-Jung collaboration.

  2. On the co-existence of chemically peculiar Bp stars, slowly pulsating B stars and constant B stars in the same part of the H-R diagram

    NARCIS (Netherlands)

    Briquet, M.; Hubrig, S.; Cat, P. de; Aerts, C.; North, P.; Schöller, M.

    2007-01-01

    Aims. In order to better model massive B-type stars, we need to understand the physical processes taking place in slowly pulsating B (SPB) stars, chemically peculiar Bp stars, and non-pulsating normal B stars co-existing in the same part of the H-R diagram. Methods: We carry out a comparative study

  3. A Novel Method for Idle-Stop-Start Control of Micro Hybrid Construction Equipment—Part A: Fundamental Concepts and Design

    Directory of Open Access Journals (Sweden)

    Truong Quang Dinh

    2017-07-01

    Although micro hybrid propulsion (MHP) systems are recognized as a feasible solution for off-highway construction machines (OHCMs), there is still a lack of understanding of how existing MHP technologies can be transferred effectively from the automotive sector to the construction sector. To fill this gap, this paper is the first of a two-part study on micro hybrid construction machines, paying particular attention to a novel idle-stop-start control (ISSC) strategy. Part A presents the system concepts and design procedure, while Part B uses a hardware-in-the-loop test platform for a comprehensive real-time analysis of the potential fuel and emission savings of the proposed system. In this study (Part A), different types of OHCMs are briefly discussed to identify the target machine. The procedure to model the machine powertrain is also concisely introduced. Next, to minimize fuel consumption and emissions without degrading machine performance, a prediction-based idle-stop-start control (PISSC) approach is designed. The core of the PISSC is to estimate online the future changes of the engine working state in order to shut down the engine directly or shift it to low-power regions during idle periods. Numerical simulations have been carried out to validate the potential of the proposed PISSC method.

  4. Decay constants in geochronology

    Institute of Scientific and Technical Information of China (English)

    Igor M. Villa; Paul R. Renne

    2005-01-01

    Geologic time is fundamental to the Earth Sciences, and progress in many disciplines depends critically on our ability to measure time with increasing accuracy and precision. Isotopic geochronology makes use of the decay of radioactive nuclides to help quantify the histories of rocks, minerals, and other materials. Both the accuracy and the precision of radioisotopic ages are, at present, limited by those of the radioactive decay constants. Modern mass spectrometers can measure isotope ratios with a precision of 10^-4 or better. On the other hand, the uncertainties associated with direct half-life determinations are, in most cases, still at the percent level. The present short note briefly summarizes progress and problems that have been encountered during the Working Group's activity.
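    The way a decay-constant uncertainty limits the accuracy of an age can be made concrete with a small sketch; the half-life, the assumed 1% uncertainty and the daughter/parent ratio below are illustrative numbers, not values from the note.

        import numpy as np

        half_life = 48.8e9            # years, an illustrative Rb-87-like value
        rel_u_lambda = 0.01           # assumed ~1 % relative uncertainty on the decay constant

        lam = np.log(2) / half_life   # decay constant, 1/yr
        DP = 0.070                    # daughter/parent ratio (illustrative)
        age = np.log(1.0 + DP) / lam  # standard age equation t = ln(1 + D/P) / lambda

        # To first order, the relative age error from the decay constant equals rel_u_lambda.
        print(f"lambda = {lam:.3e} 1/yr, age = {age/1e9:.2f} Gyr "
              f"+/- {age*rel_u_lambda/1e9:.2f} Gyr from the decay constant alone")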

  5. Fundamental Study of a Single Point Lean Direct Injector. Part I: Effect of Air Swirler Angle and Injector Tip Location on Spray Characteristics

    Science.gov (United States)

    Tedder, Sarah A.; Hicks, Yolanda R.; Tacina, Kathleen M.; Anderson, Robert C.

    2015-01-01

    Lean direct injection (LDI) is a combustion concept to reduce oxides of nitrogen (NOx) for next generation aircraft gas turbine engines. These newer engines have cycles that increase fuel efficiency through increased operating pressures, which increase combustor inlet temperatures. NOx formation rates increase with higher temperatures; the LDI strategy avoids high temperature by staying fuel lean and away from stoichiometric burning. Thus, LDI relies on rapid and uniform fuel/air mixing. To understand this mixing process, a series of fundamental experiments are underway in the Combustion and Dynamics Facility at NASA Glenn Research Center. This first set of experiments examines cold flow (non-combusting) mixing using air and water. Using laser diagnostics, the effects of air swirler angle and injector tip location on the spray distribution, recirculation zone, and droplet size distribution are examined. Of the three swirler angles examined, 60 degrees is determined to have the most even spray distribution. The injector tip location primarily shifts the flow without changing the structure, unless the flow includes a recirculation zone. When a recirculation zone is present, minimum axial velocity decreases as the injector tip moves downstream towards the venturi exit; also the droplets become more uniform in size and angular distribution.

  6. Development of Monopole Interaction Models for Ionic Compounds. Part I: Estimation of Aqueous Henry's Law Constants for Ions and Gas Phase pKa Values for Acidic Compounds.

    Science.gov (United States)

    Hilal, S H; Saravanaraj, A N; Carreira, L A

    2014-02-01

    The SPARC (SPARC Performs Automated Reasoning in Chemistry) physicochemical mechanistic models for neutral compounds have been extended to estimate the Henry's Law Constant (HLC) for charged species by incorporating ionic electrostatic interaction models. Combinations of absolute aqueous pKa values, relative pKa values in the gas phase, and aqueous HLCs for neutral compounds have been used to develop monopole interaction models that quantify the energy differences upon moving an ionic solute molecule from the gas phase to the liquid phase. Inter-molecular interaction energies were factored into mechanistic contributions of monopoles with polarizability, dipole, H-bonding, and resonance. The monopole ionic models were validated against a wide range of measured gas-phase pKa data for 450 acidic compounds. The RMS deviation and R^2 for the OH, SH, CO2H, CH3 and NR2 acidic reaction centers (C) were 16.9 kcal/mol and 0.87, respectively. The calculated HLCs of ions were compared to the HLCs of 142 ions calculated by quantum mechanics. Effects of the inter-molecular interaction of the monopoles with polarizability, dipole, H-bonding, and resonance on the acidity of the solutes in the gas phase are discussed.
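    The combination described above, aqueous pKa, gas-phase pKa and the HLC of the neutral compound, amounts to closing a standard thermodynamic cycle for an acid HA; schematically (conventional notation, not an equation quoted from the paper):

        \Delta G_{hyd}(A^-) = \Delta G_{hyd}(HA) + \Delta G_{acid}^{aq} - \Delta G_{acid}^{gas} - \Delta G_{hyd}(H^+)

    so that the hydration (and hence Henry's law) term for the anion follows from quantities measured or estimated for the neutral species and the proton.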

  7. Varying constants, Gravitation and Cosmology

    CERN Document Server

    Uzan, Jean-Philippe

    2010-01-01

    Fundamental constants are a cornerstone of our physical laws. Any constant varying in space and/or time would reflect the existence of an almost massless field that couples to matter. This will induce a violation of the universality of free fall. It is thus of utmost importance for our understanding of gravity and of the domain of validity of general relativity to test for their constancy. We thus detail the relations between the constants, the tests of the local position invariance and of the universality of free fall. We then review the main experimental and observational constraints that have been obtained from atomic clocks, the Oklo phenomenon, Solar system observations, meteorites dating, quasar absorption spectra, stellar physics, pulsar timing, the cosmic microwave background and big bang nucleosynthesis. At each step we describe the basics of each system, its dependence with respect to the constants, the known systematic effects and the most recent constraints that have been obtained. We then describ...

  8. The MOND Fundamental Plane

    CERN Document Server

    Cardone, V F; Diaferio, A; Tortora, C; Molinaro, R

    2010-01-01

    Modified Newtonian Dynamics (MOND) has been shown to be able to fit spiral galaxy rotation curves as well as giving a theoretical foundation for empirically determined scaling relations, such as the Tully-Fisher law, without the need for a dark matter halo. As a complementary analysis, one should investigate whether MOND can also reproduce the dynamics of early-type galaxies (ETGs) without dark matter. As a first step, we here show that MOND can indeed fit the observed central velocity dispersion $\sigma_0$ of a large sample of ETGs assuming a simple MOND interpolating function and constant anisotropy. We also show that, under some assumptions on the luminosity dependence of the Sersic n parameter and the stellar M/L ratio, MOND predicts a fundamental plane for ETGs: a log-linear relation among the effective radius $R_{eff}$, $\sigma_0$, and the mean effective intensity $\langle I_e \rangle$. However, we predict a tilt between the observed and the MOND fundamental planes.
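    The fundamental plane referred to here is the usual log-linear scaling among these three observables; schematically (the coefficients a, b and c are fit parameters, not values from the paper):

        \log R_{eff} = a \log \sigma_0 + b \log \langle I_e \rangle + c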

  9. A Fundamental Parameter-Based Calibration Model for an Intrinsic Germanium X-Ray Fluorescence Spectrometer

    DEFF Research Database (Denmark)

    Christensen, Leif Højslet; Pind, Niels

    1982-01-01

    A matrix-independent fundamental parameter-based calibration model for an energy-dispersive X-ray fluorescence spectrometer has been developed. This model, which is part of a fundamental parameter approach quantification method, accounts for both the excitation and detection probability. For each secondary target a number of relative calibration constants are calculated on the basis of knowledge of the irradiation geometry, the detector specifications, and tabulated fundamental physical parameters. The absolute calibration of the spectrometer is performed by measuring one pure element standard per...

  10. Fundamental of biomedical engineering

    CERN Document Server

    Sawhney, GS

    2007-01-01

    About the Book: A well set out textbook explains the fundamentals of biomedical engineering in the areas of biomechanics, biofluid flow, biomaterials, bioinstrumentation and use of computing in biomedical engineering. All these subjects form a basic part of an engineer's education. The text is admirably suited to meet the needs of the students of mechanical engineering, opting for the elective of Biomedical Engineering. Coverage of bioinstrumentation, biomaterials and computing for biomedical engineers can meet the needs of the students of Electronic & Communication, Electronic & Instrumenta

  11. Calorimetry fundamentals, instrumentation and applications

    CERN Document Server

    Sarge, Stefan M; Hemminger, Wolfgang

    2014-01-01

    Clearly divided into three parts, this practical book begins by dealing with all fundamental aspects of calorimetry. The second part looks at the equipment used and new developments. The third and final section provides measurement guidelines in order to obtain the best results. The result is optimized knowledge for users of this technique, supplemented with practical tips and tricks.

  12. Fundamental equations for two-phase flow. Part 1: general conservation equations. Part 2: complement and remarks; Equations fondamentales des ecoulements diphasiques. Premiere partie: equations generales de conservation. Deuxieme partie: complements et remarques

    Energy Technology Data Exchange (ETDEWEB)

    Delhaye, J.M. [Commissariat a l' Energie Atomique, 38 - Grenoble (France). Centre d' Etudes Nucleaires]

    1968-12-01

    This report deals with the general equations of mass conservation, of momentum conservation, and of energy conservation in the case of a two-phase flow. These equations are presented in several forms starting from integral equations which are assumed initially a priori. 1. Equations with local instantaneous variables, and interfacial conditions; 2. Equations with mean instantaneous variables in a cross-section, and practical applications: these equations include an important experimental value which is the ratio of the cross-section of passage of one phase to the total cross-section of a flow-tube. 3. Equations with a local statistical mean, and equations averaged over a period of time: a more advanced attempt to relate theory and experiment consists in taking the statistical averages of local equations. Equations are then obtained involving variables which are averaged over a period of time with the help of an ergodic assumption. 4. Combination of statistical averages and averages over a cross-section: this study considers the local variables averaged statistically and then over the cross-section, as well as the variables averaged over the cross-section and then averaged statistically. 5. General equations concerning emulsions: in this case one phase exists in a locally very finely divided form. This peculiarity makes it possible to define a volume concentration, and to draw up equations which have numerous applications. - Certain points arising in the first part of this report concerning general mass conservation equations for two-phase flow have been completed and clarified. The terms corresponding to the interfacial tension have been introduced into the general equations. The interfacial conditions have thus been generalized. A supplementary step has still to be carried out: it has, in effect, been impossible to take the interfacial tension into account in the case of emulsions. It then appeared interesting to compare this large group of fundamental
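    As an indication of the kind of relations involved, the local instantaneous mass balance in each phase and the associated interfacial jump condition take the standard forms below (conventional notation: \rho_k and v_k are the density and velocity of phase k, v_i the interface velocity and n_k the outward unit normal; the symbols are illustrative and not the report's own):

        \partial \rho_k / \partial t + \nabla \cdot (\rho_k v_k) = 0            (in each phase k)
        \sum_{k=1}^{2} \rho_k (v_k - v_i) \cdot n_k = 0            (mass jump condition at the interface)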

  13. Spaces of constant curvature

    CERN Document Server

    Wolf, Joseph A

    2010-01-01

    This book is the sixth edition of the classic Spaces of Constant Curvature, first published in 1967, with the previous (fifth) edition published in 1984. It illustrates the high degree of interplay between group theory and geometry. The reader will benefit from the very concise treatments of riemannian and pseudo-riemannian manifolds and their curvatures, of the representation theory of finite groups, and of indications of recent progress in discrete subgroups of Lie groups. Part I is a brief introduction to differentiable manifolds, covering spaces, and riemannian and pseudo-riemannian geomet

  14. On the co-existence of chemically peculiar Bp stars, slowly pulsating B stars and constant B stars in the same part of the H-R diagram

    CERN Document Server

    Briquet, M; De Cat, P; Aerts, C; North, P; Scholler, M; 10.1051/0004-6361:20066940

    2009-01-01

    Aims. In order to better model massive B-type stars, we need to understand the physical processes taking place in slowly pulsating B (SPB) stars, chemically peculiar Bp stars, and non-pulsating normal B stars co-existing in the same part of the H-R diagram. Methods. We carry out a comparative study between samples of confirmed and well-studied SPB stars and a sample of well-studied Bp stars with known periods and magnetic field strengths. We determine their evolutionary state using accurate HIPPARCOS parallaxes and Geneva photometry. We discuss the occurrence and strengths of magnetic fields as well as the occurrence of stellar pulsation among both groups. Further, we make a comparison of Geneva photometric variability for both kinds of stars. Results. The group of Bp stars is significantly younger than the group of SPB stars. Longitudinal magnetic fields in SPB stars are weaker than those of Bp stars, suggesting that the magnetic field strength is an important factor for B type stars to become chemically pec...

  15. Fundamental physics in particle traps

    CERN Document Server

    Vogel, Manuel

    2014-01-01

    This volume provides detailed insight into the field of precision spectroscopy and fundamental physics with particles confined in traps. It comprises experiments with electrons and positrons, protons and antiprotons, antimatter and highly charged ions, together with corresponding theoretical background. Such investigations represent stringent tests of quantum electrodynamics and the Standard model, antiparticle and antimatter research, test of fundamental symmetries, constants, and their possible variations with time and space. They are key to various aspects within metrology such as mass measurements and time standards, as well as promising to further developments in quantum information processing. The reader obtains a valuable source of information suited for beginners and experts with an interest in fundamental studies using particle traps.

  16. Varying Constants, Gravitation and Cosmology

    Directory of Open Access Journals (Sweden)

    Jean-Philippe Uzan

    2011-03-01

    Fundamental constants are a cornerstone of our physical laws. Any constant varying in space and/or time would reflect the existence of an almost massless field that couples to matter. This will induce a violation of the universality of free fall. Thus, it is of utmost importance for our understanding of gravity and of the domain of validity of general relativity to test for their constancy. We detail the relations between the constants, the tests of the local position invariance and of the universality of free fall. We then review the main experimental and observational constraints that have been obtained from atomic clocks, the Oklo phenomenon, solar system observations, meteorite dating, quasar absorption spectra, stellar physics, pulsar timing, the cosmic microwave background and big bang nucleosynthesis. At each step we describe the basics of each system, its dependence with respect to the constants, the known systematic effects and the most recent constraints that have been obtained. We then describe the main theoretical frameworks in which the low-energy constants may actually be varying and we focus on the unification mechanisms and the relations between the variation of different constants. To finish, we discuss the more speculative possibility of understanding their numerical values and the apparent fine-tuning that they confront us with.

  17. Varying Constants, Gravitation and Cosmology.

    Science.gov (United States)

    Uzan, Jean-Philippe

    2011-01-01

    Fundamental constants are a cornerstone of our physical laws. Any constant varying in space and/or time would reflect the existence of an almost massless field that couples to matter. This will induce a violation of the universality of free fall. Thus, it is of utmost importance for our understanding of gravity and of the domain of validity of general relativity to test for their constancy. We detail the relations between the constants, the tests of the local position invariance and of the universality of free fall. We then review the main experimental and observational constraints that have been obtained from atomic clocks, the Oklo phenomenon, solar system observations, meteorite dating, quasar absorption spectra, stellar physics, pulsar timing, the cosmic microwave background and big bang nucleosynthesis. At each step we describe the basics of each system, its dependence with respect to the constants, the known systematic effects and the most recent constraints that have been obtained. We then describe the main theoretical frameworks in which the low-energy constants may actually be varying and we focus on the unification mechanisms and the relations between the variation of different constants. To finish, we discuss the more speculative possibility of understanding their numerical values and the apparent fine-tuning that they confront us with.

  18. Varying Constants, Gravitation and Cosmology

    Science.gov (United States)

    Uzan, Jean-Philippe

    2011-12-01

    Fundamental constants are a cornerstone of our physical laws. Any constant varying in space and/or time would reflect the existence of an almost massless field that couples to matter. This will induce a violation of the universality of free fall. Thus, it is of utmost importance for our understanding of gravity and of the domain of validity of general relativity to test for their constancy. We detail the relations between the constants, the tests of the local position invariance and of the universality of free fall. We then review the main experimental and observational constraints that have been obtained from atomic clocks, the Oklo phenomenon, solar system observations, meteorite dating, quasar absorption spectra, stellar physics, pulsar timing, the cosmic microwave background and big bang nucleosynthesis. At each step we describe the basics of each system, its dependence with respect to the constants, the known systematic effects and the most recent constraints that have been obtained. We then describe the main theoretical frameworks in which the low-energy constants may actually be varying and we focus on the unification mechanisms and the relations between the variation of different constants. To finish, we discuss the more speculative possibility of understanding their numerical values and the apparent fine-tuning that they confront us with.

  19. Fundamental Research and Developing Countries

    CERN Document Server

    Narison, Stéphan

    2002-01-01

    In the first part of this report, I discuss the sociological role of fundamental research in Developing Countries (DC) and how to realize this program. In the second part, I give a brief and elementary introduction to the field of high-energy physics (HEP), accessible to a large audience, not necessarily physicists. The aim of this report is to make politicians and financial backers aware of the long-term usefulness of fundamental research in DC and of the possible globalisation of HEP and, in general, of science.

  20. Dielectric Constant and Loss Data, Part 3

    Science.gov (United States)

    1977-05-01

    No readable abstract is available for this record; the indexed text is an OCR fragment of the report's tabulated dielectric-constant and loss data and its material and supplier index (entries include isocyanate SF-52, asphalt pavements and asphalts, cements, and powdered coal).

  1. Dielectric Constant and Loss Data. Part 4

    Science.gov (United States)

    1980-12-01

    No readable abstract is available for this record; the indexed text is an OCR fragment of the report's material and supplier index (entries include cellulose nitrate and camphor, optical-grade tantalum oxide powder, mica, and sulfur).

  2. Dielectric Constant and Loss Data Part 2

    Science.gov (United States)

    1975-12-01

    No readable abstract is available for this record; the indexed text is an OCR fragment of the report's material index and sample data (entries include a single-crystal fluoride, melamine-formaldehyde resins, manganese-magnesium ferrite, polybutadiene-Astroquartz and polybutadiene-Kevlar composites, and a polyether sulfone sample with permittivity and loss tangent quoted at 24 GHz and 24 °C).

  3. Cosmic Time Variation of the Gravitational Constant

    CERN Document Server

    Tomaschitz, R

    2000-01-01

    A pre-relativistic cosmological approach to electromagnetism and gravitation is explored that leads to a cosmic time variation of the fundamental constants. Space itself is supposed to have physical substance, which manifests itself through its permeability. The scale factors of the permeability tensor induce a time variation of the fundamental constants. Atomic radii, periods, and energy levels scale in cosmic time, which results in dispersionless redshifts without invoking a space expansion. The Hubble constant and the deceleration parameter are reviewed in this context. The time variation of the gravitational constant at the present epoch can be expressed in terms of these quantities. This provides a completely new way to constrain the deceleration parameter from laboratory bounds on the time variation of the gravitational constant. This variation also affects the redshift dependence of angular diameters and the surface brightness, and we study in some detail the redshift scaling of the linear sizes of radio sources. The effec...

  4. Fine-structure constant constraints on Bekenstein-type models

    CERN Document Server

    Leal, P M M; Ventura, L B

    2014-01-01

    Astrophysical tests of the stability of dimensionless fundamental couplings, such as the fine-structure constant $\alpha$, are an area of much increased recent activity, following some indications of possible spacetime variations at the few parts per million level. Here we obtain updated constraints on the Bekenstein-Sandvik-Barrow-Magueijo model, which is arguably the simplest model allowing for $\alpha$ variations. Recent accurate spectroscopic measurements allow us to improve previous constraints by about an order of magnitude. We briefly comment on the dependence of the results on the data sample, as well as on the improvements expected from future facilities.

  5. Modern measurements fundamentals and applications

    CERN Document Server

    Petri, D; Carbone, P; Catelani, M

    2015-01-01

    This book explores the modern role of measurement science for both the most technically advanced applications and everyday use, and will help readers gain the necessary skills to specialize their knowledge for a specific field in measurement. Modern Measurements is divided into two parts. Part I (Fundamentals) presents a model of the modern measurement activity and the already recalled fundamental bricks. It starts with a general description that introduces these bricks and the uncertainty concept. The next chapters provide an overview of these bricks and finish (Chapter 7) with a more general and complex model that encompasses both traditional (hard) measurements and (soft) measurements, aimed at quantifying non-physical concepts, such as quality, satisfaction, comfort, etc. Part II (Applications) is aimed at showing how the concepts presented in Part I can be usefully applied to design and implement measurements in some very important and broad fields. The editors cover System Identification (Chapter 8...

  6. Fundamental physics in space: The French contribution

    Science.gov (United States)

    Léon-Hirtz, Sylvie

    2003-08-01

    This paper outlines the space Fundamental Physics projects developed under CNES responsibility together with the French scientific community, either in the national French programme or in the French contribution to the ESA programme, mainly: -the MICROSCOPE project which aims at testing the Equivalence Principle between inertial mass and gravitational mass at a high level of precision, on a microsatellite of the MYRIADE series developed by CNES, -the PHARAO cold-atom clock which is part of the ACES project of ESA, located on an external pallet of the International Space Station, together with a Swiss H-MASER and a microwave link for comparison with ground clocks, aimed at relativistic tests and measurement of universal constants, -the T2L2 optical link allowing comparison of ultra-stable and ultra-precise clocks, -a contribution to the AMS spectrometer which searches for cosmic antimatter, on the external part of the International Space Station, -a contribution to the LISA mission of ESA for direct detection and measurement of gravitational waves by interferometry, -ground-based studies on cold-atom interferometers which could be part of the HYPER project submitted to ESA.

  7. Exchange Rates and Fundamentals.

    Science.gov (United States)

    Engel, Charles; West, Kenneth D.

    2005-01-01

    We show analytically that in a rational expectations present-value model, an asset price manifests near-random walk behavior if fundamentals are I(1) and the factor for discounting future fundamentals is near one. We argue that this result helps explain the well-known puzzle that fundamental variables such as relative money supplies, outputs,…

  8. RFID design fundamentals and applications

    CERN Document Server

    Lozano-Nieto, Albert

    2010-01-01

    RFID is an increasingly pervasive tool that is now used in a wide range of fields. It is employed to substantiate adherence to food preservation and safety standards, combat the circulation of counterfeit pharmaceuticals, and verify the authenticity and history of critical parts used in aircraft and other machinery, and these are just a few of its uses. Going beyond deployment and focusing on exactly how RFID actually works, RFID Design Fundamentals and Applications systematically explores the fundamental principles involved in the design and characterization of RFID technologies. The RFID market is expl...

  9. Fundamental approach to discrete mathematics

    CERN Document Server

    Acharjya, DP

    2009-01-01

    About the Book: The book `Fundamental Approach to Discrete Mathematics` is a required part of pursuing a computer science degree at most universities. It provides in-depth knowledge to the subject for beginners and stimulates further interest in the topic. The salient features of this book include: Strong coverage of key topics involving recurrence relation, combinatorics, Boolean algebra, graph theory and fuzzy set theory. Algorithms and examples integrated throughout the book to bring clarity to the fundamental concepts. Each concept and definition is followed by thoughtful examples.

  10. $\hbar$ as a Physical Constant of Classical Optics and Electrodynamics

    CERN Document Server

    Tremblay, Real; Allen, Claudine Ni

    2015-01-01

    The Planck constant ($\hbar$) plays a pivotal role in quantum physics. Historically, it was proposed as a postulate, part of an ingenious empirical relationship $E=\hbar\omega$, in order to explain the intensity spectrum of blackbody radiation, for which classical electrodynamic theory led to an unacceptable prediction: the ultraviolet catastrophe. While the usefulness of the Planck constant in various fields of physics is undisputed, its derivation (or lack thereof) remains unsatisfactory from a fundamental point of view. In this paper, the analysis of the blackbody problem is performed with a series expansion of the electromagnetic field in terms of TE and TM modes in a metallic cavity with small losses, which leads to developing the electromagnetic fields in a \textit{complete set of orthonormal functions}. This expansion, based on coupled power theory, maintains both space and time together, enabling modeling of the blackbody's evolution toward equilibrium. Reaching equilibrium with a multimodal waveguide analysi...
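
    As a hedged numerical aside on the relationship this abstract starts from ($E=\hbar\omega$ and the ultraviolet catastrophe), the sketch below compares Planck's spectral radiance with the classical Rayleigh-Jeans law; the temperature and frequency grid are arbitrary illustrative choices, not values from the paper.

        # Minimal sketch: Planck's law vs. the Rayleigh-Jeans law (ultraviolet catastrophe).
        # T and the frequency grid are arbitrary illustrative choices.
        import math

        h = 6.62607015e-34    # Planck constant, J s
        c = 2.99792458e8      # speed of light, m/s
        kB = 1.380649e-23     # Boltzmann constant, J/K

        def planck(nu, T):
            """Spectral radiance B_nu(T), in W sr^-1 m^-2 Hz^-1."""
            return (2.0 * h * nu**3 / c**2) / (math.exp(h * nu / (kB * T)) - 1.0)

        def rayleigh_jeans(nu, T):
            """Classical limit: grows as nu^2, hence the ultraviolet catastrophe."""
            return 2.0 * nu**2 * kB * T / c**2

        T = 5000.0  # K
        for nu in (1e13, 1e14, 1e15):
            print(f"nu = {nu:.0e} Hz: Planck = {planck(nu, T):.3e}, Rayleigh-Jeans = {rayleigh_jeans(nu, T):.3e}")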

  11. Ion exchange equilibrium constants

    CERN Document Server

    Marcus, Y

    2013-01-01

    Ion Exchange Equilibrium Constants focuses on the test-compilation of equilibrium constants for ion exchange reactions. The book first underscores the scope of the compilation, equilibrium constants, symbols used, and arrangement of the table. The manuscript then presents the table of equilibrium constants, including polystyrene sulfonate cation exchanger, polyacrylate cation exchanger, polymethacrylate cation exchanger, polystyrene phosphate cation exchanger, and zirconium phosphate cation exchanger. The text highlights zirconium oxide anion exchanger, zeolite type 13Y cation exchanger, and

  12. Search for Possible Variation of the Fine Structure Constant

    OpenAIRE

    2003-01-01

    Determination of the fine structure constant alpha and search for its possible variation are considered. We focus on a role of the fine structure constant in modern physics and discuss precision tests of quantum electrodynamics. Different methods of a search for possible variations of fundamental constants are compared and those related to optical measurements are considered in detail.

  13. Some comments on the universal constant in DSR

    CERN Document Server

    Girelli, F; Girelli, Florian; Livine, Etera R.

    2006-01-01

    Deformed Special Relativity is usually presented as a deformation of Special Relativity accommodating a new universal constant, the Planck mass, while respecting the relativity principle. In order to avoid some fundamental problems (e.g. the soccer ball problem), we argue that we should switch the point of view and consider instead the Newton constant $G$ as the universal constant.

  14. Cosmological Constant, Fine Structure Constant and Beyond

    CERN Document Server

    Wei, Hao; Li, Hong-Yu; Xue, Dong-Ze

    2016-01-01

    In this work, we consider the cosmological constant model $\Lambda\propto\alpha^{-6}$, which is well motivated from three independent approaches. As is well known, evidence of a varying fine structure constant $\alpha$ was found in 1998. If $\Lambda\propto\alpha^{-6}$ is right, it means that the cosmological constant $\Lambda$ should also be varying. In this work, we try to develop a suitable framework to model this varying cosmological constant $\Lambda\propto\alpha^{-6}$, in which we view it from an interacting vacuum energy perspective. We propose two types of models to describe the evolutions of $\Lambda$ and $\alpha$. Then, we consider the observational constraints on these models, by using the 293 $\Delta\alpha/\alpha$ data from the absorption systems in the spectra of distant quasars, and the data of type Ia supernovae (SNIa), cosmic microwave background (CMB), and baryon acoustic oscillation (BAO). We find that the model parameters can be tightly constrained to the narrow ranges of ${\cal O}(10^{-5})$ t...
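
    A small hedged illustration of what the model's defining relation implies for fractional variations: logarithmic differentiation of $\Lambda\propto\alpha^{-6}$ gives $\delta\Lambda/\Lambda = -6\,\delta\alpha/\alpha$. The input value below is only a representative parts-per-million figure, not a fit result from the paper.

        # Propagate a fractional variation of alpha through Lambda ~ alpha^(-6):
        # d(ln Lambda) = -6 d(ln alpha)  =>  dLambda/Lambda = -6 * dalpha/alpha.
        dalpha_over_alpha = 1e-5   # illustrative ppm-level variation (assumed)
        dlambda_over_lambda = -6.0 * dalpha_over_alpha
        print(f"dalpha/alpha = {dalpha_over_alpha:.1e}  ->  dLambda/Lambda = {dlambda_over_lambda:.1e}")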

  15. Imaging of spatial distributions of the millimeter wave intensity by using visible continuum radiation from a discharge in a Cs–Xe mixture. Part I: Review of the method and its fundamentals

    Energy Technology Data Exchange (ETDEWEB)

    Gitlin, M. S., E-mail: gitlin@appl.sci-nnov.ru [Russian Academy of Sciences, Institute of Applied Physics (Russian Federation)

    2017-02-15

    The first part of the review is presented which is dedicated to the time-resolved method of imaging and measuring the spatial distribution of the intensity of millimeter waves by using visible continuum (VC) emitted by the positive column (PC) of a dc discharge in a mixture of cesium vapor with xenon. The review focuses on the operating principles, fundamentals, and applications of this new technique. The design of the discharge tube and experimental setup used to create a wide homogeneous plasma slab with the help of the Cs–Xe discharge at a gas pressure of 45 Torr are described. The millimeter-wave effects on the plasma slab are studied experimentally. The mechanism of microwave-induced variations in the VC brightness and the causes of violation of the local relation between the VC brightness and the intensity of millimeter waves are discussed. Experiments on the imaging of the field patterns of horn antennas and quasi-optical beams demonstrate that this technique can be used for good-quality imaging of millimeter-wave beams in the entire millimeter-wavelength band. The method has a microsecond temporal resolution and a spatial resolution of about 2 mm. Energy sensitivities of about 10 μJ/cm² in the Ka-band and about 200 μJ/cm² in the D-band have been demonstrated.

  16. Fundamentals of gas dynamics

    CERN Document Server

    Babu, V

    2014-01-01

    Fundamentals of Gas Dynamics, Second Edition is a comprehensively updated new edition and now includes a chapter on the gas dynamics of steam. It covers the fundamental concepts and governing equations of different flows, and includes end-of-chapter exercises based on practical applications. A number of useful tables on the thermodynamic properties of steam are also included. Fundamentals of Gas Dynamics, Second Edition begins with an introduction to compressible and incompressible flows before covering the fundamentals of one-dimensional flows and normal shock waves...

  17. Crystal science fundamentals

    OpenAIRE

    Ramachandran, V.; Halfpenny, PJ; Roberts, KJ

    2017-01-01

    The fundamentals of crystal science notably crystallography, crystal chemistry, crystal defects, crystal morphology and the surface chemistry of crystals are introduced with particular emphasis on organic crystals.

  18. The Determination of the Strong Coupling Constant

    Science.gov (United States)

    Dissertori, Günther

    2016-10-01

    The strong coupling constant is one of the fundamental parameters of the Standard Theory of particle physics. In this review I will briefly summarise the theoretical framework, within which the strong coupling constant is defined and how it is connected to measurable observables. Then I will give an historical overview of its experimental determinations and discuss the current status and world average value. Among the many different techniques used to determine this coupling constant in the context of quantum chromodynamics, I will focus in particular on a number of measurements carried out at the Large Electron-Positron Collider (LEP) and the Large Hadron Collider (LHC) at CERN.
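
    To make "connected to measurable observables" slightly more concrete, here is a minimal sketch of the one-loop running of the strong coupling, alpha_s(Q^2) = 12*pi / ((33 - 2*n_f) * ln(Q^2/Lambda^2)); the Lambda_QCD value and flavour number below are illustrative assumptions chosen so that the curve passes near 0.118 at the Z-boson mass, not numbers taken from this review.

        # One-loop running of the strong coupling constant (illustrative sketch).
        import math

        def alpha_s(Q, lam=0.09, n_f=5):
            """One-loop alpha_s at scale Q (GeV); lam is an assumed Lambda_QCD in GeV."""
            return 12.0 * math.pi / ((33.0 - 2.0 * n_f) * math.log(Q**2 / lam**2))

        for Q in (10.0, 91.19, 1000.0):   # GeV; 91.19 GeV is roughly the Z-boson mass
            print(f"Q = {Q:7.2f} GeV  ->  alpha_s ~ {alpha_s(Q):.3f}")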

  19. Fundamentals of plastic optical fibers

    CERN Document Server

    Koike, Yasuhiro

    2014-01-01

    Polymer photonics is an interdisciplinary field which demands excellence both in optics (photonics) and materials science (polymer). However, these disciplines have developed independently, and therefore the demand for a comprehensive work featuring the fundamentals of photonic polymers is greater than ever. This volume focuses on polymer optical fibers and their applications. The first part of the book introduces typical optical fibers according to their classifications of material, propagating mode, and structure. Optical properties, the high-bandwidth POF and transmission loss are discussed,

  20. Fundamentals of Physics

    Science.gov (United States)

    Halliday, David; Resnick, Robert; Walker, Jearl

    2003-01-01

    No other book on the market today can match the success of Halliday, Resnick and Walker's Fundamentals of Physics! In a breezy, easy-to-understand style the book offers a solid understanding of fundamental physics concepts, and helps readers apply this conceptual understanding to quantitative problem solving.

  1. Dependence and Fundamentality

    Directory of Open Access Journals (Sweden)

    Justin Zylstra

    2014-12-01

    I argue that dependence is neither necessary nor sufficient for relative fundamentality. I then introduce the notion of 'likeness in nature' and provide an account of relative fundamentality in terms of it and the notion of dependence. Finally, I discuss some puzzles that arise in Aristotle's Categories, to which the theory developed is applied.

  2. Fundamentals of Condensed Matter Physics

    Science.gov (United States)

    Cohen, Marvin L.; Louie, Steven G.

    2016-05-01

    Part I. Basic Concepts: Electrons and Phonons: 1. Concept of a solid: qualitative introduction and overview; 2. Electrons in crystals; 3. Electronic energy bands; 4. Lattice vibrations and phonons; Part II. Electron Interactions, Dynamics and Responses: 5. Electron dynamics in crystals; 6. Many-electron interactions: the interacting electron gas and beyond; 7. Density functional theory; 8. The dielectric function for solids; Part III. Optical and Transport Phenomena: 9. Electronic transitions and optical properties of solids; 10. Electron-phonon interactions; 11. Dynamics of crystal electrons in a magnetic field; 12. Fundamentals of transport phenomena in solids; Part IV. Superconductivity, Magnetism, and Lower Dimensional Systems: 13. Using many-body techniques; 14. Superconductivity; 15. Magnetism; 16. Reduced-dimensional systems and nanostructures; Index.

  3. Generalized Pickands constants

    NARCIS (Netherlands)

    Debicki, K.G.

    2001-01-01

    Pickands constants play an important role in the exact asymptotics of extreme values for Gaussian stochastic processes. By the generalized Pickands constant $\mathcal{H}_\eta$ we mean the limit $\mathcal{H}_\eta = \lim_{T\to\infty} \mathcal{H}_\eta(T)/T$, where $\mathcal{H}_\eta(T) = \mathbb{E}\exp...

  4. Fundamentals of electronics

    CERN Document Server

    Schubert, Thomas F

    2015-01-01

    This book, Electronic Devices and Circuit Application, is the first of four books of a larger work, Fundamentals of Electronics. It is comprised of four chapters describing the basic operation of each of the four fundamental building blocks of modern electronics: operational amplifiers, semiconductor diodes, bipolar junction transistors, and field effect transistors. Attention is focused on the reader obtaining a clear understanding of each of the devices when it is operated in equilibrium. Ideas fundamental to the study of electronic circuits are also developed in the book at a basic level to

  5. Fundamentals of electrochemical science

    CERN Document Server

    Oldham, Keith

    1993-01-01

    Key Features: * Deals comprehensively with the basic science of electrochemistry * Treats electrochemistry as a discipline in its own right and not as a branch of physical or analytical chemistry * Provides a thorough and quantitative description of electrochemical fundamentals

  6. Fundamentals of crystallography

    CERN Document Server

    2011-01-01

    Crystallography is a basic tool for scientists in many diverse disciplines. This text offers a clear description of fundamentals and of modern applications. It supports curricula in crystallography at undergraduate level.

  7. Fundamentals of structural dynamics

    CERN Document Server

    Craig, Roy R

    2006-01-01

    From theory and fundamentals to the latest advances in computational and experimental modal analysis, this is the definitive, updated reference on structural dynamics.This edition updates Professor Craig's classic introduction to structural dynamics, which has been an invaluable resource for practicing engineers and a textbook for undergraduate and graduate courses in vibrations and/or structural dynamics. Along with comprehensive coverage of structural dynamics fundamentals, finite-element-based computational methods, and dynamic testing methods, this Second Edition includes new and e

  8. Masses of Fundamental Particles

    CERN Document Server

    Terazawa, Hidezumi

    2011-01-01

    Not only the masses of fundamental particles including the weak bosons, Higgs scalar, quarks, and leptons, but also the mixing angles of quarks and those of neutrinos are all explained and/or predicted successfully in the unified composite model of quarks and leptons. In addition, both of the two anomalies recently found by the CDF Collaboration are suggested to be taken as evidence for the substructure of the fundamental particles.

  9. Information security fundamentals

    CERN Document Server

    Peltier, Thomas R

    2013-01-01

    Developing an information security program that adheres to the principle of security as a business enabler must be the first step in an enterprise's effort to build an effective security program. Following in the footsteps of its bestselling predecessor, Information Security Fundamentals, Second Edition provides information security professionals with a clear understanding of the fundamentals of security required to address the range of issues they will experience in the field.The book examines the elements of computer security, employee roles and r

  10. The time constant of the somatogravic illusion.

    Science.gov (United States)

    Correia Grácio, B J; de Winkel, K N; Groen, E L; Wentink, M; Bos, J E

    2013-02-01

    Without visual feedback, humans perceive tilt when experiencing a sustained linear acceleration. This tilt illusion is commonly referred to as the somatogravic illusion. Although the physiological basis of the illusion seems to be well understood, the dynamic behavior is still subject to discussion. In this study, the dynamic behavior of the illusion was measured experimentally for three motion profiles with different frequency content. Subjects were exposed to pure centripetal accelerations in the lateral direction and were asked to indicate their tilt percept by means of a joystick. Variable-radius centrifugation during constant angular rotation was used to generate these motion profiles. Two self-motion perception models were fitted to the experimental data and were used to obtain the time constant of the somatogravic illusion. Results showed that the time constant of the somatogravic illusion was on the order of two seconds, in contrast to the higher time constant found in fixed-radius centrifugation studies. Furthermore, the time constant was significantly affected by the frequency content of the motion profiles. Motion profiles with higher frequency content revealed shorter time constants which cannot be explained by self-motion perception models that assume a fixed time constant. Therefore, these models need to be improved with a mechanism that deals with this variable time constant. Apart from the fundamental importance, these results also have practical consequences for the simulation of sustained accelerations in motion simulators.
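
    As a deliberately simplified picture of the time constant the study measures (not the self-motion perception models actually fitted there), one can let the perceived tilt relax toward arctan(a/g) as a first-order process with time constant tau; the acceleration and tau values below are illustrative assumptions only.

        # Toy first-order model of the somatogravic illusion: perceived tilt relaxes
        # toward arctan(a/g) with time constant tau (values below are illustrative).
        import math

        def perceived_tilt_deg(t, a, tau=2.0, g=9.81):
            """Perceived tilt in degrees, t seconds after a step in lateral acceleration a (m/s^2)."""
            steady = math.degrees(math.atan2(a, g))
            return steady * (1.0 - math.exp(-t / tau))

        a = 2.0  # m/s^2 sustained centripetal acceleration
        for t in (0.5, 1.0, 2.0, 5.0, 10.0):
            print(f"t = {t:4.1f} s: perceived tilt ~ {perceived_tilt_deg(t, a):.1f} deg")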

  11. Queueing networks a fundamental approach

    CERN Document Server

    Dijk, Nico

    2011-01-01

    This handbook aims to highlight fundamental, methodological and computational aspects of networks of queues to provide insights and to unify results that can be applied in a more general manner. The handbook is organized into five parts: Part 1 considers exact analytical results such as those of product form type. Topics include characterization of product forms by physical balance concepts and simple traffic flow equations, classes of service and queue disciplines that allow a product form, a unified description of product forms for discrete time queueing networks, insights for insensitivity, and aggregation and decomposition results that allow subnetworks to be aggregated into single nodes to reduce computational burden. Part 2 looks at monotonicity and comparison results, for example for computational simplification, by either of two approaches: stochastic monotonicity and ordering results based on the ordering of the process generators, and comparison results and explicit error bounds based on an underlying Markov r...

  12. Prime rings with PI rings of constants

    CERN Document Server

    Kharchenko, V K; Rodríguez-Romo, S

    1996-01-01

    It is shown that if the ring of constants of a restricted differential Lie algebra with a quasi-Frobenius inner part satisfies a polynomial identity (PI) then the original prime ring has a generalized polynomial identity (GPI). If additionally the ring of constants is semiprime then the original ring is PI. The case of a non-quasi-Frobenius inner part is also considered.

  13. Astronomical reach of fundamental physics

    Science.gov (United States)

    Burrows, Adam S.; Ostriker, Jeremiah P.

    2014-02-01

    Using basic physical arguments, we derive by dimensional and physical analysis the characteristic masses and sizes of important objects in the universe in terms of just a few fundamental constants. This exercise illustrates the unifying power of physics and the profound connections between the small and the large in the cosmos we inhabit. We focus on the minimum and maximum masses of normal stars, the corresponding quantities for neutron stars, the maximum mass of a rocky planet, the maximum mass of a white dwarf, and the mass of a typical galaxy. To zeroth order, we show that all these masses can be expressed in terms of either the Planck mass or the Chandrasekhar mass, in combination with various dimensionless quantities. With these examples, we expose the deep interrelationships imposed by nature between disparate realms of the universe and the amazing consequences of the unifying character of physical law.
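
    As a hedged numerical companion to this abstract, the sketch below evaluates the two mass scales it names: the Planck mass sqrt(hbar*c/G) and the Chandrasekhar mass scale (hbar*c/G)^(3/2)/m_p^2, here without the order-unity dimensionless prefactors that the full argument supplies.

        # Order-of-magnitude masses from fundamental constants (order-unity prefactors omitted).
        import math

        hbar = 1.054571817e-34   # J s
        c = 2.99792458e8         # m/s
        G = 6.67430e-11          # m^3 kg^-1 s^-2
        m_p = 1.67262192e-27     # proton mass, kg
        M_sun = 1.989e30         # kg

        m_planck = math.sqrt(hbar * c / G)
        m_chandra = (hbar * c / G) ** 1.5 / m_p ** 2   # Chandrasekhar mass scale

        print(f"Planck mass scale        ~ {m_planck:.3e} kg")
        print(f"Chandrasekhar mass scale ~ {m_chandra:.3e} kg ~ {m_chandra / M_sun:.2f} solar masses")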

  14. Materials Fundamentals of Gate Dielectrics

    CERN Document Server

    Demkov, Alexander A

    2006-01-01

    This book presents materials fundamentals of novel gate dielectrics that are being introduced into semiconductor manufacturing to ensure the continued scaling of CMOS devices. This is a very fast evolving field of research so we choose to focus on the basic understanding of the structure, thermodynamics, and electronic properties of these materials that determine their performance in device applications. Most of these materials are transition metal oxides. Ironically, the d-orbitals responsible for the high dielectric constant cause severe integration difficulties, thus intrinsically limiting high-k dielectrics. Though new in the electronics industry, many of these materials are well known in the field of ceramics, and we describe this unique connection. The complexity of the structure-property relations in TM oxides makes the use of state-of-the-art first-principles calculations necessary. Several chapters give a detailed description of the modern theory of polarization, and heterojunction band discont...

  15. Astronomical reach of fundamental physics.

    Science.gov (United States)

    Burrows, Adam S; Ostriker, Jeremiah P

    2014-02-18

    Using basic physical arguments, we derive by dimensional and physical analysis the characteristic masses and sizes of important objects in the universe in terms of just a few fundamental constants. This exercise illustrates the unifying power of physics and the profound connections between the small and the large in the cosmos we inhabit. We focus on the minimum and maximum masses of normal stars, the corresponding quantities for neutron stars, the maximum mass of a rocky planet, the maximum mass of a white dwarf, and the mass of a typical galaxy. To zeroth order, we show that all these masses can be expressed in terms of either the Planck mass or the Chandrasekhar mass, in combination with various dimensionless quantities. With these examples, we expose the deep interrelationships imposed by nature between disparate realms of the universe and the amazing consequences of the unifying character of physical law.

  16. History and progress on accurate measurements of the Planck constant

    Science.gov (United States)

    Steiner, Richard

    2013-01-01

    The measurement of the Planck constant, h, is entering a new phase. The CODATA 2010 recommended value is 6.626 069 57 × 10^-34 J s, but it has been a long road, and the trip is not over yet. Since its discovery as a fundamental physical constant to explain various effects in quantum theory, h has become especially important in defining standards for electrical measurements and soon, for mass determination. Measuring h in the International System of Units (SI) started as experimental attempts merely to prove its existence. Many decades passed while newer experiments measured physical effects that were the influence of h combined with other physical constants: elementary charge, e, and the Avogadro constant, N_A. As experimental techniques improved, the precision of the value of h expanded. When the Josephson and quantum Hall theories led to new electronic devices, and a hundred year old experiment, the absolute ampere, was altered into a watt balance, h not only became vital in definitions for the volt and ohm units, but suddenly it could be measured directly and even more accurately. Finally, as measurement uncertainties now approach a few parts in 10^8 from the watt balance experiments and Avogadro determinations, its importance has been linked to a proposed redefinition of a kilogram unit of mass. The path to higher accuracy in measuring the value of h was not always an example of continuous progress. Since new measurements periodically led to changes in its accepted value and the corresponding SI units, it is helpful to see why there were bumps in the road and where the different branch lines of research joined in the effort. Recalling the bumps along this road will hopefully avoid their repetition in the upcoming SI redefinition debates. This paper begins with a brief history of the methods to measure a combination of fundamental constants, thus indirectly obtaining the Planck constant. The historical path is followed in the section describing how the improved...
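
    One concrete piece of the electrical route described above can be written down directly: with the Josephson constant K_J = 2e/h and the von Klitzing constant R_K = h/e^2, the combination K_J^2 * R_K equals 4/h, so h = 4/(K_J^2 * R_K). The sketch below merely evaluates this algebra with the conventional 1990 values K_J-90 and R_K-90; it is an illustration of the relation, not a reproduction of any particular watt-balance result.

        # Planck constant from the conventional electrical constants:
        # K_J = 2e/h, R_K = h/e^2  =>  h = 4 / (K_J^2 * R_K).
        K_J = 4.835979e14    # Hz/V, conventional value K_J-90
        R_K = 25812.807      # ohm, conventional value R_K-90

        h = 4.0 / (K_J ** 2 * R_K)
        print(f"h ~ {h:.6e} J s")   # about 6.626e-34 J s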

  17. Development of Monopole Interaction Models for Ionic Compounds. Part I: Estimation of Aqueous Henry’s Law Constants for Ions and Gas Phase pKa Values for Acidic Compounds

    Science.gov (United States)

    The SPARC (SPARC Performs Automated Reasoning in Chemistry) physicochemical mechanistic models for neutral compounds have been extended to estimate Henry’s Law Constant (HLC) for charged species by incorporating ionic electrostatic interaction models. Combinations of absolute aq...

  18. Fundamentals of algebraic topology

    CERN Document Server

    Weintraub, Steven H

    2014-01-01

    This rapid and concise presentation of the essential ideas and results of algebraic topology follows the axiomatic foundations pioneered by Eilenberg and Steenrod. The approach of the book is pragmatic: while most proofs are given, those that are particularly long or technical are omitted, and results are stated in a form that emphasizes practical use over maximal generality. Moreover, to better reveal the logical structure of the subject, the separate roles of algebra and topology are illuminated. Assuming a background in point-set topology, Fundamentals of Algebraic Topology covers the canon of a first-year graduate course in algebraic topology: the fundamental group and covering spaces, homology and cohomology, CW complexes and manifolds, and a short introduction to homotopy theory. Readers wishing to deepen their knowledge of algebraic topology beyond the fundamentals are guided by a short but carefully annotated bibliography.

  19. Fundamentals of turbomachines

    CERN Document Server

    Dick, Erik

    2015-01-01

    This book explores the working principles of all kinds of turbomachines. The same theoretical framework is used to analyse the different machine types. Fundamentals are first presented and theoretical concepts are then elaborated for particular machine types, starting with the simplest ones.For each machine type, the author strikes a balance between building basic understanding and exploring knowledge of practical aspects. Readers are invited through challenging exercises to consider how the theory applies to particular cases and how it can be generalised.   The book is primarily meant as a course book. It teaches fundamentals and explores applications. It will appeal to senior undergraduate and graduate students in mechanical engineering and to professional engineers seeking to understand the operation of turbomachines. Readers will gain a fundamental understanding of turbomachines. They will also be able to make a reasoned choice of turbomachine for a particular application and to understand its operation...

  20. Monte Carlo fundamentals

    Energy Technology Data Exchange (ETDEWEB)

    Brown, F.B.; Sutton, T.M.

    1996-02-01

    This report is composed of the lecture notes from the first half of a 32-hour graduate-level course on Monte Carlo methods offered at KAPL. These notes, prepared by two of the principal developers of KAPL's RACER Monte Carlo code, cover the fundamental theory, concepts, and practices for Monte Carlo analysis. In particular, a thorough grounding in the basic fundamentals of Monte Carlo methods is presented, including random number generation, random sampling, the Monte Carlo approach to solving transport problems, computational geometry, collision physics, tallies, and eigenvalue calculations. Furthermore, modern computational algorithms for vector and parallel approaches to Monte Carlo calculations are covered in detail, including fundamental parallel and vector concepts, the event-based algorithm, master/slave schemes, parallel scaling laws, and portability issues.
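
    As a small hedged illustration of the random-sampling-for-transport theme of these notes (not code from the course itself), the sketch below samples exponentially distributed flight paths to estimate the uncollided fraction through a purely absorbing slab and compares it with the analytic attenuation exp(-sigma*d); the cross section and thickness are arbitrary.

        # Minimal Monte Carlo transport illustration: uncollided transmission through
        # a purely absorbing slab, compared with the analytic result exp(-sigma * d).
        import math
        import random

        def transmitted_fraction(sigma, d, n_samples=100_000, seed=12345):
            rng = random.Random(seed)
            hits = 0
            for _ in range(n_samples):
                # Distance to first collision, sampled from an exponential distribution.
                path = -math.log(1.0 - rng.random()) / sigma
                if path > d:
                    hits += 1
            return hits / n_samples

        sigma, d = 0.5, 3.0   # macroscopic cross section (1/cm) and slab thickness (cm)
        print(f"Monte Carlo: {transmitted_fraction(sigma, d):.4f}   analytic: {math.exp(-sigma * d):.4f}")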

  1. Fundamentals of Numerical Modelling of Casting Processes

    DEFF Research Database (Denmark)

    Pryds, Nini; Thorborg, Jesper; Lipinski, Marek;

    Fundamentals of Numerical Modelling of Casting Processes comprises a thorough presentation of the basic phenomena that need to be addressed in numerical simulation of casting processes. The main philosophy of the book is to present the topics in view of their physical meaning, whenever possible, rather than relying strictly on mathematical formalism. The book, aimed both at the researcher and the practicing engineer, as well as the student, is naturally divided into four parts. Part I (Chapters 1-3) introduces the fundamentals of modelling in a 1-dimensional framework. Part II (Chapter 4...

  2. Fundamentals of fluid lubrication

    Science.gov (United States)

    Hamrock, Bernard J.

    1991-01-01

    The aim is to coordinate the topics of design, engineering dynamics, and fluid dynamics in order to aid researchers in the area of fluid film lubrication. The lubrication principles that are covered can serve as a basis for the engineering design of machine elements. The fundamentals of fluid film lubrication are presented clearly so that students that use the book will have confidence in their ability to apply these principles to a wide range of lubrication situations. Some guidance on applying these fundamentals to the solution of engineering problems is also provided.

  3. Infosec management fundamentals

    CERN Document Server

    Dalziel, Henry

    2015-01-01

    Infosec Management Fundamentals is a concise overview of the Information Security management concepts and techniques, providing a foundational template for both experienced professionals and those new to the industry. This brief volume will also appeal to business executives and managers outside of infosec who want to understand the fundamental concepts of Information Security and how it impacts their business decisions and daily activities. Teaches ISO/IEC 27000 best practices on information security management Discusses risks and controls within the context of an overall information securi

  4. Homeschooling and religious fundamentalism

    Directory of Open Access Journals (Sweden)

    Robert KUNZMAN

    2010-10-01

    This article considers the relationship between homeschooling and religious fundamentalism by focusing on their intersection in the philosophies and practices of conservative Christian homeschoolers in the United States. Homeschooling provides an ideal educational setting to support several core fundamentalist principles: resistance to contemporary culture; suspicion of institutional authority and professional expertise; parental control and centrality of the family; and interweaving of faith and academics. It is important to recognize, however, that fundamentalism exists on a continuum; conservative religious homeschoolers resist liberal democratic values to varying degrees, and efforts to foster dialogue and accommodation with religious homeschoolers can ultimately help strengthen the broader civic fabric.

  5. Fundamentals of nonlinear optics

    CERN Document Server

    Powers, Peter E

    2011-01-01

    Peter Powers's rigorous but simple description of a difficult field keeps the reader's attention throughout. … All chapters contain a list of references and large numbers of practice examples to be worked through. … By carefully working through the proposed problems, students will develop a sound understanding of the fundamental principles and applications. … the book serves perfectly for an introductory-level course for second- and third-order nonlinear optical phenomena. The author's writing style is refreshing and original. I expect that Fundamentals of Nonlinear Optics will fast become pop

  6. Antennas fundamentals, design, measurement

    CERN Document Server

    Long, Maurice

    2009-01-01

    This comprehensive revision (3rd Edition) is a senior undergraduate or first-year graduate level textbook on antenna fundamentals, design, performance analysis, and measurements. In addition to its use as a formal course textbook, the book's pragmatic style and emphasis on the fundamentals make it especially useful to engineering professionals who need to grasp the essence of the subject quickly but without being mired in unnecessary detail. This new edition was prepared for a first year graduate course at Southern Polytechnic State University in Georgia. It provides broad coverage of antenna

  7. Fundamentals of magnetism

    CERN Document Server

    Reis, Mario

    2013-01-01

    The Fundamentals of Magnetism is a truly unique reference text that explores the study of magnetism and magnetic behavior with a depth that no other book can provide. It offers the most detailed description of the fundamentals of magnetism, with an emphasis on statistical mechanics, which is absolutely critical for understanding magnetic behavior. The book covers the classical areas of basic magnetism, including Landau Theory and magnetic interactions, but features a more concise and easy-to-read style. Perfect for upper-level graduate students and industry researchers, The Fu...

  8. Elastic constants of calcite

    Science.gov (United States)

    Peselnick, L.; Robie, R.A.

    1962-01-01

    The recent measurements of the elastic constants of calcite by Reddy and Subrahmanyam (1960) disagree with the values obtained independently by Voigt (1910) and Bhimasenachar (1945). The present authors, using an ultrasonic pulse technique at 3 Mc and 25 °C, determined the elastic constants of calcite using the exact equations governing the wave velocities in the single crystal. The results are C11=13.7, C33=8.11, C44=3.50, C12=4.82, C13=5.68, and C14=-2.00, in units of 10^11 dyn/cm^2. Independent checks of several of the elastic constants were made employing other directions and polarizations of the wave velocities. With the exception of C13, these values substantially agree with the data of Voigt and Bhimasenachar. © 1962 The American Institute of Physics.
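
    A small hedged check of what such constants imply physically: a longitudinal wave travelling along the c-axis of the crystal has speed roughly sqrt(C33/rho). C33 is taken from the abstract; the density below is an ordinary handbook value for calcite assumed for this illustration, not a number from the paper.

        # Longitudinal wave speed along the c-axis of calcite from C33 (converted to Pa).
        import math

        C33 = 8.11e10    # Pa  (8.11 x 10^11 dyn/cm^2, from the abstract)
        rho = 2710.0     # kg/m^3, approximate handbook density of calcite (assumed)

        v_long = math.sqrt(C33 / rho)
        print(f"longitudinal velocity along the c-axis ~ {v_long:.0f} m/s")   # roughly 5.5 km/s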

  9. Algorithm for structure constants

    CERN Document Server

    Paiva, F M

    2011-01-01

    In an $n$-dimensional Lie algebra, random numerical values are assigned by computer to $n(n-1)$ specially selected structure constants. An algorithm is then created which calculates the remaining constants without ambiguity, obeying the Jacobi conditions. Unlike other algorithms, this one is suitable even for a modest personal computer.
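
    For reference, the Jacobi conditions the abstract invokes read, in components, sum over m of C^m_ij C^l_mk + C^m_jk C^l_mi + C^m_ki C^l_mj = 0. The sketch below only verifies these conditions for a given set of structure constants (su(2), where C^k_ij is the Levi-Civita symbol); it is a checker, not an implementation of the paper's completion algorithm.

        # Verify the Jacobi conditions for structure constants C[i][j][k] (= C^k_ij),
        # shown for su(2), where the constants are the Levi-Civita symbol.
        from itertools import product

        def levi_civita(i, j, k):
            return (i - j) * (j - k) * (k - i) // 2   # +1, -1 or 0 for indices 0..2

        n = 3
        C = [[[levi_civita(i, j, k) for k in range(n)] for j in range(n)] for i in range(n)]

        def jacobi_ok(C):
            n = len(C)
            return all(
                sum(C[i][j][m] * C[m][k][l] + C[j][k][m] * C[m][i][l] + C[k][i][m] * C[m][j][l]
                    for m in range(n)) == 0
                for i, j, k, l in product(range(n), repeat=4)
            )

        print("Jacobi conditions satisfied:", jacobi_ok(C))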

  10. Radiographic constant exposure technique

    DEFF Research Database (Denmark)

    Domanus, Joseph Czeslaw

    1985-01-01

    The constant exposure technique has been applied to assess various industrial radiographic systems. Different X-ray films and radiographic papers of two producers were compared. Special attention was given to fast film and paper used with fluorometallic screens. Radiographic image quality was tested by the use of ISO wire IQI's and ASTM penetrameters used on Al and Fe test plates. Relative speed and reduction of kilovoltage obtained with the constant exposure technique were calculated. The advantages of fast radiographic systems are pointed out...

  11. The aliquot constant

    CERN Document Server

    Bosma, Wieb

    2009-01-01

    The average value of log s(n)/n taken over the first N even integers is shown to converge to a constant lambda when N tends to infinity; moreover, the value of this constant is approximated and proven to be less than 0. Here s(n) sums the divisors of n less than n. Thus the geometric mean of s(n)/n, the growth factor of the function s, in the long run tends to be less than 1. This could be interpreted as probabilistic evidence that aliquot sequences tend to remain bounded.
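
    A quick hedged look at the quantity in question: the sketch below averages log(s(n)/n) over the first N even integers, with s(n) the sum of the proper divisors of n; the N used here is small and purely illustrative, far too small to pin down the constant itself.

        # Empirical average of log(s(n)/n) over the first N even integers,
        # where s(n) sums the divisors of n that are less than n.
        import math

        def s(n):
            total = 1 if n > 1 else 0
            for d in range(2, math.isqrt(n) + 1):
                if n % d == 0:
                    total += d
                    if n // d != d:
                        total += n // d
            return total

        N = 10_000
        avg = sum(math.log(s(n) / n) for n in range(2, 2 * N + 1, 2)) / N
        print(f"average of log(s(n)/n) over the first {N} even integers: {avg:.4f}")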

  12. Homeschooling and Religious Fundamentalism

    Science.gov (United States)

    Kunzman, Robert

    2010-01-01

    This article considers the relationship between homeschooling and religious fundamentalism by focusing on their intersection in the philosophies and practices of conservative Christian homeschoolers in the United States. Homeschooling provides an ideal educational setting to support several core fundamentalist principles: resistance to…

  13. Fundamental partial compositeness

    DEFF Research Database (Denmark)

    Sannino, Francesco; Strumia, Alessandro; Tesi, Andrea

    2016-01-01

    We construct renormalizable Standard Model extensions, valid up to the Planck scale, that give a composite Higgs from a new fundamental strong force acting on fermions and scalars. Yukawa interactions of these particles with Standard Model fermions realize the partial compositeness scenario. Unde...

  14. Fundamental Physics Microgravity Sensitivity

    Science.gov (United States)

    Israelsson, Ulf

    1998-01-01

    An introduction followed by a brief discussion about the sensitivity to microgravity environment disturbances for some recent and planned experiments in microgravity fundamental physics will be presented. In particular, correlation between gravity disturbances and the quality of science data sets measured by the Confined Helium Experiment (CHEX) during ground testing and during the November 1997 USMP-4 flight will be described.

  15. Fundamental Metallurgy of Solidification

    DEFF Research Database (Denmark)

    Tiedje, Niels

    2004-01-01

    The text takes the reader through some fundamental aspects of solidification, with focus on understanding the basic physics that govern solidification in casting and welding. It is described how the first solid is formed and which factors affect nucleation. It is described how crystals grow from ...

  16. Fundamentals of convolutional coding

    CERN Document Server

    Johannesson, Rolf

    2015-01-01

    Fundamentals of Convolutional Coding, Second Edition, regarded as a bible of convolutional coding, brings you a clear and comprehensive discussion of the basic principles of this field * Two new chapters on low-density parity-check (LDPC) convolutional codes and iterative coding * Viterbi, BCJR, BEAST, list, and sequential decoding of convolutional codes * Distance properties of convolutional codes * Includes a downloadable solutions manual

  17. Fundamentals of soil science

    Science.gov (United States)

    This study guide provides comments and references for professional soil scientists who are studying for the soil science fundamentals exam needed as the first step for certification. The performance objectives were determined by the Soil Science Society of America's Council of Soil Science Examiners...

  18. Fundamentals and Optimal Institutions

    DEFF Research Database (Denmark)

    Gonzalez-Eiras, Martin; Harmon, Nikolaj Arpe; Rossi, Martín

    2016-01-01

    of regulatory institutions such as revenue sharing, salary caps or luxury taxes. We show, theoretically and empirically, that these large differences in adopted institutions can be rationalized as optimal responses to differences in the fundamental characteristics of the sports being played. This provides...

  19. Fundamentals of astrodynamics

    NARCIS (Netherlands)

    Wakker, K.F.

    2015-01-01

    This book deals with the motion of the center of mass of a spacecraft; this discipline is generally called astrodynamics. The book focuses on an analytical treatment of the motion of spacecraft and provides insight into the fundamentals of spacecraft orbit dynamics. A large number of topics are trea

  20. Fundamentals of plasma physics

    CERN Document Server

    Bittencourt, J A

    1986-01-01

    A general introduction designed to present a comprehensive, logical and unified treatment of the fundamentals of plasma physics based on statistical kinetic theory. Its clarity and completeness make it suitable for self-learning and self-paced courses. Problems are included.

  1. Lasers and optoelectronics fundamentals, devices and applications

    CERN Document Server

    Maini, Anil K

    2013-01-01

    With emphasis on the physical and engineering principles, this book provides a comprehensive and highly accessible treatment of modern lasers and optoelectronics. Divided into four parts, it explains laser fundamentals, types of lasers, laser electronics & optoelectronics, and laser applications, covering each of the topics in their entirety, from basic fundamentals to advanced concepts. Key features include: exploration of technological and application-related aspects of lasers and optoelectronics, detailing both existing and emerging applications in industry, medical diag

  2. Multiphase flow dynamics 1 fundamentals

    CERN Document Server

    Kolev, Nikolay Ivanov

    2004-01-01

    Multi-phase flows are part of our natural environment such as tornadoes, typhoons, air and water pollution and volcanic activities as well as part of industrial technology such as power plants, combustion engines, propulsion systems, or chemical and biological industry. The industrial use of multi-phase systems requires analytical and numerical strategies for predicting their behavior. In its third extended edition this monograph contains theory, methods and practical experience for describing complex transient multi-phase processes in arbitrary geometrical configurations, providing a systematic presentation of the theory and practice of numerical multi-phase fluid dynamics. In the present first volume the fundamentals of multiphase dynamics are provided. This third edition includes various updates, extensions and improvements in all book chapters.

  3. Multiphase flow dynamics 1 fundamentals

    CERN Document Server

    Kolev, Nikolay Ivanov

    2007-01-01

    Multi-phase flows are part of our natural environment such as tornadoes, typhoons, air and water pollution and volcanic activities as well as part of industrial technology such as power plants, combustion engines, propulsion systems, or chemical and biological industry. The industrial use of multi-phase systems requires analytical and numerical strategies for predicting their behavior. In its third extended edition this monograph contains theory, methods and practical experience for describing complex transient multi-phase processes in arbitrary geometrical configurations, providing a systematic presentation of the theory and practice of numerical multi-phase fluid dynamics. In the present first volume the fundamentals of multiphase dynamics are provided. This third edition includes various updates, extensions and improvements in all book chapters.

  4. Measuring Boltzmann's Constant with Carbon Dioxide

    Science.gov (United States)

    Ivanov, Dragia; Nikolov, Stefan

    2013-01-01

    In this paper we present two experiments to measure Boltzmann's constant--one of the fundamental constants of modern-day physics, which lies at the base of statistical mechanics and thermodynamics. The experiments use very basic theory, simple equipment and cheap and safe materials yet provide very precise results. They are very easy and…

  5. Measuring Boltzmann's Constant with Carbon Dioxide

    Science.gov (United States)

    Ivanov, Dragia; Nikolov, Stefan

    2013-01-01

    In this paper we present two experiments to measure Boltzmann's constant--one of the fundamental constants of modern-day physics, which lies at the base of statistical mechanics and thermodynamics. The experiments use very basic theory, simple equipment and cheap and safe materials yet provide very precise results. They are very easy and…

  6. Compassion is a constant.

    Science.gov (United States)

    Scott, Tricia

    2015-11-01

    Compassion is a powerful word that describes an intense feeling of commiseration and a desire to help those struck by misfortune. Most people know intuitively how and when to offer compassion to relieve another person's suffering. In health care, compassion is a constant; it cannot be rationed because emergency nurses have limited time or resources to manage increasing demands.

  7. PEM Fuel Cells - Fundamentals, Modeling and Applications

    Directory of Open Access Journals (Sweden)

    Maher A.R. Sadiq Al-Baghdadi

    2013-01-01

    Full Text Available Part I: Fundamentals Chapter 1: Introduction. Chapter 2: PEM fuel cell thermodynamics, electrochemistry, and performance. Chapter 3: PEM fuel cell components. Chapter 4: PEM fuel cell failure modes. Part II: Modeling and Simulation Chapter 5: PEM fuel cell models based on semi-empirical simulation. Chapter 6: PEM fuel cell models based on computational fluid dynamics. Part III: Applications Chapter 7: PEM fuel cell system design and applications.

  8. Minimal surfaces in symmetric spaces with parallel second fundamental form

    Indian Academy of Sciences (India)

    XIAOXIANG JIAO; MINGYAN LI

    2017-09-01

    In this paper, we study the geometry of isometric minimal immersions of Riemannian surfaces in a symmetric space by moving frames and prove that the Gaussian curvature must be constant if the immersion is of parallel second fundamental form. In particular, when the surface is $S^2$, we discuss the special case and obtain a necessary and sufficient condition such that its second fundamental form is parallel. We also consider isometric minimal two-spheres immersed in complex two-dimensional Kähler symmetric spaces with parallel second fundamental form, and prove that the immersion is totally geodesic with constant Kähler angle if it is neither holomorphic nor antiholomorphic with Kähler angle $\alpha$

  9. Fundamental stellar parameters

    CERN Document Server

    Wittkowski, M

    2004-01-01

    I present a discussion of fundamental stellar parameters and their observational determination in the context of interferometric measurements with current and future optical/infrared interferometric facilities. Stellar parameters and the importance of their determination for stellar physics are discussed. One of the primary uses of interferometry in the field of stellar physics is the measurement of the intensity profile across the stellar disk, both as a function of position angle and of wavelength. High-precision fundamental stellar parameters are also derived by characterizations of binary and multiple system using interferometric observations. This topic is discussed in detail elsewhere in these proceedings. Comparison of observed spectrally dispersed center-to-limb intensity variations with models of stellar atmospheres and stellar evolution may result in an improved understanding of key phenomena in stellar astrophysics such as the precise evolutionary effects on the main sequence, the evolution of meta...

  10. Fundamentals of nuclear physics

    CERN Document Server

    Takigawa, Noboru

    2017-01-01

    This book introduces the current understanding of the fundamentals of nuclear physics by referring to key experimental data and by providing a theoretical understanding of principal nuclear properties. It primarily covers the structure of nuclei at low excitation in detail. It also examines nuclear forces and decay properties. In addition to fundamentals, the book treats several new research areas such as non-relativistic as well as relativistic Hartree–Fock calculations, the synthesis of super-heavy elements, the quantum chromodynamics phase diagram, and nucleosynthesis in stars, to convey to readers the flavor of current research frontiers in nuclear physics. The authors explain semi-classical arguments and derivation of its formulae. In these ways an intuitive understanding of complex nuclear phenomena is provided. The book is aimed at graduate school students as well as junior and senior undergraduate students and postdoctoral fellows. It is also useful for researchers to update their knowledge of diver...

  11. Fundamentals of differential beamforming

    CERN Document Server

    Benesty, Jacob; Pan, Chao

    2016-01-01

    This book provides a systematic study of the fundamental theory and methods of beamforming with differential microphone arrays (DMAs), or differential beamforming in short. It begins with a brief overview of differential beamforming and some popularly used DMA beampatterns such as the dipole, cardioid, hypercardioid, and supercardioid, before providing essential background knowledge on orthogonal functions and orthogonal polynomials, which form the basis of differential beamforming. From a physical perspective, a DMA of a given order is defined as an array that measures the differential acoustic pressure field of that order; such an array has a beampattern in the form of a polynomial whose degree is equal to the DMA order. Therefore, the fundamental and core problem of differential beamforming boils down to the design of beampatterns with orthogonal polynomials. But certain constraints also have to be considered so that the resulting beamformer does not seriously amplify the sensors' self noise and the mism...
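    As an illustration of the polynomial beampatterns described above, a first-order DMA is commonly parametrized as follows (a standard textbook form used here for concreteness, not a formula quoted from this book):

```latex
% First-order differential beampattern: a degree-1 polynomial in cos(theta).
% a_1 = 1 gives a dipole; a_1 = 0.5 gives a cardioid with its null at theta = pi.
B_1(\theta) = (1 - a_1) + a_1 \cos\theta, \qquad 0 \le a_1 \le 1 .
```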

  12. Fundamentals of Polarized Light

    Science.gov (United States)

    Mishchenko, Michael

    2003-01-01

    The analytical and numerical basis for describing scattering properties of media composed of small discrete particles is formed by the classical electromagnetic theory. Although there are several excellent textbooks outlining the fundamentals of this theory, it is convenient for our purposes to begin with a summary of those concepts and equations that are central to the subject of this book and will be used extensively in the following chapters. We start by formulating Maxwell's equations and constitutive relations for time- harmonic macroscopic electromagnetic fields and derive the simplest plane-wave solution that underlies the basic optical idea of a monochromatic parallel beam of light. This solution naturally leads to the introduction of such fundamental quantities as the refractive index and the Stokes parameters. Finally, we define the concept of a quasi-monochromatic beam of light and discuss its implications.

  13. What is Fundamental?

    CERN Multimedia

    2004-01-01

    Discussing what is fundamental in a variety of fields, biologist Richard Dawkins, physicist Gerardus 't Hooft, and mathematician Alain Connes spoke to a packed Main Auditorium at CERN 15 October. Dawkins, Professor of the Public Understanding of Science at Oxford University, explained simply the logic behind Darwinian natural selection, and how it would seem to apply anywhere in the universe that had the right conditions. 't Hooft, winner of the 1999 Physics Nobel Prize, outlined some of the main problems in physics today, and said he thinks physics is so fundamental that even alien scientists from another planet would likely come up with the same basic principles, such as relativity and quantum mechanics. Connes, winner of the 1982 Fields Medal (often called the Nobel Prize of Mathematics), explained how physics is different from mathematics, which he described as a "factory for concepts," unfettered by connection to the physical world. On 16 October, anthropologist Sharon Traweek shared anecdotes from her ...

  14. Fundamental composite electroweak dynamics

    DEFF Research Database (Denmark)

    Arbey, Alexandre; Cacciapaglia, Giacomo; Cai, Haiying

    2017-01-01

    Using the recent joint results from the ATLAS and CMS collaborations on the Higgs boson, we determine the current status of composite electroweak dynamics models based on the expected scalar sector. Our analysis can be used as a minimal template for a wider class of models between the two limiting cases of composite Goldstone Higgs and Technicolor-like ones. This is possible due to the existence of a unified description, both at the effective and fundamental Lagrangian levels, of models of composite Higgs dynamics where the Higgs boson itself can emerge, depending on the way the electroweak … space at the effective Lagrangian level. We show that a wide class of models of fundamental composite electroweak dynamics are still compatible with the present constraints. The results are relevant for the ongoing and future searches at the Large Hadron Collider.

  15. Fundamentals of Stochastic Networks

    CERN Document Server

    Ibe, Oliver C

    2011-01-01

    An interdisciplinary approach to understanding queueing and graphical networks In today's era of interdisciplinary studies and research activities, network models are becoming increasingly important in various areas where they have not regularly been used. Combining techniques from stochastic processes and graph theory to analyze the behavior of networks, Fundamentals of Stochastic Networks provides an interdisciplinary approach by including practical applications of these stochastic networks in various fields of study, from engineering and operations management to communications and the physi

  16. Fundamentals of queueing theory

    CERN Document Server

    Gross, Donald; Thompson, James M; Harris, Carl M

    2013-01-01

    Praise for the Third Edition: "This is one of the best books available. Its excellent organizational structure allows quick reference to specific models and its clear presentation . . . solidifies the understanding of the concepts being presented." -IIE Transactions on Operations Engineering. Thoroughly revised and expanded to reflect the latest developments in the field, Fundamentals of Queueing Theory, Fourth Edition continues to present the basic statistical principles that are necessary to analyze the probabilistic nature of queues. Rather than pre

  17. Fundamentals of Monte Carlo

    Energy Technology Data Exchange (ETDEWEB)

    Wollaber, Allan Benton [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-06-16

    This is a PowerPoint presentation which serves as lecture material for the Parallel Computing summer school. It goes over the fundamentals of the Monte Carlo calculation method. The material is presented according to the following outline: Introduction (background, a simple example: estimating π), Why does this even work? (The Law of Large Numbers, The Central Limit Theorem), How to sample (inverse transform sampling, rejection), and An example from particle transport.
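    The outline's introductory example, estimating π, can be made concrete with a minimal sketch (the sample count and structure below are illustrative, not taken from the presentation):

```python
# Minimal Monte Carlo estimate of pi: draw points uniformly in the unit square
# and count the fraction that lands inside the quarter circle x^2 + y^2 <= 1.
# That fraction estimates pi/4; the Law of Large Numbers guarantees convergence
# and the Central Limit Theorem gives the ~1/sqrt(N) error scaling.
import random

def estimate_pi(n_samples: int = 1_000_000) -> float:
    inside = 0
    for _ in range(n_samples):
        x, y = random.random(), random.random()
        if x * x + y * y <= 1.0:
            inside += 1
    return 4.0 * inside / n_samples

if __name__ == "__main__":
    print(estimate_pi())  # typically prints a value close to 3.1416
```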

  18. High voltage engineering fundamentals

    CERN Document Server

    Kuffel, E; Hammond, P

    1984-01-01

    Provides a comprehensive treatment of high voltage engineering fundamentals at the introductory and intermediate levels. It covers: techniques used for generation and measurement of high direct, alternating and surge voltages for general application in industrial testing and selected special examples found in basic research; analytical and numerical calculation of electrostatic fields in simple practical insulation system; basic ionisation and decay processes in gases and breakdown mechanisms of gaseous, liquid and solid dielectrics; partial discharges and modern discharge detectors; and over

  19. Biomedical engineering fundamentals

    CERN Document Server

    Bronzino, Joseph D

    2014-01-01

    Known as the bible of biomedical engineering, The Biomedical Engineering Handbook, Fourth Edition, sets the standard against which all other references of this nature are measured. As such, it has served as a major resource for both skilled professionals and novices to biomedical engineering.Biomedical Engineering Fundamentals, the first volume of the handbook, presents material from respected scientists with diverse backgrounds in physiological systems, biomechanics, biomaterials, bioelectric phenomena, and neuroengineering. More than three dozen specific topics are examined, including cardia

  20. Fundamentals of neurobiology.

    Science.gov (United States)

    Greg Hall, D

    2011-01-01

    Session 1 of the 2010 STP/IFSTP Joint Symposium on Toxicologic Neuropathology, titled "Fundamentals of Neurobiology," was organized to provide a foundation for subsequent sessions by presenting essential elements of neuroanatomy and nervous system function. A brief introduction to the session titled "Introduction to Correlative Neurobiology" was provided by Dr. Greg Hall (Eli Lilly and Company, Indianapolis, IN). Correlative neurobiology refers to considerations of the relationships between the highly organized and compartmentalized structure of nervous tissues and the functioning within this system.

  1. Fundamentals of linear algebra

    CERN Document Server

    Dash, Rajani Ballav

    2008-01-01

    Fundamentals of Linear Algebra is a comprehensive textbook that can be used by students and teachers of all Indian universities. The text is written in an easy, understandable form and covers all topics of the UGC curriculum. There are plenty of worked-out examples, which help students solve the problems without anybody's help. The problem sets have been designed keeping in view the questions asked in different examinations.

  2. Neutrons and Fundamental Symmetries

    Energy Technology Data Exchange (ETDEWEB)

    Plaster, Bradley [Univ. of Kentucky, Lexington, KY (United States). Dept. of Physics and Astronomy

    2016-01-11

    The research supported by this project addressed fundamental open physics questions via experiments with subatomic particles. In particular, neutrons constitute an especially ideal “laboratory” for fundamental physics tests, as their sensitivities to the four known forces of nature permit a broad range of tests of the so-called “Standard Model”, our current best physics model for the interactions of subatomic particles. Although the Standard Model has been a triumphant success for physics, it does not provide satisfactory answers to some of the most fundamental open questions in physics, such as: are there additional forces of nature beyond the gravitational, electromagnetic, weak nuclear, and strong nuclear forces?, or why does our universe consist of more matter than anti-matter? This project also contributed significantly to the training of the next generation of scientists, of considerable value to the public. Young scientists, ranging from undergraduate students to graduate students to post-doctoral researchers, made significant contributions to the work carried out under this project.

  3. Value of Fundamental Science

    Science.gov (United States)

    Burov, Alexey

    Fundamental science is a hard, long-term human adventure that has required high devotion and social support, especially significant in our epoch of Mega-science. The measure of this devotion and this support expresses the real value of fundamental science in public opinion. Why does fundamental science have value? What determines its strength and what endangers it? The dominant answer is that the value of science arises out of curiosity and is supported by technological progress. Is this really a good, astute answer? When trying to attract public support, we talk about the ``mystery of the universe''. Why do these words sound so attractive? What is implied by and what is incompatible with them? More than two centuries ago, Immanuel Kant asserted an inseparable entanglement between ethics and metaphysics. Thus, we may ask: which metaphysics supports the value of scientific cognition, and which does not? Should we continue to neglect the dependence of the value of pure science on metaphysics? If not, how can this issue be addressed in the public outreach? Is the public alienated by one or another message coming from the face of science? What does it mean to be politically correct in this sort of discussion?

  4. Influence of the drum moment of inertia and of different constant feeding angles on the particle size of forage processed with manually fed drum-type choppers. Part I

    Directory of Open Access Journals (Sweden)

    Pedro A. Valdés Hernández

    2010-01-01

    Full Text Available The objective of this research is to determine the influence of the chopping drum's moment of inertia and of different constant feeding angles on the particle size of forage processed with manually fed, drum-type choppers for thick-stalked forage. The study applies physico-mathematical modelling of the technological process, and its results are used as the basis for the design of experiments carried out while processing stalks of the sugar cane variety C323-68, the variety most widely used in Cuba for cattle feeding. The physico-mathematical modelling of the process allows the particle size to be predicted with errors no greater than 15.3%, at a significance level of 1%, for rational values of the drum moment of inertia calculated on theoretical grounds. Rationalizing the moment of inertia of the nationally produced MF-IIMA model EM-01 forage chopper increased the quality of the shredding, reflected in a higher percentage of particles with sizes below 20 mm, between 43 and 48%. The theoretical modelling proved to be a valuable working tool, enabling the design of this type of forage chopper to be improved.

  5. Can compactifications solve the cosmological constant problem?

    Energy Technology Data Exchange (ETDEWEB)

    Hertzberg, Mark P. [Institute of Cosmology, Department of Physics and Astronomy, Tufts University,574 Boston Ave, Medford, MA 02155 (United States); Center for Theoretical Physics, Department of Physics,Massachusetts Institute of Technology,77 Massachusetts Ave, Cambridge, MA 02139 (United States); Masoumi, Ali [Institute of Cosmology, Department of Physics and Astronomy, Tufts University,574 Boston Ave, Medford, MA 02155 (United States)

    2016-06-30

    Recently, there have been claims in the literature that the cosmological constant problem can be dynamically solved by specific compactifications of gravity from higher-dimensional toy models. These models have the novel feature that in the four-dimensional theory, the cosmological constant Λ is much smaller than the Planck density and in fact accumulates at Λ=0. Here we show that while these are very interesting models, they do not properly address the real cosmological constant problem. As we explain, the real problem is not simply to obtain Λ that is small in Planck units in a toy model, but to explain why Λ is much smaller than other mass scales (and combinations of scales) in the theory. Instead, in these toy models, all other particle mass scales have been either removed or sent to zero, thus ignoring the real problem. To this end, we provide a general argument that the included moduli masses are generically of order Hubble, so sending them to zero trivially sends the cosmological constant to zero. We also show that the fundamental Planck mass is being sent to zero, and so the central problem is trivially avoided by removing high energy physics altogether. On the other hand, by including various large mass scales from particle physics with a high fundamental Planck mass, one is faced with a real problem, whose only known solution involves accidental cancellations in a landscape.

  6. Constant-pressure Blowers

    Science.gov (United States)

    Sorensen, E

    1940-01-01

    The conventional axial blowers operate on the high-pressure principle. One drawback of this type of blower is the relatively low pressure head, which one attempts to overcome with axial blowers producing very high pressure at a given circumferential speed. The Schicht constant-pressure blower affords pressure ratios considerably higher than those of axial blowers of conventional design with approximately the same efficiency.

  7. String Scale Cosmological Constant

    OpenAIRE

    Chalmers, Gordon

    2006-01-01

    The cosmological constant is a phenomenon of nature that has so far remained unexplained and that requires an explanation through string effects. The apparent discrepancy between theory and experiment is enormous and has already been explained several times by the author, including mechanisms. In this work the string theory of abolished string modes is documented and given perturbatively to all loop orders. The holographic underpinning is also exposed. The matching with the data of the LIGO and D0 experi...

  8. The Hubble Constant.

    Science.gov (United States)

    Jackson, Neal

    2015-01-01

    I review the current state of determinations of the Hubble constant, which gives the length scale of the Universe by relating the expansion velocity of objects to their distance. There are two broad categories of measurements. The first uses individual astrophysical objects which have some property that allows their intrinsic luminosity or size to be determined, or allows the determination of their distance by geometric means. The second category comprises the use of all-sky cosmic microwave background, or correlations between large samples of galaxies, to determine information about the geometry of the Universe and hence the Hubble constant, typically in a combination with other cosmological parameters. Many, but not all, object-based measurements give H_0 values of around 72-74 km s^-1 Mpc^-1, with typical errors of 2-3 km s^-1 Mpc^-1. This is in mild discrepancy with CMB-based measurements, in particular those from the Planck satellite, which give values of 67-68 km s^-1 Mpc^-1 and typical errors of 1-2 km s^-1 Mpc^-1. The size of the remaining systematics indicates that accuracy rather than precision is the remaining problem in a good determination of the Hubble constant. Whether a discrepancy exists, and whether new physics is needed to resolve it, depends on details of the systematics of the object-based methods, and also on the assumptions about other cosmological parameters and which datasets are combined in the case of the all-sky methods.
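    For concreteness, the relation underlying the review is the Hubble law, with the two measurement categories above disagreeing mildly on the value of the constant:

```latex
v = H_0 \, d , \qquad H_0 \approx 67\text{--}74 \ \mathrm{km\ s^{-1}\ Mpc^{-1}} .
```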

  9. Universe of constant

    Science.gov (United States)

    Yongquan, Han

    2016-10-01

    The ideal gas equation of state is not applicable to an ordinary gas; it should be applied to the electromagnetic ``gas'', that is, to radiation. Radiation should be the ultimate state of matter changes, or its initial state, and the universe is filled with radiation; that is, the ideal gas equation of state is suitable for the singular point and for the universe. Someone may consider that no vessel can accommodate radiation, but that is only because an ordinary container is too small: if the radius of your container were the distance that light travels in an hour, would you still think it cannot accommodate radiation? Modern science determines that the radius of the universe is now about 10^27 m. Assuming that the universe is a sphere, its volume is approximately V = 4.19 × 10^81 cubic meters; the radiation temperature of the universe (the cosmic microwave background temperature, which should be closest to the average temperature of the universe) is T = 3.15 K, and the radiation pressure is P = 5 × 10^-6 N/m^2. According to the ideal gas equation of state, PV/T = constant = 6 × 10^75; this is the value of the constant for the universe, and the singular point should also equal this constant.
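    A quick arithmetic check of the figures quoted in the abstract, using the author's own values and rounding:

```latex
\frac{PV}{T} \approx \frac{(5\times 10^{-6}\ \mathrm{N\,m^{-2}})\,(4.19\times 10^{81}\ \mathrm{m^{3}})}{3.15\ \mathrm{K}}
\approx 6.7\times 10^{75}\ \mathrm{J\,K^{-1}} ,
```

    which is consistent with the quoted value of about 6 × 10^75.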

  10. Ion transport with charge-protected and non-charge-protected cations in alcohol-based electrolytes using the compensated Arrhenius formalism. Part I: ionic conductivity and the static dielectric constant.

    Science.gov (United States)

    Petrowsky, Matt; Fleshman, Allison; Frech, Roger

    2012-05-17

    The temperature dependence of ionic conductivity and the static dielectric constant is examined for 0.30 m TbaTf- or LiTf-1-alcohol solutions. Above ambient temperature, the conductivity increases with temperature to a greater extent in electrolytes whose salt has a charge-protected cation. Below ambient temperature, the dielectric constant changes only slightly with temperature in electrolytes whose salt has a cation that is not charge-protected. The compensated Arrhenius formalism is used to describe the temperature-dependent conductivity in terms of the contributions from both the exponential prefactor σ_0 and the Boltzmann factor exp(-E_a/RT). This analysis explains why the conductivity decreases with increasing temperature above 65 °C for the LiTf-dodecanol electrolyte. At higher temperatures, the decrease in the exponential prefactor is greater than the increase in the Boltzmann factor.
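    The decomposition referred to above can be written schematically as follows (a sketch of the compensated Arrhenius form, in which the prefactor is assumed to depend on temperature only through the static dielectric constant ε_s):

```latex
\sigma(T) \;=\; \sigma_0\big(\varepsilon_s(T)\big)\,\exp\!\left(-\frac{E_a}{RT}\right) .
```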

  11. Beyond lensing by the cosmological constant

    CERN Document Server

    Faraoni, Valerio

    2016-01-01

    The long-standing problem of whether the cosmological constant affects directly the deflection of light caused by a gravitational lens is reconsidered. We use a new approach based on the Hawking quasilocal mass of a sphere grazed by light rays and on its splitting into local and cosmological parts. Previous literature restricted to the cosmological constant is extended to any form of dark energy accelerating the universe in which the gravitational lens is embedded.

  12. Beyond lensing by the cosmological constant

    Science.gov (United States)

    Faraoni, Valerio; Lapierre-Léonard, Marianne

    2017-01-01

    The long-standing problem of whether the cosmological constant affects directly the deflection of light caused by a gravitational lens is reconsidered. We use a new approach based on the Hawking quasilocal mass of a sphere grazed by light rays and on its splitting into local and cosmological parts. Previous literature restricted to the cosmological constant is extended to any form of dark energy accelerating the universe in which the gravitational lens is embedded.

  13. Design of the Key Parts of Force Balanced Constant Spring Hangers Based on MATLAB

    Institute of Scientific and Technical Information of China (English)

    章志荣

    2011-01-01

    A mechanics model for force balanced constant spring hangers is put forward. The curve differential equation of the knife cam is derived from the working principle of the constant hanger; the equation is solved and the cam curves are plotted with MATLAB. A mathematical model for the optimal design of the main spring is established, and its parameters are optimized with the MATLAB genetic algorithm tool. Designing force balanced constant spring hangers in this way improves design quality and design efficiency.
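    As a rough, hypothetical illustration of the spring-optimization step described above (SciPy's differential evolution stands in for the MATLAB genetic algorithm tool; the objective, constraint and all numerical values below are assumptions, not the paper's data):

```python
# Sketch: pick helical-spring parameters that minimize wire mass while meeting
# a target stiffness, using an evolutionary optimizer as a GA stand-in.
import numpy as np
from scipy.optimize import differential_evolution

G = 79.3e9        # shear modulus of spring steel, Pa (typical value)
rho = 7850.0      # steel density, kg/m^3
k_target = 2.0e4  # required spring stiffness, N/m (assumed)

def objective(x):
    d, D, n = x                                   # wire dia., coil dia., active coils
    k = G * d**4 / (8.0 * D**3 * n)               # stiffness of a helical compression spring
    mass = rho * (np.pi * d**2 / 4.0) * (np.pi * D * n)   # wire volume * density
    # Soft penalty for missing the required stiffness (coil count treated as continuous).
    return mass + 1e3 * ((k - k_target) / k_target) ** 2

bounds = [(0.004, 0.02),   # wire diameter d, m
          (0.04, 0.12),    # mean coil diameter D, m
          (4.0, 20.0)]     # number of active coils n

result = differential_evolution(objective, bounds, seed=0)
print("optimal [d, D, n]:", result.x, "objective:", result.fun)
```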

  14. Influence of Constant High Ambient Temperature on Fat Metabolism of Different Parts in Finishing Pigs

    Institute of Scientific and Technical Information of China (English)

    吴鑫; 冯京海; 张敏红; 苏红光; 贾安峰

    2015-01-01

    [Objective] The objective of this study is to investigate the effects of constant high ambient temperature on fat metabolism in finishing pigs and to preliminarily explore the mechanism of the impact. [Method] Sixteen Duroc × Landrace × Large White castrated male pigs were randomly assigned into a high-temperature environment (HT group: 30℃, ad libitum) and a normal thermal group (NT group: 22℃, ad libitum) with eight pigs in each treatment. Pigs were housed in individual wire cages under a 14-h lighting schedule and had free access to water. The experiment lasted for 3 weeks, and the temperature was kept unchanged during this time. The relative humidity in the room was controlled at (55±5)%. The pigs were electrically stunned and exsanguinated after a 12-h period of feed withdrawal with free access to water at the end of the experiment. [Result] The results of the experiment showed that the carcass weight and backfat depth at 30℃ were lower than those at 22℃, but the differences were not significant (P>0.10). High ambient temperature showed a trend to increase the proportion of flare fat in carcass weight (+22.06%, P=0.07) and to decrease the lipid content of longissimus dorsi (LM) (-22.39%, P=0.08). The activities of fatty acid synthase (FAS) (P0.10). High ambient temperature significantly increased the content of LPL in flare fat (P=0.05), and decreased the content of LPL in LM (P=0.05). The pattern by which high ambient temperature influenced fat deposition in the three parts was in accordance with the pattern by which it influenced the LPL content in the same part, which means that high ambient temperature may influence fat deposition by regulating the content of LPL. The activities of β-hydroxyacyl coenzyme A dehydrogenase (HAD) at the front (P0.10). The plasma concentration of nonesterified fatty acid (NEFA) was higher (P0.10). [Conclusion] The results demonstrated that high ambient temperature had different effects on adipose tissues in different

  15. Information security fundamentals

    CERN Document Server

    Blackley, John A; Peltier, Justin

    2004-01-01

    Effective security rules and procedures do not exist for their own sake-they are put in place to protect critical assets, thereby supporting overall business objectives. Recognizing security as a business enabler is the first step in building a successful program.Information Security Fundamentals allows future security professionals to gain a solid understanding of the foundations of the field and the entire range of issues that practitioners must address. This book enables students to understand the key elements that comprise a successful information security program and eventually apply thes

  16. The fundamental group

    Directory of Open Access Journals (Sweden)

    Carlos A. Robles Corbalá

    2015-12-01

    Full Text Available This article addresses a classical problem: detecting whether or not two topological spaces are homeomorphic. To this end, an algebraic group is associated with each topological space, in such a way that if the spaces are homeomorphic then the associated groups are isomorphic. A construction of the fundamental group of a topological space is presented, with a focus on proving that it is indeed a group.

  17. Fundamentals of calculus

    CERN Document Server

    Morris, Carla C

    2015-01-01

    Fundamentals of Calculus encourages students to use power, quotient, and product rules for solutions as well as stresses the importance of modeling skills. In addition to core integral and differential calculus coverage, the book features finite calculus, which lends itself to modeling and spreadsheets. Specifically, finite calculus is applied to marginal economic analysis, finance, growth, and decay. Includes: Linear Equations and Functions; The Derivative; Using the Derivative; Exponential and Logarithmic Functions; Techniques of Differentiation; Integral Calculus; Integration Techniques; Functions

  18. Fundamentals of attosecond optics

    CERN Document Server

    Chang, Zenghu

    2011-01-01

    Attosecond optical pulse generation, along with the related process of high-order harmonic generation, is redefining ultrafast physics and chemistry. A practical understanding of attosecond optics requires significant background information and foundational theory to make full use of these cutting-edge lasers and advance the technology toward the next generation of ultrafast lasers. Fundamentals of Attosecond Optics provides the first focused introduction to the field. The author presents the underlying concepts and techniques required to enter the field, as well as recent research advances th

  19. Fundamentals of microwave photonics

    CERN Document Server

    Urick, V J; McKinney , Jason D

    2015-01-01

    A comprehensive resource to designing and constructing analog photonic links capable of high RF performance. Fundamentals of Microwave Photonics provides a comprehensive description of analog optical links from basic principles to applications. The book is organized into four parts. The first begins with a historical perspective of microwave photonics, listing the advantages of fiber optic links and delineating analog vs. digital links. The second section covers basic principles associated with microwave photonics in both the RF and optical domains. The third focuses on analog modulation formats-starti

  20. Mathematical analysis fundamentals

    CERN Document Server

    Bashirov, Agamirza

    2014-01-01

    The author's goal is a rigorous presentation of the fundamentals of analysis, starting from elementary level and moving to the advanced coursework. The curriculum of all mathematics (pure or applied) and physics programs includes a compulsory course in mathematical analysis. This book can serve as the main textbook for such (one-semester) courses. The book can also serve as additional reading for such courses as real analysis, functional analysis, harmonic analysis etc. For non-math major students requiring math beyond calculus, this is a more friendly approach than many math-centric o

  1. Fundamentals of Project Management

    CERN Document Server

    Heagney, Joseph

    2011-01-01

    With sales of more than 160,000 copies, Fundamentals of Project Management has helped generations of project managers navigate the ins and outs of every aspect of this complex discipline. Using a simple step-by-step approach, the book is the perfect introduction to project management tools, techniques, and concepts. Readers will learn how to: • Develop a mission statement, vision, goals, and objectives • Plan the project • Create the work breakdown structure • Produce a workable schedule • Understand earned value analysis • Manage a project team • Control and evaluate progress at every stage.

  2. Fundamental concepts of mathematics

    CERN Document Server

    Goodstein, R L

Fundamental Concepts of Mathematics, 2nd Edition provides an account of some basic concepts in modern mathematics. The book is primarily intended for mathematics teachers and lay people who want to improve their skills in mathematics. Among the concepts and problems presented in the book are the determination of which integral polynomials have integral solutions; sentence logic and informal set theory; and why four colors is enough to color a map. Unlike the first edition, the second edition provides detailed solutions to exercises contained in the text. Mathematics teachers and people

  3. Fundamentals of Cavitation

    CERN Document Server

    Franc, Jean-Pierre

    2005-01-01

    The present book is aimed at providing a comprehensive presentation of cavitation phenomena in liquid flows. It is further backed up by the experience, both experimental and theoretical, of the authors whose expertise has been internationally recognized. A special effort is made to place the various methods of investigation in strong relation with the fundamental physics of cavitation, enabling the reader to treat specific problems independently. Furthermore, it is hoped that a better knowledge of the cavitation phenomenon will allow engineers to create systems using it positively. Examples in the literature show the feasibility of this approach.

  4. Fundamentals of photonics

    CERN Document Server

    Saleh, Bahaa E A

    2007-01-01

    Now in a new full-color edition, Fundamentals of Photonics, Second Edition is a self-contained and up-to-date introductory-level textbook that thoroughly surveys this rapidly expanding area of engineering and applied physics. Featuring a logical blend of theory and applications, coverage includes detailed accounts of the primary theories of light, including ray optics, wave optics, electromagnetic optics, and photon optics, as well as the interaction of photons and atoms, and semiconductor optics. Presented at increasing levels of complexity, preliminary sections build toward more advan

  5. Nanomachines fundamentals and applications

    CERN Document Server

    Wang, Joseph

    2013-01-01

    This first-hand account by one of the pioneers of nanobiotechnology brings together a wealth of valuable material in a single source. It allows fascinating insights into motion at the nanoscale, showing how the proven principles of biological nanomotors are being transferred to artificial nanodevices.As such, the author provides engineers and scientists with the fundamental knowledge surrounding the design and operation of biological and synthetic nanomotors and the latest advances in nanomachines. He addresses such topics as nanoscale propulsions, natural biomotors, molecular-scale machin

  6. Electronic circuits fundamentals & applications

    CERN Document Server

    Tooley, Mike

    2015-01-01

    Electronics explained in one volume, using both theoretical and practical applications. New chapter on Raspberry Pi. Companion website contains free electronic tools to aid learning for students and a question bank for lecturers. Practical investigations and questions within each chapter help reinforce learning. Mike Tooley provides all the information required to get to grips with the fundamentals of electronics, detailing the underpinning knowledge necessary to appreciate the operation of a wide range of electronic circuits, including amplifiers, logic circuits, power supplies and oscillators. The

  7. The Hubble Constant

    Directory of Open Access Journals (Sweden)

    Neal Jackson

    2015-09-01

    Full Text Available I review the current state of determinations of the Hubble constant, which gives the length scale of the Universe by relating the expansion velocity of objects to their distance. There are two broad categories of measurements. The first uses individual astrophysical objects which have some property that allows their intrinsic luminosity or size to be determined, or allows the determination of their distance by geometric means. The second category comprises the use of all-sky cosmic microwave background, or correlations between large samples of galaxies, to determine information about the geometry of the Universe and hence the Hubble constant, typically in a combination with other cosmological parameters. Many, but not all, object-based measurements give H_0 values of around 72–74 km s^-1 Mpc^-1, with typical errors of 2–3 km s^-1 Mpc^-1. This is in mild discrepancy with CMB-based measurements, in particular those from the Planck satellite, which give values of 67–68 km s^-1 Mpc^-1 and typical errors of 1–2 km s^-1 Mpc^-1. The size of the remaining systematics indicates that accuracy rather than precision is the remaining problem in a good determination of the Hubble constant. Whether a discrepancy exists, and whether new physics is needed to resolve it, depends on details of the systematics of the object-based methods, and also on the assumptions about other cosmological parameters and which datasets are combined in the case of the all-sky methods.

  8. Testing Our Fundamental Assumptions

    Science.gov (United States)

    Kohler, Susanna

    2016-06-01

    Science is all about testing the things we take for granted, including some of the most fundamental aspects of how we understand our universe. Is the speed of light in a vacuum the same for all photons regardless of their energy? Is the rest mass of a photon actually zero? A series of recent studies explore the possibility of using transient astrophysical sources for tests! Explaining Different Arrival Times [Figure: artist's illustration of a gamma-ray burst, another extragalactic transient, in a star-forming region. NASA/Swift/Mary Pat Hrybyk-Keith and John Jones]: Suppose you observe a distant transient astrophysical source, like a gamma-ray burst or a flare from an active nucleus, and two photons of different energies arrive at your telescope at different times. This difference in arrival times could be due to several different factors, depending on how deeply you want to question some of our fundamental assumptions about physics. Intrinsic delay: the photons may simply have been emitted at two different times by the astrophysical source. Delay due to Lorentz invariance violation: perhaps the assumption that all massless particles (even two photons with different energies) move at the exact same velocity in a vacuum is incorrect. Special-relativistic delay: maybe there is a universal speed for massless particles, but the assumption that photons have zero rest mass is wrong; this, too, would cause photon velocities to be energy-dependent. Delay due to gravitational potential: perhaps our understanding of the gravitational potential that the photons experience as they travel is incorrect, also causing different flight times for photons of different energies. This would mean that Einstein's equivalence principle, a fundamental tenet of general relativity (GR), is incorrect. If we now turn this problem around, then by measuring the arrival time delay between photons of different energies from various astrophysical sources (the further away, the better) we can provide constraints on these

  9. Fundamentals of Space Systems

    Science.gov (United States)

    Pisacane, Vincent L.

    2005-06-01

    Fundamentals of Space Systems was developed to satisfy two objectives: the first is to provide a text suitable for use in an advanced undergraduate or beginning graduate course in both space systems engineering and space system design. The second is to be a primer and reference book for space professionals wishing to broaden their capabilities to develop, manage the development, or operate space systems. The authors of the individual chapters are practicing engineers who have had extensive experience in developing sophisticated experimental and operational spacecraft systems in addition to having experience teaching the subject material. The text presents the fundamentals of all the subsystems of a spacecraft mission and includes illustrative examples drawn from actual experience to enhance the learning experience. It includes a chapter on each of the relevant major disciplines and subsystems including space systems engineering, space environment, astrodynamics, propulsion and flight mechanics, attitude determination and control, power systems, thermal control, configuration management and structures, communications, command and telemetry, data processing, embedded flight software, survivability and reliability, integration and test, mission operations, and the initial conceptual design of a typical small spacecraft mission.

  10. Royal Society, Discussion on the Constants of Physics, London, England, May 25, 26, 1983, Proceedings

    Science.gov (United States)

    1983-12-01

    Various topics dealing with the constants of physics are addressed. The subjects considered include: measurement of the fundamental constants; the search for proton decay; the constancy of G; limits on the variability of coupling constants from the Oklo natural reactor; implications of quasar spectroscopy for constancy of constants; theoretical prospects for understanding the values of fundamental constants; the strong, electromagnetic, and weak couplings; and field theories without fundamental gauge symmetries. Also discussed are: Einstein gravitation as a long-wavelength effective field theory; unification and supersymmetry; phase transitions in the early universe; the cosmological constant; large numbers and ratios in astrophysics and cosmology; dependence of macrophysical phenomena on the values of the fundamental constants; dimensionality; and the anthropic principle and its implications for biological evolution.

  11. Spacetime Dynamics from Spin Dynamics: Cosmological Constant and Neutrino Mass

    Science.gov (United States)

    Crawford, James

    2003-04-01

    Two fundamental unresolved issues in gravitational physics are the origin of the cosmological constant (dark energy), whose existence is suggested by the observed acceleration of the universe, and the origin of the particle masses, which we now know includes the neutrinos. Since all matter particles are represented by spinor fields, it seems natural to inquire whether the gravitational interaction of the spinor fields can illuminate these issues. Therefore we consider the possibility that the spin curvature is fundamental, and show that by relaxing the Schrödinger condition (covariant constancy of the Dirac matrices) it is possible to obtain both spacetime curvature and torsion as parts of the spin curvature. We assume a scale invariant Lagrangian composed of the standard Yang-Mills Lagrangian for the spin curvature and the massless Dirac Lagrangian for the spinors. An exact vacuum cosmological solution to the associated field equations is found which exhibits exponential acceleration of the universe and gives a minimum mass for all spinors.

  12. Biochemical Engineering Fundamentals

    Science.gov (United States)

    Bailey, J. E.; Ollis, D. F.

    1976-01-01

    Discusses a biochemical engineering course that is offered as part of a chemical engineering curriculum and includes topics that influence the behavior of man-made or natural microbial or enzyme reactors. (MLH)

  13. Digital Fourier analysis fundamentals

    CERN Document Server

    Kido, Ken'iti

    2015-01-01

    This textbook is a thorough, accessible introduction to digital Fourier analysis for undergraduate students in the sciences. Beginning with the principles of sine/cosine decomposition, the reader walks through the principles of discrete Fourier analysis before reaching the cornerstone of signal processing: the Fast Fourier Transform. Saturated with clear, coherent illustrations, "Digital Fourier Analysis - Fundamentals" includes practice problems and thorough Appendices for the advanced reader. As a special feature, the book includes interactive applets (available online) that mirror the illustrations.  These user-friendly applets animate concepts interactively, allowing the user to experiment with the underlying mathematics. For example, a real sine signal can be treated as a sum of clockwise and counter-clockwise rotating vectors. The applet illustration included with the book animates the rotating vectors and the resulting sine signal. By changing parameters such as amplitude and frequency, the reader ca...
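    The rotating-vector picture mentioned above rests on a standard identity, stated here for concreteness: a real sine is the sum of two counter-rotating complex exponentials of half amplitude,

```latex
\sin(\omega t) \;=\; \frac{1}{2i}\,\big(e^{\,i\omega t} - e^{-\,i\omega t}\big) .
```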

  14. Fundamentals of quantum mechanics

    CERN Document Server

    House, J E

    2017-01-01

    Fundamentals of Quantum Mechanics, Third Edition is a clear and detailed introduction to quantum mechanics and its applications in chemistry and physics. All required math is clearly explained, including intermediate steps in derivations, and concise review of the math is included in the text at appropriate points. Most of the elementary quantum mechanical models-including particles in boxes, rigid rotor, harmonic oscillator, barrier penetration, hydrogen atom-are clearly and completely presented. Applications of these models to selected “real world” topics are also included. This new edition includes many new topics such as band theory and heat capacity of solids, spectroscopy of molecules and complexes (including applications to ligand field theory), and small molecules of astrophysical interest.

  15. Fundamental partial compositeness

    CERN Document Server

    Sannino, Francesco

    2016-01-01

    We construct renormalizable Standard Model extensions, valid up to the Planck scale, that give a composite Higgs from a new fundamental strong force acting on fermions and scalars. Yukawa interactions of these particles with Standard Model fermions realize the partial compositeness scenario. Successful models exist because gauge quantum numbers of Standard Model fermions admit a minimal enough 'square root'. Furthermore, right-handed SM fermions have an SU(2)$_R$-like structure, yielding a custodially-protected composite Higgs. Baryon and lepton numbers arise accidentally. Standard Model fermions acquire mass at tree level, while the Higgs potential and flavor violations are generated by quantum corrections. We further discuss accidental symmetries and other dynamical features stemming from the new strongly interacting scalars. If the same phenomenology can be obtained from models without our elementary scalars, they would reappear as composite states.

  16. Lasers Fundamentals and Applications

    CERN Document Server

    Thyagarajan, K

    2010-01-01

    Lasers: Fundamentals and Applications, serves as a vital textbook to accompany undergraduate and graduate courses on lasers and their applications. Ever since their invention in 1960, lasers have assumed tremendous importance in the fields of science, engineering and technology because of their diverse uses in basic research and countless technological applications. This book provides a coherent presentation of the basic physics behind the way lasers work, and presents some of their most important applications in vivid detail. After reading this book, students will understand how to apply the concepts found within to practical, tangible situations. This textbook includes worked-out examples and exercises to enhance understanding, and the preface shows lecturers how to most beneficially match the textbook with their course curricula. The book includes several recent Nobel Lectures, which will further expose students to the emerging applications and excitement of working with lasers. Students who study lasers, ...

  17. Fundamentals of Structural Engineering

    CERN Document Server

    Connor, Jerome J

    2013-01-01

    Fundamentals of Structural Engineering provides a balanced, seamless treatment of both classic, analytic methods and contemporary, computer-based techniques for conceptualizing and designing a structure. The book's principal goal is to foster an intuitive understanding of structural behavior based on problem solving experience for students of civil engineering and architecture who have been exposed to the basic concepts of engineering mechanics and mechanics of materials. Making it distinct from many other undergraduate textbooks, the authors of this text recognize the notion that engineers reason about behavior using simple models and intuition they acquire through problem solving. The approach adopted in this text develops this type of intuition by presenting extensive, realistic problems and case studies together with computer simulation, which allows rapid exploration of how a structure responds to changes in geometry and physical parameters. This book also: Emphasizes problem-based understanding of...

  18. Fundamentals of sustainable neighbourhoods

    CERN Document Server

    Friedman, Avi

    2015-01-01

    This book introduces architects, engineers, builders, and urban planners to a range of design principles of sustainable communities and illustrates them with outstanding case studies. Drawing on the author’s experience as well as local and international case studies, Fundamentals of Sustainable Neighbourhoods presents planning concepts that minimize developments' carbon footprint through compact communities, adaptable and expandable dwellings, adaptable landscapes, and smaller-sized yet quality-designed housing. This book also: Examines in-depth global strategies for minimizing the residential carbon footprint, including district heating, passive solar gain, net-zero residences, as well as preserving the communities' natural assets Reconsiders conceptual approaches in building design and urban planning to promote a better connection between communities and nature Demonstrates practical applications of green architecture Focuses on innovative living spaces in urban environments

  19. Fundamentals of phosphors

    CERN Document Server

    Yen, William M; Yamamoto, Hajime

    2006-01-01

    Drawing from the second edition of the best-selling Handbook of Phosphors, Fundamentals of Phosphors covers the principles and mechanisms of luminescence in detail and surveys the primary phosphor materials as well as their optical properties. The book addresses cutting-edge developments in phosphor science and technology including oxynitride phosphors and the impact of lanthanide level location on phosphor performance.Beginning with an explanation of the physics underlying luminescence mechanisms in solids, the book goes on to interpret various luminescence phenomena in inorganic and organic materials. This includes the interpretation of the luminescence of recently developed low-dimensional systems, such as quantum wells and dots. The book also discusses the excitation mechanisms by cathode-ray and ionizing radiation and by electric fields to produce electroluminescence. The book classifies phosphor materials according to the type of luminescence centers employed or the class of host materials used and inte...

  20. Superconductivity fundamentals and applications

    CERN Document Server

    Buckel, Werner

    2004-01-01

    This is the second English edition of what has become one of the definitive works on superconductivity in German -- currently in its sixth edition. Comprehensive and easy to understand, this introductory text is written especially with the non-specialist in mind. The authors, both long-term experts in this field, present the fundamental considerations without the need for extensive mathematics, describing the various phenomena connected with the superconducting state, with liberal insertion of experimental facts and examples for modern applications. While all fields of superconducting phenomena are dealt with in detail, this new edition pays particular attention to the groundbreaking discovery of magnesium diboride and the current developments in this field. In addition, a new chapter provides an overview of the elements, alloys and compounds where superconductivity has been observed in experiments, together with their major characteristics. The chapter on technical applications has been considerably expanded...

  1. Fundamentals of Fire Phenomena

    DEFF Research Database (Denmark)

    Quintiere, James

    discipline. It covers thermochemistry including mixtures and chemical reactions; Introduces combustion to the fire protection student; Discusses premixed flames and spontaneous ignition; Presents conservation laws for control volumes, including the effects of fire; Describes the theoretical bases … analyses. Fire phenomena encompass everything about the scientific principles behind fire behaviour. Combining the principles of chemistry, physics, heat and mass transfer, and fluid dynamics necessary to understand the fundamentals of fire phenomena, this book integrates the subject into a clear … for empirical aspects of the subject of fire; Analyses ignition of liquids and the importance of evaporation including heat and mass transfer; Features the stages of fire in compartments, and the role of scale modelling in fire. The book is written by Prof. James G. Quintiere from University of Maryland...

  2. Automotive electronics design fundamentals

    CERN Document Server

    Zaman, Najamuz

    2015-01-01

    This book explains the topology behind automotive electronics architectures and examines how they can be profoundly augmented with embedded controllers. These controllers serve as the core building blocks of today’s vehicle electronics. Rather than simply teaching electrical basics, this unique resource focuses on the fundamental concepts of vehicle electronics architecture, and details the wide variety of Electronic Control Modules (ECMs) that enable the increasingly sophisticated "bells & whistles" of modern designs.  A must-have for automotive design engineers, technicians working in automotive electronics repair centers and students taking automotive electronics courses, this guide bridges the gap between academic instruction and industry practice with clear, concise advice on how to design and optimize automotive electronics with embedded controllers.

  3. Fundamentals of Fire Phenomena

    DEFF Research Database (Denmark)

    Quintiere, James

    Understanding fire dynamics and combustion is essential in fire safety engineering and in fire science curricula. Engineers and students involved in fire protection, safety and investigation need to know and predict how fire behaves to be able to implement adequate safety measures and hazard analyses. Fire phenomena encompass everything about the scientific principles behind fire behaviour. Combining the principles of chemistry, physics, heat and mass transfer, and fluid dynamics necessary to understand the fundamentals of fire phenomena, this book integrates the subject into a clear discipline. It covers thermochemistry including mixtures and chemical reactions; introduces combustion to the fire protection student; discusses premixed flames and spontaneous ignition; presents conservation laws for control volumes, including the effects of fire; describes the theoretical bases...

  4. The Hubble Constant

    Directory of Open Access Journals (Sweden)

    Jackson Neal

    2007-09-01

    Full Text Available I review the current state of determinations of the Hubble constant, which gives the length scale of the Universe by relating the expansion velocity of objects to their distance. In the last 20 years, much progress has been made and estimates now range between 60 and 75 km s^-1 Mpc^-1, with most now between 70 and 75 km s^-1 Mpc^-1, a huge improvement over the factor-of-2 uncertainty which used to prevail. Further improvements which gave a generally agreed margin of error of a few percent rather than the current 10% would be vital input to much other interesting cosmology. There are several programmes which are likely to lead us to this point in the next 10 years.
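
    A minimal Python sketch of the relation behind this record, the low-redshift Hubble law d = v/H0, evaluated over the range of H0 values quoted above; the recession velocity used is a hypothetical example, not data from the record.

        # Illustrative sketch: distance from the Hubble law v = H0 * d (low-redshift approximation).
        # The recession velocity below is a hypothetical example, not data from the record.

        def hubble_distance_mpc(velocity_km_s, h0_km_s_mpc):
            """Return the distance in Mpc implied by d = v / H0."""
            return velocity_km_s / h0_km_s_mpc

        velocity = 7000.0  # km/s, hypothetical recession velocity of a galaxy
        for h0 in (60.0, 70.0, 75.0):  # km/s/Mpc, spanning the range quoted above
            print(f"H0 = {h0:5.1f} km/s/Mpc  ->  d = {hubble_distance_mpc(velocity, h0):6.1f} Mpc")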

  5. When constants are important

    Energy Technology Data Exchange (ETDEWEB)

    Beiu, V.

    1997-04-01

    In this paper the authors discuss several complexity aspects pertaining to neural networks, commonly known as the curse of dimensionality. The focus will be on: (1) size complexity and depth-size tradeoffs; (2) complexity of learning; and (3) precision and limited interconnectivity. Results have been obtained for each of these problems when dealt with separately, but few things are known as to the links among them. They start by presenting known results and try to establish connections between them. These show that they are facing very difficult problems--exponential growth in either space (i.e. precision and size) and/or time (i.e., learning and depth)--when resorting to neural networks for solving general problems. The paper will present a solution for lowering some constants, by playing on the depth-size tradeoff.

  6. Constant Proportion Portfolio Insurance

    DEFF Research Database (Denmark)

    Jessen, Cathrine

    2014-01-01

    Portfolio insurance, as practiced in 1987, consisted of trading between an underlying stock portfolio and cash, using option theory to place a floor on the value of the position, as if it included a protective put. Constant Proportion Portfolio Insurance (CPPI) is an option-free variation on the theme, originally proposed by Fischer Black. In CPPI, a financial institution guarantees a floor value for the “insured” portfolio and adjusts the stock/bond mix to produce a leveraged exposure to the risky assets, which depends on how far the portfolio value is above the floor. Plain-vanilla portfolio insurance largely died with the crash of 1987, but CPPI is still going strong. In the frictionless markets of finance theory, the issuer’s strategy to hedge its liability under the contract is clear, but in the real world with transactions costs and stochastic jump risk, the optimal strategy is less obvious...
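
    A minimal sketch (with invented numbers) of the CPPI rebalancing rule described above: the risky exposure is a fixed multiple of the cushion, the amount by which the portfolio value exceeds the floor.

        # Minimal CPPI sketch: risky exposure = multiplier * (portfolio value - floor).
        # All numbers below (multiplier, floor, return path) are hypothetical illustrations.

        def cppi_step(value, floor, multiplier, risky_return, safe_return):
            """Advance the insured portfolio by one period under the CPPI rule."""
            cushion = max(value - floor, 0.0)
            risky_exposure = min(multiplier * cushion, value)  # no leverage beyond total value here
            safe_exposure = value - risky_exposure
            return risky_exposure * (1.0 + risky_return) + safe_exposure * (1.0 + safe_return)

        value, floor, m = 100.0, 80.0, 4.0
        risky_path = [0.02, -0.05, 0.01, -0.10, 0.03]  # hypothetical periodic risky returns
        for r in risky_path:
            value = cppi_step(value, floor, m, r, safe_return=0.001)
            print(f"portfolio value = {value:7.2f} (floor = {floor})")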

  7. Interacting universes and the cosmological constant

    Energy Technology Data Exchange (ETDEWEB)

    Alonso-Serrano, A. [Centro de Física “Miguel Catalán”, Instituto de Física Fundamental, Consejo Superior de Investigaciones Científicas, Serrano 121, 28006 Madrid (Spain); Estación Ecológica de Biocosmología, Pedro de Alvarado 14, 06411 Medellín (Spain); Bastos, C. [Instituto de Plasmas e Fusão Nuclear, Instituto Superior Técnico, Avenida Rovisco Pais 1, 1049-001 Lisboa (Portugal); Bertolami, O. [Instituto de Plasmas e Fusão Nuclear, Instituto Superior Técnico, Avenida Rovisco Pais 1, 1049-001 Lisboa (Portugal); Departamento de Física e Astronomia, Faculdade de Ciências da Universidade do Porto, Rua do Campo Alegre 687, 4169-007 Porto (Portugal); Robles-Pérez, S., E-mail: salvarp@imaff.cfmac.csic.es [Centro de Física “Miguel Catalán”, Instituto de Física Fundamental, Consejo Superior de Investigaciones Científicas, Serrano 121, 28006 Madrid (Spain); Estación Ecológica de Biocosmología, Pedro de Alvarado 14, 06411 Medellín (Spain); Física Teórica, Universidad del País Vasco, Apartado 644, 48080 Bilbao (Spain)

    2013-02-12

    In this Letter we study the effects that an interaction scheme among universes can have on the values of their cosmological constants. In the case of two interacting universes, the value of the cosmological constant of one of the universes becomes very close to zero at the expense of an increasing value of the cosmological constant of the partner universe. In the more general case of a chain of N interacting universes with periodic boundary conditions, the spectrum of the Hamiltonian splits into a large number of levels, each of them associated with a particular value of the cosmological constant, that can be occupied by single universes, revealing a collective behavior that plainly shows that the multiverse is much more than the mere sum of its parts.

  8. Reconciling Planck constant determinations via watt balance and enriched-silicon measurements at NRC Canada

    Science.gov (United States)

    Steele, A. G.; Meija, J.; Sanchez, C. A.; Yang, L.; Wood, B. M.; Sturgeon, R. E.; Mester, Z.; Inglis, A. D.

    2012-02-01

    The next revision to the International System of Units will emphasize the relationship between the base units (kilogram, metre, second, ampere, kelvin, candela and mole) and fundamental constants of nature (the speed of light, c, the Planck constant, h, the elementary charge, e, the Boltzmann constant, kB, the Avogadro constant, NA, etc). The redefinition cannot proceed without consistency between two complementary metrological approaches to measuring h: a 'physics' approach, using watt balances and the equivalence principle between electrical and mechanical force, and a 'chemistry' approach that can be viewed as determining the mass of a single atom of silicon. We report the first high precision physics and chemistry results that agree within 12 parts per billion: h (watt balance) = 6.626 070 63(43) × 10^-34 J s and h(silicon) = 6.626 070 55(21) × 10^-34 J s. When combined with values determined by other metrology laboratories, this work helps to constrain our knowledge of h to 20 parts per billion, moving us closer to a redefinition of the metric system used around the world.
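
    A small arithmetic check of the two values quoted above: their relative difference in parts per billion and a simple inverse-variance weighted mean. The combination rule shown is a generic least-squares average, not the procedure used by the authors.

        # Consistency check of the two Planck constant determinations quoted in the record,
        # expressed in parts per billion (ppb), plus a simple inverse-variance weighted mean.

        h_watt, u_watt = 6.62607063e-34, 0.00000043e-34  # watt-balance value and uncertainty (J s)
        h_si,   u_si   = 6.62607055e-34, 0.00000021e-34  # enriched-silicon value and uncertainty (J s)

        diff_ppb = (h_watt - h_si) / h_si * 1e9
        print(f"relative difference: {diff_ppb:.1f} ppb")  # ~12 ppb, as stated in the record

        w_watt, w_si = 1.0 / u_watt**2, 1.0 / u_si**2
        h_mean = (w_watt * h_watt + w_si * h_si) / (w_watt + w_si)
        u_mean = (w_watt + w_si) ** -0.5
        print(f"weighted mean: h = {h_mean:.8e} J s  (u = {u_mean:.1e} J s)")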

  9. Fundamentals of ergonomic exoskeleton robots

    NARCIS (Netherlands)

    Schiele, A.

    2008-01-01

    This thesis is the first to provide the fundamentals of ergonomic exoskeleton design. The fundamental theory as well as technology necessary to analyze and develop ergonomic wearable robots interacting with humans is established and validated by experiments and prototypes. The fundamentals are (1) a

  11. On determination of the geometric cosmological constant from the OPERA experiment of superluminal neutrinos

    OpenAIRE

    Yan, Mu-Lin; Hu, Sen; Huang, Wei; Xiao, Neng-Chao

    2011-01-01

    The recent OPERA experiment of superluminal neutrinos has deep consequences in cosmology. In cosmology a fundamental constant is the cosmological constant. From observations one can estimate the effective cosmological constant $\Lambda_{eff}$ which is the sum of the quantum zero point energy $\Lambda_{dark\ energy}$ and the geometric cosmological constant $\Lambda$. The OPERA experiment can be applied to determine the geometric cosmological constant $\Lambda$. It is the first time to distingui...

  12. Fundamentals of klystron testing

    Energy Technology Data Exchange (ETDEWEB)

    Caldwell, J.W. Jr.

    1978-08-01

    Fundamentals of klystron testing is a text primarily intended for the indoctrination of new klystron group test stand operators. It should significantly reduce the familiarization time of a new operator, making him an asset to the group sooner than has been experienced in the past. The new employee must appreciate the mission of SLAC before he can rightfully be expected to make a meaningful contribution to the group's effort. Thus, the introductory section acquaints the reader with basic concepts of accelerators in general, then briefly describes major physical aspects of the Stanford Linear Accelerator. Only then is his attention directed to the klystron, with its auxiliary systems, and the rudiments of klystron tube performance checks. It is presumed that the reader is acquainted with basic principles of electronics and scientific notation. However, to preserve the integrity of an indoctrination guide, tedious technical discussions and mathematical analysis have been studiously avoided. It is hoped that the new operator will continue to use the text for reference long after his indoctrination period is completed. Even the more experienced operator should find that particular sections will refresh his understanding of basic principles of klystron testing.

  13. GRBs and Fundamental Physics

    Science.gov (United States)

    Petitjean, Patrick; Wang, F. Y.; Wu, X. F.; Wei, J. J.

    2016-12-01

    Gamma-ray bursts (GRBs) are short and intense flashes at cosmological distances, and they are the most luminous explosions in the Universe. The high luminosities of GRBs make them detectable out to the edge of the visible universe. They are therefore unique tools to probe the properties of the high-redshift universe, including the cosmic expansion and dark energy, the star formation rate, the reionization epoch and the metal evolution of the Universe. First, they can be used to constrain the history of cosmic acceleration and the evolution of dark energy in a redshift range hardly achievable by other cosmological probes. Second, long GRBs are believed to be formed by the collapse of massive stars, so they can be used to derive the high-redshift star formation rate, which cannot be probed by current observations. Moreover, the use of GRBs as cosmological tools could unveil the reionization history and metal evolution of the Universe, the intergalactic medium (IGM) properties and the nature of the first stars in the early universe. Beyond that, GRB high-energy photons can be applied to constrain Lorentz invariance violation (LIV) and to test Einstein's Equivalence Principle (EEP). In this paper, we review the progress on GRB cosmology and fundamental physics probed by GRBs.

  14. Fundamental Limits of Cooperation

    CERN Document Server

    Lozano, Angel; Andrews, Jeffrey G

    2012-01-01

    Cooperation is viewed as a key ingredient for interference management in wireless systems. This paper shows that cooperation has fundamental limitations. The main result is that even full cooperation between transmitters cannot in general change an interference-limited network to a noise-limited network. The key idea is that there exists a spectral efficiency upper bound that is independent of the transmit power. First, a spectral efficiency upper bound is established for systems that rely on pilot-assisted channel estimation; in this framework, cooperation is shown to be possible only within clusters of limited size, which are subject to out-of-cluster interference whose power scales with that of the in-cluster signals. Second, an upper bound is also shown to exist when cooperation is through noncoherent communication; thus, the spectral efficiency limitation is not a by-product of the reliance on pilot-assisted channel estimation. Consequently, existing literature that routinely assumes the high-power spect...

  15. Revisiting energy efficiency fundamentals

    Energy Technology Data Exchange (ETDEWEB)

    Perez-Lombard, L.; Velazquez, D. [Grupo de Termotecnia, Escuela Superior de Ingenieros, Universidad de Sevilla, Camino de los Descubrimientos s/n, 41092 Seville (Spain); Ortiz, J. [Building Research Establishment (BRE), Garston, Watford, WD25 9XX (United Kingdom)

    2013-05-15

    Energy efficiency is a central target for energy policy and a keystone to mitigate climate change and to achieve sustainable development. Although great efforts have been made during the last four decades to investigate the issue, focusing on measuring energy efficiency, understanding its trends and impacts on energy consumption, and designing effective energy efficiency policies, many energy efficiency-related concepts, some methodological problems for the construction of energy efficiency indicators (EEI) and even some of the energy efficiency potential gains are often ignored or misunderstood, causing no little confusion and controversy not only for laymen but even for specialists. This paper aims to revisit, analyse and discuss some fundamental efficiency topics that could improve understanding and critical judgement of efficiency stakeholders and that could help in avoiding unfounded judgements and misleading statements. Firstly, we address the problem of measuring energy efficiency both in qualitative and quantitative terms. Secondly, the main methodological problems standing in the way of the construction of EEI are discussed, and a sequence of actions is proposed to tackle them in an ordered fashion. Finally, two key topics are discussed in detail: the links between energy efficiency and energy savings, and the border between energy efficiency improvement and renewable sources promotion.

  16. Chiral transition of fundamental and adjoint quarks

    Energy Technology Data Exchange (ETDEWEB)

    Capdevilla, R.M. [Instituto de Física Teórica, UNESP – Universidade Estadual Paulista, Rua Dr. Bento T. Ferraz, 271, Bloco II, 01140-070 São Paulo, SP (Brazil); Doff, A., E-mail: agomes@utfpr.edu.br [Universidade Tecnológica Federal do Paraná – UTFPR – DAFIS, Av. Monteiro Lobato Km 04, 84016-210 Ponta Grossa, PR (Brazil); Natale, A.A., E-mail: natale@ift.unesp.br [Instituto de Física Teórica, UNESP – Universidade Estadual Paulista, Rua Dr. Bento T. Ferraz, 271, Bloco II, 01140-070 São Paulo, SP (Brazil); Centro de Ciências Naturais e Humanas, Universidade Federal do ABC, 09210-170 Santo André, SP (Brazil)

    2014-01-20

    The chiral symmetry breaking transition of quarks in the fundamental and adjoint representation is studied in a model where the gap equation contains two contributions, one containing a confining propagator and another corresponding to the exchange of one-dressed dynamically massive gluons. When quarks are in the fundamental representation the confinement effect dominates the chiral symmetry breaking while the gluon exchange is suppressed due to the dynamical gluon mass effect in the propagator and in the coupling constant. In this case the chiral and deconfinement transition temperatures are approximately the same. For quarks in the adjoint representation, due to the larger Casimir eigenvalue, the gluon exchange is operative and the chiral transition happens at a larger temperature than the deconfinement one.

  17. TASI Lectures on the cosmological constant

    Energy Technology Data Exchange (ETDEWEB)

    Bousso, Raphael; Bousso, Raphael

    2007-08-30

    The energy density of the vacuum, Lambda, is at least 60 orders of magnitude smaller than several known contributions to it. Approaches to this problem are tightly constrained by data ranging from elementary observations to precision experiments. Absent overwhelming evidence to the contrary, dark energy can only be interpreted as vacuum energy, so the venerable assumption that Lambda=0 conflicts with observation. The possibility remains that Lambda is fundamentally variable, though constant over large spacetime regions. This can explain the observed value, but only in a theory satisfying a number of restrictive kinematic and dynamical conditions. String theory offers a concrete realization through its landscape of metastable vacua.

  18. Unity of Fundamental Interactions

    CERN Document Server

    Sastry, R R

    2000-01-01

    The vector representation of the linearized gravitational field (the graviton field), or the so-called quantum gravitodynamics which describes the motion of masses in a weak gravitational field, is employed to understand the unity of the four known interactions. We propose a gauge group SU(3)xSU(2)xU(1)xU(1) for such a unified field theory. In this paper we study the SU(2)xU(1)xU(1) sector of the theory and, in analogy to the electroweak mixing angle, we define a gravitoweak mixing angle. The unified gauge field theory predicts the existence of three massive vector bosons, the Y+/- and the X^0, and two massless vector bosons, the photon and the graviton (in its vector representation). We determine the mass spectrum of the Y+/- and the X^0 and predict a modification to the fine structure constant under unified field conditions. Furthermore, we briefly discuss the implications of the extended object formulation for the gauge hierarchy problem.

  19. Fundamental constraints on two-time physics

    Science.gov (United States)

    Piceno, E.; Rosado, A.; Sadurní, E.

    2016-10-01

    We show that generalizations of classical and quantum dynamics with two times lead to a fundamentally constrained evolution. At the level of classical physics, Newton's second law is extended and exactly integrated in a (1 + 2) -dimensional space, leading to effective single-time evolution for any initial condition. The cases 2 + 2 and 3 + 2 are also analyzed. In the domain of quantum mechanics, we follow strictly the hypothesis of probability conservation by extending the Heisenberg picture to unitary evolution with two times. As a result, the observability of two temporal axes is constrained by a generalized uncertainty relation involving level spacings, total duration of the effect and Planck's constant.

  20. Topological Quantization in Units of the Fine Structure Constant

    Energy Technology Data Exchange (ETDEWEB)

    Maciejko, Joseph; /Stanford U., Phys. Dept. /Stanford U., Materials Sci. Dept. /SLAC; Qi, Xiao-Liang; /Station Q, UCSB /Stanford U., Phys. Dept. /Stanford U., Materials Sci. Dept. /SLAC; Drew, H.Dennis; /Maryland U.; Zhang, Shou-Cheng; /Stanford U., Phys. Dept. /Stanford U., Materials Sci. Dept. /SLAC

    2011-11-11

    Fundamental topological phenomena in condensed matter physics are associated with a quantized electromagnetic response in units of fundamental constants. Recently, it has been predicted theoretically that the time-reversal invariant topological insulator in three dimensions exhibits a topological magnetoelectric effect quantized in units of the fine structure constant α = e²/ħc. In this Letter, we propose an optical experiment to directly measure this topological quantization phenomenon, independent of material details. Our proposal also provides a way to measure the half-quantized Hall conductances on the two surfaces of the topological insulator independently of each other.
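
    A short numerical sketch of the fine structure constant mentioned above: the record quotes the Gaussian-units form α = e²/ħc, while the equivalent SI form α = e²/(4πε₀ħc) is evaluated below from CODATA-style values.

        import math

        # The record quotes alpha = e^2 / (hbar c) in Gaussian units; in SI units the same
        # constant is alpha = e^2 / (4 pi eps0 hbar c).  CODATA-style values are used below.

        e    = 1.602176634e-19   # elementary charge, C (exact in the revised SI)
        hbar = 1.054571817e-34   # reduced Planck constant, J s
        c    = 299792458.0       # speed of light, m/s (exact)
        eps0 = 8.8541878128e-12  # vacuum permittivity, F/m

        alpha = e**2 / (4.0 * math.pi * eps0 * hbar * c)
        print(f"alpha = {alpha:.9f}  (1/alpha = {1.0/alpha:.3f})")  # ~1/137.036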

  1. The Interacting and Non-constant Cosmological Constant

    CERN Document Server

    Verma, Murli Manohar

    2009-01-01

    We propose a time-varying cosmological constant with a fixed equation of state, which evolves mainly through its interaction with the background during most of the long history of the universe. However, such interaction does not exist in the very early and the late-time universe, and it produces the acceleration during these eras when it becomes very nearly a constant. It is found that after the initial inflationary phase, the cosmological constant, which we call the lambda parameter, rolls down from a large constant value to another but very small constant value and further dominates the present epoch, showing up in the form of dark energy driving the acceleration.

  2. Fundamentals of machine design

    CERN Document Server

    Karaszewski, Waldemar

    2011-01-01

    A forum of researchers, educators and engineers involved in various aspects of Machine Design provided the inspiration for this collection of peer-reviewed papers. The resultant dissemination of the latest research results, and the exchange of views concerning the future research directions to be taken in this field will make the work of immense value to all those having an interest in the topics covered. The book reflects the cooperative efforts made in seeking out the best strategies for effecting improvements in the quality and the reliability of machines and machine parts and for extending

  3. Maximum Entropy Fundamentals

    Directory of Open Access Journals (Sweden)

    F. Topsøe

    2001-09-01

    Full Text Available Abstract: In its modern formulation, the Maximum Entropy Principle was promoted by E.T. Jaynes, starting in the mid-fifties. The principle dictates that one should look for a distribution, consistent with available information, which maximizes the entropy. However, this principle focuses only on distributions and it appears advantageous to bring information theoretical thinking more prominently into play by also focusing on the "observer" and on coding. This view was brought forward by the second named author in the late seventies and is the view we will follow-up on here. It leads to the consideration of a certain game, the Code Length Game and, via standard game theoretical thinking, to a principle of Game Theoretical Equilibrium. This principle is more basic than the Maximum Entropy Principle in the sense that the search for one type of optimal strategies in the Code Length Game translates directly into the search for distributions with maximum entropy. In the present paper we offer a self-contained and comprehensive treatment of fundamentals of both principles mentioned, based on a study of the Code Length Game. Though new concepts and results are presented, the reading should be instructional and accessible to a rather wide audience, at least if certain mathematical details are left aside at a first reading. The most frequently studied instance of entropy maximization pertains to the Mean Energy Model which involves a moment constraint related to a given function, here taken to represent "energy". This type of application is very well known from the literature with hundreds of applications pertaining to several different fields and will also here serve as important illustration of the theory. But our approach reaches further, especially regarding the study of continuity properties of the entropy function, and this leads to new results which allow a discussion of models with so-called entropy loss. These results have tempted us to speculate over
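
    A minimal sketch of the Mean Energy Model mentioned above: maximizing entropy under a mean-energy constraint yields the Gibbs form p_i ∝ exp(-βE_i). The energy levels and target mean below are invented for illustration, and β is found by bisection.

        import math

        # Mean Energy Model sketch: maximize entropy subject to a fixed mean "energy".
        # The maximizer has the Gibbs form p_i ~ exp(-beta * E_i); beta is found by bisection.
        # The energy levels and the target mean below are hypothetical illustrations.

        energies = [0.0, 1.0, 2.0, 3.0]  # hypothetical "energy" values
        target_mean = 1.2                # hypothetical moment constraint E[energy] = 1.2

        def gibbs(beta):
            weights = [math.exp(-beta * e) for e in energies]
            z = sum(weights)
            return [w / z for w in weights]

        def mean_energy(beta):
            return sum(p * e for p, e in zip(gibbs(beta), energies))

        lo, hi = -50.0, 50.0             # bracket: mean_energy is decreasing in beta
        for _ in range(200):             # bisection on beta
            mid = 0.5 * (lo + hi)
            if mean_energy(mid) > target_mean:
                lo = mid
            else:
                hi = mid

        p = gibbs(0.5 * (lo + hi))
        entropy = -sum(q * math.log(q) for q in p if q > 0.0)
        print("maxent distribution:", [round(q, 4) for q in p])
        print("entropy:", round(entropy, 4), " mean:", round(sum(q * e for q, e in zip(p, energies)), 4))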

  4. Fundamentals of Space Medicine

    Science.gov (United States)

    Clément, Gilles

    2005-03-01

    A total of more than 240 human space flights have been completed to date, involving about 450 astronauts from various countries, for a combined total presence in space of more than 70 years. The seventh long-duration expedition crew is currently in residence aboard the International Space Station, continuing a permanent presence in space that began in October 2000. During that time, investigations have been conducted on both humans and animal models to study the bone demineralization and muscle deconditioning, space motion sickness, the causes and possible treatment of postflight orthostatic intolerance, the changes in immune function, crew and crew-ground interactions, and the medical issues of living in a space environment, such as the effects of radiation or the risk of developing kidney stones. Some results of these investigations have led to fundamental discoveries about the adaptation of the human body to the space environment. Gilles Clément has been active in this research. This readable text presents the findings from the life science experiments conducted during and after space missions. Topics discussed in this book include: adaptation of sensory-motor, cardio-vascular, bone, and muscle systems to the microgravity of spaceflight; psychological and sociological issues of living in a confined, isolated, and stressful environment; operational space medicine, such as crew selection, training and in-flight health monitoring, countermeasures and support; results of space biology experiments on individual cells, plants, and animal models; and the impact of long-duration missions such as the human mission to Mars. The author also provides a detailed description of how to fly a space experiment, based on his own experience with research projects conducted onboard Salyut-7, Mir, Spacelab, and the Space Shuttle. Now is the time to look at the future of human spaceflight and what comes next. The future human exploration of Mars captures the imagination of both the

  5. Astronomers Gain Clues About Fundamental Physics

    Science.gov (United States)

    2005-12-01

    An international team of astronomers has looked at something very big -- a distant galaxy -- to study the behavior of things very small -- atoms and molecules -- to gain vital clues about the fundamental nature of our entire Universe. The team used the National Science Foundation's Robert C. Byrd Green Bank Telescope (GBT) to test whether the laws of nature have changed over vast spans of cosmic time. [Image: The Robert C. Byrd Green Bank Telescope. Credit: NRAO/AUI/NSF] "The fundamental constants of physics are expected to remain fixed across space and time; that's why they're called constants! Now, however, new theoretical models for the basic structure of matter indicate that they may change. We're testing these predictions," said Nissim Kanekar, an astronomer at the National Radio Astronomy Observatory (NRAO), in Socorro, New Mexico. So far, the scientists' measurements show no change in the constants. "We've put the most stringent limits yet on some changes in these constants, but that's not the end of the story," said Christopher Carilli, another NRAO astronomer. "This is the exciting frontier where astronomy meets particle physics," Carilli explained. The research can help answer fundamental questions about whether the basic components of matter are tiny particles or tiny vibrating strings, how many dimensions the Universe has, and the nature of "dark energy." The astronomers were looking for changes in two quantities: the ratio of the masses of the electron and the proton, and a number physicists call the fine structure constant, a combination of the electron charge, the speed of light and the Planck constant. These values, considered fundamental physical constants, once were "taken as time independent, with values given once and forever," said German particle physicist Christof Wetterich. However, Wetterich explained, "the viewpoint of modern particle theory has changed in recent years," with ideas such as

  6. Fundamental formulation for frictional contact with graded materials

    Institute of Scientific and Technical Information of China (English)

    2009-01-01

    In the paper, we develop the fundamental solutions for a graded half-plane subjected to concentrated forces acting perpendicularly and parallel to the surface. In the solutions, Young’s modulus is assumed to vary in the form of E(y) = E₀e^(αy) and Poisson’s ratio is assumed to be constant. On the basis of the fundamental solutions, the singular integral equations are formulated for the unknown traction distributions with Green’s function method. From the fundamental integral equations, a series of integral equat...

  7. Fundamentals of functional analysis

    CERN Document Server

    Farenick, Douglas

    2016-01-01

    This book provides a unique path for graduate or advanced undergraduate students to begin studying the rich subject of functional analysis with fewer prerequisites than is normally required. The text begins with a self-contained and highly efficient introduction to topology and measure theory, which focuses on the essential notions required for the study of functional analysis, and which are often buried within full-length overviews of the subjects. This is particularly useful for those in applied mathematics, engineering, or physics who need to have a firm grasp of functional analysis, but not necessarily some of the more abstruse aspects of topology and measure theory normally encountered. The reader is assumed to only have knowledge of basic real analysis, complex analysis, and algebra. The latter part of the text provides an outstanding treatment of Banach space theory and operator theory, covering topics not usually found together in other books on functional analysis. Written in a clear, concise manner,...

  8. Fundamental solutions of linear partial differential operators theory and practice

    CERN Document Server

    Ortner, Norbert

    2015-01-01

    This monograph provides the theoretical foundations needed for the construction of fundamental solutions and fundamental matrices of (systems of) linear partial differential equations. Many illustrative examples also show techniques for finding such solutions in terms of integrals. Particular attention is given to developing the fundamentals of distribution theory, accompanied by calculations of fundamental solutions. The main part of the book deals with existence theorems and uniqueness criteria, the method of parameter integration, the investigation of quasihyperbolic systems by means of Fourier and Laplace transforms, and the representation of fundamental solutions of homogeneous elliptic operators with the help of Abelian integrals. In addition to rigorous distributional derivations and verifications of fundamental solutions, the book also shows how to construct fundamental solutions (matrices) of many physically relevant operators (systems), in elasticity, thermoelasticity, hexagonal/cubic elastodynamics...

  9. Communication technology update and fundamentals

    CERN Document Server

    Grant, August E

    2010-01-01

    New communication technologies are being introduced at an astonishing rate. Making sense of these technologies is increasingly difficult. Communication Technology Update and Fundamentals is the single best source for the latest developments, trends, and issues in communication technology. Featuring the fundamental framework along with the history and background of communication technologies, Communication Technology Update and Fundamentals, 12th edition helps you stay ahead of these ever-changing and emerging technologies.As always, every chapter ha

  10. Resonant MEMS fundamentals, implementation, and application

    CERN Document Server

    Brand, Oliver; Heinrich, Stephen; Josse, Fabien; Fedder, Gary K; Hierold, Christofer; Korvink, Jan G; Tabata, Osamu

    2015-01-01

    Part of the AMN book series, this book covers the principles, modeling and implementation as well as applications of resonant MEMS from a unified viewpoint. It starts out with the fundamental equations and phenomena that govern the behavior of resonant MEMS and then gives a detailed overview of their implementation in capacitive, piezoelectric, thermal and organic devices, complemented by chapters addressing the packaging of the devices and their stability. The last part of the book is devoted to the cutting-edge applications of resonant MEMS such as inertial, chemical and biosensors, fluid p

  11. Fundamentals of human-computer interaction

    CERN Document Server

    Monk, Andrew F

    1985-01-01

    Fundamentals of Human-Computer Interaction aims to sensitize the systems designer to the problems faced by the user of an interactive system. The book grew out of a course entitled "The User Interface: Human Factors for Computer-based Systems" which has been run annually at the University of York since 1981. This course has been attended primarily by systems managers from the computer industry. The book is organized into three parts. Part One focuses on the user as processor of information with studies on visual perception; extracting information from printed and electronically presented

  12. Fundamentals and Techniques of Nonimaging

    Energy Technology Data Exchange (ETDEWEB)

    O' Gallagher, J. J.; Winston, R.

    2003-07-10

    This is the final report describing a long term basic research program in nonimaging optics that has led to major advances in important areas, including solar energy, fiber optics, illumination techniques, light detectors, and a great many other applications. The term "nonimaging optics" refers to the optics of extended sources in systems for which image forming is not important, but effective and efficient collection, concentration, transport, and distribution of light energy is. Although some of the most widely known developments of the early concepts have been in the field of solar energy, a broad variety of other uses have emerged. Most important, under the auspices of this program in fundamental research in nonimaging optics established at the University of Chicago with support from the Office of Basic Energy Sciences at the Department of Energy, the field has become very dynamic, with new ideas and concepts continuing to develop, while applications of the early concepts continue to be pursued. While the subject began as part of classical geometrical optics, it has been extended subsequently to the wave optics domain. Particularly relevant to potential new research directions are recent developments in the formalism of statistical and wave optics, which may be important in understanding energy transport on the nanoscale. Nonimaging optics permits the design of optical systems that achieve the maximum possible concentration allowed by physical conservation laws. The earliest designs were constructed by optimizing the collection of the extreme rays from a source to the desired target: the so-called "edge-ray" principle. Later, new concentrator types were generated by placing reflectors along the flow lines of the "vector flux" emanating from lambertian emitters in various geometries. A few years ago, a new development occurred with the discovery that making the design edge-ray a functional of some
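
    As a numerical aside on the "maximum possible concentration allowed by physical conservation laws" mentioned above, the etendue (sine) limits for an acceptance half-angle θ are C ≤ 1/sin θ in 2D and C ≤ 1/sin²θ in 3D, assuming the absorber sits in a medium of unit refractive index; the sketch below evaluates them for a few example angles.

        import math

        # Thermodynamic (etendue) concentration limits for an acceptance half-angle theta,
        # assuming the absorber sits in a medium of refractive index 1:
        #   2D (trough) limit:   C_max = 1 / sin(theta)
        #   3D (cone/CPC) limit: C_max = 1 / sin(theta)^2

        for theta_deg in (1.0, 5.0, 23.5, 45.0):  # example acceptance half-angles, degrees
            s = math.sin(math.radians(theta_deg))
            print(f"theta = {theta_deg:5.1f} deg   C_2D <= {1.0/s:8.1f}   C_3D <= {1.0/s**2:10.1f}")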

  13. Laser-fundamentals and applications

    Energy Technology Data Exchange (ETDEWEB)

    Schinagl, W.

    1982-09-01

    The survey article gives an introduction to laser technology. Fundamentals and physical aspects are discussed at large. After a brief historical review and a discussion of the physical fundamentals, important types of laser, characteristics of laser radiation and its applications in medicine are discussed.

  14. Mass anomalous dimension in SU(2) with six fundamental fermions

    DEFF Research Database (Denmark)

    Bursa, Francis; Del Debbio, Luigi; Keegan, Liam;

    2010-01-01

    We simulate SU(2) gauge theory with six massless fundamental Dirac fermions. We measure the running of the coupling and the mass in the Schroedinger Functional scheme. We observe very slow running of the coupling constant. We measure the mass anomalous dimension gamma, and find it is between 0.13...

  15. The Fundamental Scale of Descriptions

    CERN Document Server

    Febres, Gerardo

    2014-01-01

    The complexity of a system description is a function of the entropy of its symbolic description. Prior to computing the entropy of the system description, an observation scale has to be assumed. In natural language texts, typical scales are binary, characters, and words. However, considering languages as structures built around a certain preconceived set of symbols, like words or characters, is only a presumption. This study depicts the notion of the Description Fundamental Scale as a set of symbols which serves to analyze the essence of a language structure. The concept of Fundamental Scale is tested using English and MIDI music texts by means of an algorithm developed to search for a set of symbols which minimizes the observed entropy of the system, and therefore best expresses the fundamental scale of the language employed. Test results show that it is possible to find the Fundamental Scale of some languages. The concept of Fundamental Scale, and the method for its determination, emerges as an interesting tool to fac...
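
    A small sketch of the scale-dependence of entropy described above: the per-symbol Shannon entropy of the same short text computed at the character scale and at the word scale. The sample text is arbitrary, and the sketch does not implement the authors' symbol-search algorithm.

        import math
        from collections import Counter

        # Entropy of the same text under two observation scales (characters vs. words),
        # illustrating that the measured entropy depends on the symbol set assumed.
        # The sample text is an arbitrary illustration.

        text = "the quick brown fox jumps over the lazy dog and the quick brown fox returns"

        def shannon_entropy(symbols):
            counts = Counter(symbols)
            total = sum(counts.values())
            return -sum((c / total) * math.log2(c / total) for c in counts.values())

        print(f"character scale: {shannon_entropy(list(text)):.3f} bits/symbol")
        print(f"word scale:      {shannon_entropy(text.split()):.3f} bits/symbol")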

  16. Fundamentals of ICF Hohlraums

    Energy Technology Data Exchange (ETDEWEB)

    Rosen, M D

    2005-09-30

    On the Nova Laser at LLNL, we demonstrated many of the key elements required for assuring that the next laser, the National Ignition Facility (NIF), will drive an Inertial Confinement Fusion (ICF) target to ignition. The indirect drive (sometimes referred to as "radiation drive") approach converts laser light to x-rays inside a gold cylinder, which then acts as an x-ray "oven" (called a hohlraum) to drive the fusion capsule in its center. On Nova we've demonstrated good understanding of the temperatures reached in hohlraums and of the ways to control the uniformity with which the x-rays drive the spherical fusion capsules. In these lectures we will be reviewing the physics of these laser heated hohlraums, recent attempts at optimizing their performance, and then return to the ICF problem in particular to discuss scaling of ICF gain with scale size, and to compare indirect vs. direct drive gains. In ICF, spherical capsules containing Deuterium and Tritium (DT)--the heavy isotopes of hydrogen--are imploded, creating conditions of high temperature and density similar to those in the cores of stars required for initiating the fusion reaction. When DT fuses, an alpha particle (the nucleus of a helium atom) and a neutron are created, releasing large amounts of energy. If the surrounding fuel is sufficiently dense, the alpha particles are stopped and can heat it, allowing a self-sustaining fusion burn to propagate radially outward and a high gain fusion micro-explosion ensues. To create those conditions the outer surface of the capsule is heated (either directly by a laser or indirectly by laser produced x-rays) to cause rapid ablation and outward expansion of the capsule material. A rocket-like reaction to that outward flowing heated material leads to an inward implosion of the remaining part of the capsule shell. The pressure generated on the outside of the capsule can reach nearly 100 megabar (100 million times

  17. Spectrophotometric determination of association constant

    DEFF Research Database (Denmark)

    2016-01-01

    Least-squares 'Systematic Trial-and-Error Procedure' (STEP) for spectrophotometric evaluation of association constant (equilibrium constant) K and molar absorption coefficient E for a 1:1 molecular complex, A + B = C, with error analysis according to Conrow et al. (1964). An analysis of the Charg...

  18. Constant-Pressure Hydraulic Pump

    Science.gov (United States)

    Galloway, C. W.

    1982-01-01

    Constant output pressure in gas-driven hydraulic pump would be assured in new design for gas-to-hydraulic power converter. With a force-multiplying ring attached to gas piston, expanding gas would apply constant force on hydraulic piston even though gas pressure drops. As a result, pressure of hydraulic fluid remains steady, and power output of the pump does not vary.

  19. Spectrophotometric determination of association constant

    DEFF Research Database (Denmark)

    Spanget-Larsen, Jens

    2016-01-01

    Least-squares 'Systematic Trial-and-Error Procedure' (STEP) for spectrophotometric evaluation of association constant (equilibrium constant) K and molar absorption coefficient E for a 1:1 molecular complex, A + B = C, with error analysis according to Conrow et al. (1964). An analysis of the Charge...
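
    A hedged sketch of the trial-and-error idea described in this record: scan candidate K values for a 1:1 complex A + B = C, solve the equilibrium for the complex concentration, fit E by least squares, and keep the K with the smallest residual. The data below are synthetic, and only the complex is assumed to absorb at the working wavelength; this is an illustration, not the authors' STEP code.

        import math

        # STEP-like sketch for a 1:1 complex A + B = C with K = [C]/([A][B]):
        # scan trial K values; for each K compute [C] from the quadratic equilibrium condition,
        # fit the molar absorptivity E by least squares (absorbance = E * [C] * path), and
        # keep the K with the smallest residual sum of squares.

        def complex_conc(a0, b0, k):
            """Equilibrium concentration of C from total concentrations a0, b0 and constant k."""
            s = a0 + b0 + 1.0 / k
            return 0.5 * (s - math.sqrt(s * s - 4.0 * a0 * b0))

        a0 = 1.0e-4                                # total concentration of A (mol/L), fixed
        b_totals = [1e-3, 2e-3, 5e-3, 1e-2, 2e-2]  # total concentrations of B (mol/L)
        k_true, e_true, path = 150.0, 5000.0, 1.0  # synthetic "truth" used to fake the data
        absorbances = [e_true * path * complex_conc(a0, b0, k_true) for b0 in b_totals]

        best = None
        for k in [10.0 * 1.05 ** i for i in range(160)]:  # trial K grid, roughly 10 to 25000
            x = [path * complex_conc(a0, b0, k) for b0 in b_totals]
            e_fit = sum(xi * yi for xi, yi in zip(x, absorbances)) / sum(xi * xi for xi in x)
            rss = sum((yi - e_fit * xi) ** 2 for xi, yi in zip(x, absorbances))
            if best is None or rss < best[0]:
                best = (rss, k, e_fit)

        print(f"best K ~ {best[1]:.0f} L/mol,  E ~ {best[2]:.0f} L/(mol cm)")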

  20. Fundamental principles of heat transfer

    CERN Document Server

    Whitaker, Stephen

    1977-01-01

    Fundamental Principles of Heat Transfer introduces the fundamental concepts of heat transfer: conduction, convection, and radiation. It presents theoretical developments and example and design problems and illustrates the practical applications of fundamental principles. The chapters in this book cover various topics such as one-dimensional and transient heat conduction, energy and turbulent transport, forced convection, thermal radiation, and radiant energy exchange. There are example problems and solutions at the end of every chapter dealing with design problems. This book is a valuable int

  1. Fundamentals of technology project management

    CERN Document Server

    Garton, Colleen

    2012-01-01

    Designed to provide software engineers, students, and IT professionals with an understanding of the fundamentals of project management in the technology/IT field, this book serves as a practical introduction to the subject. Updated with information on how Fundamentals of Project Management integrates with and complements Project Management Institute's Project Management Body of Knowledge, this collection explains fundamental methodologies and techniques while also discussing new technology, tools, and virtual work environments. Examples and case studies are based on technology projects, and t

  2. Fundamental approach to discrete mathematics

    CERN Document Server

    Acharjya, DP

    2005-01-01

    Salient Features Mathematical logic, fundamental concepts, proofs and mathematical induction (Chapter 1) Set theory, fundamental concepts, theorems, proofs, Venn diagrams, product of sets, application of set theory and fundamental products (Chapter 2) An introduction to binary relations and concepts, graphs, arrow diagrams, relation matrix, composition of relations, types of relation, partial order relations, total order relation, closure of relations, poset, equivalence classes and partitions. (Chapter 3) An introduction to functions and basic concepts, graphs, composition of functions, floor and ceiling function, characteristic function, remainder function, signum function and introduction to hash function. (Chapter 4) The algebraic structure includes group theory and ring theory. Group theory includes group, subgroups, cyclic group, cosets, homomorphism, introduction to codes and group codes and error correction for block code. The ring theory includes general definition, fundamental concepts, integra...

  3. Clinical fundamentals for radiation oncologists.

    Science.gov (United States)

    Yang, Jack

    2011-11-01

    Clinical fundamentals for radiation oncologists. Hasan Murshed. Medical Physics Publishing, Madison, WI, 2011. 680 pp. (soft cover), Price: $90.00. 978-1-930524-43-9. © 2011 American Association of Physicists in Medicine.

  4. Quantum mechanics I the fundamentals

    CERN Document Server

    Rajasekar, S

    2015-01-01

    Quantum Mechanics I: The Fundamentals provides a graduate-level account of the behavior of matter and energy at the molecular, atomic, nuclear, and sub-nuclear levels. It covers basic concepts, mathematical formalism, and applications to physically important systems.

  5. Fundamentals of modern unsteady aerodynamics

    CERN Document Server

    Gülçat, Ülgen

    2010-01-01

    This introduction to the principles of unsteady aerodynamics covers all the core concepts, provides readers with a review of the fundamental physics, terminology and basic equations, and covers hot new topics such as the use of flapping wings for propulsion.

  6. Conjugated polyelectrolytes fundamentals and applications

    CERN Document Server

    Liu, Bin

    2013-01-01

    This is the first monograph to specifically focus on fundamentals and applications of polyelectrolytes, a class of molecules that gained substantial interest due to their unique combination of properties. Combining both features of organic semiconductors and polyelectrolytes, they offer a broad field for fundamental research as well as applications to analytical chemistry, optical imaging, and opto-electronic devices. The initial chapters introduce readers to the synthesis, optical and electrical properties of various conjugated polyelectrolytes. This is followed by chapters on the applica

  7. Fundamentals of electronic image processing

    CERN Document Server

    Weeks, Arthur R

    1996-01-01

    This book is directed to practicing engineers and scientists who need to understand the fundamentals of image processing theory and algorithms to perform their technical tasks. It is intended to fill the gap between existing high-level texts dedicated to specialists in the field and the need for a more practical, fundamental text on image processing. A variety of example images are used to enhance reader understanding of how particular image processing algorithms work.

  8. Open Source Fundamental Industry Classification

    OpenAIRE

    Kakushadze, Zura; Yu, Willie

    2017-01-01

    We provide complete source code for building a fundamental industry classification based on publicly available and freely downloadable data. We compare various fundamental industry classifications by running a horserace of short-horizon trading signals (alphas) utilizing open source heterotic risk models (https://ssrn.com/abstract=2600798) built using such industry classifications. Our source code includes various stand-alone and portable modules, e.g., for downloading/parsing web data, etc.

  9. Expected Devaluation and Economic Fundamentals

    OpenAIRE

    Alun H. Thomas

    1993-01-01

    Recent incidents of exchange rate collapse have provoked interest in the extent to which such events are determined by economic fundamentals. This paper considers whether interest rate differentials are appropriate measures of the risk of devaluation and whether this measure of devaluation risk reflects the movements of variables which capture internal and external balance. The paper finds that interest rate differentials reflect devaluation risk but that movements in fundamental variables ha...

  10. Hydrogenlike highly charged ions for tests of the time independence of fundamental constants.

    Science.gov (United States)

    Schiller, S

    2007-05-04

    Hyperfine transitions in the electronic ground state of cold, trapped hydrogenlike highly charged ions have attractive features for use as frequency standards because the majority of systematic frequency shifts are smaller by orders of magnitude compared to many microwave and optical frequency standards. Frequency measurements of these transitions hold promise for significantly improved laboratory tests of local position invariance of the electron and quark masses.

  11. Do the Fundamental Constants Vary in the Course of the Cosmological Evolution?

    CERN Document Server

    Ivanchik, A V; Petitjean, P; Varshalovich, D A

    2002-01-01

    We estimate the cosmological variation of the proton-to-electron mass ratio μ = m_p/m_e by measuring the wavelengths of molecular hydrogen transitions in the early universe. The analysis is performed using high spectral resolution observations (FWHM ~ 7 km/s) of two damped Lyman-α systems at z_abs = 2.3377 and 3.0249 observed along the lines of sight to the quasars Q 1232+082 and Q 0347-382 respectively. The most conservative result of the analysis is a possible variation of μ over the last ~ 10 Gyrs, with an amplitude Δμ/μ = (5.7 ± 3.8) × 10^-5. The result is significant at the 1.5σ level only and should be confirmed by further observations. This is the most stringent estimate of a possible cosmological variation of μ obtained up to now.

  12. Cosmochemistry, cosmology and fundamental constants: High-resolution spectroscopy of damped Lyman-alpha systems

    CERN Document Server

    Quast, R; Smette, A; Garcet, O; Ledoux, C; López, S; Wisotzki, L

    2004-01-01

    Spectroscopy of QSO absorption lines provides essential observational input for the study of nucleosynthesis and chemical evolution of galaxies at high redshift. But new observations may indicate that present chemical abundance data are biased due to deficient spectral resolution and unknown selection effects: recent high-resolution spectra reveal the hitherto unperceived chemical nonuniformity of molecular hydrogen-bearing damped Lyman-alpha (DLA) systems, and the novel H/ESO DLA survey produces compelling evidence for faint QSOs being attenuated by dust. We present a revised analysis of the molecular hydrogen-bearing DLA complex toward HE 0515-4414 showing nonuniform differential depletion of chemical elements onto dust grains, and introduce the H/ESO DLA survey and its implications. In conclusion, we aim at starting an unbiased chemical abundance database based on high-resolution spectroscopic observations. New data to probe the temperature-redshift relation predicted by standard cosmology and to t...

  13. Precision atomic mass spectrometry with applications to fundamental constants, neutrino physics, and physical chemistry

    Energy Technology Data Exchange (ETDEWEB)

    Mount, Brianna J. [Florida State University, Department of Physics (United States); Redshaw, Matthew [National Superconducting Cyclotron Laboratory (United States); Myers, Edmund G., E-mail: myers@nucmar.physics.fsu.edu [Florida State University, Department of Physics (United States)

    2011-07-15

    We present a summary of precision atomic mass measurements of stable isotopes carried out at Florida State University. These include the alkalis ⁶Li, ²³Na, ³⁹,⁴¹K, ⁸⁵,⁸⁷Rb, ¹³³Cs; the rare gas isotopes ⁸⁴,⁸⁶Kr and ¹²⁹,¹³⁰,¹³²,¹³⁶Xe; ¹⁷,¹⁸O, ¹⁹F, ²⁸Si, ³¹P, ³²S; and various isotope pairs of importance to neutrino physics, namely ⁷⁴,⁷⁶Se/⁷⁴,⁷⁶Ge, ¹³⁰Xe/¹³⁰Te, and ¹¹⁵In/¹¹⁵Sn. We also summarize our Penning trap measurements of the dipole moments of PH⁺ and HCO⁺.

  14. Manifestations of dark matter and variations of fundamental constants in atoms and astrophysical phenomena

    CERN Document Server

    Stadnik, Y V

    2015-01-01

    We present an overview of recent developments in the detection of light bosonic dark matter, including axion, pseudoscalar axion-like and scalar dark matter, which form either a coherently oscillating classical field or topological defects (solitons). We emphasise new high-precision laboratory and astrophysical measurements, in which the sought effects are linear in the underlying interaction strength between dark matter and ordinary matter, in contrast to traditional detection schemes for dark matter, where the effects are quadratic or higher order in the underlying interaction parameters and are extremely small. New terrestrial experiments include measurements with atomic clocks, spectroscopy, atomic and solid-state magnetometry, torsion pendula, ultracold neutrons, and laser interferometry. New astrophysical observations include pulsar timing, cosmic radiation lensing, Big Bang nucleosynthesis and cosmic microwave background measurements. We also discuss various recently proposed mechanisms for the inducti...

  15. Cadastral Surveys, constantly being updated, Published in 2006, Sauk County.

    Data.gov (United States)

    NSGIC GIS Inventory (aka Ramona) — This Cadastral Surveys dataset, was produced all or in part from Hardcopy Maps information as of 2006. It is described as 'constantly being updated'. Data by this...

  16. Fundamental mechanisms of micromachine reliability

    Energy Technology Data Exchange (ETDEWEB)

    DE BOER,MAARTEN P.; SNIEGOWSKI,JEFFRY J.; KNAPP,JAMES A.; REDMOND,JAMES M.; MICHALSKE,TERRY A.; MAYER,THOMAS K.

    2000-01-01

    Due to extreme surface to volume ratios, adhesion and friction are critical properties for reliability of Microelectromechanical Systems (MEMS), but are not well understood. In this LDRD the authors established test structures, metrology and numerical modeling to conduct studies on adhesion and friction in MEMS. They then concentrated on measuring the effect of environment on MEMS adhesion. Polycrystalline silicon (polysilicon) is the primary material of interest in MEMS because of its integrated circuit process compatibility, low stress, high strength and conformal deposition nature. A plethora of useful micromachined device concepts have been demonstrated using Sandia National Laboratories' sophisticated in-house capabilities. One drawback to polysilicon is that in air the surface oxidizes, is high energy and is hydrophilic (i.e., it wets easily). This can lead to catastrophic failure because surface forces can cause MEMS parts that are brought into contact to adhere rather than perform their intended function. A fundamental concern is how environmental constituents such as water will affect adhesion energies in MEMS. The authors first demonstrated an accurate method to measure adhesion as reported in Chapter 1. In Chapter 2 through 5, they then studied the effect of water on adhesion depending on the surface condition (hydrophilic or hydrophobic). As described in Chapter 2, they find that adhesion energy of hydrophilic MEMS surfaces is high and increases exponentially with relative humidity (RH). Surface roughness is the controlling mechanism for this relationship. Adhesion can be reduced by several orders of magnitude by silane coupling agents applied via solution processing. They decrease the surface energy and render the surface hydrophobic (i.e. does not wet easily). However, only a molecular monolayer coats the surface. In Chapters 3-5 the authors map out the extent to which the monolayer reduces adhesion versus RH. They find that adhesion is

  17. Numerical modeling of shoreline undulations part 1: Constant wave climate

    DEFF Research Database (Denmark)

    Kærgaard, Kasper Hauberg; Fredsøe, Jørgen

    2013-01-01

    This paper presents a numerical study of the non-linear development of alongshore undulations up to fully developed quasi-steady equilibrium. A numerical model which describes the longshore sediment transport along arbitrarily shaped shorelines is applied, based on a spectral wave model, a depth-integrated flow model, a wave-phase resolving sediment transport description and a one-line shoreline model. First the length of the shoreline undulations is determined in the linear regime using a stability analysis. Next the further evolution from the linear to the fully non-linear regime is described. In the fully non-linear regime down-drift spits and migrating shoreline undulations are described. Three different shoreline shapes are found depending on the wave conditions: undulations with no spits, undulations with shore-parallel spits and undulations with reconnecting spits. © 2012 Published by Elsevier B.V.

  18. DETERMINATION OF STABILITY CONSTANTS OF MANGANESE (II ...

    African Journals Online (AJOL)

    DR. AMINU

    Keywords: amino acids, dissociation constant, potentiometry, stability constant. The study reports the determination of the stability constants of manganese(II) amino acid complexes by potentiometry.

  19. The Astronomical Reach of Fundamental Physics

    CERN Document Server

    Burrows, Adam

    2014-01-01

    Using basic physical arguments, we derive by dimensional and physical analysis the characteristic masses and sizes of important objects in the Universe in terms of just a few fundamental constants. This exercise illustrates the unifying power of physics and the profound connections between the small and the large in the Cosmos we inhabit. We focus on the minimum and maximum masses of normal stars, the corresponding quantities for neutron stars, the maximum mass of a rocky planet, the maximum mass of a white dwarf, and the mass of a typical galaxy. To zeroth order, we show that all these masses can be expressed in terms of either the Planck mass or the Chandrasekhar mass, in combination with various dimensionless quantities. With these examples we expose the deep interrelationships imposed by Nature between disparate realms of the Universe and the amazing consequences of the unifying character of physical law.
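
    A numerical sketch in the spirit of this record: the Chandrasekhar mass scale expressed through the Planck mass, M_Ch ~ (ħc/G)^(3/2)/m_p² = m_Pl³/m_p², which is correct up to a factor of order unity.

        import math

        # Order-of-magnitude estimate of the Chandrasekhar mass in the spirit of the record:
        # M_Ch ~ (hbar c / G)^(3/2) / m_p^2 = m_Planck^3 / m_proton^2, up to an order-unity factor.

        hbar  = 1.054571817e-34  # J s
        c     = 299792458.0      # m/s
        G     = 6.67430e-11      # m^3 kg^-1 s^-2
        m_p   = 1.67262192e-27   # proton mass, kg
        m_sun = 1.989e30         # solar mass, kg

        m_planck = math.sqrt(hbar * c / G)
        m_ch_estimate = m_planck**3 / m_p**2
        print(f"Planck mass          ~ {m_planck:.3e} kg")
        print(f"Chandrasekhar scale  ~ {m_ch_estimate:.3e} kg ~ {m_ch_estimate/m_sun:.2f} M_sun")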

  20. Ruzsa's Constant on Additive Functions

    Institute of Scientific and Technical Information of China (English)

    Jin Hui FANG; Yong Gao CHEN

    2013-01-01

    A function f: N → R is called additive if f(mn) = f(m) + f(n) for all m, n with (m, n) = 1. Let μ(x) = max_{n≤x} (f(n) − f(n+1)) and ν(x) = max_{n≤x} (f(n+1) − f(n)). In 1979, Ruzsa proved that there exists a constant c such that for any additive function f, μ(x) ≤ c·ν(x²) + c_f, where c_f is a constant depending only on f. Denote by Raf the least such constant c. We call Raf Ruzsa's constant on additive functions. In this paper, we prove that Raf ≤ 20.

  1. Naturally Time Dependent Cosmological Constant

    CERN Document Server

    Gregori, A

    2004-01-01

    In the light of the proposal of hep-th/0207195, we discuss in detail the issue of the cosmological constant, explaining how string theory can naturally predict the experimentally observed value, without low-energy supersymmetry.

  2. 2-harmonic Submanifolds in a Quasi Constant Holomorphic Sectional Curvature Space

    Institute of Scientific and Technical Information of China (English)

    ZHU Jing-yong; SONG Wei-dong

    2013-01-01

    In the present paper, the authors study totally real 2-harmonic submanifolds in a quasi constant holomorphic sectional curvature space and obtain a Simons' type integral inequality for compact submanifolds as well as some pinching theorems on the second fundamental form.

  3. Inflation and the cosmological constant

    Directory of Open Access Journals (Sweden)

    FENG Chaojun

    2014-08-01

    Full Text Available By assuming the cosmological “constant” is no longer a constant during the inflation epoch, it is found that the cosmological constant fine-tuning problem is solved. Meanwhile, inflation models could predict a large tensor-to-scalar ratio, a correct power spectral index and a larger running of it. Furthermore, the e-folding number is large enough to overcome the horizon and flatness problems in Big Bang cosmology.

  4. Quantum Exclusion of Positive Cosmological Constant?

    CERN Document Server

    Dvali, Gia

    2014-01-01

    We show that a positive cosmological constant is incompatible with the quantum-corpuscular resolution of de Sitter metric in form of a coherent state. The reason is very general and is due to the quantum self-destruction of the coherent state because of the scattering of constituent graviton quanta. This process creates an irreversible quantum clock, which precludes eternal de Sitter. It also eliminates the possibility of Boltzmann brains and Poincare recurrences. This effect is expected to be part of any microscopic theory that takes into account the quantum corpuscular structure of the cosmological background. This observation puts the cosmological constant problem in a very different light, promoting it, from a naturalness problem, into a question of quantum consistency. We are learning that quantum gravity cannot tolerate exceedingly-classical sources.

  5. Learning Read-constant Polynomials of Constant Degree modulo Composites

    DEFF Research Database (Denmark)

    Chattopadhyay, Arkadev; Gavaldá, Richard; Hansen, Kristoffer Arnsfelt;

    2011-01-01

    Boolean functions that have constant degree polynomial representation over a fixed finite ring form a natural and strict subclass of the complexity class ACC^0. They are also precisely the functions computable efficiently by programs over fixed and finite nilpotent groups. This class is not known to be learnable in any reasonable learning model. In this paper, we provide a deterministic polynomial time algorithm for learning Boolean functions represented by polynomials of constant degree over arbitrary finite rings from membership queries, with the additional constraint that each variable

  6. Effective cosmological constant induced by stochastic fluctuations of Newton's constant

    Science.gov (United States)

    de Cesare, Marco; Lizzi, Fedele; Sakellariadou, Mairi

    2016-09-01

    We consider implications of the microscopic dynamics of spacetime for the evolution of cosmological models. We argue that quantum geometry effects may lead to stochastic fluctuations of the gravitational constant, which is thus considered as a macroscopic effective dynamical quantity. Consistency with Riemannian geometry entails the presence of a time-dependent dark energy term in the modified field equations, which can be expressed in terms of the dynamical gravitational constant. We suggest that the late-time accelerated expansion of the Universe may be ascribed to quantum fluctuations in the geometry of spacetime rather than the vacuum energy from the matter sector.

  7. Effective cosmological constant induced by stochastic fluctuations of Newton's constant

    CERN Document Server

    de Cesare, Marco; Sakellariadou, Mairi

    2016-01-01

    We consider implications of the microscopic dynamics of spacetime for the evolution of cosmological models. We argue that quantum geometry effects may lead to stochastic fluctuations of the gravitational constant, which is thus considered as a macroscopic effective dynamical quantity. Consistency with Riemannian geometry entails the presence of a time-dependent dark energy term in the modified field equations, which can be expressed in terms of the dynamical gravitational constant. We suggest that the late-time accelerated expansion of the Universe may be ascribed to quantum fluctuations in the geometry of spacetime rather than the vacuum energy from the matter sector.

  8. Silicon nanocrystals. Fundamentals, synthesis and applications

    Energy Technology Data Exchange (ETDEWEB)

    Pavesi, Lorenzo [Trento Univ., Povo (Italy). Physics Dept.; Turan, Rasit (eds.) [Middle East Technical Univ., Ankara (Turkey). Dept. of Physics

    2010-07-01

    This unique collection of knowledge represents a comprehensive treatment of the fundamental and practical consequences of size reduction in silicon crystals. This clearly structured reference introduces readers to the optical, electrical and thermal properties of silicon nanocrystals that arise from their greatly reduced dimensions. It covers their synthesis and characterization from both chemical and physical viewpoints, including ion implantation, colloidal synthesis and vapor deposition methods. A major part of the text is devoted to applications in microelectronics as well as photonics and nanobiotechnology, making this of great interest to the high-tech industry. (orig.)

  9. Constraining the fundamental interactions and couplings with Eoetvoes experiments

    Energy Technology Data Exchange (ETDEWEB)

    Kraiselburd, Lucila, E-mail: lkrai@fcaglp.fcaglp.unlp.edu.ar [Grupo de Gravitacion, Astrofisica y Cosmologia, Facultad de Ciencias Astronomicas y Geofisicas, Universidad Nacional de La Plata, Paseo del Bosque S/N (1900) La Plata (Argentina); Vucetich, Hector, E-mail: vucetich@fcaglp.unlp.edu.ar [Grupo de Gravitacion, Astrofisica y Cosmologia, Facultad de Ciencias Astronomicas y Geofisicas, Universidad Nacional de La Plata, Paseo del Bosque S/N (1900) La Plata (Argentina)

    2012-11-15

    Upper bounds for the violation of the Weak Equivalence Principle (WEP) by the fundamental interactions have been given before. We now recompute the limits on the parameters measuring the strength of the violation with the whole set of high accuracy Eoetvoes experiments. Besides, limits on spatial variation of the fundamental constants α, sin²θ_W and v, the vacuum expectation value of the Higgs field, are found in a model independent way. Limits on other parameters in the gauge sector are also found from the structure of the Standard Model.

  10. Fundamental tests of nature with cooled and stored exotic ions

    CERN Document Server

    CERN. Geneva

    2014-01-01

    The presentation will concentrate on recent applications with exciting results of Penning traps in atomic and nuclear physics with cooled and stored exotic ions. These are high-accuracy mass measurements of short-lived radionuclides, g-factor determinations of the bound-electron in highly-charged, hydrogen-like ions and g-factor measurements of the proton and antiproton. The experiments are dedicated, e.g., to astrophysics studies and to tests of fundamental symmetries in the case of mass measurements on radionuclides, and to the determination of fundamental constants and a CPT test in the case of the g-factor measurements.

  11. Astrophysical probes of fundamental physics

    Energy Technology Data Exchange (ETDEWEB)

    Martins, C.J.A.P. [Centro de Astrofisica, Universidade do Porto, Rua das Estrelas, 4150-762 Porto (Portugal); DAMTP, University of Cambridge, Wilberforce Road, Cambridge CB3 0WA (United Kingdom)

    2009-10-15

    I review the motivation for varying fundamental couplings and discuss how these measurements can be used to constrain fundamental physics scenarios that would otherwise be inaccessible to experiment. I highlight the current controversial evidence for varying couplings and present some new results. Finally I focus on the relation between varying couplings and dark energy, and explain how varying coupling measurements might be used to probe the nature of dark energy, with some advantages over standard methods. In particular I discuss what can be achieved with future spectrographs such as ESPRESSO and CODEX.

  12. The fundamentals of mathematical analysis

    CERN Document Server

    Fikhtengol'ts, G M

    1965-01-01

    The Fundamentals of Mathematical Analysis, Volume 1 is a textbook that provides a systematic and rigorous treatment of the fundamentals of mathematical analysis. Emphasis is placed on the concept of limit which plays a principal role in mathematical analysis. Examples of the application of mathematical analysis to geometry, mechanics, physics, and engineering are given. This volume is comprised of 14 chapters and begins with a discussion on real numbers, their properties and applications, and arithmetical operations over real numbers. The reader is then introduced to the concept of function, i

  13. Fundamentals of multicore software development

    CERN Document Server

    Pankratius, Victor; Tichy, Walter F

    2011-01-01

    With multicore processors now in every computer, server, and embedded device, the need for cost-effective, reliable parallel software has never been greater. By explaining key aspects of multicore programming, Fundamentals of Multicore Software Development helps software engineers understand parallel programming and master the multicore challenge. Accessible to newcomers to the field, the book captures the state of the art of multicore programming in computer science. It covers the fundamentals of multicore hardware, parallel design patterns, and parallel programming in C++, .NET, and Java. It

  14. Is the Notion of Time Really Fundamental?

    Directory of Open Access Journals (Sweden)

    Florian Girelli

    2011-06-01

    Full Text Available From the physics point of view, time is now best described through General Relativity as part of space-time, which is a dynamical object encoding gravity. Time possesses also some intrinsic irreversibility due to thermodynamics and quantum mechanical effects. This irreversibility can look puzzling since time-like loops (and hence time machines) can appear in General Relativity (for example in the Gödel universe, a solution of Einstein’s equations). We take this apparent discrepancy as a warning bell, pointing out that time as we understand it might not be fundamental and that whatever theory lies beyond General Relativity may not include time as we know it as a fundamental structure. We propose therefore, following the philosophy of analog models of gravity, that time and gravity might not be fundamental per se, but only emergent features. We illustrate our proposal using a toy-model where we show how the Lorentzian signature and Nordström gravity (a diffeomorphism-invariant scalar gravity theory) can emerge from a timeless non-dynamical space. This article received the fourth prize at the essay competition of the Foundational Questions Institute on the nature of time.

  15. Fundamental and composite scalars from extra dimensions

    Energy Technology Data Exchange (ETDEWEB)

    Aranda, Alfredo [Dual C-P Institute of High Energy Physics, Facultad de Ciencias, Universidad de Colima, Bernal Diaz del Castillo 340, Colima, Colima (Mexico)], E-mail: fefo@ucol.mx; Diaz-Cruz, J.L. [Dual C-P Institute of High Energy Physics, Facultad de Ciencias, Universidad de Colima, Bernal Diaz del Castillo 340, Colima, Colima (Mexico); Dual C-P Institute of High Energy Physics, Facultad de Ciencias Fisico-Matematicas, BUAP, Apdo. Postal 1364, C.P. 72000 Puebla, Pue (Mexico)], E-mail: lorenzo.diaz@fcfm.buap.mx; Hernandez-Sanchez, J. [Dual C-P Institute of High Energy Physics, Centro de Investigacion en Matematicas, Universidad Autonoma del Estado de Hidalgo, Carretera Pachuca-Tulancingo km. 4.5, C.P. 42184, Pachuca, Hidalgo (Mexico)], E-mail: jaimeh@uaeh.edu.mx; Noriega-Papaqui, R. [Dual C-P Institute of High Energy Physics, Instituto de Fisica, Universidad Nacional Autonoma de Mexico, Apdo. Postal 20-364, 01000 Mexico D.F. (Mexico)], E-mail: rnoriega@fisica.unam.mx

    2007-12-13

    We discuss a scenario consisting of an effective 4D theory containing fundamental and composite fields. The strong dynamics sector responsible for the compositeness is assumed to be of extra dimensional origin. In the 4D effective theory the SM fermion and gauge fields are taken as fundamental fields. The scalar sector of the theory resembles a bosonic topcolor in the sense that there are two scalar Higgs fields, a composite scalar field and a fundamental gauge-Higgs unification scalar. A detailed analysis of the scalar spectrum is presented in order to explore the parameter space consistent with experiment. It is found that, under the model assumptions, the acceptable parameter space is quite constrained. As a part of our phenomenological study of the model, we evaluate the branching ratio of the lightest Higgs boson and find that our model predicts a large FCNC mode h → tc, which can be as large as O(10^-3). Similarly, a large BR for the top FCNC decay is obtained, namely BR(t → c+H) ≈ 10^-4.

  16. Fundamental Plane of Sunyaev-Zeldovich clusters

    CERN Document Server

    Afshordi, Niayesh

    2007-01-01

    Sunyaev-Zel'dovich (SZ) cluster surveys are considered among the most promising methods for probing dark energy up to large redshifts. However, their premise is hinged upon an accurate mass-observable relationship, which could be affected by the (rather poorly understood) physics of the intracluster gas. In this letter, using a semi-analytic model of the intracluster gas that accommodates various theoretical uncertainties, I develop a Fundamental Plane relationship between the observed size, thermal energy, and mass of galaxy clusters. In particular, I find that M ~ (Y_{SZ}/R_{SZ,2})^{3/4}, where M is the mass, Y_{SZ} is the total SZ flux or thermal energy, and R_{SZ,2} is the SZ half-light radius of the cluster. I show that, within this model, using the Fundamental Plane relationship reduces the (systematic+random) errors in mass estimates to 14%, from 22% for a simple mass-flux relationship. Since measurement of the cluster sizes is an inevitable part of observing the SZ clusters, the Fundamental Plane rela...
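
    As a small worked illustration of what the quoted scaling implies (a simple error-propagation exercise, not a figure from the paper): since M ~ (Y_{SZ}/R_{SZ,2})^{3/4}, independent fractional uncertainties combine roughly as

    $$ \frac{\delta M}{M} \simeq \frac{3}{4}\sqrt{\left(\frac{\delta Y_{SZ}}{Y_{SZ}}\right)^{2} + \left(\frac{\delta R_{SZ,2}}{R_{SZ,2}}\right)^{2}}, $$

    so, for example, 10% uncertainties in both the SZ flux and the SZ half-light radius would translate into roughly an 11% mass uncertainty.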

  17. Critique of Coleman's Theory of the Vanishing Cosmological Constant

    Science.gov (United States)

    Susskind, Leonard

    In these lectures I would like to review some of the criticisms of the Coleman wormhole theory of the vanishing cosmological constant. In particular, I would like to focus on the most fundamental assumption that the path integral over topologies defines a probability for the cosmological constant which has the form EXP(A), with A being the Baum-Hawking-Coleman saddle point. Coleman argues that the euclidean path integral over all geometries may be dominated by special configurations which consist of large smooth "spheres" connected by any number of narrow wormholes. Formally summing up such configurations gives a very divergent expression for the path integral…

  18. Einstein Manifolds, Abelian Instantons, Bundle Reduction, and the Cosmological Constant

    CERN Document Server

    Soo, C P

    2001-01-01

    The anti-self-dual projection of the spin connections of certain four-dimensional Einstein manifolds can be Abelian in nature. These configurations signify bundle reductions. By a theorem of Kobayashi and Nomizu such a process is predicated on the existence of a covariantly constant field. It turns out that even without fundamental Higgs fields and other physical matter, gravitational self-interactions can generate this mechanism if the cosmological constant is non-vanishing. This article identifies the order parameter, and clarifies how these Abelian instanton solutions are associated with a Higgs triplet which causes the bundle reduction from SO(3) gauge group to U(1).

  19. Mirror QCD and Cosmological Constant

    CERN Document Server

    Pasechnik, Roman; Teryaev, Oleg

    2016-01-01

    An analog of the Quantum Chromodynamics (QCD) sector known as mirror QCD (mQCD) can affect the cosmological evolution and help in resolving the Cosmological Constant problem. In this work, we explore an intriguing possibility for a compensation of the negative QCD vacuum contribution to the ground state energy density of the universe by means of a positive contribution from the chromomagnetic gluon condensate in mQCD. The trace anomaly compensation condition and the form of the mQCD coupling constant in the infrared limit have been proposed by analysing a partial non-perturbative solution of the Einstein-Yang-Mills equations of motion.

  20. Bouncing universes with varying constants

    Energy Technology Data Exchange (ETDEWEB)

    Barrow, John D [DAMTP, Centre for Mathematical Sciences, Cambridge University, Wilberforce Road, Cambridge CB3 0WA (United Kingdom); Kimberly, Dagny [Theoretical Physics, Blackett Laboratory, Imperial College, Prince Consort Road, London SW7 2BZ (United Kingdom); Magueijo, Joao [Theoretical Physics, Blackett Laboratory, Imperial College, Prince Consort Road, London SW7 2BZ (United Kingdom)

    2004-09-21

    We investigate the behaviour of exact closed bouncing Friedmann universes in theories with varying constants. We show that the simplest BSBM varying alpha theory leads to a bouncing universe. The value of alpha increases monotonically, remaining approximately constant during most of each cycle, but increasing significantly around each bounce. When dissipation is introduced we show that in each new cycle the universe expands for longer and to a larger size. We find a similar effect for closed bouncing universes in Brans-Dicke theory, where G also varies monotonically in time from cycle to cycle. Similar behaviour occurs also in varying speed of light theories.

  1. Bouncing Universes with Varying Constants

    CERN Document Server

    Barrow, J D; Magueijo, J; Barrow, John D.; Kimberly, Dagny; Magueijo, Joao

    2004-01-01

    We investigate the behaviour of exact closed bouncing Friedmann universes in theories with varying constants. We show that the simplest BSBM varying-alpha theory leads to a bouncing universe. The value of alpha increases monotonically, remaining approximately constant during most of each cycle, but increasing significantly around each bounce. When dissipation is introduced we show that in each new cycle the universe expands for longer and to a larger size. We find a similar effect for closed bouncing universes in Brans-Dicke theory, where $G$ also varies monotonically in time from cycle to cycle. Similar behaviour occurs also in varying speed of light theories.

  2. Gravitational Instantons and Cosmological Constant

    CERN Document Server

    Cyriac, Josily

    2015-01-01

    The cosmological dynamics of an otherwise empty universe in the presence of vacuum fields is considered. Quantum fluctuations at the Planck scale lead to a dynamical topology of space-time at very small length scales, which is dominated by compact gravitational instantons. The Planck scale vacuum energy acts as a source for the curvature of these compact gravitational instantons and decouples from the large scale energy momentum tensor of the universe, thus making the observable cosmological constant vanish. However, a Euclidean functional integral over all possible topologies of the gravitational instantons generates a small non-zero value for the large scale cosmological constant, which agrees with the present observations.

  3. Fundamentals: IVC and Computer Science

    NARCIS (Netherlands)

    Gozalvez, Javier; Haerri, Jerome; Hartenstein, Hannes; Heijenk, Gerhard J.; Kargl, Frank; Petit, Jonathan; Scheuermann, Björn; Tieler, Tessa; Altintas, O.; Dressler, F.; Hartenstein, H.; Tonguz, O.K.

    The working group on “Fundamentals: IVC and Computer Science” discussed the lasting value of achieved research results as well as potential future directions in the field of inter-vehicular communication. Two major themes ‘with variations’ were the dependence on a specific technology (particularly

  4. Fundamentals: IVC and Computer Science

    NARCIS (Netherlands)

    Gozalvez, Javier; Haerri, Jerome; Hartenstein, Hannes; Heijenk, Gerhard J.; Kargl, Frank; Petit, Jonathan; Scheuermann, Björn; Tieler, Tessa; Altintas, O.; Dressler, F.; Hartenstein, H.; Tonguz, O.K.

    2013-01-01

    The working group on “Fundamentals: IVC and Computer Science” discussed the lasting value of achieved research results as well as potential future directions in the field of inter-vehicular communication. Two major themes ‘with variations’ were the dependence on a specific technology (particularly

  5. Brake Fundamentals. Automotive Articulation Project.

    Science.gov (United States)

    Cunningham, Larry; And Others

    Designed for secondary and postsecondary auto mechanics programs, this curriculum guide contains learning exercises in seven areas: (1) brake fundamentals; (2) brake lines, fluid, and hoses; (3) drum brakes; (4) disc brake system and service; (5) master cylinder, power boost, and control valves; (6) parking brakes; and (7) trouble shooting. Each…

  6. Fundamentals of EU VAT law

    NARCIS (Netherlands)

    van Doesum, A.; van Kesteren, H.W.M.; van Norden, G.J.

    2016-01-01

    Fundamentals of EU VAT Law aims at providing a deep insight into the systematics, the functioning and the principles of the European Value Added Tax (VAT) system. VAT is responsible for generating approximately EUR 903 billion per year in tax revenues across the European Union – revenues that play a

  7. Fundamentals of Microelectronics Processing (VLSI).

    Science.gov (United States)

    Takoudis, Christos G.

    1987-01-01

    Describes a 15-week course in the fundamentals of microelectronics processing in chemical engineering, which emphasizes the use of very large scale integration (VLSI). Provides a listing of the topics covered in the course outline, along with a sample of some of the final projects done by students. (TW)

  8. Fundamentals of Welding. Teacher Edition.

    Science.gov (United States)

    Fortney, Clarence; And Others

    These instructional materials assist teachers in improving instruction on the fundamentals of welding. The following introductory information is included: use of this publication; competency profile; instructional/task analysis; related academic and workplace skills list; tools, materials, and equipment list; and 27 references. Seven units of…

  9. Status of Fundamental Physics Program

    Science.gov (United States)

    Lee, Mark C.

    2003-01-01

    Update of the Fundamental Physics Program. JEM/EF slip: 2-year delay. Reduced budget. Community support and advocacy led by Professor Nick Bigelow. Reprogramming led by Fred O'Callaghan/JPL team. LTMPF M1 mission (DYNAMX and SUMO). PARCS. Carrier re-baselined on JEM/EF.

  10. Different Variants of Fundamental Portfolio

    Directory of Open Access Journals (Sweden)

    Tarczyński Waldemar

    2014-06-01

    Full Text Available The paper proposes the fundamental portfolio of securities. This portfolio, which combines fundamental analysis with portfolio analysis, is an alternative to the classic Markowitz model. The method’s main idea is based on the use of the TMAI synthetic measure and, in limiting conditions, the use of risk and the portfolio’s rate of return in the objective function. Different variants of fundamental portfolio have been considered under an empirical study. The effectiveness of the proposed solutions has been related to the classic portfolio constructed with the help of the Markowitz model and the WIG20 market index’s rate of return. All portfolios were constructed with data on rates of return for 2005. Their effectiveness in 2006-2013 was then evaluated. The studied period comprises the end of the bull market, the 2007-2009 crisis, the 2010 bull market and the 2011 crisis. This allows for the evaluation of the solutions’ flexibility in various extreme situations. For the construction of the fundamental portfolio’s objective function and the TMAI, the study made use of financial and economic data on selected indicators retrieved from Notoria Serwis for 2005.

  11. Fundamental stellar properties from asteroseismology

    DEFF Research Database (Denmark)

    Silva Aguirre, V.; Casagrande, L.; Miglio, A.

    2013-01-01

    Accurate characterization of stellar populations is of prime importance to correctly understand the formation and evolution process of our Galaxy. The field of asteroseismology has been particularly successful in such an endeavor providing fundamental parameters for large samples of stars in diff...

  12. Fundamentals of Biomass pellet production

    DEFF Research Database (Denmark)

    Holm, Jens Kai; Henriksen, Ulrik Birk; Hustad, Johan Einar

    2005-01-01

    Pelletizing experiments along with modelling of the pelletizing process have been carried out with the aim of understanding the fundamental physico-chemical mechanisms that control the quality and durability of biomass pellets. A small-scale California pellet mill (25 kg/h) located with the Biomass...

  13. Energy informatics: Fundamentals and standardization

    Directory of Open Access Journals (Sweden)

    Biyao Huang

    2017-06-01

    Full Text Available Based on international standardization and power utility practices, this paper presents a preliminary and systematic study on the field of energy informatics and analyzes boundary expansion of information and energy system, and the convergence of energy system and ICT. A comprehensive introduction of the fundamentals and standardization of energy informatics is provided, and several key open issues are identified.

  14. Biological Computing Fundamentals and Futures

    OpenAIRE

    Akula, Balaji; Cusick, James

    2009-01-01

    The fields of computing and biology have begun to cross paths in new ways. In this paper a review of the current research in biological computing is presented. Fundamental concepts are introduced and these foundational elements are explored to discuss the possibilities of a new computing paradigm. We assume the reader to possess a basic knowledge of Biology and Computer Science

  15. Fundamentals: IVC and computer science

    NARCIS (Netherlands)

    Gozalvez, Javier; Haerri, Jerome; Hartenstein, Hannes; Heijenk, Geert; Kargl, Frank; Petit, Jonathan; Scheuermann, Björn; Tieler, Tessa; Altintas, O.; Dressler, F.; Hartenstein, H.; Tonguz, O.K.

    2013-01-01

    The working group on “Fundamentals: IVC and Computer Science” discussed the lasting value of achieved research results as well as potential future directions in the field of inter- vehicular communication. Two major themes ‘with variations’ were the dependence on a specific technology (particularly

  16. Determination of the fine-structure constant {alpha} by measuring the quotient of the Planck constant and the neutron mass

    Energy Technology Data Exchange (ETDEWEB)

    Krueger, E.; Nistler, W.; Weirauch, W. [Physikalisch-Technische Bundesanstalt, Braunschweig (Germany)

    1997-04-01

    Using a special high-precision apparatus at ILL the quotient h/m_n (h: Planck constant, m_n: neutron mass) has been measured. The value measured for h/m_n leads to α⁻¹ = 137.03601082(524) (relative uncertainty: 3.9·10⁻⁸). It was the first time that this fundamental constant has been determined by means of neutrons. The experiment, which had been running since 1981 in a preliminary version and since 1987 in the final version, which was finished in December 1996, is described. (author).
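
    For context (the standard relation exploited by h/m measurements in general, stated here as background rather than quoted from the record), a measured h/m_n is converted into the fine-structure constant through

    $$ \alpha^{2} = \frac{2 R_\infty}{c}\,\frac{h}{m_e} = \frac{2 R_\infty}{c}\,\frac{m_n}{m_e}\,\frac{h}{m_n}, $$

    where the Rydberg constant R_∞ and the mass ratio m_n/m_e are known with much smaller uncertainties, so the relative uncertainty of α is roughly half that of h/m_n.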

  17. Electronic measurement of the Boltzmann constant with a quantum-voltage-calibrated Johnson-noise thermometer

    NARCIS (Netherlands)

    Benz, Samuel; White, D. Rod; Qu, JiFeng; Rogalla, Horst; Tew, Weston

    2010-01-01

    Currently, the CODATA value of the Boltzmann constant is dominated by a single gas-based thermometry measurement with a relative standard uncertainty of 1.8×10⁻⁶ [P.J. Mohr, B.N. Taylor, D.B. Newell, CODATA recommended values of the fundamental physical constants: 2006, Rev. Mod. Phys. 80 (2008)
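
    As background on how a Johnson-noise thermometer connects electrical noise to the Boltzmann constant (the Nyquist relation, stated here for orientation rather than taken from this record): in the classical limit the mean-square thermal voltage noise across a resistor R in a bandwidth Δf is

    $$ \langle V^{2} \rangle = 4\,k\,T\,R\,\Delta f, $$

    so comparing the measured noise power of a sensing resistor at a known temperature against a quantum-accurate synthesized voltage reference yields k.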

  18. On determination of the geometric cosmological constant from the OPERA experiment of superluminal neutrinos

    CERN Document Server

    Yan, Mu-Lin; Huang, Wei; Xiao, Neng-Chao

    2011-01-01

    The recent OPERA experiment of superluminal neutrinos has deep consequences in cosmology. In cosmology a fundamental constant is the cosmological constant. From observations one can estimate the effective cosmological constant $\\Lambda_{eff}$ which is the sum of the quantum zero point energy $\\Lambda_{dark energy}$ and the geometric cosmological constant $\\Lambda$. The OPERA experiment can be applied to determine the geometric cosmological constant $\\Lambda$. This is the first time that the contributions of $\\Lambda$ and $\\Lambda_{dark energy}$ can be distinguished from each other by experiment. The determination is based on an explanation of the OPERA experiment in the framework of Special Relativity with de Sitter space-time symmetry.

  19. Decay Constants of Vector Mesons

    Institute of Scientific and Technical Information of China (English)

    LI Heng-Mei; WAN Shao-Long

    2008-01-01

    The light vector mesons are studied within the framework of the Bethe-Salpeter equation with the vector-vector type flat-bottom potential. The Bethe-Salpeter wavefunctions and the decay constants of the vector mesons are obtained. All the obtained results, fρ, fφ, and fK*, are in agreement with the experimental values, respectively.

  20. Determination of the Vibrational Constants of Some Diatomic Molecules: A Combined Infrared Spectroscopic and Quantum Chemical Third Year Chemistry Project.

    Science.gov (United States)

    Ford, T. A.

    1979-01-01

    In one option for this project, the rotation-vibration infrared spectra of a number of gaseous diatomic molecules were recorded, from which the fundamental vibrational wavenumber, the force constant, the rotation-vibration interaction constant, the equilibrium rotational constant, and the equilibrium internuclear distance were determined.…
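
    A sketch of the standard diatomic relations such a project would rely on (generic textbook formulas, not specific to the article): from the band origin and rotational structure one can extract, for reduced mass μ,

    $$ k \simeq \mu\,(2\pi c\,\tilde{\nu}_e)^{2}, \qquad B_v = B_e - \alpha_e\left(v+\tfrac{1}{2}\right), \qquad B_e = \frac{h}{8\pi^{2} c\,\mu\, r_e^{2}}, $$

    where ν̃_e is the vibrational wavenumber, k the force constant, α_e the rotation-vibration interaction constant, B_e the equilibrium rotational constant and r_e the equilibrium internuclear distance.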

  1. Characterization of a constant current charge detector.

    Science.gov (United States)

    Mori, Masanobu; Chen, Yongjing; Ohira, Shin-Ichi; Dasgupta, Purnendu K

    2012-12-15

    Ion exchangers are ionic equivalents of doped semiconductors, where cations and anions are equivalents of holes and electrons as charge carriers in solid state semiconductors. We have previously demonstrated an ion exchange membrane (IEM) based electrolyte generator which behaves similarly to a light-emitting diode and a charge detector (ChD) which behaves analogously to a p-i-n photodiode. The previous work on the charge detector, operated at a constant voltage, established its unique ability to respond to the charge represented by the analyte ions regardless of their redox properties, rather than to their conductivities. It also suggested that electric field induced dissociation (EFID) of water occurs at one or both ion exchange membranes. A logical extension is to study the behavior of the same device, operated in a constant current mode (ChD(i)). The evidence indicates that in the present operational mode the device also responds to the charge represented by the analytes and not their conductivity. Injection of a base into a charge detector operated in the constant voltage mode was not previously examined; in the constant current mode, base injection appears to inhibit EFID. The effects of applied current, analyte residence time and outer channel fluid composition were individually examined; analyte ions of different mobilities as well as affinities for the respective IEMs were used. While the exact behavior is somewhat dependent on the applied current, strong electrolytes, both acids and salts, respond the highest and in a near-uniform fashion, weak acids and their salts respond in an intermediate fashion and bases produce the lowest responses. A fundamentally asymmetric behavior is observed. Injected bases but not injected acids produce a poor response; the effects of incorporating a strong base as the electrolyte in the anion exchange membrane (AEM) compartment are far greater than incorporating an acid in the cation exchange membrane (CEM) compartment. These

  2. Mass anomalous dimension in SU(2) with six fundamental fermions

    CERN Document Server

    Bursa, Francis; Keegan, Liam; Pica, Claudio; Pickup, Thomas

    2010-01-01

    We simulate SU(2) gauge theory with six massless fundamental Dirac fermions. We measure the running of the coupling and the mass in the Schroedinger Functional scheme. We observe very slow running of the coupling constant. We measure the mass anomalous dimension gamma, and find it is between 0.135 and 1.03 in the range of couplings consistent with the existence of an IR fixed point.

  3. The 1% concordance Hubble constant

    Energy Technology Data Exchange (ETDEWEB)

    Bennett, C. L.; Larson, D.; Weiland, J. L. [Johns Hopkins University, 3400 North Charles Street, Baltimore, MD 21218 (United States); Hinshaw, G., E-mail: cbennett@jhu.edu [Department of Physics and Astronomy, University of British Columbia, Vancouver, BC V6T 1Z1 (Canada)

    2014-10-20

    The determination of the Hubble constant has been a central goal in observational astrophysics for nearly a hundred years. Extraordinary progress has occurred in recent years on two fronts: the cosmic distance ladder measurements at low redshift and cosmic microwave background (CMB) measurements at high redshift. The CMB is used to predict the current expansion rate through a best-fit cosmological model. Complementary progress has been made with baryon acoustic oscillation (BAO) measurements at relatively low redshifts. While BAO data do not independently determine a Hubble constant, they are important for constraints on possible solutions and checks on cosmic consistency. A precise determination of the Hubble constant is of great value, but it is more important to compare the high and low redshift measurements to test our cosmological model. Significant tension would suggest either uncertainties not accounted for in the experimental estimates or the discovery of new physics beyond the standard model of cosmology. In this paper we examine in detail the tension between the CMB, BAO, and cosmic distance ladder data sets. We find that these measurements are consistent within reasonable statistical expectations and we combine them to determine a best-fit Hubble constant of 69.6 ± 0.7 km s⁻¹ Mpc⁻¹. This value is based upon WMAP9+SPT+ACT+6dFGS+BOSS/DR11+H_0/Riess; we explore alternate data combinations in the text. The combined data constrain the Hubble constant to 1%, with no compelling evidence for new physics.
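
    For orientation only (a much-simplified stand-in for the full likelihood analysis used in the paper), the way independent determinations are combined can be illustrated by an inverse-variance weighted mean,

    $$ \bar{H}_0 = \frac{\sum_i H_{0,i}/\sigma_i^{2}}{\sum_i 1/\sigma_i^{2}}, \qquad \sigma(\bar{H}_0) = \left(\sum_i \frac{1}{\sigma_i^{2}}\right)^{-1/2}, $$

    which shows how combining several percent-level measurements can reach the quoted ~1% precision when the data sets are mutually consistent.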

  4. Varying Fine-Structure Constant and the Cosmological Constant Problem

    CERN Document Server

    Fujii, Y

    2003-01-01

    We start with a brief account of the latest analysis of the Oklo phenomenon providing the still most stringent constraint on time-variability of the fine-structure constant $\\alpha$. Comparing this with the recent result from the measurement of distant QSO's appears to indicate a non-uniform time-dependence, which we argue to be related to another recent finding of the accelerating universe. This view is implemented in terms of the scalar-tensor theory, applied specifically to the small but nonzero cosmological constant. Our detailed calculation shows that these two phenomena can be understood in terms of a common origin, a particular behavior of the scalar field, dilaton. We also sketch how this theoretical approach makes it appropriate to revisit non-Newtonian gravity featuring small violation of Weak Equivalence Principle at medium distances.

  5. Varying Fine-Structure Constant and the Cosmological Constant Problem

    Science.gov (United States)

    Fujii, Yasunori

    We start with a brief account of the latest analysis of the Oklo phenomenon providing the still most stringent constraint on time variability of the fine-structure constant α. Comparing this with the recent result from the measurement of distant QSO's appears to indicate a non-uniform time-dependence, which we argue to be related to another recent finding of the accelerating universe. This view is implemented in terms of the scalar-tensor theory, applied specifically to the small but nonzero cosmological constant. Our detailed calculation shows that these two phenomena can be understood in terms of a common origin, a particular behavior of the scalar field, dilaton. We also sketch how this theoretical approach makes it appropriate to revisit non-Newtonian gravity featuring small violation of Weak Equivalence Principle at medium distances.

  6. Fundamentals of Managing Reference Collections

    Science.gov (United States)

    Singer, Carol A.

    2012-01-01

    Whether a library's reference collection is large or small, it needs constant attention. Singer's book offers information and insight on best practices for reference collection management, no matter the size, and shows why managing without a plan is a recipe for clutter and confusion. In this very practical guide, reference librarians will learn:…

  7. Fundamentals of Managing Reference Collections

    Science.gov (United States)

    Singer, Carol A.

    2012-01-01

    Whether a library's reference collection is large or small, it needs constant attention. Singer's book offers information and insight on best practices for reference collection management, no matter the size, and shows why managing without a plan is a recipe for clutter and confusion. In this very practical guide, reference librarians will learn:…

  8. Constant-bandwidth constant-temperature hot-wire anemometer.

    Science.gov (United States)

    Ligeza, P

    2007-07-01

    A constant-temperature anemometer (CTA) enables the measurement of fast-changing velocity fluctuations. In the classical solution of CTA, the transmission band is a function of flow velocity. This is a minor drawback when the mean flow velocity does not significantly change, though it might lead to dynamic errors when flow velocity varies over a considerable range. A modification is outlined, whereby an adaptive controller is incorporated in the CTA system such that the anemometer's transmission band remains constant as a function of flow velocity. For that purpose, a second feedback loop is provided, and the output signal from the anemometer will regulate the controller's parameters such that the transmission bandwidth remains constant. The mathematical model of a CTA that has been developed and model testing data allow a thorough evaluation of the proposed solution. A modified anemometer can be used in measurements of high-frequency variable flows in a wide range of velocities. The proposed modification allows the minimization of dynamic measurement errors.

  9. On the Stability of Fundamental Couplings in the Galaxy

    CERN Document Server

    João, S M; Mota, I S A B; Vianez, P M T

    2015-01-01

    Astrophysical tests of the stability of Nature's fundamental couplings are a key probe of the standard paradigms in fundamental physics and cosmology. In this report we discuss updated constraints on the stability of the fine-structure constant $\\alpha$ and the proton-to-electron mass ratio $\\mu=m_p/m_e$ within the Galaxy. We revisit and improve upon the analysis by Truppe {\\it et al.} by allowing for the possibility of simultaneous variations of both couplings and also by combining them with the recent measurements by Levshakov {\\it et al.} By considering representative unification scenarios we find no evidence for variations of $\\alpha$ at the 0.4 ppm level, and of $\\mu$ at the 0.6 ppm level; if one uses the Levshakov bound on $\\mu$ as a prior, the $\\alpha$ bound is improved to 0.1 ppm. We also highlight how these measurements can constrain (and discriminate among) several fundamental physics paradigms.

  10. COMPARATIVE ANALYSIS BETWEEN THE FUNDAMENTAL AND TECHNICAL ANALYSIS OF STOCKS

    Directory of Open Access Journals (Sweden)

    Nada Petrusheva

    2016-04-01

    Full Text Available In the world of investing and trading, in order to have a definite advantage and constantly create profit, you need to have a strategic approach. Generally speaking, the two main schools of thought and strategies in financial markets are fundamental and technical analysis. Fundamental and technical analysis differ in several aspects, such as the way of functioning and execution, the time horizon used, the tools used and their objective. These differences lead to certain advantages and disadvantages of each of the analyses. Fundamental and technical analysis are also a subject of critical reviews by the academic and scientific community and many of these reviews concern the methods of their application, i.e. the possibility of combining the two analyses and using them complementarily to fully utilize their strengths and advantages.

  11. Fundamental neutron physics at LANSCE

    Energy Technology Data Exchange (ETDEWEB)

    Greene, G.

    1995-10-01

    Modern neutron sources and science share a common origin in mid-20th-century scientific investigations concerned with the study of the fundamental interactions between elementary particles. Since the time of that common origin, neutron science and the study of elementary particles have evolved into quite disparate disciplines. The neutron became recognized as a powerful tool for studying condensed matter with modern neutron sources being primarily used (and justified) as tools for neutron scattering and materials science research. The study of elementary particles has, of course, led to the development of rather different tools and is now dominated by activities performed at extremely high energies. Notwithstanding this trend, the study of fundamental interactions using neutrons has continued and remains a vigorous activity at many contemporary neutron sources. This research, like neutron scattering research, has benefited enormously by the development of modern high-flux neutron facilities. Future sources, particularly high-power spallation sources, offer exciting possibilities for continuing this research.

  12. THE FUNDAMENTS OF EXPLANATORY CAUSES

    Directory of Open Access Journals (Sweden)

    Lavinia Mihaela VLĂDILĂ

    2015-07-01

    Full Text Available The new Criminal Code brings into legal life the division of the causes removing the criminal feature of the offence into explanatory causes and non-attributable causes. This dichotomy is not without legal and factual fundaments and has been the subject of doctrinal debates ever since the period when the Criminal Code of 1969 was still in force. From our perspective, one of the possible legal fundaments of the explanatory causes results from the fact that the offence committed is based on the protection of a right at least equal to the one prejudiced by the act of aggression or salvation, by the legal obligation imposed, or by the victim’s consent.

  13. Fundamentals of condensed matter physics

    CERN Document Server

    Cohen, Marvin L

    2016-01-01

    Based on an established course and covering the fundamentals, central areas, and contemporary topics of this diverse field, Fundamentals of Condensed Matter Physics is a much-needed textbook for graduate students. The book begins with an introduction to the modern conceptual models of a solid from the points of view of interacting atoms and elementary excitations. It then provides students with a thorough grounding in electronic structure as a starting point to understand many properties of condensed matter systems - electronic, structural, vibrational, thermal, optical, transport, magnetic and superconductivity - and methods to calculate them. Taking readers through the concepts and techniques, the text gives both theoretically and experimentally inclined students the knowledge needed for research and teaching careers in this field. It features 200 illustrations, 40 worked examples and 150 homework problems for students to test their understanding. Solutions to the problems for instructors are available at w...

  14. Astrophysical Probes of Fundamental Physics

    CERN Document Server

    Martins, C J A P

    2006-01-01

    I review the theoretical motivation for varying fundamental couplings and discuss how these measurements can be used to constrain a number of fundamental physics scenarios that would otherwise be inaccessible to experiment. As a case study I will focus on the relation between varying couplings and dark energy, and explain how varying coupling measurements can be used to probe the nature of dark energy, with important advantages over the standard methods. Assuming that the current observational evidence for varying $\\alpha$ and $\\mu$ is correct, a several-sigma detection of dynamical dark energy is feasible within a few years, using currently operational ground-based facilities. With forthcoming instruments like CODEX, a high-accuracy reconstruction of the equation of state may be possible all the way up to redshift $z\\sim4$.

  15. Fundamental investigations of catalyst nanoparticles

    DEFF Research Database (Denmark)

    Elkjær, Christian Fink

    fundamental understanding of catalytic processes and our ability to make use of that understanding. This thesis presents fundamental studies of catalyst nanoparticles with particular focus on dynamic processes. Such studies often require atomic-scale characterization, because the catalytic conversion takes place on the molecular and atomic level. Transmission electron microscopy (TEM) has the ability to image nanostructures with atomic resolution and reveal the atomic configuration of the important nanoparticle surfaces. In the present work, TEM has been used to study nanoparticles in situ at elevated … different topics, each related to different aspects of nanoparticle dynamics and catalysis. The first topic is the reduction of a homogeneous solid state precursor to form the catalytically active phase which is metal nanoparticles on an inert support. Here, we have reduced Cu phyllosilicate to Cu on silica

  16. Fundamentals of estuarine physical oceanography

    CERN Document Server

    Bruner de Miranda, Luiz; Kjerfve, Björn; Castro Filho, Belmiro Mendes de

    2017-01-01

    This book provides an introduction to the complex system functions, variability and human interference in ecosystem between the continent and the ocean. It focuses on circulation, transport and mixing of estuarine and coastal water masses, which is ultimately related to an understanding of the hydrographic and hydrodynamic characteristics (salinity, temperature, density and circulation), mixing processes (advection and diffusion), transport timescales such as the residence time and the exposure time. In the area of physical oceanography, experiments using these water bodies as a natural laboratory and interpreting their circulation and mixing processes using theoretical and semi-theoretical knowledge are of fundamental importance. Small-scale physical models may also be used together with analytical and numerical models. The book highlights the fact that research and theory are interactive, and the results provide the fundamentals for the development of the estuarine research.

  17. Fundamentals of electronic systems design

    CERN Document Server

    Lienig, Jens

    2017-01-01

    This textbook covers the design of electronic systems from the ground up, from drawing and CAD essentials to recycling requirements. Chapter by chapter, it deals with the challenges any modern system designer faces: the design process and its fundamentals, such as technical drawings and CAD, electronic system levels, assembly and packaging issues and appliance protection classes, reliability analysis, thermal management and cooling, electromagnetic compatibility (EMC), all the way to recycling requirements and environmental-friendly design principles. Enables readers to face various challenges of designing electronic systems, including coverage from various engineering disciplines; Written to be accessible to readers of varying backgrounds; Uses illustrations extensively to reinforce fundamental concepts; Organized to follow essential design process, although chapters are self-contained and can be read in any order.

  18. Unified Theory of Fundamental Interactions

    Institute of Scientific and Technical Information of China (English)

    WU Ning

    2003-01-01

    Based on local gauge invariance, four different kinds of fundamental interactions in nature are unified in a theory which has SU(3)_C ⊗ SU(2)_L ⊗ U(1) ⊗_s Gravitational Gauge Group gauge symmetry. In this approach, gravitational field, like electromagnetic field, intermediate gauge field, and gluon field, is represented by gauge potential. Four kinds of fundamental interactions are formulated in a similar manner, and therefore can be unified in a direct or semi-direct product group. The model discussed in this paper is a renormalizable quantum model and can be regarded as an extension of the standard model to gravitational interactions, so it can be used to study quantum effects of gravitational interactions.

  19. Composing Europe's Fundamental Rights Area

    DEFF Research Database (Denmark)

    Storgaard, Louise Halleskov

    2015-01-01

    The article offers a perspective on how the objective of a strong and coherent European protection standard pursued by the fundamental rights amendments of the Lisbon Treaty can be achieved, as it proposes a discursive pluralistic framework to understand and guide the relationship between the EU Court of Justice and the European Court of Human Rights. It is argued that this framework – which is suggested as an alternative to the EU law approach to the Strasbourg system applied by the CJEU in Opinion 2/13 and its Charter-based case law – has a firm doctrinal, case law and normative basis. The article ends by addressing three of the most pertinent challenges to European fundamental rights protection through the prism of the proposed framework.

  20. New clinical findings on the longevity gene in disease, health, & longevity: Sirtuin 1 often decreases with advanced age & serious diseases in most parts of the human body, while relatively high & constant Sirtuin 1 regardless of age was first found in the hippocampus of supercentenarians.

    Science.gov (United States)

    Omura, Yoshiaki; Lu, Dominic P; Jones, Marilyn; O'Young, Brian; Duvvi, Harsha; Paluch, Kamila; Shimotsuura, Yasuhiro; Ohki, Motomu

    2011-01-01

    The expression of the longevity gene, Sirtuin 1, was non-invasively measured using Electro-Magnetic Field (EMF) resonance phenomenon between a known amount of polyclonal antibody of the C-terminal of Sirtuin 1 & Sirtuin 1 molecule inside of the body. Our measurement of over 100 human adult males and females, ranging between 20-122 years old, indicated that the majority of subjects had Sirtuin 1 levels of 5-10 pg BDORT units in most parts of the body. When Sirtuin 1 was less than 1 pg, the majority of the people had various degrees of tumors or other serious diseases. When Sirtuin 1 levels were less than 0.25 pg BDORT units, a high incidence of AIDS was also detected. Very few people had Sirtuin 1 levels of over 25 pg BDORT units in most parts of the body. We selected 7 internationally recognized supercentenarians who lived between 110-122 years old. To our surprise, most of their body Sirtuin 1 levels were between 2.5-10 pg BDORT units. However, by evaluating different parts of the brain, we found that both sides of the Hippocampus had a much higher amount of Sirtuin 1, between 25-100 pg BDORT units. With most subjects, Sirtuin 1 was found to be higher in the Hippocampus than in the rest of the body and remains relatively constant regardless of age. We found that Aspartame, plastic eye contact lenses, and asbestos in dental apparatuses, which reduce normal cell telomeres, also significantly reduce Sirtuin 1. In addition, we found that increasing normal cell telomere by electrical or mechanical stimulation of True ST-36 increases the expression of the Sirtuin 1 gene in people in which expression is low. This measurement of Sirtuin 1 in the Hippocampus has become a reliable indicator for detecting potential longevity of an individual.

  1. Fundamental Scaling Laws in Nanophotonics

    OpenAIRE

    Ke Liu; Shuai Sun; Arka Majumdar; Volker J. Sorger

    2016-01-01

    The success of information technology has clearly demonstrated that miniaturization often leads to unprecedented performance, and unanticipated applications. This hypothesis of “smaller-is-better” has motivated optical engineers to build various nanophotonic devices, although an understanding leading to fundamental scaling behavior for this new class of devices is missing. Here we analyze scaling laws for optoelectronic devices operating at micro and nanometer length-scale. We show that optoe...

  2. Fundamental Composite (Goldstone) Higgs Dynamics

    DEFF Research Database (Denmark)

    Cacciapaglia, G.; Sannino, Francesco

    2014-01-01

    We provide a unified description, both at the effective and fundamental Lagrangian level, of models of composite Higgs dynamics where the Higgs itself can emerge, depending on the way the electroweak symmetry is embedded, either as a pseudo-Goldstone boson or as a massive excitation of the condensate. We show that, in general, these states mix with repercussions on the electroweak physics and phenomenology. Our results will help clarify the main differences, similarities, benefits and shortcomings of the different ways one can naturally realize a composite nature of the electroweak sector … transforming according to the fundamental representation of the gauge group. This minimal choice enables us to use recent first principle lattice results to make the first predictions for the massive spectrum for models of composite (Goldstone) Higgs dynamics. These results are of the utmost relevance to guide…

  3. How does Planck’s constant influence the macroscopic world?

    Science.gov (United States)

    Yang, Pao-Keng

    2016-09-01

    In physics, Planck’s constant is a fundamental physical constant accounting for the energy-quantization phenomenon in the microscopic world. The value of Planck’s constant also determines at which length scale the quantum phenomenon becomes conspicuous. Some students think that if Planck’s constant were to have a larger value than it has now, the quantum effect would only become observable in a world with a larger size, whereas the macroscopic world might remain almost unchanged. After reasoning from some basic physical principles and theories, we found that doubling Planck’s constant might result in a radical change in the geometric sizes and apparent colors of macroscopic objects, the solar spectrum and luminosity, the climate and gravity on Earth, as well as energy conversion between light and materials such as the efficiency of solar cells and light-emitting diodes. From the discussions in this paper, students can appreciate how Planck’s constant affects various aspects of the world in which we are living now.
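
    One concrete way to see the size effect mentioned above (an elementary estimate consistent with, but not quoted from, the article): the Bohr radius and the Rydberg energy scale as

    $$ a_0 = \frac{4\pi\varepsilon_0 \hbar^{2}}{m_e e^{2}} \propto \hbar^{2}, \qquad E_{\mathrm{Ryd}} = \frac{m_e e^{4}}{2(4\pi\varepsilon_0)^{2}\hbar^{2}} \propto \hbar^{-2}, $$

    so doubling Planck’s constant (with the other constants held fixed) would make atoms, and hence macroscopic objects, roughly four times larger, while atomic transition energies would drop by a factor of four, shifting spectra and apparent colors.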

  4. Low uncertainty Boltzmann constant determinations and the kelvin redefinition.

    Science.gov (United States)

    Fischer, J

    2016-03-28

    At its 25th meeting, the General Conference on Weights and Measures (CGPM) approved Resolution 1 'On the future revision of the International System of Units, the SI', which sets the path towards redefinition of four base units at the next CGPM in 2018. This constitutes a decisive advance towards the formal adoption of the new SI and its implementation. Kilogram, ampere, kelvin and mole will be defined in terms of fixed numerical values of the Planck constant, elementary charge, Boltzmann constant and Avogadro constant, respectively. The effect of the new definition of the kelvin referenced to the value of the Boltzmann constant k is that the kelvin is equal to the change of thermodynamic temperature T that results in a change of thermal energy kT by 1.380 65×10⁻²³ J. A value of the Boltzmann constant suitable for defining the kelvin is determined by fundamentally different primary thermometers such as acoustic gas thermometers, dielectric constant gas thermometers, noise thermometers and the Doppler broadening technique. Progress to date of the measurements and further perspectives are reported. Necessary conditions to be met before proceeding with changing the definition are given. The consequences of the new definition of the kelvin on temperature measurement are briefly outlined. © 2016 The Author(s).
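
    For orientation, one of the primary routes listed above, acoustic gas thermometry, rests schematically on the ideal-gas speed of sound; the relation below is a textbook-level sketch, not the detailed procedure of any particular experiment.

```latex
% Zero-density speed of sound u_0 of a gas of molecular mass m (gamma_0 = 5/3 for a
% monatomic gas) measured at the triple-point-of-water temperature T_TPW:
u_0^{2} = \frac{\gamma_0 k\, T_{\mathrm{TPW}}}{m}
\qquad\Longrightarrow\qquad
k = \frac{m\, u_0^{2}}{\gamma_0\, T_{\mathrm{TPW}}} .
```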

  5. Three pion nucleon coupling constants

    CERN Document Server

    Arriola, E Ruiz; Perez, R Navarro

    2016-01-01

    There exist four pion nucleon coupling constants, $f_{\pi^0, pp}$, $-f_{\pi^0, nn}$, $f_{\pi^+, pn}/\sqrt{2}$ and $f_{\pi^-, np}/\sqrt{2}$ which coincide when up and down quark masses are identical and the electron charge is zero. While there is no reason why the pion-nucleon-nucleon coupling constants should be identical in the real world, one expects that the small differences might be pinned down from a sufficiently large number of independent and mutually consistent data. Our discussion provides a rationale for our recent determination $$f_p^2 = 0.0759(4) \, , \quad f_{0}^2 = 0.079(1) \,, \quad f_{c}^2 = 0.0763(6) \, , $$ based on a partial wave analysis of the $3\sigma$ self-consistent nucleon-nucleon Granada-2013 database comprising 6713 published data in the period 1950-2013.

  6. Recent variations of fundamental parameters and their implications for gravitation

    CERN Document Server

    Dent, Thomas

    2010-01-01

    We compare the sensitivity of a recent bound on time variation of the fine structure constant from optical clocks with bounds on time varying fundamental constants from atomic clocks sensitive to the electron-to-proton mass ratio, from radioactive decay rates in meteorites, and from the Oklo natural reactor. Tests of the Weak Equivalence Principle also lead to comparable bounds on present time variations of constants, as well as putting the strongest limits on variations tracking the gravitational potential. For recent time variations, the "winner in sensitivity" depends on possible relations between the variations of different couplings in the standard model of particle physics. WEP tests are currently the most sensitive within scenarios with unification of gauge interactions. A detection of time variation in atomic clocks would favour dynamical dark energy and put strong constraints on the dynamics of a cosmological scalar field.

  7. Thermocouple time constant measurement by cross power spectra

    Science.gov (United States)

    Strahle, W. C.; Muthukrishnan, M.

    1976-01-01

    A method of measuring thermocouple time constants is outlined which requires Fourier signal processing. In this method, two thermocouples of differing time constants are placed in a gas flow as closely as possible to one another, and the time constant of the first thermocouple is determined directly from the extremum of the imaginary part of the ratio of the ensemble averaged cross-power spectrum to the ensemble averaged auto-power spectrum of that thermocouple. A coherence function is given for assuring the quality of the data, and results are presented for an experimental test of the method. Some problems with the method are briefly noted.
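
    The processing chain lends itself to a short numerical sketch. The toy simulation below is our own illustration under assumed first-order sensor responses with made-up time constants; it only shows that the imaginary part of the cross-to-auto spectral ratio has an extremum near the reciprocal of a sensor time constant, and the indexing convention may differ from the paper's.

```python
# Toy simulation of the cross-spectral method (assumptions: first-order thermocouple
# responses, made-up time constants tau1 and tau2, surrogate broadband gas signal).
import numpy as np
from scipy import signal

rng = np.random.default_rng(0)
fs = 2000.0                                   # sampling rate [Hz]
t = np.arange(0, 200.0, 1.0 / fs)
gas = rng.standard_normal(t.size)             # surrogate gas-temperature fluctuations
tau1, tau2 = 0.005, 0.020                     # assumed sensor time constants [s]

def first_order(x, tau):
    """Apply a first-order lag 1/(1 + j*omega*tau) as a discrete-time filter."""
    b, a = signal.bilinear([1.0], [tau, 1.0], fs)
    return signal.lfilter(b, a, x)

x1 = first_order(gas, tau1) + 0.01 * rng.standard_normal(t.size)
x2 = first_order(gas, tau2) + 0.01 * rng.standard_normal(t.size)

f, p11 = signal.welch(x1, fs=fs, nperseg=4096)      # auto-power spectrum of sensor 1
_, p12 = signal.csd(x1, x2, fs=fs, nperseg=4096)    # cross-power spectrum

ratio_im = np.imag(p12 / p11)
f_ext = f[np.argmax(np.abs(ratio_im))]              # frequency of the extremum
# In this sketch's convention the extremum falls near omega = 1/tau2, i.e. f = 1/(2*pi*tau2).
print("recovered tau ~ %.4f s (tau2 = %.4f s)" % (1.0 / (2.0 * np.pi * f_ext), tau2))
```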

  8. Why isn't the solar constant a constant?

    CERN Document Server

    Li, K J; Xu, J C; Gao, P X; Yang, L H; Liang, H F; Zhan, L S

    2012-01-01

    In order to probe the mechanism of variations of the Solar Constant on the inter-solar-cycle scale, total solar irradiance (TSI, the so-called Solar Constant) in the time interval of 7 November 1978 to 20 September 2010 is decomposed into three components through empirical mode decomposition and time-frequency analyses. The first component is the rotation signal, accounting for 42.31% of the total variation of TSI, which is understood to be mainly caused by large magnetic structures, including sunspot groups. The second is an annual-variation signal, accounting for 15.17% of the total variation, the origin of which is not known at this point in time. Finally, the third is the inter-solar-cycle signal, accounting for 42.52%, which is inferred to be caused by the network magnetic elements in quiet regions, whose magnetic flux ranges from $(4.27-38.01)\times10^{19}$ Mx.

  9. Fine-structure constant: Is it really a constant?

    Science.gov (United States)

    Bekenstein, Jacob D.

    1982-03-01

    It is often claimed that the fine-structure "constant" α is shown to be strictly constant in time by a variety of astronomical and geophysical results. These constrain its fractional rate of change α̇/α to at least some orders of magnitude below the Hubble rate H₀. We argue that the conclusion is not as straightforward as claimed since there are good physical reasons to expect α̇/α<

  10. Dielectric constant of liquid alkanes and hydrocarbon mixtures

    Science.gov (United States)

    Sen, A. D.; Anicich, V. G.; Arakelian, T.

    1992-01-01

    The complex dielectric constants of n-alkanes with two to seven carbon atoms have been measured. The measurements were conducted using a slotted-line technique at 1.2 GHz and at atmospheric pressure. The temperature was varied from the melting point to the boiling point of the respective alkanes. The real part of the dielectric constant was found to decrease with increasing temperature and correlate with the change in the molar volume. An upper limit to all the loss tangents was established at 0.001. The complex dielectric constants of a few mixtures of liquid alkanes were also measured at room temperature. For a pentane-octane mixture the real part of the dielectric constant could be explained by the Clausius-Mossotti theory. For the mixtures of n-hexane-ethylacetate and n-hexane-acetone the real part of the dielectric constants could be explained by the Onsager theory extended to mixtures. The dielectric constant of the n-hexane-acetone mixture displayed deviations from the Onsager theory at the highest fractions of acetone. The dipole moments of ethylacetate and acetone were determined for dilute mixtures using the Onsager theory and were found to be in agreement with their accepted gas-phase values. The loss tangents of the mixtures exhibited a linear relationship with the volume fraction for low concentrations of the polar liquids.
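
    As a rough illustration of the Clausius-Mossotti-style volume-fraction mixing invoked for the pentane-octane data (our own sketch; the component permittivities below are approximate handbook-level values, not the paper's measurements):

```python
# Clausius-Mossotti (Lorentz-Lorenz) mixing rule for the static dielectric constant.
def clausius_mossotti_mixture(eps_components, volume_fractions):
    """Mixture dielectric constant from per-component values and volume fractions."""
    s = sum(phi * (eps - 1.0) / (eps + 2.0)
            for eps, phi in zip(eps_components, volume_fractions))
    return (1.0 + 2.0 * s) / (1.0 - s)

# Example: an equal-volume blend of two nonpolar alkanes (eps roughly 1.84 and 1.95).
print(clausius_mossotti_mixture([1.84, 1.95], [0.5, 0.5]))   # ~1.89
```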

  11. Fundamental Limits to Cellular Sensing

    Science.gov (United States)

    ten Wolde, Pieter Rein; Becker, Nils B.; Ouldridge, Thomas E.; Mugler, Andrew

    2016-03-01

    In recent years experiments have demonstrated that living cells can measure low chemical concentrations with high precision, and much progress has been made in understanding what sets the fundamental limit to the precision of chemical sensing. Chemical concentration measurements start with the binding of ligand molecules to receptor proteins, which is an inherently noisy process, especially at low concentrations. The signaling networks that transmit the information on the ligand concentration from the receptors into the cell have to filter this receptor input noise as much as possible. These networks, however, are also intrinsically stochastic in nature, which means that they will also add noise to the transmitted signal. In this review, we will first discuss how the diffusive transport and binding of ligand to the receptor sets the receptor correlation time, which is the timescale over which fluctuations in the state of the receptor, arising from the stochastic receptor-ligand binding, decay. We then describe how downstream signaling pathways integrate these receptor-state fluctuations, and how the number of receptors, the receptor correlation time, and the effective integration time set by the downstream network, together impose a fundamental limit on the precision of sensing. We then discuss how cells can remove the receptor input noise while simultaneously suppressing the intrinsic noise in the signaling network. We describe why this mechanism of time integration requires three classes (groups) of resources—receptors and their integration time, readout molecules, energy—and how each resource class sets a fundamental sensing limit. We also briefly discuss the scheme of maximum-likelihood estimation, the role of receptor cooperativity, and how cellular copy protocols differ from canonical copy protocols typically considered in the computational literature, explaining why cellular sensing systems can never reach the Landauer limit on the optimal trade
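
    The most frequently quoted form of this fundamental limit is the Berg-Purcell-type bound below; it is included here only as orientation for the review's discussion, and the order-one prefactor varies between derivations.

```latex
% Relative concentration-sensing error for a receptor of linear size a, ligand
% diffusion constant D, mean concentration c-bar, and integration time T:
\left(\frac{\delta c}{\bar{c}}\right)^{2} \;\gtrsim\; \frac{1}{\pi D a \bar{c}\, T}
```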

  12. Fundamental cycles and graph embeddings

    Institute of Scientific and Technical Information of China (English)

    2009-01-01

    In this paper, we investigate fundamental cycles in a graph G and their relations with graph embeddings. We show that a graph G may be embedded in an orientable surface with genus at least g if and only if, for any spanning tree T, there exists a sequence of fundamental cycles C_1, C_2, ..., C_{2g} with C_{2i-1} ∩ C_{2i} ≠ ∅ for 1 ≤ i ≤ g. In particular, among the β(G) fundamental cycles of any spanning tree T of a graph G, there are exactly 2γ_M(G) cycles C_1, C_2, ..., C_{2γ_M(G)} such that C_{2i-1} ∩ C_{2i} ≠ ∅ for 1 ≤ i ≤ γ_M(G), where β(G) and γ_M(G) are the Betti number and the maximum genus of G, respectively. This implies that it is possible to construct an orientable embedding with large genus of a graph G from an arbitrary spanning tree T (which may have a very large number of odd components in G\E(T)). This is different from the earlier work of Xuong and Liu, where spanning trees with a small number of odd components are needed. In fact, this makes a common generalization of Xuong, Liu and Fu et al. Furthermore, we show that (1) this result is useful for locating the maximum genus of a graph having a specific edge-cut; some known results for embedded graphs are also concluded; (2) the maximum genus problem may be reduced to the maximum matching problem. Based on this result and the algorithm of Micali-Vazirani, we present a new efficient algorithm to determine the maximum genus of a graph in O((β(G))^{5/2}) steps. Our method is straightforward and quite different from the algorithm of Furst, Gross and McGeoch, which depends on a result of Giles where the matroid parity method is needed.
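
    The basic object in the abstract, the set of β(G) fundamental cycles induced by a spanning tree, is straightforward to enumerate; the sketch below (our own illustration using networkx, not the paper's genus algorithm) builds one cycle per non-tree edge.

```python
# Sketch: enumerating the fundamental cycles of a graph G with respect to a spanning
# tree T -- one cycle per non-tree edge, beta(G) of them in total.
import networkx as nx

def fundamental_cycles(G):
    T = nx.minimum_spanning_tree(G)           # any spanning tree works here
    cycles = []
    for u, v in G.edges():
        if not T.has_edge(u, v):              # each chord closes exactly one cycle
            path = nx.shortest_path(T, u, v)  # tree path between the chord's endpoints
            cycles.append(path + [u])         # appending u closes the cycle
    return cycles

G = nx.petersen_graph()
cycles = fundamental_cycles(G)
print(len(cycles), "fundamental cycles; Betti number =",
      G.number_of_edges() - G.number_of_nodes() + 1)
```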

  13. Computing fundamentals introduction to computers

    CERN Document Server

    Wempen, Faithe

    2014-01-01

    The absolute beginner's guide to learning basic computer skills Computing Fundamentals, Introduction to Computers gets you up to speed on basic computing skills, showing you everything you need to know to conquer entry-level computing courses. Written by a Microsoft Office Master Instructor, this useful guide walks you step-by-step through the most important concepts and skills you need to be proficient on the computer, using nontechnical, easy-to-understand language. You'll start at the very beginning, getting acquainted with the actual, physical machine, then progress through the most common

  14. Computing fundamentals digital literacy edition

    CERN Document Server

    Wempen, Faithe

    2014-01-01

    Computing Fundamentals has been tailor made to help you get up to speed on your Computing Basics and help you get proficient in entry level computing skills. Covering all the key topics, it starts at the beginning and takes you through basic set-up so that you'll be competent on a computer in no time. You'll cover: Computer Basics & Hardware; Software; Introduction to Windows 7; Microsoft Office; Word processing with Microsoft Word 2010; Creating Spreadsheets with Microsoft Excel; Creating Presentation Graphics with PowerPoint; Connectivity and Communication; Web Basics; Network and Internet Privacy and Securit

  15. Photovoltaics fundamentals, technology and practice

    CERN Document Server

    Mertens, Konrad

    2013-01-01

    Concise introduction to the basic principles of solar energy, photovoltaic systems, photovoltaic cells, photovoltaic measurement techniques, and grid connected systems, overviewing the potential of photovoltaic electricity for students and engineers new to the topic After a brief introduction to the topic of photovoltaics' history and the most important facts, Chapter 1 presents the subject of radiation, covering properties of solar radiation, radiation offer, and world energy consumption. Chapter 2 looks at the fundamentals of semiconductor physics. It discusses the build-up of semiconducto

  16. Solid Lubrication Fundamentals and Applications

    Science.gov (United States)

    Miyoshi, Kazuhisa

    2001-01-01

    Solid Lubrication Fundamentals and Applications provides a description of the adhesion, friction, abrasion, and wear behavior of solid film lubricants and related tribological materials, including diamond and diamond-like solid films. The book details the properties of solid surfaces, clean surfaces, and contaminated surfaces, as well as discussing the structures and mechanical properties of natural and synthetic diamonds; chemical-vapor-deposited diamond film; and surface design and engineering toward wear-resistant, self-lubricating diamond films and coatings. The author provides selection and design criteria as well as applications for synthetic and natural coatings in the commercial, industrial and aerospace industries.

  17. Fundamentals of liquid crystal devices

    CERN Document Server

    Yang, Deng-Ke

    2014-01-01

    Revised throughout to cover the latest developments in the fast moving area of display technology, this 2nd edition of Fundamentals of Liquid Crystal Devices, will continue to be a valuable resource for those wishing to understand the operation of liquid crystal displays. Significant updates include new material on display components, 3D LCDs and blue-phase displays which is one of the most promising new technologies within the field of displays and it is expected that this new LC-technology will reduce the response time and the number of optical components of LC-modules. Prof. Yang is a pion

  18. Fundamentals of ultrasonic phased arrays

    CERN Document Server

    Schmerr, Lester W

    2014-01-01

    This book describes in detail the physical and mathematical foundations of ultrasonic phased array measurements. The book uses linear systems theory to develop a comprehensive model of the signals and images that can be formed with phased arrays. Engineers working in the field of ultrasonic nondestructive evaluation (NDE) will find in this approach a wealth of information on how to design, optimize and interpret ultrasonic inspections with phased arrays. The fundamentals and models described in the book will also be of significant interest to other fields, including the medical ultrasound and

  19. Fundamental Limits of Ultrathin Metasurfaces

    CERN Document Server

    Arbabi, Amir

    2014-01-01

    We present universal theoretical limits on the operation and performance of non-magnetic passive ultrathin metasurfaces. In particular, we prove that their local transmission, reflection, and polarization conversion coefficients are confined to limited regions of the complex plane. As a result, full control over the phase of the light transmitted through such metasurfaces cannot be achieved if the polarization of the light is not to be affected at the same time. We also establish fundamental limits on the maximum polarization conversion efficiency of these metasurfaces, and show that they cannot achieve more than 25% polarization conversion efficiency in transmission.

  20. Fundamentals of soft matter science

    CERN Document Server

    Hirst, Linda S

    2012-01-01

    ""The publication is written at a very fundamental level, which will make it easily readable for undergraduate students. It will certainly also be a valuable text for students and postgraduates in interdisciplinary programmes, as not only physical aspects, but also the chemistry and applications are presented and discussed. … The book is well illustrated, and I really do like the examples and pictures provided for simple demonstration experiments, which can be done during the lectures. Also, the experimental techniques chapter at the end of the book may be helpful. The question sections are he

  1. Fundamentals of spread spectrum modulation

    CERN Document Server

    Ziemer, Rodger E

    2007-01-01

    This lecture covers the fundamentals of spread spectrum modulation, which can be defined as any modulation technique that requires a transmission bandwidth much greater than the modulating signal bandwidth, independently of the bandwidth of the modulating signal. After reviewing basic digital modulation techniques, the principal forms of spread spectrum modulation are described. One of the most important components of a spread spectrum system is the spreading code, and several types and their characteristics are described. The most essential operation required at the receiver in a spread spect

  2. Communication technology update and fundamentals

    CERN Document Server

    Grant, August E

    2014-01-01

    A classic now in its 14th edition, Communication Technology Update and Fundamentals is the single best resource for students and professionals looking to brush up on how these technologies have developed, grown, and converged, as well as what's in store for the future. It begins by developing the communication technology framework-the history, ecosystem, and structure-then delves into each type of technology, including everything from mass media, to computers and consumer electronics, to networking technologies. Each chapter is written by faculty and industry experts who p

  3. Fundamental Laser Welding Process Investigations

    DEFF Research Database (Denmark)

    Bagger, Claus; Olsen, Flemming Ove

    1998-01-01

    In a number of systematic laboratory investigations the fundamental behavior of the laser welding process was analyzed by the use of normal video (30 Hz), high speed video (100 and 400 Hz) and photo diodes. Sensors were positioned to monitor the welding process from both the top side and the rear side of the specimen. Special attention has been given to the dynamic nature of the laser welding process, especially during unstable welding conditions. In one series of experiments, the stability of the process has been varied by changing the gap distance in lap welding. In another series...

  4. Electronic imaging fundamentals: basic theory.

    Science.gov (United States)

    Vizy, K N

    1983-01-01

    Introduction of the computer into the field of medical imaging, as typified by the extensive use of digital subtraction angiography (DSA), created an important need for a basic understanding of the principles of digital imaging. This paper reviews these fundamental principles, starting with the definition of images and the interaction of these images with television display systems, then continuing with a detailed description of the way in which imaging systems are specified. This work defines the basic terms and concepts that will be used throughout the contents of this issue.

  5. Foam engineering fundamentals and applications

    CERN Document Server

    2012-01-01

    Containing contributions from leading academic and industrial researchers, this book provides a much needed update of foam science research. The first section of the book presents an accessible summary of the theory and fundamentals of foams. This includes chapters on morphology, drainage, Ostwald ripening, coalescence, rheology, and pneumatic foams. The second section demonstrates how this theory is used in a wide range of industrial applications, including foam fractionation, froth flotation and foam mitigation. It includes chapters on suprafroths, flotation of oil sands, foams in enhancing petroleum recovery, Gas-liquid Mass Transfer in foam, foams in glass manufacturing, fire-fighting foam technology and consumer product foams.

  6. Reconstruction of fundamental SUSY parameters

    Energy Technology Data Exchange (ETDEWEB)

    P. M. Zerwas et al.

    2003-09-25

    We summarize methods and expected accuracies in determining the basic low-energy SUSY parameters from experiments at future e⁺e⁻ linear colliders in the TeV energy range, combined with results from LHC. In a second step we demonstrate how, based on this set of parameters, the fundamental supersymmetric theory can be reconstructed at high scales near the grand unification or Planck scale. These analyses have been carried out for minimal supergravity [confronted with GMSB for comparison], and for a string effective theory.

  7. Fundamentals of magnetism and electricity

    CERN Document Server

    Arya, SN

    2009-01-01

    Fundamentals of Magnetism and Electricity is a textbook on the physics of electricity, magnetism, and electromagnetic fields and waves. It is written mainly with the physics student in mind, although it will also be of use to students of electrical and electronic engineering. The approach is concise but clear, and the author has assumed that the reader will be familiar with the basic phenomena. The theory, however, is set out in a completely self-contained and coherent way and developed to the point where the reader can appreciate the beauty and coherence of the Maxwell equations.

  8. Quantum Uncertainty and Fundamental Interactions

    Directory of Open Access Journals (Sweden)

    Tosto S.

    2013-04-01

    Full Text Available The paper proposes a simplified theoretical approach to infer some essential concepts on the fundamental interactions between charged particles and their relative strengths at comparable energies by exploiting the quantum uncertainty only. The worth of the present approach relies on the way of obtaining the results, rather than on the results themselves: concepts today acknowledged as fingerprints of the electroweak and strong interactions appear indeed rooted in the same theoretical frame including also the basic principles of special and general relativity along with the gravity force.

  9. Autodesk Combustion 4 fundamentals courseware

    CERN Document Server

    Autodesk,

    2005-01-01

    Whether this is your first experience with Combustion software or you're upgrading to take advantage of the many new features and tools, this guide will serve as your ultimate resource to this all-in-one professional compositing application. Much more than a point-and-click manual, this guide explains the principles behind the software, serving as an overview of the package and associated techniques. Written by certified Autodesk training specialists for motion graphic designers, animators, and visual effects artists, Combustion 4 Fundamentals Courseware provides expert advice for all skill le

  10. Fundamentals of gas particle flow

    CERN Document Server

    Rudinger, G

    1980-01-01

    Fundamentals of Gas-Particle Flow is an edited, updated, and expanded version of a number of lectures presented on the "Gas-Solid Suspensions” course organized by the von Karman Institute for Fluid Dynamics. Materials presented in this book are mostly analytical in nature, but some experimental techniques are included. The book focuses on relaxation processes, including the viscous drag of single particles, drag in gas-particles flow, gas-particle heat transfer, equilibrium, and frozen flow. It also discusses the dynamics of single particles, such as particles in an arbitrary flow, in a r

  11. The fundamental solution of the Keldysh type operator

    Institute of Scientific and Technical Information of China (English)

    CHEN ShuXing

    2009-01-01

    In this paper we discuss the fundamental solution of the Keldysh type operator L_α u := ∂²u/∂x² + y ∂²u/∂y² + α ∂u/∂y, which is a basic mixed type operator different from the Tricomi operator. The fundamental solution of the Keldysh type operator with α > -1/2 is obtained. It is shown that the fundamental solution for such an operator generally has stronger singularity than that for the Tricomi operator. Particularly, the fundamental solution of the Keldysh type operator with α < 1/2 has to be defined by using the finite part of divergent integrals in the theory of distributions.

  13. Accurate lineshape spectroscopy and the Boltzmann constant.

    Science.gov (United States)

    Truong, G-W; Anstie, J D; May, E F; Stace, T M; Luiten, A N

    2015-10-14

    Spectroscopy has an illustrious history delivering serendipitous discoveries and providing a stringent testbed for new physical predictions, including applications from trace materials detection, to understanding the atmospheres of stars and planets, and even constraining cosmological models. Reaching fundamental-noise limits permits optimal extraction of spectroscopic information from an absorption measurement. Here, we demonstrate a quantum-limited spectrometer that delivers high-precision measurements of the absorption lineshape. These measurements yield a very accurate measurement of the excited-state (6P₁/₂) hyperfine splitting in Cs, and reveal a breakdown in the well-known Voigt spectral profile. We develop a theoretical model that accounts for this breakdown, explaining the observations to within the shot-noise limit. Our model enables us to infer the thermal velocity dispersion of the Cs vapour with an uncertainty of 35 p.p.m. within an hour. This allows us to determine a value for Boltzmann's constant with a precision of 6 p.p.m., and an uncertainty of 71 p.p.m.
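
    Schematically, Doppler-broadening thermometry of this kind starts from the Gaussian frequency width of a thermal absorber; the textbook relation below is only the starting point, and the paper's actual analysis refines the Voigt profile well beyond it.

```latex
% Line-of-sight Maxwell-Boltzmann velocity spread translates into a Gaussian frequency
% width sigma_nu about the transition frequency nu_0 for absorbers of mass m:
\sigma_{\nu} = \nu_0 \sqrt{\frac{k T}{m c^{2}}}
\qquad\Longrightarrow\qquad
k = \frac{m c^{2}}{T}\left(\frac{\sigma_{\nu}}{\nu_0}\right)^{2} .
```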

  14. Cryptography in constant parallel time

    CERN Document Server

    Applebaum, Benny

    2013-01-01

    Locally computable (NC0) functions are 'simple' functions for which every bit of the output can be computed by reading a small number of bits of their input. The study of locally computable cryptography attempts to construct cryptographic functions that achieve this strong notion of simplicity and simultaneously provide a high level of security. Such constructions are highly parallelizable and they can be realized by Boolean circuits of constant depth.This book establishes, for the first time, the possibility of local implementations for many basic cryptographic primitives such as one-way func

  15. Henry's law constants of polyols

    Directory of Open Access Journals (Sweden)

    S. Compernolle

    2014-05-01

    Full Text Available Henry's law constants (HLC) are derived for several polyols bearing between 2 and 6 hydroxyl groups, based on literature data for water activity, vapour pressure and/or solubility. Depending on the case, infinite dilution activity coefficients (IDACs), solid state pressures or activity coefficient ratios are obtained as intermediary results. For most compounds, these are the first values reported, while others compare favourably with literature data in most cases. Using these values and those from a previous work (Compernolle and Müller, 2014), an assessment is made of the partitioning of polyols, diacids and hydroxy acids to droplet and aqueous aerosol.
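
    For orientation, the link between the quantities named in the abstract can be written in one line; this is the standard textbook relation (unit conventions for Henry's law constants vary), not a formula quoted from the paper.

```latex
% Pressure-over-mole-fraction Henry's law constant of solute i in water, from the
% infinite dilution activity coefficient and the pure-compound saturation pressure:
H_{i} = \gamma_{i}^{\infty}\, p_{i}^{\mathrm{sat}}
```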

  16. Exact constants in approximation theory

    CERN Document Server

    Korneichuk, N

    1991-01-01

    This book is intended as a self-contained introduction for non-specialists, or as a reference work for experts, to the particular area of approximation theory that is concerned with exact constants. The results apply mainly to extremal problems in approximation theory, which in turn are closely related to numerical analysis and optimization. The book encompasses a wide range of questions and problems: best approximation by polynomials and splines; linear approximation methods, such as spline-approximation; optimal reconstruction of functions and linear functionals. Many of the results are base

  17. O que é Psicopatologia Fundamental

    Directory of Open Access Journals (Sweden)

    Manoel Tosta Berlinck

    Full Text Available If for the Romans position meant the place where a person or thing was located, for the Greeks the notion of position was of a much more relational nature. From the position determined by the posture of the body, they distinguished at least two others: that of the historian, who does not invent but has merely heard things said, and that of the theatre, which showed the human body in a natural state of pathos (suffering). If the psyche and the psychic apparatus are thought of as extensions of the immune system, and since, according to the author, pathos is always somatic, the psyche is, following the Socratic tradition, strictly corporeal. It is therefore proper to Fundamental Psychopathology to recognize the existence of multiple corporeal-discursive positions and to recognize that those who occupy other positions acknowledge the specificity of their own position. Starting from the concept of position and its unfolding into pathos and logos, the author develops his conception of Fundamental Psychopathology.

  18. The Fundamental Manifold of Spheroids

    CERN Document Server

    Zaritsky, D; Zabludoff, A I; Zaritsky, Dennis; Gonzalez, Anthony H.; Zabludoff, Ann I.

    2006-01-01

    We present a unifying empirical description of the structural and kinematic properties of all spheroids embedded in dark matter halos. We find that the stellar spheroidal components of galaxy clusters, which we call cluster spheroids (CSphs) and which are typically one hundred times the size of normal elliptical galaxies, lie on a "fundamental plane" as tight as that defined by ellipticals (rms in effective radius of ~0.07), but that has a different slope. The slope, as measured by the coefficient of the log(sigma) term, declines significantly and systematically between the fundamental planes of ellipticals, brightest cluster galaxies (BCGs), and CSphs. We attribute this decline primarily to a continuous change in M_e/L_e, the mass-to-light ratio within the effective radius r_e, with spheroid scale. The magnitude of the slope change requires that it arises principally from differences in the relative distributions of luminous and dark matter, rather than from stellar population differences such as in age and m...

  19. Mathematics for natural scientists fundamentals and basics

    CERN Document Server

    Kantorovich, Lev

    2016-01-01

    This book, the first in a two part series, covers a course of mathematics tailored specifically for physics, engineering and chemistry students at the undergraduate level. It is unique in that it begins with logical concepts of mathematics first encountered at A-level and covers them in thorough detail, filling in the gaps in students' knowledge and reasoning. Then the book aids the leap between A-level and university-level mathematics, with complete proofs provided throughout and all complex mathematical concepts and techniques presented in a clear and transparent manner. Numerous examples and problems (with answers) are given for each section and, where appropriate, mathematical concepts are illustrated in a physics context. This text gives an invaluable foundation to students and a comprehensive aid to lecturers. Mathematics for Natural Scientists: Fundamentals and Basics is the first of two volumes. Advanced topics and their applications in physics are covered in the second volume.

  20. Organic nanophotonics fundamentals and applications

    CERN Document Server

    Zhao, Yong Sheng

    2014-01-01

    This comprehensive text collects the progress made in recent years in the fabrication, processing, and performance of organic nanophotonic materials and devices. The first part of the book addresses photonic nanofabrications in a chapter on multiphoton processes in nanofabrication and microscopy imaging. The second part of the book is focused on nanoscale light sources for integrated nanophotonic circuits, and is composed of three chapters on organic nano/microcavities, organic laser materials, and polymer light-emitting electrochemical cells (LECs). The third part is focused on the interactio

  1. Kepler's Constant and WDS Orbit

    CERN Document Server

    Siregar, S

    2012-01-01

    The aim of this work is to find Kepler's constant by using polynomial regression of the angular separation ρ = ρ(t) and the position angle θ = θ(t). The Kepler's constant obtained is then used to derive the elements of the orbit. As a case study the angular separation and the position angle of WDS 00063+5826 and WDS 04403-5857 were investigated. For calculating the orbital elements the Thiele-Innes van den Bos method is used. The raw data for the angular separation ρ(t) and the position angle θ(t) are taken from the US Naval Observatory, Washington. This work also presents the masses and absolute bolometric magnitudes of each star. Both are main-sequence stars, with spectral class G5V for WDS 04403-5857 and G3V for WDS 00063+5826. The lifetimes of the primary and secondary stars of WDS 04403-5857 are both nearly 20 Gyr. The lifetimes of the primary and secondary stars of WDS 00063+5826 are 20 Gyr and 19 Gyr, respectively.
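
    The "Kepler's constant" step described above lends itself to a short sketch: fit smooth polynomials to the measured separation ρ(t) and position angle θ(t), then evaluate the areal-velocity constant C = ρ² dθ/dt. The code below is our own illustration with made-up measurements and an assumed regression degree, not the paper's data or reduction.

```python
# Rough sketch of estimating the areal-velocity ("Kepler's") constant C = rho^2 * dtheta/dt
# from polynomial fits; epochs, separations and angles below are hypothetical placeholders.
import numpy as np

t = np.array([1990.0, 1995.0, 2000.0, 2005.0, 2010.0])          # epoch [yr]
rho = np.array([0.52, 0.55, 0.57, 0.60, 0.62])                  # separation [arcsec]
theta = np.deg2rad(np.array([40.0, 44.5, 48.6, 52.4, 56.0]))    # position angle [rad]

p_rho = np.polynomial.Polynomial.fit(t, rho, deg=2)
p_theta = np.polynomial.Polynomial.fit(t, theta, deg=2)

epochs = np.linspace(t.min(), t.max(), 5)
C = p_rho(epochs) ** 2 * p_theta.deriv()(epochs)    # arcsec^2 / yr
print("areal constant estimates:", C)               # should come out roughly constant
```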

  2. Electrostatic accelerators fundamentals and applications

    CERN Document Server

    2005-01-01

    Electrostatic accelerators are an important and widespread subgroup within the broad spectrum of modern, large particle acceleration devices. They are specifically designed for applications that require high-quality ion beams in terms of energy stability and emittance at comparatively low energies (a few MeV). Their ability to accelerate virtually any kind of ion over a continuously tunable range of energies make them a highly versatile tool for investigations in many research fields including, but not limited to, atomic and nuclear spectroscopy, heavy ion reactions, accelerator mass spectroscopy as well as ion-beam analysis and modification. The book is divided into three parts. The first part concisely introduces the field of accelerator technology and techniques that emphasize their major modern applications. The second part treats the electrostatic accelerator per se: its construction and operational principles as well as its maintenance. The third part covers all relevant applications in which electrosta...

  3. Detector Fundamentals for Reachback Analysts

    Energy Technology Data Exchange (ETDEWEB)

    Karpius, Peter Joseph [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Myers, Steven Charles [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-08-03

    This presentation is a part of the DHS LSS spectroscopy course and provides an overview of the following concepts: detector system components, intrinsic and absolute efficiency, resolution and linearity, and operational issues and limits.

  4. Fundamental Hyperelastic Material Study Project

    Data.gov (United States)

    National Aeronautics and Space Administration — This research is part of an innovative effort to use hyperelastic materials to produce flexible and seamless aircraft structures that reduce drag and...

  5. Fundamentals of algebraic graph transformation

    CERN Document Server

    Ehrig, Hartmut; Prange, Ulrike; Taentzer, Gabriele

    2006-01-01

    Graphs are widely used to represent structural information in the form of objects and connections between them. Graph transformation is the rule-based manipulation of graphs, an increasingly important concept in computer science and related fields. This is the first textbook treatment of the algebraic approach to graph transformation, based on algebraic structures and category theory. Part I is an introduction to the classical case of graph and typed graph transformation. In Part II basic and advanced results are first shown for an abstract form of replacement systems, so-called adhesive high-level replacement systems based on category theory, and are then instantiated to several forms of graph and Petri net transformation systems. Part III develops typed attributed graph transformation, a technique of key relevance in the modeling of visual languages and in model transformation. Part IV contains a practical case study on model transformation and a presentation of the AGG (attributed graph grammar) tool envir...

  6. Graphs with constant μ and μ̄

    NARCIS (Netherlands)

    van Dam, E.R.; Haemers, W.H.

    1995-01-01

    A graph G has constant μ = μ(G) if any two vertices that are not adjacent have μ common neighbours. G has constant μ and μ̄ if G has constant μ = μ(G) and its complement Ḡ has constant μ̄ = μ(Ḡ). If such a graph is regular, then it is strongly regular, otherwise precisely two vertex degrees occur. We
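
    A direct way to test the defining property on a concrete graph (our own small checker, unrelated to the paper's classification results):

```python
# Checker: does a graph have constant mu, i.e. does every pair of non-adjacent
# vertices share the same number of common neighbours?
import networkx as nx
from itertools import combinations

def constant_mu(G):
    counts = {len(set(G[u]) & set(G[v]))
              for u, v in combinations(G.nodes(), 2) if not G.has_edge(u, v)}
    return counts.pop() if len(counts) == 1 else None   # None if mu is not constant

G = nx.petersen_graph()                  # strongly regular, so mu is constant
print(constant_mu(G), constant_mu(nx.complement(G)))    # -> 1 4
```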

  7. Cosmological constant and curved 5D geometry

    CERN Document Server

    Ito, M

    2002-01-01

    We study the value of the cosmological constant on a de Sitter brane embedded in five dimensions with positive, vanishing and negative bulk cosmological constant. In the case of a negative bulk cosmological constant, we show that a nonzero but tiny four-dimensional cosmological constant can be realized by a tiny deviation from the bulk curvature of the Randall-Sundrum model.

  8. Stability constant estimator user's guide

    Energy Technology Data Exchange (ETDEWEB)

    Hay, B.P.; Castleton, K.J.; Rustad, J.R.

    1996-12-01

    The purpose of the Stability Constant Estimator (SCE) program is to estimate aqueous stability constants for 1:1 complexes of metal ions with ligands by using trends in existing stability constant data. Such estimates are useful to fill gaps in existing thermodynamic databases and to corroborate the accuracy of reported stability constant values.

  9. Holographic dark energy with cosmological constant

    Science.gov (United States)

    Hu, Yazhou; Li, Miao; Li, Nan; Zhang, Zhenhui

    2015-08-01

    Inspired by the multiverse scenario, we study a heterotic dark energy model in which there are two parts, the first being the cosmological constant and the second being the holographic dark energy, thus this model is named the ΛHDE model. By studying the ΛHDE model theoretically, we find that the parameters d and Ω_hde are divided into a few domains in which the fate of the universe is quite different. We investigate dynamical behaviors of this model, and especially the future evolution of the universe. We perform fitting analysis on the cosmological parameters in the ΛHDE model by using the recent observational data. We find the model yields χ²_min = 426.27 when constrained by Planck+SNLS3+BAO+HST, comparable to the results of the HDE model (428.20) and the concordance ΛCDM model (431.35). At 68.3% CL, we obtain -0.07 < Ω_Λ0 < 0.68 and correspondingly 0.04 < Ω_hde0 < 0.79, implying that at present there is considerable degeneracy between the holographic dark energy and cosmological constant components in the ΛHDE model.
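
    In the usual holographic dark energy convention (sketched here only for orientation; the paper's precise definitions may differ in detail), the ΛHDE energy budget combines a constant piece with a horizon-scale piece:

```latex
% Dark energy as cosmological constant plus holographic component with parameter d,
% reduced Planck mass M_p, and infrared cutoff L (usually the future event horizon):
\rho_{\mathrm{de}} = \rho_{\Lambda} + \rho_{\mathrm{hde}},
\qquad
\rho_{\mathrm{hde}} = 3 d^{2} M_{p}^{2} / L^{2} .
```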

  10. Computability, complexity, and languages fundamentals of theoretical computer science

    CERN Document Server

    Davis, Martin D; Rheinboldt, Werner

    1983-01-01

    Computability, Complexity, and Languages: Fundamentals of Theoretical Computer Science provides an introduction to the various aspects of theoretical computer science. Theoretical computer science is the mathematical study of models of computation. This text is composed of five parts encompassing 17 chapters, and begins with an introduction to the use of proofs in mathematics and the development of computability theory in the context of an extremely simple abstract programming language. The succeeding parts demonstrate the performance of abstract programming language using a macro expa

  11. EU Law Autonomy Versus European Fundamental Rights Protection

    DEFF Research Database (Denmark)

    Storgaard, Louise Halleskov

    2015-01-01

    rights protection. It argues that the concerns for EU law autonomy expressed in the Opinion for the most part are unwarranted and that the Court, through the use of classic constitutionalist language, seeks to position EU law as the superior European fundamental rights regime. The article furthermore...

  12. Fundamental plant biology enabled by the space shuttle.

    Science.gov (United States)

    Paul, Anna-Lisa; Wheeler, Ray M; Levine, Howard G; Ferl, Robert J

    2013-01-01

    The relationship between fundamental plant biology and space biology was especially synergistic in the era of the Space Shuttle. While all terrestrial organisms are influenced by gravity, the impact of gravity as a tropic stimulus in plants has been a topic of formal study for more than a century. And while plants were parts of early space biology payloads, it was not until the advent of the Space Shuttle that the science of plant space biology enjoyed expansion that truly enabled controlled, fundamental experiments that removed gravity from the equation. The Space Shuttle presented a science platform that provided regular science flights with dedicated plant growth hardware and crew trained in inflight plant manipulations. Part of the impetus for plant biology experiments in space was the realization that plants could be important parts of bioregenerative life support on long missions, recycling water, air, and nutrients for the human crew. However, a large part of the impetus was that the Space Shuttle enabled fundamental plant science essentially in a microgravity environment. Experiments during the Space Shuttle era produced key science insights on biological adaptation to spaceflight and especially plant growth and tropisms. In this review, we present an overview of plant science in the Space Shuttle era with an emphasis on experiments dealing with fundamental plant growth in microgravity. This review discusses general conclusions from the study of plant spaceflight biology enabled by the Space Shuttle by providing historical context and reviews of select experiments that exemplify plant space biology science.

  13. Constant training in direct ophthalmoscopy

    Directory of Open Access Journals (Sweden)

    Younan HC

    2017-08-01

    Full Text Available Helen-Cara Younan, Rishi Iyer, Janaki Natasha Desai; Faculty of Medicine, Imperial College London, London, UK. We read with great interest the review by Ricci and Ferraz on the advances in training and practice in ophthalmoscopy simulation.1 As final year medical students, we have recently experienced direct ophthalmoscopy teaching and agree with the authors that “simulation is a helpful tool in ophthalmoscopy training”.1 Indeed, in our experience, simulation is useful in teaching a wide variety of clinical skills including venepuncture, intravenous cannulation, and catheterization. We were taught all of these clinical skills in our first clinical year of study through use of simulation models. With regards to our direct ophthalmoscopy teaching, we were first taught to recognize the normal retina and different retinal pathologies using images, before practicing our technique and recognition of those images in a model similar to the THELMA (The Human Eye Learning Model Assistant) described by the authors.1 However, we feel that the use of simulation models alone is not enough to provide confidence and competency in direct ophthalmoscopy among medical students. The authors conclude that “constant training is a well-known strategy for skill enhancement”,1 and we have found that a lack of constant training in direct ophthalmoscopy is evident. After learning venepuncture, cannulation, and catheterization on the simulation models, we were able to observe doctors performing these skills before performing them on patients either in the wards or in theatre. These are skills that we are constantly trained in across a wide variety of medical and surgical attachments. However, opportunities to observe and practice ophthalmoscopy during our attachments are more limited, and thus we are not continuing to use the skills we learn. Authors' reply: Lucas Holderegger Ricci,1 Caroline Amaral Ferraz2; 1Department of Ophthalmology, School of Medicine, Laureate

  14. Fundamental triangulation networks in Denmark

    Directory of Open Access Journals (Sweden)

    Borre Kai

    2014-04-01

    Full Text Available The first triangulation activity on Danish ground was carried out by the astronomer Tycho Brahe, who resided on the island of Hven. He wanted to determine the longitude difference of his observatory Uraniborg to Copenhagen. A by-product was a map of his island made in 1579. In 1761 the Royal Danish Academy of Sciences and Letters initiated a mapping project which was to be based on the principle of triangulation. Eventually 24 maps were printed in varying scales, predominantly 1:120 000. The last map was engraved in 1842. The Danish Grade Measurement initiated remeasurements and a redesign of the fundamental triangulation network. This network served scientific as well as cartographic purposes for more than a century. Only in the 1960s were all triangulation sides measured electronically. A combined least-squares adjustment followed in the 1970s

  15. Molecular imaging. Fundamentals and applications

    Energy Technology Data Exchange (ETDEWEB)

    Tian, Jie (ed.) [Chinese Academy of Sciences, Beijing (China). Intelligent Medical Research Center

    2013-07-01

    Covers a wide range of new theory, new techniques and new applications. Contributed by many experts in China. The editor has obtained the National Science and Technology Progress Award twice. "Molecular Imaging: Fundamentals and Applications" is a comprehensive monograph which describes not only the theory of the underlying algorithms and key technologies but also introduces a prototype system and its applications, bringing together theory, technology and applications. By explaining the basic concepts and principles of molecular imaging, imaging techniques, as well as research and applications in detail, the book provides both detailed theoretical background information and technical methods for researchers working in medical imaging and the life sciences. Clinical doctors and graduate students will also benefit from this book.

  16. Silicon photonics fundamentals and devices

    CERN Document Server

    Deen, M Jamal

    2012-01-01

    The creation of affordable high speed optical communications using standard semiconductor manufacturing technology is a principal aim of silicon photonics research. This would involve replacing copper connections with optical fibres or waveguides, and electrons with photons. With applications such as telecommunications and information processing, light detection, spectroscopy, holography and robotics, silicon photonics has the potential to revolutionise electronic-only systems. Providing an overview of the physics, technology and device operation of photonic devices using exclusively silicon and related alloys, the book includes: * Basic Properties of Silicon * Quantum Wells, Wires, Dots and Superlattices * Absorption Processes in Semiconductors * Light Emitters in Silicon * Photodetectors, Photodiodes and Phototransistors * Raman Lasers including Raman Scattering * Guided Lightwaves * Planar Waveguide Devices * Fabrication Techniques and Material Systems Silicon Photonics: Fundamentals and Devices outlines ...

  17. Fundamentals of reversible flowchart languages

    DEFF Research Database (Denmark)

    Yokoyama, Tetsuo; Axelsen, Holger Bock; Glück, Robert

    2016-01-01

    This paper presents the fundamentals of reversible flowcharts. They are intended to naturally represent the structure and control flow of reversible (imperative) programming languages in a simple computation model, in the same way classical flowcharts do for conventional languages. Although reversible flowcharts are superficially similar to classical flowcharts, there are crucial differences: atomic steps are limited to locally invertible operations, and join points require an explicit orthogonalizing conditional expression. Despite these constraints, we show that structured reversible flowcharts are as expressive as unstructured ones, as shown by a reversible version of the classic Structured Program Theorem. We illustrate how reversible flowcharts can be concretized with two example programming languages, complete with syntax and semantics: a low-level unstructured...

  18. Phononic crystals fundamentals and applications

    CERN Document Server

    Adibi, Ali

    2016-01-01

    This book provides an in-depth analysis as well as an overview of phononic crystals. This book discusses numerous techniques for the analysis of phononic crystals and covers, among other material, sonic and ultrasonic structures, hypersonic planar structures and their characterization, and novel applications of phononic crystals. This is an ideal book for those working with micro and nanotechnology, MEMS (microelectromechanical systems), and acoustic devices. This book also: Presents an introduction to the fundamentals and properties of phononic crystals Covers simulation techniques for the analysis of phononic crystals Discusses sonic and ultrasonic, hypersonic and planar, and three-dimensional phononic crystal structures Illustrates how phononic crystal structures are being deployed in communication systems and sensing systems.

  19. Fluid mechanics fundamentals and applications

    CERN Document Server

    Cengel, Yunus

    2013-01-01

    Cengel and Cimbala's Fluid Mechanics Fundamentals and Applications communicates directly with tomorrow's engineers in a simple yet precise manner. The text covers the basic principles and equations of fluid mechanics in the context of numerous and diverse real-world engineering examples. The text helps students develop an intuitive understanding of fluid mechanics by emphasizing the physics, using figures, numerous photographs and visual aids to reinforce the physics. The highly visual approach enhances the learning of fluid mechanics by students. This text distinguishes itself from others by the way the material is presented - in a progressive order from simple to more difficult, building each chapter upon foundations laid down in previous chapters. In this way, even the traditionally challenging aspects of fluid mechanics can be learned effectively. McGraw-Hill is also proud to offer ConnectPlus powered by Maple with the third edition of Cengel/Cimbala, Fluid Mechanics. This innovative and powerful new sy...

  20. Optical Metamaterials Fundamentals and Applications

    CERN Document Server

    Cai, Wenshan

    2010-01-01

    Metamaterials—artificially structured materials with engineered electromagnetic properties—have enabled unprecedented flexibility in manipulating electromagnetic waves and producing new functionalities. In just a few years, the field of optical metamaterials has emerged as one of the most exciting topics in the science of light, with stunning and unexpected outcomes that have fascinated scientists and the general public alike. This volume details recent advances in the study of optical metamaterials, ranging from fundamental aspects to up-to-date implementations, in one unified treatment. Important recent developments and applications such as superlenses and cloaking devices are also treated in detail and made understandable. Optical Metamaterials will serve as a very timely book for both newcomers and advanced researchers in this rapidly evolving field. Early praise for Optical Metamaterials: "...this book is timely bringing to students and other new entrants to the field the most up to date concepts. Th...