Jones, Bernard J. T.
2017-04-01
Preface; Notation and conventions; Part I. 100 Years of Cosmology: 1. Emerging cosmology; 2. The cosmic expansion; 3. The cosmic microwave background; 4. Recent cosmology; Part II. Newtonian Cosmology: 5. Newtonian cosmology; 6. Dark energy cosmological models; 7. The early universe; 8. The inhomogeneous universe; 9. The inflationary universe; Part III. Relativistic Cosmology: 10. Minkowski space; 11. The energy momentum tensor; 12. General relativity; 13. Space-time geometry and calculus; 14. The Einstein field equations; 15. Solutions of the Einstein equations; 16. The Robertson-Walker solution; 17. Congruences, curvature and Raychaudhuri; 18. Observing and measuring the universe; Part IV. The Physics of Matter and Radiation: 19. Physics of the CMB radiation; 20. Recombination of the primeval plasma; 21. CMB polarisation; 22. CMB anisotropy; Part V. Precision Tools for Precision Cosmology: 23. Likelihood; 24. Frequentist hypothesis testing; 25. Statistical inference: Bayesian; 26. CMB data processing; 27. Parametrising the universe; 28. Precision cosmology; 29. Epilogue; Appendix A. SI, CGS and Planck units; Appendix B. Magnitudes and distances; Appendix C. Representing vectors and tensors; Appendix D. The electromagnetic field; Appendix E. Statistical distributions; Appendix F. Functions on a sphere; Appendix G. Acknowledgements; References; Index.
Weak gravitational lensing towards high-precision cosmology
Berge, Joel
2007-01-01
This thesis aims at studying weak gravitational lensing as a tool for high-precision cosmology. We first present the development and validation of a precise and accurate tool for measuring gravitational shear, based on the shapelets formalism. We then use shapelets on real images for the first time: we analyze CFHTLS images and combine them with XMM-LSS data. We measure the normalisation σ_8 of the density fluctuation power spectrum, and that of the mass-temperature relation for galaxy clusters. The analysis of the Hubble Space Telescope COSMOS field confirms our σ_8 measurement and introduces tomography. Finally, aiming at optimizing future surveys, we compare the individual and combined merits of cluster counts and power spectrum tomography. Our results demonstrate that next-generation surveys will allow weak lensing to yield its full potential in the high-precision cosmology era. (author)
Precision cosmology with time delay lenses: high resolution imaging requirements
Meng, Xiao-Lei; Liao, Kai [Department of Astronomy, Beijing Normal University, 19 Xinjiekouwai Street, Beijing, 100875 (China); Treu, Tommaso; Agnello, Adriano [Department of Physics, University of California, Broida Hall, Santa Barbara, CA 93106 (United States); Auger, Matthew W. [Institute of Astronomy, University of Cambridge, Madingley Road, Cambridge CB3 0HA (United Kingdom); Marshall, Philip J., E-mail: xlmeng919@gmail.com, E-mail: tt@astro.ucla.edu, E-mail: aagnello@physics.ucsb.edu, E-mail: mauger@ast.cam.ac.uk, E-mail: liaokai@mail.bnu.edu.cn, E-mail: dr.phil.marshall@gmail.com [Kavli Institute for Particle Astrophysics and Cosmology, Stanford University, 452 Lomita Mall, Stanford, CA 94305 (United States)
2015-09-01
Lens time delays are a powerful probe of cosmology, provided that the gravitational potential of the main deflector can be modeled with sufficient precision. Recent work has shown that this can be achieved by detailed modeling of the host galaxies of lensed quasars, which appear as ''Einstein Rings'' in high resolution images. The distortion of these arcs and counter-arcs, as measured over a large number of pixels, provides tight constraints on the difference in gravitational potential between the quasar image positions, and thus on cosmology in combination with the measured time delay. We carry out a systematic exploration of the high resolution imaging required to exploit the thousands of lensed quasars that will be discovered by current and upcoming surveys within the next decade. Specifically, we simulate realistic lens systems as imaged by the Hubble Space Telescope (HST), the James Webb Space Telescope (JWST), and ground-based adaptive optics images taken with Keck or the Thirty Meter Telescope (TMT). We compare the performance of these pointed observations with that of images taken by the Euclid (VIS), Wide-Field Infrared Survey Telescope (WFIRST) and Large Synoptic Survey Telescope (LSST) surveys. We use as our metric the precision with which the slope γ' of the total mass density profile ρ_tot ∝ r^(−γ') of the main deflector can be measured. Ideally, we require that the statistical error on γ' be less than 0.02, such that it is subdominant to other sources of random and systematic uncertainty. We find that survey data will likely have sufficient depth and resolution to meet the target only for the brighter gravitational lens systems, comparable to those discovered by the SDSS survey. For fainter systems that will be discovered by current and future surveys, targeted follow-up will be required. However, the exposure time required with upcoming facilities such as JWST, the Keck Next Generation
Precision cosmology with time delay lenses: High resolution imaging requirements
Meng, Xiao-Lei [Beijing Normal Univ., Beijing (China); Univ. of California, Santa Barbara, CA (United States); Treu, Tommaso [Univ. of California, Santa Barbara, CA (United States); Univ. of California, Los Angeles, CA (United States); Agnello, Adriano [Univ. of California, Santa Barbara, CA (United States); Univ. of California, Los Angeles, CA (United States); Auger, Matthew W. [Univ. of Cambridge, Cambridge (United Kingdom); Liao, Kai [Beijing Normal Univ., Beijing (China); Univ. of California, Santa Barbara, CA (United States); Univ. of California, Los Angeles, CA (United States); Marshall, Philip J. [Stanford Univ., Stanford, CA (United States)
2015-09-28
Lens time delays are a powerful probe of cosmology, provided that the gravitational potential of the main deflector can be modeled with sufficient precision. Recent work has shown that this can be achieved by detailed modeling of the host galaxies of lensed quasars, which appear as ``Einstein Rings'' in high resolution images. The distortion of these arcs and counter-arcs, as measured over a large number of pixels, provides tight constraints on the difference in gravitational potential between the quasar image positions, and thus on cosmology in combination with the measured time delay. We carry out a systematic exploration of the high resolution imaging required to exploit the thousands of lensed quasars that will be discovered by current and upcoming surveys within the next decade. Specifically, we simulate realistic lens systems as imaged by the Hubble Space Telescope (HST), the James Webb Space Telescope (JWST), and ground-based adaptive optics images taken with Keck or the Thirty Meter Telescope (TMT). We compare the performance of these pointed observations with that of images taken by the Euclid (VIS), Wide-Field Infrared Survey Telescope (WFIRST) and Large Synoptic Survey Telescope (LSST) surveys. We use as our metric the precision with which the slope γ' of the total mass density profile ρ_tot ∝ r^(−γ') of the main deflector can be measured. Ideally, we require that the statistical error on γ' be less than 0.02, such that it is subdominant to other sources of random and systematic uncertainty. We find that survey data will likely have sufficient depth and resolution to meet the target only for the brighter gravitational lens systems, comparable to those discovered by the SDSS survey. For fainter systems that will be discovered by current and future surveys, targeted follow-up will be required. Furthermore, the exposure time required with upcoming facilities such as JWST, the Keck Next Generation Adaptive
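The γ' measurement described in the abstract above reduces, at its core, to fitting a power law to a radial density profile. As a minimal sketch of that arithmetic (not the authors' lens-modeling pipeline; the data, noise level, and function names below are invented for illustration), one can recover γ' from mock data by a least-squares straight-line fit in log-log space:

```python
import math
import random

def fit_powerlaw_slope(radii, densities):
    """Least-squares fit of log(rho) = log(A) - gamma*log(r); returns gamma."""
    xs = [math.log(r) for r in radii]
    ys = [math.log(d) for d in densities]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    return -cov / var  # slope of the log-log relation equals -gamma

# Mock profile: rho ~ r^-2 (isothermal slope) with 1% log-normal scatter.
random.seed(0)
radii = [0.5 + 0.1 * i for i in range(30)]
densities = [r ** -2.0 * math.exp(random.gauss(0.0, 0.01)) for r in radii]
gamma = fit_powerlaw_slope(radii, densities)
print(round(gamma, 3))  # recovers a value close to 2.0
```

In the real measurement the profile is constrained indirectly through the lensed arcs rather than observed directly, but the same question applies: how many high-quality data points are needed before the statistical error on the slope drops below the 0.02 target.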
Precision cosmology and the landscape
Bousso, Raphael
2006-01-01
After reviewing the cosmological constant problem (why is Λ not huge?), I outline the two basic approaches that had emerged by the late 1980s, and note that each made a clear prediction. Precision cosmological experiments now indicate that the cosmological constant is nonzero. This result strongly favors the environmental approach, in which vacuum energy can vary discretely among widely separated regions in the universe. The need to explain this variation from first principles constitutes an observational constraint on fundamental theory. I review arguments that string theory satisfies this constraint, as it contains a dense discretuum of metastable vacua. The enormous landscape of vacua calls for novel, statistical methods of deriving predictions, and it prompts us to reexamine our description of spacetime on the largest scales. I discuss the effects of cosmological dynamics, and I speculate that weighting vacua by their entropy production may allow for prior-free predictions that do not resort to explicitly anthropic arguments.
The Age of Precision Cosmology
Chuss, David T.
2012-01-01
In the past two decades, our understanding of the evolution and fate of the universe has increased dramatically. This "Age of Precision Cosmology" has been ushered in by measurements that have both elucidated the details of Big Bang cosmology and set the direction for future lines of inquiry. Our universe appears to consist of 5% baryonic matter; 23% of the universe's energy content is dark matter, which is responsible for the observed structure in the universe; and 72% of the energy density is so-called "dark energy" that is currently accelerating the expansion of the universe. In addition, our universe has been measured to be geometrically flat to 1%. These observations and related details of the Big Bang paradigm have hinted that the universe underwent an epoch of accelerated expansion known as "inflation" early in its history. In this talk, I will review the highlights of modern cosmology, focusing on the contributions made by measurements of the cosmic microwave background, the faint afterglow of the Big Bang. I will also describe new instruments designed to measure the polarization of the cosmic microwave background in order to search for evidence of cosmic inflation.
Neutrino physics and precision cosmology
Hannestad, Steen
2016-01-01
I review the current status of structure formation bounds on neutrino properties such as mass and energy density. I also discuss future cosmological bounds as well as a variety of different scenarios for reconciling cosmology with the presence of light sterile neutrinos.
Towards precision medicine; a new biomedical cosmology.
Vegter, M W
2018-02-10
Precision Medicine has become a common label for data-intensive and patient-driven biomedical research. Its intended future is reflected in endeavours such as the Precision Medicine Initiative in the USA. This article addresses the question whether it is possible to discern a new 'medical cosmology' in Precision Medicine, a concept that was developed by Nicholas Jewson to describe comprehensive transformations involving various dimensions of biomedical knowledge and practice, such as vocabularies, the roles of patients and physicians, and the conceptualisation of disease. Subsequently, I elaborate my assessment of the features of Precision Medicine with the help of Michel Foucault, by exploring how Precision Medicine involves a transformation along three axes: the axis of biomedical knowledge, of biomedical power, and of the patient as a self. Patients are encouraged to become the managers of their own health status, while the medical domain is reframed as a data-sharing community, characterised by changing power relationships between providers and patients, producers and consumers. While the emerging Precision Medicine cosmology may surpass existing knowledge frameworks, it obscures previous traditions and reduces research subjects to mere data. This, in turn, means that the individual is both subjected to the neoliberal demand to share personal information, and at the same time has acquired the positive 'right' to become a member of the data-sharing community. The subject has to constantly negotiate the meaning of his or her data, which can either enable self-expression or function as a commanding Superego.
Precision cosmology the first half million years
Jones, Bernard J T
2017-01-01
Cosmology seeks to characterise our Universe in terms of models based on well-understood and tested physics. Today we know our Universe with a precision that once would have been unthinkable. This book develops the entire mathematical, physical and statistical framework within which this has been achieved. It tells the story of how we arrive at our profound conclusions, starting from the early twentieth century and following developments up to the latest data analysis of big astronomical datasets. It provides an enlightening description of the mathematical, physical and statistical basis for understanding and interpreting the results of key space- and ground-based data. Subjects covered include general relativity, cosmological models, the inhomogeneous Universe, physics of the cosmic background radiation, and methods and results of data analysis. Extensive online supplementary notes, exercises, teaching materials, and exercises in Python make this the perfect companion for researchers, teachers and students in physics, mathematics, and astrophysics.
Precision Cosmology: The First Half Million Years
Jones, Bernard J. T.
2017-06-01
Cosmology seeks to characterise our Universe in terms of models based on well-understood and tested physics. Today we know our Universe with a precision that once would have been unthinkable. This book develops the entire mathematical, physical and statistical framework within which this has been achieved. It tells the story of how we arrive at our profound conclusions, starting from the early twentieth century and following developments up to the latest data analysis of big astronomical datasets. It provides an enlightening description of the mathematical, physical and statistical basis for understanding and interpreting the results of key space- and ground-based data. Subjects covered include general relativity, cosmological models, the inhomogeneous Universe, physics of the cosmic background radiation, and methods and results of data analysis. Extensive online supplementary notes, exercises, teaching materials, and exercises in Python make this the perfect companion for researchers, teachers and students in physics, mathematics, and astrophysics.
Precision cosmology with weak gravitational lensing
Hearin, Andrew P.
In recent years, cosmological science has developed a highly predictive model for the universe on large scales that is in quantitative agreement with a wide range of astronomical observations. While the number and diversity of successes of this model provide great confidence that our general picture of cosmology is correct, numerous puzzles remain. In this dissertation, I analyze the potential of planned and near future galaxy surveys to provide new understanding of several unanswered questions in cosmology, and address some of the leading challenges to this observational program. In particular, I study an emerging technique called cosmic shear, the weak gravitational lensing produced by large scale structure. I focus on developing strategies to optimally use the cosmic shear signal observed in galaxy imaging surveys to uncover the physics of dark energy and the early universe. In chapter 1 I give an overview of a few unsolved mysteries in cosmology and I motivate weak lensing as a cosmological probe. I discuss the use of weak lensing as a test of general relativity in chapter 2 and assess the threat to such tests presented by our uncertainty in the physics of galaxy formation. Interpreting the cosmic shear signal requires knowledge of the redshift distribution of the lensed galaxies. This redshift distribution will be significantly uncertain since it must be determined photometrically. In chapter 3 I investigate the influence of photometric redshift errors on our ability to constrain dark energy models with weak lensing. The ability to study dark energy with cosmic shear is also limited by the imprecision in our understanding of the physics of gravitational collapse. In chapter 4 I present the stringent calibration requirements on this source of uncertainty. I study the potential of weak lensing to resolve a debate over a long-standing anomaly in CMB measurements in chapter 5. Finally, in chapter 6 I summarize my findings and conclude with a brief discussion of my
Cosmology for high energy physicists
Albrecht, A.
1987-11-01
The standard big bang model of cosmology is presented. Although not perfect, its many successes make it a good starting point for most discussions of cosmology. Places are indicated where well understood laboratory physics is incorporated into the big bang, leading to successful predictions. Much less established aspects of high energy physics and some of the new ideas they have introduced into the field of cosmology are discussed, such as string theory, inflation and monopoles. 49 refs., 5 figs.
Heitmann, Katrin [Los Alamos National Laboratory; Habib, Salman [Los Alamos National Laboratory; Higdon, David [Los Alamos National Laboratory; Williams, Brian J [Los Alamos National Laboratory; White, Martin [Los Alamos National Laboratory; Wagner, Christian [Los Alamos National Laboratory
2008-01-01
The power spectrum of density fluctuations is a foundational source of cosmological information. Precision cosmological probes targeted primarily at investigations of dark energy require accurate theoretical determinations of the power spectrum in the nonlinear regime. To exploit the observational power of future cosmological surveys, accuracy demands on the theory are at the one percent level or better. Numerical simulations are currently the only way to produce sufficiently error-controlled predictions for the power spectrum. The very high computational cost of (precision) N-body simulations is a major obstacle to obtaining predictions in the nonlinear regime while scanning over cosmological parameters. Near-future observations, however, are likely to provide a meaningful constraint only on constant dark energy equation of state 'wCDM' cosmologies. In this paper we demonstrate that a limited set of only 37 cosmological models (the 'Coyote Universe' suite) can be used to predict the nonlinear matter power spectrum at the required accuracy over a prior parameter range set by cosmic microwave background observations. This paper is the second in a series of three, whose final aim is to provide a high-accuracy prediction scheme for the nonlinear matter power spectrum for wCDM cosmologies.
Interacting dark sector and precision cosmology
Buen-Abad, Manuel A.; Schmaltz, Martin; Lesgourgues, Julien; Brinckmann, Thejs
2018-01-01
We consider a recently proposed model in which dark matter interacts with a thermal background of dark radiation. Dark radiation consists of relativistic degrees of freedom which allow larger values of the expansion rate of the universe today to be consistent with CMB data (the H_0 problem). Scattering between dark matter and radiation suppresses the matter power spectrum at small scales and can explain the apparent discrepancies between ΛCDM predictions of the matter power spectrum and direct measurements of large-scale structure (LSS) (the σ_8 problem). We go beyond previous work in two ways: (1) we enlarge the parameter space of our previous model and allow for an arbitrary fraction of the dark matter to be interacting, and (2) we update the data sets used in our fits; most importantly, we include LSS data with full k-dependence to explore the sensitivity of current data to the shape of the matter power spectrum. We find that LSS data prefer models with overall suppressed matter clustering due to dark matter-dark radiation interactions over ΛCDM at 3-4σ. However, recent weak lensing measurements of the power spectrum are not yet precise enough to clearly distinguish two limits of the model with different predicted shapes for the linear matter power spectrum. In two appendices we give a derivation of the coupled dark matter and dark radiation perturbation equations from the Boltzmann equation, in order to clarify a confusion in the recent literature, and we derive analytic approximations to the solutions of the perturbation equations in the two physically interesting limits of all dark matter weakly interacting or a small fraction of dark matter strongly interacting.
Patel, Ekta; Besla, Gurtina; Mandel, Kaisey
2017-07-01
In the era of high-precision astrometry, space observatories like the Hubble Space Telescope (HST) and Gaia are providing unprecedented 6D phase-space information on satellite galaxies. Such measurements can shed light on the structure and assembly history of the Local Group, but improved statistical methods are needed to use them efficiently. Here we illustrate such a method using analogues of the Local Group's two most massive satellite galaxies, the Large Magellanic Cloud (LMC) and Triangulum (M33), from the Illustris dark-matter-only cosmological simulation. We use a Bayesian inference scheme combining measurements of positions, velocities and specific orbital angular momenta (j) of the LMC/M33 with importance sampling of their simulated analogues to compute posterior estimates of the Milky Way (MW) and Andromeda's (M31) halo masses. We conclude that the resulting host halo mass is more susceptible to bias when using measurements of the current position and velocity of satellites, especially when satellites are at short-lived phases of their orbits (i.e. at pericentre). Instead, the j value of a satellite is well conserved over time and provides a more reliable constraint on host mass. The inferred virial mass of the MW (M31) using j of the LMC (M33) is M_vir,MW = 1.02^{+0.77}_{-0.55} × 10^12 M⊙ (M_vir,M31 = 1.37^{+1.39}_{-0.75} × 10^12 M⊙). Choosing simulated analogues whose j values are consistent with the conventional picture of a previous (<3 Gyr ago), close encounter (<100 kpc) of M33 about M31 results in a very low virial mass for M31 (∼10^12 M⊙). This supports the new scenario put forth in Patel, Besla & Sohn, wherein M33 is on its first passage about M31 or on a long-period orbit. We conclude that this Bayesian inference scheme, utilizing satellite j, is a promising method to reduce the current factor-of-two spread in the mass range of the MW and M31. This method is easily adaptable to include additional satellites as new 6D
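The importance-sampling scheme in the abstract above has a simple core: weight each simulated analogue by the likelihood of the observed satellite property, then form posterior averages with those weights. The following is a hedged toy sketch of that idea (not the authors' actual code; the mass-j relation, units, and noise levels are invented):

```python
import math
import random

def importance_sample_mass(sim_masses, sim_j, j_obs, j_err):
    """Posterior mean of host halo mass: each simulated analogue is weighted
    by a Gaussian likelihood of the observed specific angular momentum j."""
    weights = [math.exp(-0.5 * ((j - j_obs) / j_err) ** 2) for j in sim_j]
    total = sum(weights)
    return sum(w * m for w, m in zip(weights, sim_masses)) / total

# Toy 'simulation': host mass correlates with satellite j (invented relation).
random.seed(1)
sim_masses = [random.uniform(0.5, 2.5) for _ in range(5000)]    # units of 1e12 Msun
sim_j = [2.0 * m + random.gauss(0.0, 0.3) for m in sim_masses]  # arbitrary units
m_post = importance_sample_mass(sim_masses, sim_j, j_obs=2.0, j_err=0.2)
print(round(m_post, 2))  # near 1.0, since j_obs = 2.0 maps to mass ~ 1.0 here
```

The same machinery extends to weighting on position and velocity instead of j; the paper's point is that j-based weights give less biased posteriors because j is better conserved along an orbit.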
Precision cosmological measurements: Independent evidence for dark energy
Bothun, Greg; Hsu, Stephen D.H.; Murray, Brian
2008-01-01
Using recent precision measurements of cosmological parameters, we re-examine whether these observations alone, independent of type Ia supernova surveys, are sufficient to imply the existence of dark energy. We find that the best measurements of the age of the Universe t_0, the Hubble parameter H_0 and the matter fraction Ω_m strongly favor an equation of state with w < -1/3. This result is consistent with the existence of a repulsive, acceleration-causing component of energy if the Universe is nearly flat.
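The arithmetic behind an age-based constraint like the one above can be sketched directly: for a flat universe with matter fraction Ω_m and a dark-energy component of constant equation of state w, the age is t_0 = (1/H_0) ∫_0^1 da / [a E(a)] with E(a) = sqrt(Ω_m a^-3 + (1-Ω_m) a^(-3(1+w))). A more negative w yields an older universe at fixed H_0 and Ω_m, which is why a large measured t_0 pushes w below -1/3. The parameter values below are illustrative, not the paper's fit:

```python
import math

def age_gyr(h, omega_m, w, steps=50000):
    """Age of a flat universe in Gyr from the Friedmann integral
    t0 = (1/H0) * Integral_0^1 da / (a * E(a)), with
    E(a) = sqrt(Om*a^-3 + (1-Om)*a^(-3*(1+w))).  Midpoint rule."""
    hubble_time_gyr = 9.78 / h  # 1/H0 in Gyr for H0 = 100h km/s/Mpc
    total = 0.0
    da = 1.0 / steps
    for i in range(steps):
        a = (i + 0.5) * da
        e = math.sqrt(omega_m * a ** -3
                      + (1.0 - omega_m) * a ** (-3.0 * (1.0 + w)))
        total += da / (a * e)
    return hubble_time_gyr * total

# A cosmological-constant universe (w = -1) is comfortably older than 13 Gyr,
# while w = -1/3 (the acceleration threshold) gives a noticeably younger one.
age_lcdm = age_gyr(0.7, 0.3, -1.0)
age_thresh = age_gyr(0.7, 0.3, -1.0 / 3.0)
print(round(age_lcdm, 2), round(age_thresh, 2))
```

Comparing these ages against an independently measured t_0 (e.g. from globular cluster ages) is the essence of the consistency test the abstract describes.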
High energy physics and cosmology
Silk, J.I.
1991-01-01
This research will focus on the implications of recent theories and experiments in high energy physics for the evolution of the early universe, and on the constraints that cosmological considerations can place on such theories. Several problems are under investigation, including studies of the nature of dark matter and the signature of annihilations in the galactic halo, where the resulting γ-ray fluxes are potentially observable, and in stars, where stellar evolution may be affected. We will develop constraints on the inflationary predictions of scale-free primordial fluctuations in a universe at critical closure density by studying their linear and non-linear evolution after they re-enter the particle horizon, examining the observable imprint of primordial density fluctuations on the cosmic microwave background radiation in both flat and curved cosmological models, and implications for observations of large-scale galaxy clustering and structure formation theories. We will also study spectral distortions in the microwave background radiation that are produced by exotic particle decays in the very early universe. We expect such astrophysical considerations to provide fruitful insights both into high-energy particle physics and into possible cosmological models for the early universe.
Precision cosmology from future lensed gravitational wave and electromagnetic signals.
Liao, Kai; Fan, Xi-Long; Ding, Xuheng; Biesiada, Marek; Zhu, Zong-Hong
2017-10-27
The standard siren approach of gravitational wave cosmology appeals to the direct luminosity distance estimation through the waveform signals from inspiralling double compact binaries, especially those with electromagnetic counterparts providing redshifts. It is limited by the calibration uncertainties in strain amplitude and relies on the fine details of the waveform. The Einstein Telescope is expected to produce 10^4-10^5 gravitational wave detections per year, 50-100 of which will be lensed. Here, we report a waveform-independent strategy to achieve precise cosmography by combining the accurately measured time delays from strongly lensed gravitational wave signals with the images and redshifts observed in the electromagnetic domain. We demonstrate that just 10 such systems can provide a Hubble constant uncertainty of 0.68% for a flat ΛCDM universe in the era of third-generation ground-based detectors.
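A key part of the quoted 0.68% figure is the statistical gain from combining systems: since each lensed event yields an approximately independent H_0 estimate, the combined fractional error shrinks roughly as 1/sqrt(N). A hedged Monte Carlo sketch of that scaling (the ~2% per-system error is an illustrative assumption, not the paper's error budget):

```python
import math
import random

def combined_h0_error(h0_true, n_systems, frac_err_per_system, trials=2000):
    """Monte Carlo: each lensed system yields an independent H0 estimate with
    fractional error frac_err_per_system; averaging N estimates shrinks the
    combined fractional error roughly as 1/sqrt(N)."""
    random.seed(42)
    estimates = []
    for _ in range(trials):
        draws = [random.gauss(h0_true, frac_err_per_system * h0_true)
                 for _ in range(n_systems)]
        estimates.append(sum(draws) / n_systems)
    mean = sum(estimates) / trials
    scatter = math.sqrt(sum((e - mean) ** 2 for e in estimates) / trials)
    return scatter / h0_true  # combined fractional error on H0

# With an assumed ~2% measurement per lensed system, 10 systems give a
# combined error near 2%/sqrt(10), i.e. a bit above half a percent.
frac = combined_h0_error(70.0, n_systems=10, frac_err_per_system=0.02)
print(round(100 * frac, 2))  # percent
```

In practice the per-system errors are neither equal nor fully independent (lens-model systematics are shared), so the real forecast requires a joint likelihood rather than a simple average; the sketch only conveys the scaling.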
García-Bellido, J
2015-01-01
In these lectures I review the present status of the so-called Standard Cosmological Model, based on the hot Big Bang Theory and the Inflationary Paradigm. I will make special emphasis on the recent developments in observational cosmology, mainly the acceleration of the universe, the precise measurements of the microwave background anisotropies, and the formation of structure like galaxies and clusters of galaxies from tiny primordial fluctuations generated during inflation.
Contopoulos, G.; Kotsakis, D.
1987-01-01
An extensive first part on a wealth of observational results relevant to cosmology lays the foundation for the second and central part of the book; the chapters on general relativity, the various cosmological theories, and the early universe. The authors present in a complete and almost non-mathematical way the ideas and theoretical concepts of modern cosmology including the exciting impact of high-energy particle physics, e.g. in the concept of the ''inflationary universe''. The final part addresses the deeper implications of cosmology, the arrow of time, the universality of physical laws, inflation and causality, and the anthropic principle
Evidence for dark matter interactions in cosmological precision data?
Lesgourgues, Julien; Marques-Tavares, Gustavo; Schmaltz, Martin
2016-01-01
We study a two-parameter extension of the cosmological standard model ΛCDM in which cold dark matter interacts with a new form of dark radiation. The two parameters correspond to the energy density in the dark radiation fluid, ΔN_fluid, and the interaction strength between dark matter and dark radiation. The interactions give rise to a very weak ''dark matter drag'' which damps the growth of matter density perturbations throughout radiation domination, allowing us to reconcile the tension between predictions of large scale structure from the CMB and direct measurements of σ_8. We perform a precision fit to Planck CMB data, BAO, large scale structure, and direct measurements of the expansion rate of the universe today. Our model lowers the χ² relative to ΛCDM by about 12, corresponding to a preference for non-zero dark matter drag of more than 3σ. Particle physics models which naturally produce a dark matter drag of the required form include the recently proposed non-Abelian dark matter model, in which the dark radiation corresponds to massless dark gluons.
Vittorio, Nicola
2018-01-01
Modern cosmology has changed significantly over the years, from the discovery to the precision measurement era. The data now available provide a wealth of information, mostly consistent with a model where dark matter and dark energy are in a rough proportion of 3:7. The time is right for a fresh new textbook which captures the state-of-the art in cosmology. Written by one of the world's leading cosmologists, this brand new, thoroughly class-tested textbook provides graduate and undergraduate students with coverage of the very latest developments and experimental results in the field. Prof. Nicola Vittorio shows what is meant by precision cosmology, from both theoretical and observational perspectives.
High energy physics and cosmology
Silk, J.I.; Davis, M.
1989-01-01
This research will focus on the implications of recent theories and experiments in high energy physics for the evolution of the early Universe, and on the constraints that cosmological considerations can place on such theories. Several problems are under investigation, including the development of constraints on the inflationary predictions of scale-free primordial fluctuations in a universe at critical closure density by studying their linear and non-linear evolution after they re-enter the particle horizon. We will examine the observable imprint of primordial density fluctuations on the cosmic microwave background radiation in curved cosmological models. Most astronomical evidence points to an open universe: one of our goals is to reconcile this conclusion with the particle physics input. We will investigate the response of the matter distribution to a network of cosmic strings produced during an early symmetry-breaking transition, and compute the resulting cosmic microwave background anisotropies. We will simulate the formation of large-scale structures whose dynamics are dominated by weakly interacting particles such as axions, massive neutrinos or photinos in order to model the formation of galaxies, galaxy clusters and superclusters. We will study the distortions in the microwave background radiation, both spectral and angular, that are produced by ionized gas associated with forming clusters and groups of galaxies. We will also study constraints on exotic cooling mechanisms involving axions and majorons set by stellar evolution, and the energy input into low mass stars by cold dark matter annihilation in galactic nuclei. We will compute the detailed gamma ray spectrum predicted by various cold dark matter candidates undergoing annihilation in the galactic halo and bulge.
High energy physics and cosmology
Silk, J.I.; Davis, M.
1988-01-01
This research will focus on the implications of recent theories and experiments in high energy physics for the evolution of the early Universe, and on the constraints that cosmological considerations can place on such theories. Several problems are under investigation, including the development of constraints on the inflationary predictions of scale-free primordial fluctuations in a universe at critical closure density by studying their linear and non-linear evolution after they re-enter the particle horizon. We will examine the observable imprint of primordial density fluctuations on the cosmic microwave background radiation in curved cosmological models. Most astronomical evidence points to an open universe: one of our goals is to reconcile this conclusion with the particle physics input. We will investigate the response of the matter distribution to a network of cosmic strings produced during an early symmetry-breaking transition, and compute the resulting cosmic microwave background anisotropies. We will simulate the formation of large-scale structures whose dynamics are dominated by weakly interacting particles such as axions, massive neutrinos or photinos in order to model the formation of galaxies, galaxy clusters and superclusters. We will study the distortions in the microwave background radiation, both spectral and angular, that are produced by ionized gas associated with forming clusters and groups of galaxies. We will also study constraints on exotic cooling mechanisms involving axions and majorons set by stellar evolution, and the energy input into low mass stars by cold dark matter annihilation in galactic nuclei. We will compute the detailed gamma ray spectrum predicted by various cold dark matter candidates undergoing annihilation in the galactic halo and bulge.
Microphysics, cosmology, and high energy astrophysics
Hoyle, F.
1974-01-01
The discussion of microphysics, cosmology, and high energy astrophysics includes particle motion in an electromagnetic field, conformal transformations, conformally invariant theory of gravitation, particle orbits, Friedman models with k = 0, ±1, the history and present status of steady-state cosmology, and the nature of mass. (U.S.)
Probing the BSM physics with CMB precision cosmology: an application to supersymmetry
Dalianis, Ioannis; Watanabe, Yuki
2018-02-01
The cosmic history before BBN is largely determined by physics that operates beyond the Standard Model (BSM) of particle physics, and it is poorly constrained observationally. Ongoing and future precision measurements of the CMB observables can provide us with significant information about the pre-BBN era and hence possibly test the cosmological predictions of different BSM scenarios. Supersymmetry is a particularly well-motivated BSM theory, and different supersymmetry breaking schemes often require different cosmic histories, with specific reheating temperatures or low entropy production, in order to be cosmologically viable. In this paper we quantify the effects of possible alternative cosmic histories on the n_s and r CMB observables, assuming a generic non-thermal stage after cosmic inflation. We analyze TeV and especially multi-TeV supersymmetry breaking schemes assuming the neutralino and gravitino dark matter scenarios. We complement our analysis by considering the Starobinsky R^2 inflation model to exemplify the improved CMB predictions that a unified description of the early-universe cosmic evolution yields. Our analysis underlines the importance of CMB precision measurements, which can be viewed, to some extent, as complementary to laboratory experimental searches for supersymmetry or other BSM theories.
Optically-Selected Cluster Catalogs As a Precision Cosmology Tool
Rozo, Eduardo; /Ohio State U. /Chicago U. /KICP, Chicago; Wechsler, Risa H.; /KICP, Chicago /KIPAC, Menlo Park; Koester, Benjamin P.; /Michigan U. /Chicago U., Astron.; Evrard, August E.; McKay, Timothy A.; /Michigan U.
2007-03-26
We introduce a framework for describing the halo selection function of optical cluster finders. We treat the problem as being separable into a term that describes the intrinsic galaxy content of a halo (the Halo Occupation Distribution, or HOD) and a term that captures the effects of projection and selection by the particular cluster finding algorithm. Using mock galaxy catalogs tuned to reproduce the luminosity dependent correlation function and the empirical color-density relation measured in the SDSS, we characterize the maxBCG algorithm applied by Koester et al. to the SDSS galaxy catalog. We define and calibrate measures of completeness and purity for this algorithm, and demonstrate successful recovery of the underlying cosmology and HOD when applied to the mock catalogs. We identify principal components (combinations of cosmology and HOD parameters) that are recovered by survey counts as a function of richness, and demonstrate that percent-level accuracies are possible in the first two components, if the selection function can be understood to ~15% accuracy.
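In their simplest form, the completeness and purity measures described above reduce to two ratios over a matched halo-cluster catalog. A minimal sketch of that idea (not the maxBCG pipeline; the counts and function names are hypothetical):

```python
# Toy completeness/purity for a cluster finder, given counts from a matched
# halo-vs-detected-cluster comparison (illustrative only).
def completeness(n_halos_recovered, n_halos_total):
    """Fraction of true halos that the finder recovers."""
    return n_halos_recovered / n_halos_total

def purity(n_clusters_matched, n_clusters_detected):
    """Fraction of detected clusters that correspond to a real halo."""
    return n_clusters_matched / n_clusters_detected

# Hypothetical numbers for illustration:
print(completeness(90, 100))  # 0.9
print(purity(85, 95))
```

In practice both quantities are measured as functions of halo mass or cluster richness, which is why the mock catalogs above are essential.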
GRAVITATIONALLY CONSISTENT HALO CATALOGS AND MERGER TREES FOR PRECISION COSMOLOGY
Behroozi, Peter S.; Wechsler, Risa H.; Wu, Hao-Yi; Busha, Michael T.; Klypin, Anatoly A.; Primack, Joel R.
2013-01-01
We present a new algorithm for generating merger trees and halo catalogs which explicitly ensures consistency of halo properties (mass, position, and velocity) across time steps. Our algorithm has demonstrated the ability to improve both the completeness (through detecting and inserting otherwise missing halos) and purity (through detecting and removing spurious objects) of both merger trees and halo catalogs. In addition, our method is able to robustly measure the self-consistency of halo finders; it is the first to directly measure the uncertainties in halo positions, halo velocities, and the halo mass function for a given halo finder based on consistency between snapshots in cosmological simulations. We use this algorithm to generate merger trees for two large simulations (Bolshoi and Consuelo) and evaluate two halo finders (ROCKSTAR and BDM). We find that both the ROCKSTAR and BDM halo finders track halos extremely well; in both, the number of halos which do not have physically consistent progenitors is at the 1%-2% level across all halo masses. Our code is publicly available at http://code.google.com/p/consistent-trees. Our trees and catalogs are publicly available at http://hipacc.ucsc.edu/Bolshoi/.
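The cross-snapshot consistency idea can be caricatured in a few lines: extrapolate a halo's position with its velocity and flag descendants that land too far from the prediction. A toy sketch under that assumed simplification (the actual consistent-trees machinery handles comoving coordinates, mergers, and mass evolution as well):

```python
# Toy cross-snapshot consistency check for halo catalogs (illustrative only).
def predicted_position(x, v, dt):
    """Linear extrapolation of position; comoving-coordinate subtleties ignored."""
    return [xi + vi * dt for xi, vi in zip(x, v)]

def is_consistent(x_pred, x_found, tolerance):
    """Flag a descendant as consistent if it lies within `tolerance` of the prediction."""
    dist = sum((a - b) ** 2 for a, b in zip(x_pred, x_found)) ** 0.5
    return dist <= tolerance

# Hypothetical halo: position in Mpc/h, velocity in (Mpc/h)/timestep.
x_next = predicted_position([10.0, 5.0, 2.0], [0.1, -0.2, 0.0], dt=1.0)
print(is_consistent(x_next, [10.1, 4.8, 2.0], tolerance=0.5))  # True
```

Halos failing such a check across several snapshots are candidates for the spurious objects the algorithm removes, or point to otherwise missing halos it inserts.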
Astrophysics, cosmology and high energy physics
Rees, M.J.
1983-01-01
A brief survey is given of some topics in astrophysics and cosmology, with special emphasis on the inter-relation between the properties of the early Universe and recent ideas in high energy physics, and on simple order-of-magnitude arguments showing how the scales and dimensions of cosmic phenomena are related to basic physical constants. (orig.)
Novikov, I.D.
1979-01-01
Progress made by this Commission over the period 1976-1978 is reviewed. Topics include the Hubble constant, deceleration parameter, large-scale distribution of matter in the universe, radio astronomy and cosmology, space astronomy and cosmology, formation of galaxies, physics near the cosmological singularity, and unconventional cosmological models. (C.F.)
Cosmological Results from High-z Supernovae
Tonry, John L.; Schmidt, Brian P.; Barris, Brian; Candia, Pablo; Challis, Peter; Clocchiatti, Alejandro; Coil, Alison L.; Filippenko, Alexei V.; Garnavich, Peter; Hogan, Craig; Holland, Stephen T.; Jha, Saurabh; Kirshner, Robert P.; Krisciunas, Kevin; Leibundgut, Bruno; Li, Weidong; Matheson, Thomas; Phillips, Mark M.; Riess, Adam G.; Schommer, Robert; Smith, R. Chris; Sollerman, Jesper; Spyromilio, Jason; Stubbs, Christopher W.; Suntzeff, Nicholas B.
2003-09-01
The High-z Supernova Search Team has discovered and observed eight new supernovae in the redshift interval z=0.3-1.2. These independent observations, analyzed by similar but distinct methods, confirm the results of Riess and Perlmutter and coworkers that supernova luminosity distances imply an accelerating universe. More importantly, they extend the redshift range of consistently observed Type Ia supernovae (SNe Ia) to z~1, where the signature of cosmological effects has the opposite sign of some plausible systematic effects. Consequently, these measurements not only provide another quantitative confirmation of the importance of dark energy, but also constitute a powerful qualitative test for the cosmological origin of cosmic acceleration. We find a rate for SNe Ia of (1.4+/-0.5)×10^-4 h^3 Mpc^-3 yr^-1 at a mean redshift of 0.5. We present distances and host extinctions for 230 SNe Ia. These place the following constraints on cosmological quantities: if the equation of state parameter of the dark energy is w=-1, then H0t0=0.96+/-0.04, and ΩΛ-1.4ΩM=0.35+/-0.14. Including the constraint of a flat universe, we find ΩM=0.28+/-0.05, independent of any large-scale structure measurements. Adopting a prior based on the Two Degree Field (2dF) Redshift Survey constraint on ΩM and assuming a flat universe, we find that the equation of state parameter of the dark energy lies in the range -1.48 < w < -0.72 at 95% confidence; if we further assume w > -1, we obtain w < -0.73 at 95% confidence. Based in part on observations made with the NASA/ESA Hubble Space Telescope, obtained at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy (AURA), Inc., under NASA contract NAS 5-26555. This research is primarily associated with proposal GO-8177, but also uses and reports results from proposals GO-7505, 7588, 8641, and 9118. Based in part on observations taken with the Canada-France-Hawaii Telescope, operated by the National Research Council of Canada, le Centre National de la Recherche Scientifique de France, and the University of Hawaii. CTIO: Based in part on observations taken at the Cerro Tololo Inter-American Observatory.
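The quoted constraints can be cross-checked with one line of arithmetic: combining flatness (ΩM + ΩΛ = 1) with ΩΛ - 1.4ΩM = 0.35 gives 1 - 2.4ΩM = 0.35, so ΩM = 0.65/2.4 ≈ 0.27, consistent with the quoted ΩM = 0.28 +/- 0.05 (central values only; errors ignored):

```python
# Consistency check of the abstract's central values (no error propagation).
# Flatness: omega_m + omega_l = 1; constraint: omega_l - 1.4*omega_m = 0.35.
omega_m = (1.0 - 0.35) / 2.4
omega_l = 1.0 - omega_m
print(round(omega_m, 2), round(omega_l, 2))  # 0.27 0.73
```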
High precision redundant robotic manipulator
Young, K.K.D.
1998-01-01
A high precision redundant robotic manipulator for overcoming constraints imposed by obstacles or by a highly congested work space is disclosed. One embodiment of the manipulator has four degrees of freedom and another embodiment has seven degrees of freedom. Each embodiment utilizes a first selective compliant assembly robot arm (SCARA) configuration to provide high stiffness in the vertical plane, and a second SCARA configuration to provide high stiffness in the horizontal plane. The seven degree of freedom embodiment also utilizes kinematic redundancy to provide the capability of avoiding obstacles that lie between the base of the manipulator and the end effector or link of the manipulator. These additional three degrees of freedom are added at the wrist link of the manipulator to provide pitch, yaw and roll. The seven degree of freedom embodiment uses one revolute joint per degree of freedom. For each of the revolute joints, a harmonic gear coupled to an electric motor is introduced, and together with properly designed servo controllers provides an end point repeatability of less than 10 microns. 3 figs.
Pierluigi Monaco
2016-10-01
Precision cosmology has recently triggered new attention on the topic of approximate methods for the clustering of matter on large scales, whose foundations date back to the period from the late 1960s to early 1990s. Indeed, although the prospect of reaching sub-percent accuracy in the measurement of clustering poses a challenge even to full N-body simulations, an accurate estimation of the covariance matrix of clustering statistics, not to mention the sampling of parameter space, requires the use of a large number (hundreds in the most favourable cases) of simulated (mock) galaxy catalogs. Combining a few N-body simulations with a large number of realizations performed with approximate methods is the most promising approach to solve these problems with a reasonable amount of resources. In this paper I review this topic, starting from the foundations of the methods, then going through the pioneering efforts of the 1990s, and finally presenting the latest extensions and a few codes that are now being used in present-generation surveys and thoroughly tested to assess their performance in the context of future surveys.
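The covariance-matrix bottleneck is easy to see: the sample covariance of a clustering statistic over N_mock realizations needs many mocks per matrix element to converge, which is what drives the demand for cheap approximate catalogs. A toy estimator with invented numbers (real analyses use hundreds of mocks and many more bins):

```python
# Toy sample-covariance estimator for a clustering statistic measured in
# several mock catalogs (pure Python; numbers are illustrative only).
def covariance(samples):
    """samples: one measurement vector per mock, all of equal length."""
    n = len(samples)
    nbin = len(samples[0])
    mean = [sum(s[i] for s in samples) / n for i in range(nbin)]
    # unbiased (n-1) normalisation
    return [[sum((s[i] - mean[i]) * (s[j] - mean[j]) for s in samples) / (n - 1)
             for j in range(nbin)]
            for i in range(nbin)]

# three "mocks", two measurement bins each:
cov = covariance([[1.0, 2.0], [1.2, 1.8], [0.8, 2.2]])
print(cov[0][0], cov[0][1])
```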
High precision anatomy for MEG.
Troebinger, Luzia; López, José David; Lutti, Antoine; Bradbury, David; Bestmann, Sven; Barnes, Gareth
2014-02-01
Precise MEG estimates of neuronal current flow are undermined by uncertain knowledge of the head location with respect to the MEG sensors. This is either due to head movements within the scanning session or systematic errors in co-registration to anatomy. Here we show how such errors can be minimized using subject-specific head-casts produced using 3D printing technology. The casts fit the scalp of the subject internally and the inside of the MEG dewar externally, reducing within session and between session head movements. Systematic errors in matching to the MRI coordinate system are also reduced through the use of MRI-visible fiducial markers placed on the same cast. Bootstrap estimates of absolute co-registration error were of the order of 1 mm. Estimates of relative co-registration error were <1.5 mm between sessions. We corroborated these scalp-based estimates by looking at the MEG data recorded over a 6-month period. We found that the between session sensor variability of the subject's evoked response was of the order of the within session noise, showing no appreciable noise due to between-session movement. Simulations suggest that the between-session sensor level amplitude SNR improved by a factor of 5 over conventional strategies. We show that at this level of coregistration accuracy there is strong evidence for anatomical models based on the individual rather than canonical anatomy; but that this advantage disappears for errors of greater than 5 mm. This work paves the way for source reconstruction methods which can exploit very high SNR signals and accurate anatomical models; and also significantly increases the sensitivity of longitudinal studies with MEG.
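The bootstrap error estimate mentioned above amounts to resampling residuals with replacement and examining the spread of the resampled statistic. A toy version with hypothetical residuals (not the paper's data):

```python
# Toy bootstrap of a mean co-registration residual (illustrative values only).
import random

def bootstrap_means(data, n_boot, seed=0):
    """Return n_boot resampled means of `data`, drawn with replacement."""
    rng = random.Random(seed)
    return [sum(rng.choice(data) for _ in data) / len(data)
            for _ in range(n_boot)]

residuals_mm = [0.8, 1.1, 0.9, 1.2, 1.0]  # hypothetical residuals in mm
means = bootstrap_means(residuals_mm, n_boot=200)
print(min(means), max(means))  # spread of the bootstrap distribution
```

The width of the resulting distribution is the bootstrap estimate of the uncertainty on the residual.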
The New Era of Precision Cosmology: Testing Gravity at Large Scales
Prescod-Weinstein, Chanda
2011-01-01
Cosmic acceleration may be the biggest phenomenological mystery in cosmology today. Various explanations for its cause have been proposed, including the cosmological constant, dark energy and modified gravities. Structure formation provides a strong test of any cosmic acceleration model because a successful dark energy model must not inhibit the development of observed large-scale structures. Traditional approaches to studies of structure formation in the presence of dark energy or modified gravity implement the Press-Schechter formalism (PSF). However, does the PSF apply in all cosmologies? The search is on for a better understanding of universality in the PSF. In this talk, I explore the potential for universality and discuss what dark matter haloes may be able to tell us about cosmology. I will also discuss the implications of this and of new cosmological experiments for better understanding our theory of gravity.
High precision Standard Model Physics
Magnin, J.
2009-01-01
The main goal of the LHCb experiment, one of the four large experiments at the Large Hadron Collider, is to address the question of why Nature prefers matter over antimatter. This will be done by studying the decays of b quarks and their antimatter partners, b-bar, which will be produced by the billions in 14 TeV p-p collisions at the LHC. In addition, as 'beauty' particles mainly decay into charm particles, an interesting program of charm physics will be carried out, allowing quantities such as the D0 - D0-bar mixing to be measured with incredible precision.
High-speed steel for precise casted tools
Karwiarz, J.; Mazur, A.
2001-01-01
The test results of high-vanadium high-speed steel (SWV9) for precise casted tools are presented. The face-milling cutters of NFCa80A type have been tested in industrial operating conditions. The average lifetime of SWV9 steel tools was 3-10 times longer compared to conventional high-speed milling cutters. Metallography of SWV9 precise casted steel revealed a distribution of primary vanadium carbides in the steel matrix that is beneficial for tool properties. The presented results should be a good argument for wide application of high-vanadium high-speed steel for precise casted tools. (author)
High precision thermal neutron detectors
Radeka, V.; Schaknowski, N.A.; Smith, G.C.; Yu, B. [Brookhaven National Laboratory, Upton, NY (United States)
1994-12-31
Two-dimensional position sensitive detectors are indispensable in neutron diffraction experiments for determination of molecular and crystal structures in biology, solid-state physics and polymer chemistry. Some performance characteristics of these detectors are elementary and obvious, such as the position resolution, number of resolution elements, neutron detection efficiency, counting rate and sensitivity to gamma-ray background. High performance detectors are distinguished by more subtle characteristics such as the stability of the response (efficiency) versus position, stability of the recorded neutron positions, dynamic range, and blooming or halo effects. While relatively few of them are needed around the world, these high performance devices are sophisticated and fairly complex, and their development requires very specialized efforts. In this context, we describe here a program of detector development, based on ^3He-filled proportional chambers, which has been underway for some years at the Brookhaven National Laboratory. Fundamental approaches and practical considerations are outlined that have resulted in a series of high performance detectors with the best known position resolution, position stability, uniformity of response and reliability over time, for devices of this type.
Cosmology and Gravitation: the grand scheme for High-Energy Physics
Binétruy, P.
2014-12-10
These lectures describe how the Standard Model of cosmology (ΛCDM) has developed, based on observational facts but also on ideas formed in the context of the theory of fundamental interactions, both gravitational and non-gravitational, the latter being described by the Standard Model of high energy physics. They focus on the latest developments, in particular the precise knowledge of the early Universe provided by the observation of the Cosmic Microwave Background and the discovery of the present acceleration of the expansion of the Universe. While insisting on the successes of the Standard Model of cosmology, we will stress that it rests on three pillars which involve many open questions: the theory of inflation, the nature of dark matter and of dark energy. We will devote one chapter to each of these issues, describing in particular how this impacts our views on the theory of fundamental interactions. More technical parts are given in italics. They may be skipped altogether.
A high precision semi-analytic mass function
Del Popolo, Antonino [Dipartimento di Fisica e Astronomia, University of Catania, Viale Andrea Doria 6, I-95125 Catania (Italy); Pace, Francesco [Jodrell Bank Centre for Astrophysics, School of Physics and Astronomy, The University of Manchester, Manchester, M13 9PL (United Kingdom); Le Delliou, Morgan, E-mail: adelpopolo@oact.inaf.it, E-mail: francesco.pace@manchester.ac.uk, E-mail: delliou@ift.unesp.br [Instituto de Física Teorica, Universidade Estadual de São Paulo (IFT-UNESP), Rua Dr. Bento Teobaldo Ferraz 271, Bloco 2—Barra Funda, 01140-070 São Paulo, SP Brazil (Brazil)
2017-03-01
In this paper, extending past works of Del Popolo, we show how a high precision mass function (MF) can be obtained using the excursion set approach and an improved barrier taking implicitly into account a non-zero cosmological constant, the angular momentum acquired by tidal interaction of proto-structures, and dynamical friction. In the case of the ΛCDM paradigm, we find that our MF is in agreement at the 3% level with Klypin's Bolshoi simulation, in the mass range M_vir = 5 × 10^9 h^-1 M_⊙ to 5 × 10^14 h^-1 M_⊙ and redshift range 0 ≲ z ≲ 10. For z = 0 we also compared our MF to several fitting formulae, and found in particular agreement with Bhattacharya's within 3% in the mass range 10^12 - 10^16 h^-1 M_⊙. Moreover, we discuss the validity of our MF for different cosmologies.
High-Precision Computation and Mathematical Physics
Bailey, David H.; Borwein, Jonathan M.
2008-01-01
At the present time, IEEE 64-bit floating-point arithmetic is sufficiently accurate for most scientific applications. However, for a rapidly growing body of important scientific computing applications, a higher level of numeric precision is required. Such calculations are facilitated by high-precision software packages that include high-level language translation modules to minimize the conversion effort. This paper presents a survey of recent applications of these techniques and provides some analysis of their numerical requirements. These applications include supernova simulations, climate modeling, planetary orbit calculations, Coulomb n-body atomic systems, scattering amplitudes of quarks, gluons and bosons, nonlinear oscillator theory, Ising theory, quantum field theory and experimental mathematics. We conclude that high-precision arithmetic facilities are now an indispensable component of a modern large-scale scientific computing environment.
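The precision gap the survey starts from can be demonstrated with nothing more than the standard library: IEEE 64-bit floats cannot represent 0.1 exactly, while an arbitrary-precision decimal type can (the paper surveys dedicated high-precision packages; this only illustrates the gap itself):

```python
# 64-bit binary floats vs. arbitrary-precision decimals (stdlib only).
from decimal import Decimal, getcontext

# In IEEE 64-bit arithmetic, 0.1 and 0.2 are not exactly representable,
# so their sum is not exactly 0.3.
print(0.1 + 0.2 == 0.3)                  # False

getcontext().prec = 50                   # work with 50 significant digits
a = Decimal("0.1") + Decimal("0.2")
print(a == Decimal("0.3"))               # True
```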
High-speed precision motion control
Yamaguchi, Takashi; Pang, Chee Khiang
2011-01-01
Written for researchers and postgraduate students in Control Engineering, as well as professionals in the Hard Disk Drive industry, this book discusses high-precision and fast servo controls in Hard Disk Drives (HDDs). The editors present a number of control algorithms that enable fast seeking and high precision positioning, and propose problems from commercial products, making the book valuable to researchers in HDDs. Each chapter is self-contained and progresses from concept to technique, presenting application examples that can be used within the automotive, aerospace, aeronautical, and manufacturing industries.
High precision, rapid laser hole drilling
Chang, Jim J.; Friedman, Herbert W.; Comaskey, Brian J.
2013-04-02
A laser system produces a first laser beam for rapidly removing the bulk of material in an area to form a ragged hole. The laser system produces a second laser beam for accurately cleaning up the ragged hole so that the final hole has dimensions of high precision.
High precision detector robot arm system
Shu, Deming; Chu, Yong
2017-01-31
A method and high precision robot arm system are provided, for example, for X-ray nanodiffraction with an X-ray nanoprobe. The robot arm system includes duo-vertical-stages and a kinematic linkage system. A two-dimensional (2D) vertical plane ultra-precision robot arm supporting an X-ray detector provides positioning and manipulating of the X-ray detector. A vertical support for the 2D vertical plane robot arm includes spaced apart rails respectively engaging a first bearing structure and a second bearing structure carried by the 2D vertical plane robot arm.
Automatic titrator for high precision plutonium assay
Jackson, D.D.; Hollen, R.M.
1986-01-01
Highly precise assay of plutonium metal is required for accountability measurements. We have developed an automatic titrator for this determination which eliminates analyst bias and requires much less analyst time. The analyst is only required to enter sample data and start the titration. The automated instrument titrates the sample, locates the end point, and outputs the results as a paper tape printout. Precision of the titration is less than 0.03% relative standard deviation for a single determination at the 250-mg plutonium level. The titration time is less than 5 min.
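The quoted figure of merit, relative standard deviation (RSD), is the sample standard deviation divided by the mean, expressed as a percentage. A short check with hypothetical replicate results (the 250-mg level is from the abstract; the individual values are invented):

```python
# Relative standard deviation of replicate assay results, in percent.
def rsd_percent(values):
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / (n - 1)  # sample variance
    return 100.0 * var ** 0.5 / mean

# hypothetical replicates near the 250-mg plutonium level (mg):
print(rsd_percent([249.98, 250.02, 250.00, 249.99, 250.01]) < 0.03)  # True
```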
Fiber Scrambling for High Precision Spectrographs
Kaplan, Zachary; Spronck, J. F. P.; Fischer, D.
2011-05-01
The detection of Earth-like exoplanets with the radial velocity method requires extreme Doppler precision and long-term stability in order to measure tiny reflex velocities in the host star. Recent planet searches have led to the detection of so-called "super-Earths" (up to a few Earth masses) that induce radial velocity changes of about 1 m/s. However, the detection of true Earth analogs requires a precision of 10 cm/s. One of the largest factors limiting Doppler precision is variation in the Point Spread Function (PSF) from observation to observation due to changes in the illumination of the slit and spectrograph optics. Thus, this stability has become a focus of current instrumentation work. Fiber optics have been used since the 1980s to couple telescopes to high-precision spectrographs, initially for simpler mechanical design and control. However, fiber optics are also naturally efficient scramblers. Scrambling refers to a fiber's ability to produce an output beam independent of input. Our research is focused on characterizing the scrambling properties of several types of fibers, including circular, square and octagonal fibers. By measuring the intensity distribution after the fiber as a function of input beam position, we can simulate guiding errors that occur at an observatory. Through this, we can determine which fibers produce the most uniform outputs for the severest guiding errors, improving the PSF and allowing sub-m/s precision. However, extensive testing of fibers of supposedly identical core diameter, length and shape from the same manufacturer has revealed the "personality" of individual fibers. Personality describes differing intensity patterns for supposedly duplicate fibers illuminated identically. Here, we present our results on scrambling characterization as a function of fiber type, while studying individual fiber personality.
Cosmology (2011 European School of High-Energy Physics)
Rubakov, V A [Moscow, INR (Russian Federation)
2014-07-01
In these lectures we first concentrate on the cosmological problems which, hopefully, have to do with the new physics to be probed at the LHC: the nature and origin of dark matter and the generation of matter-antimatter asymmetry. We give several examples showing the LHC cosmological potential. These are WIMPs as cold dark matter, gravitinos as warm dark matter, and electroweak baryogenesis as a mechanism for generating matter-antimatter asymmetry. In the remaining part of the lectures we discuss the cosmological perturbations as a tool for studying the epoch preceding the conventional hot stage of the cosmological evolution.
Cosmological Evolution of the Central Engine in High-Luminosity, High-Accretion Rate AGN
Matteo Guainazzi
2014-12-01
In this paper I discuss the status of observational studies aiming at probing the cosmological evolution of the central engine in high-luminosity, high-accretion rate Active Galactic Nuclei (AGN). X-ray spectroscopic surveys, supported by extensive multi-wavelength coverage, indicate a remarkable invariance of the accretion disk plus corona system, and of their coupling, up to redshifts z ≈ 6. Furthermore, hard X-ray (E > 10 keV) surveys show that nearby Seyfert Galaxies share the same central engine notwithstanding their optical classification. These results suggest that the high-luminosity, high-accretion rate quasar phase of AGN evolution is homogeneous over cosmological times.
Troxel, M. A.; Ishak, Mustapha; Peel, Austin, E-mail: troxel@utdallas.edu, E-mail: mishak@utdallas.edu, E-mail: austin.peel@utdallas.edu [Department of Physics, The University of Texas at Dallas, Richardson, TX 75080 (United States)
2014-03-01
The study of relativistic, higher order, and nonlinear effects has become necessary in recent years in the pursuit of precision cosmology. We develop and apply here a framework to study gravitational lensing in exact models in general relativity that are not restricted to homogeneity and isotropy, and where full nonlinearity and relativistic effects are thus naturally included. We apply the framework to a specific, anisotropic galaxy cluster model which is based on a modified NFW halo density profile and described by the Szekeres metric. We examine the effects of increasing levels of anisotropy in the galaxy cluster on lensing observables like the convergence and shear for various lensing geometries, finding a strong nonlinear response in both the convergence and shear for rays passing through anisotropic regions of the cluster. Deviations from the expected values in a spherically symmetric structure are asymmetric with respect to path direction and thus will persist as a statistical effect when averaged over some ensemble of such clusters. The resulting relative difference in various geometries can be as large as approximately 2%, 8%, and 24% in the measure of convergence (1−κ) for levels of anisotropy of 5%, 10%, and 15%, respectively, as a fraction of total cluster mass. For the total magnitude of shear, the relative difference can grow near the center of the structure to be as large as 15%, 32%, and 44% for the same levels of anisotropy, averaged over the two extreme geometries. The convergence is impacted most strongly for rays which pass in directions along the axis of maximum dipole anisotropy in the structure, while the shear is most strongly impacted for rays which pass in directions orthogonal to this axis, as expected. The rich features found in the lensing signal due to anisotropic substructure are nearly entirely lost when one treats the cluster in the traditional FLRW lensing framework. These effects due to anisotropic structures are thus likely to
Recent high precision surveys at PEP
Sah, R.C.
1980-12-01
The task of surveying and aligning the components of PEP has provided an opportunity to develop new instruments and techniques for the purpose of high precision surveys. The new instruments are quick and easy to use, and they automatically encode survey data and read them into the memory of an on-line computer. When measurements of several beam elements have been taken, the on-line computer analyzes the measured data, compares them with desired parameters, and calculates the required adjustments to beam element support stands
Zhang Yuan Zhong
2002-01-01
This book is one of a series in the areas of high-energy physics, cosmology and gravitation published by the Institute of Physics. It includes courses given at a doctoral school on 'Relativistic Cosmology: Theory and Observation' held in Spring 2000 at the Centre for Scientific Culture 'Alessandro Volta', Italy, sponsored by SIGRAV-Societa Italiana di Relativita e Gravitazione (Italian Society of Relativity and Gravitation) and the University of Insubria. This book collects 15 review reports given by a number of outstanding scientists. They touch upon the main aspects of modern cosmology from observational matters to theoretical models, such as cosmological models, the early universe, dark matter and dark energy, modern observational cosmology, cosmic microwave background, gravitational lensing, and numerical simulations in cosmology. In particular, the introduction to the basics of cosmology includes the basic equations, covariant and tetrad descriptions, Friedmann models, observation and horizons, etc. The ...
Digitalization of highly precise fluxgate magnetometers
Cerman, Ales; Kuna, A.; Ripka, P.
2005-01-01
This paper describes the theory behind all three known ways of digitalizing fluxgate magnetometers: analogue magnetometers with digitalized output using a high-resolution ADC, application of delta-sigma modulation to the sensor feedback loop, and fully digital signal detection. At present, Delta-Sigma ADCs are mostly used for the digitalization of highly precise fluxgate magnetometers. The relevant part of the paper demonstrates some pitfalls of their application studied during the design of the magnetometer for the new Czech scientific satellite MIMOSA. The part discussing the application of the delta-sigma modulation to the sensor feedback loop theoretically derives the main advantage of this method, namely increasing the modulation order, and shows its real potential compared to the analogue magnetometer with consequential digitalization. The comparison is realized on the modular magnetometer...
High precision innovative micropump for artificial pancreas
Chappel, E.; Mefti, S.; Lettieri, G.-L.; Proennecke, S.; Conan, C.
2014-03-01
The concept of the artificial pancreas, which comprises an insulin pump, a continuous glucose meter and a control algorithm, is a major step forward in managing patients with type 1 diabetes mellitus. The stability of the control algorithm relies on the short-term precision of the micropump delivering rapid-acting insulin and on specific integrated sensors able to monitor any failure leading to a loss of accuracy. Debiotech's MEMS micropump, based on the membrane pump principle, is made of a stack of 3 silicon wafers. The pumping chamber comprises a pillar check-valve at the inlet, a pumping membrane which is actuated against stop limiters by a piezo cantilever, an anti-free-flow outlet valve and a pressure sensor. The micropump inlet is tightly connected to the insulin reservoir while the outlet is in direct communication with the patient's skin via a cannula. To meet the requirements of a pump dedicated to closed-loop applications in diabetes care, in addition to the well-controlled displacement of the pumping membrane, the high precision of the micropump is based on specific actuation profiles that balance the effect of pump elasticity in a low-consumption push-pull mode.
High precision timing in a FLASH
Hoek, Matthias; Cardinali, Matteo; Dickescheid, Michael; Schlimme, Soeren; Sfienti, Concettina; Spruck, Bjoern; Thiel, Michaela [Institut fuer Kernphysik, Johannes Gutenberg-Universitaet Mainz (Germany)
2016-07-01
A segmented highly precise start counter (FLASH) was designed and constructed at the Institute for Nuclear Physics in Mainz. Besides determining a precise reference time, a Time-of-Flight measurement can be performed with two identical FLASH units. Thus, particle identification can be provided for mixed hadron beam environments. The detector design is based on the detection of Cherenkov light produced in fused silica radiator bars with fast multi-anode MCP-PMTs. The segmentation of the radiator improves the timing resolution while allowing a coarse position resolution along one direction. Both the arrival time and the Time-over-Threshold are determined by the readout electronics, which enables walk correction of the arrival time. The performance of two FLASH units was investigated in test experiments at the Mainz Microtron (MAMI) using an electron beam with an energy of 855 MeV and at CERN's PS T9 beam line with a mixed hadron beam with momenta between 3-8 GeV/c. Effective time-walk correction methods based on Time-over-Threshold were developed for the data analysis. The achieved Time-of-Flight resolution after applying all corrections was found to be 70 ps. Furthermore, the PID and position resolution capabilities are discussed in this contribution.
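The Time-over-Threshold (ToT) based walk correction described above can be illustrated with a minimal sketch. The 1/√ToT walk model, the units, and all numbers below are illustrative assumptions, not the actual FLASH calibration:

```python
import numpy as np

def fit_walk(tot, t_meas, t_ref):
    """Fit a simple time-walk model: t_meas - t_ref ≈ a + b/sqrt(ToT).

    Returns (a, b) from a linear least-squares fit. The 1/sqrt(ToT)
    shape is an assumed model; a real detector may need another form."""
    x = 1.0 / np.sqrt(tot)
    A = np.vstack([np.ones_like(x), x]).T
    coef, *_ = np.linalg.lstsq(A, t_meas - t_ref, rcond=None)
    return coef

def correct(tot, t_meas, coef):
    """Subtract the fitted walk so small pulses line up with large ones."""
    a, b = coef
    return t_meas - (a + b / np.sqrt(tot))

# Synthetic pulses: true arrival time 0, walk grows for small ToT.
rng = np.random.default_rng(0)
tot = rng.uniform(5.0, 50.0, 2000)                 # arbitrary ToT units
t_meas = 0.12 + 0.8 / np.sqrt(tot) + rng.normal(0.0, 0.01, tot.size)
coef = fit_walk(tot, t_meas, t_ref=0.0)
resid = correct(tot, t_meas, coef)
print(np.std(t_meas) > np.std(resid))   # walk removed: spread shrinks
```

After the correction the residual spread is set by the intrinsic timing noise alone, which is the point of a ToT-based method.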
Towards High Productivity in Precision Grinding
W. Brian Rowe
2018-04-01
Over the last century, substantial advances have been made, based on improved understanding of the requirements of grinding processes, machines, control systems, materials, abrasives, wheel preparation, coolants, lubricants, and coolant delivery. This paper reviews a selection of areas in which the application of scientific principles and engineering ingenuity has led to the development of new grinding processes, abrasives, tools, machines, and systems. Topics feature a selection of areas where relationships between scientific principles and new techniques are yielding improved productivity and better quality. These examples point towards further advances that can fruitfully be pursued. Applications in modern grinding technology range from high-precision kinematics for grinding very large lenses and reflectors, through medium-size grinding machine processes, down to grinding very small components used in micro-electro-mechanical systems (MEMS) devices. The importance of material issues is emphasized for the range of conventional engineering steels, through to aerospace materials, ceramics, and composites. It is suggested that future advances in productivity will include the wider application of artificial intelligence and robotics to improve precision, process efficiency, and features required to integrate grinding processes into wider manufacturing systems.
High precision neutron polarization for PERC
Klauser, C.
2013-01-01
The decay of the free neutron into a proton, an electron and an anti-electron neutrino offers a simple system to study semi-leptonic weak decay. High-precision measurements of angular correlation coefficients of this decay provide the opportunity to test the standard model at the low-energy frontier. The Proton Electron Radiation Channel PERC is part of a new generation of experiments pushing the accuracy of such angular correlation coefficient measurements towards 10⁻⁴. Past experiments have been limited to an accuracy of 10⁻³, with uncertainties on the neutron polarization among the leading systematic errors. This thesis focuses on the development of a stable, highly precise neutron polarization for a large, divergent cold neutron beam. A diagnostic tool that provides polarization higher than 99.99% and analyzes with an accuracy of 10⁻⁴, the Opaque Test Bench, is presented and validated. It consists of two highly opaque polarized helium cells. The Opaque Test Bench reveals depolarizing effects in polarizing supermirrors commonly used for polarization in neutron decay experiments. These effects are investigated in detail. They are due to imperfect lateral magnetization in supermirror layers and can be minimized by significantly increased magnetizing fields and low incidence angle and supermirror factor m. A subsequent test in the crossed (X-SM) geometry demonstrated polarizations up to 99.97% from supermirrors only, improving neutron polarization with supermirrors by an order of magnitude. The thesis also discusses other neutron optical components of the PERC beamline: Monte-Carlo simulations of the beamline are carried out under consideration of the primary guide. In addition, calculations show that PERC would statistically profit from an installation at the European Spallation Source. Furthermore, beamline components were tested. A radio-frequency spin flipper was confirmed to work with an efficiency higher than 0.9999. (author) [de
Precision laser spectroscopy of highly charged ions
Kuehl, T.; Borneis, S.; Becker, S.; Dax, A.; Engel, T.; Grieser, R.; Huber, G.; Klaft, I.; Klepper, O.; Kohl, A.; Marx, D.; Meier, K.; Neumann, R.; Schmitt, F.; Seelig, P.; Voelker, L.
1996-01-01
Recently, intense beams of highly charged ions have become available at heavy-ion cooler rings. The obstacle to producing these highly interesting candidates is the large binding energy of K-shell electrons in heavy systems, in excess of 100 keV. One way to remove these electrons is to strip them off by passing the ion through material. In the cooler ring, the ions are cooled to a well-defined velocity. At the SIS/ESR complex it is possible to produce, store, and cool highly charged ions up to bare uranium with intensities exceeding 10⁸ atoms in the ring. This opens the door for precision laser spectroscopy of hydrogen-like heavy ions, e.g. ²⁰⁹Bi⁸²⁺, and allows one to examine the interaction of the single electron with the large fields of the heavy nucleus, which exceed any artificially produced electric and magnetic fields by orders of magnitude. In the electron cooler, the interaction of electrons and highly charged ions otherwise present only in the hottest plasmas can be studied. (orig.)
Underground Study of Big Bang Nucleosynthesis in the Precision Era of Cosmology
Gustavino, Carlo
2017-01-01
Big Bang Nucleosynthesis (BBN) theory provides definite predictions for the abundance of light elements produced in the early universe, insofar as the knowledge of the relevant nuclear processes of the BBN chain is accurate. At BBN energies (30 ≲ E_cm ≲ 300 keV) the cross section of many BBN processes is very low because of the Coulomb repulsion between the interacting nuclei. For this reason it is convenient to perform the measurements deep underground. Presently the world's only facility operating underground is LUNA (Laboratory for Underground Nuclear Astrophysics) at LNGS ("Laboratorio Nazionale del Gran Sasso", Italy). In this presentation the BBN measurements of LUNA are briefly reviewed and discussed. It will be shown that the ongoing study of the D(p,γ)³He reaction is of primary importance to derive the baryon density of the universe Ωb with high accuracy. Moreover, this study allows one to constrain the existence of the so-called "dark radiation", composed of undiscovered relativistic species permeating the universe, such as sterile neutrinos.
High precision relative position sensing system for formation flying spacecraft
National Aeronautics and Space Administration — We propose to develop and test an optical sensing system that provides high precision relative position sensing for formation flying spacecraft. A high precision...
Theoretical Research at the High Energy Frontier: Cosmology and Beyond
Krauss, Lawrence M. [Arizona State Univ., Tempe, AZ (United States). Dept. of Physics and School of Earth and Space Exploration
2017-03-31
Undoubtedly the most significant outstanding problem in high-energy physics is also a problem in cosmology, and indeed originated not from accelerators but from astrophysical observations: What is the origin and nature of the dark energy that appears to dominate the Universe? An understanding of quantum gravity, and perhaps a new understanding of quantum mechanics or quantum field theory, may be required to fully address this problem. At the moment, the physics of black holes may provide the best opportunity to explore these issues, while the discovery of the Higgs suggests several new possible connections to physics that might be relevant for dark energy. Finally, pending confirmation of a gravitational wave signal from inflation, to date the only direct evidence for fundamental particle physics beyond the standard model comes, at least in part, from astrophysical neutrino observations. A remarkable convergence of theory, observation and experiment has been taking place that is allowing great strides to be made in our knowledge of the parameters that describe the universe, if not the origin of these parameters. Given the new discoveries now being made, and the incredible capabilities of future instruments, it is an exciting time to make progress in our fundamental understanding of the origin and evolution of the Universe and the fundamental forces that guide that evolution. As a result, it is natural that our DOE theory research program at Arizona State University focuses in large part on the connections between particle physics and cosmology and astrophysics, in order to improve our understanding of fundamental physics. Our areas of research cover all of the areas described above. Our group now consists of four faculty PIs and their postdocs and students, complemented by long-term visitor Frank Wilczek and physics faculty colleagues Cecilia Lunardini, Richard Lebed, and Andrei Belitsky, whose interests overlap in areas ranging from particle theory and
Precision mechatronics based on high-precision measuring and positioning systems and machines
Jäger, Gerd; Manske, Eberhard; Hausotte, Tino; Mastylo, Rostyslav; Dorozhovets, Natalja; Hofmann, Norbert
2007-06-01
Precision mechatronics is defined in the paper as the science and engineering of a new generation of high-precision systems and machines. Nanomeasuring and nanopositioning engineering represent important fields of precision mechatronics. Nanometrology is described as today's limit of precision engineering. The problem of how to design nanopositioning machines with uncertainties as small as possible is discussed. The integration of several optical and tactile nanoprobes makes the 3D nanopositioning machine suitable for various tasks, such as long-range scanning probe microscopy, mask and wafer inspection, nanotribology, nanoindentation, free-form surface measurement, as well as measurement of microoptics, precision molds, microgears, ring gauges and small holes.
High precision spectrophotometric analysis of thorium
Palmieri, H.E.L.
1984-01-01
An accurate and precise determination of thorium is proposed. A precision of about 0.1% is required for the determination of macroquantities of thorium when processed. After an extensive literature search concerning this subject, spectrophotometric titration was chosen, using disodium ethylenediaminetetraacetate (EDTA) solution and alizarin S as indicator. In order to obtain such precision, a precisely measured amount of 0.025 M EDTA solution was added and the titration was completed with less than 5 ml of 0.0025 M EDTA solution. It is usual to locate the end-point graphically, by plotting added titrant versus absorbance; here it was determined instead by a non-linear least-squares fit, using Fletcher and Powell's minimization method and a computer program. Besides the equivalence point, other titration parameters were determined: the indicator concentration, the absorbance of the metal-indicator complex, and the stability constants of the metal-indicator and metal-EDTA complexes. (Author) [pt
Thorium spectrophotometric analysis with high precision
Palmieri, H.E.L.
1983-06-01
An accurate and precise determination of thorium is proposed. A precision of about 0.1% is required for the determination of macroquantities of processed thorium. After an extensive literature search concerning this subject, spectrophotometric titration was chosen, using disodium ethylenediaminetetraacetate (EDTA) solution and alizarin S as indicator. In order to obtain such precision, a precisely measured amount of 0.025 M EDTA solution was added and the titration was completed with less than 5 ml of 0.0025 M EDTA solution. It is usual to locate the end-point graphically, by plotting added titrant versus absorbance; here it was determined instead by a non-linear least-squares fit, using Fletcher and Powell's minimization method and a computer program. (author)
Precision axial translator with high stability.
Bösch, M A
1979-08-01
We describe a new type of translator which is inherently stable against torsion and twisting. This concentric translator is also ideally suited for precise axial motion with clearance of the center line.
Advanced methods and algorithm for high precision astronomical imaging
Ngole-Mboula, Fred-Maurice
2016-01-01
One of the biggest challenges of modern cosmology is to gain a more precise knowledge of the nature of dark energy and dark matter. Fortunately, dark matter can be traced directly through its gravitational effect on galaxy shapes. The European Space Agency's Euclid mission will provide data for precisely this purpose. A critical step in analyzing these data will be to accurately model the instrument's Point Spread Function (PSF), which is the focus of this thesis. We developed non-parametric methods to reliably estimate the PSFs across an instrument's field of view, based on images of unresolved stars and accounting for noise, undersampling and the spatial variability of the PSFs. At the core of these contributions are modern mathematical tools and concepts such as sparsity. An important extension of this work will be to account for the wavelength dependency of the PSFs. (author) [fr
Laser technology for high precision satellite tracking
Plotkin, H. H.
1974-01-01
Fixed and mobile laser ranging stations have been developed to track satellites equipped with retro-reflector arrays. These have operated consistently at data rates of once per second with range precision better than 50 cm, using Q-switched ruby lasers with pulse durations of 20 to 40 nanoseconds. Improvements are being incorporated to improve the precision to 10 cm, and to permit ranging to more distant satellites. These include improved reflector array designs, processing and analysis of the received reflection pulses, and use of sub-nanosecond pulse duration lasers.
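The ranging figures quoted above follow directly from two-way light travel time. This small sketch shows the conversion (vacuum speed of light only; no atmospheric or instrumental terms):

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def range_from_round_trip(dt_seconds):
    """Two-way laser ranging: range is half the round-trip light time."""
    return C * dt_seconds / 2.0

def timing_for_range_precision(dr_meters):
    """Round-trip timing precision needed for a given range precision."""
    return 2.0 * dr_meters / C

# 50 cm range precision needs ~3.3 ns round-trip timing; 10 cm needs
# ~0.7 ns, which is why sub-nanosecond pulse lasers are required.
print(timing_for_range_precision(0.5))
print(timing_for_range_precision(0.1))
```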
High-precision positioning of radar scatterers
Dheenathayalan, P.; Small, D.; Schubert, A.; Hanssen, R.F.
2016-01-01
Remote sensing radar satellites cover wide areas and provide spatially dense measurements, with millions of scatterers. Knowledge of the precise position of each radar scatterer is essential to identify the corresponding object and interpret the estimated deformation. The absolute position accuracy
Efficient exploration of cosmology dependence in the EFT of LSS
Cataneo, Matteo [Dark Cosmology Centre, Niels Bohr Institute, University of Copenhagen, Juliane Maries Vej 30, 2100 Copenhagen (Denmark); Foreman, Simon; Senatore, Leonardo, E-mail: matteoc@dark-cosmology.dk, E-mail: sfore@stanford.edu, E-mail: senatore@stanford.edu [Stanford Institute for Theoretical Physics and Department of Physics, Stanford University, 382 Via Pueblo Mall, Stanford, CA 94306 (United States)
2017-04-01
The most effective use of data from current and upcoming large scale structure (LSS) and CMB observations requires the ability to predict the clustering of LSS with very high precision. The Effective Field Theory of Large Scale Structure (EFTofLSS) provides an instrument for performing analytical computations of LSS observables with the required precision in the mildly nonlinear regime. In this paper, we develop efficient implementations of these computations that allow for an exploration of their dependence on cosmological parameters. They are based on two ideas. First, once an observable has been computed with high precision for a reference cosmology, for a new cosmology the same can be easily obtained with comparable precision just by adding the difference in that observable, evaluated with much less precision. Second, most cosmologies of interest are sufficiently close to the Planck best-fit cosmology that observables can be obtained from a Taylor expansion around the reference cosmology. These ideas are implemented for the matter power spectrum at two loops and are released as public codes. When applied to cosmologies that are within 3σ of the Planck best-fit model, the first method evaluates the power spectrum in a few minutes on a laptop, with results that have 1% or better precision, while with the Taylor expansion the same quantity is instantly generated with similar precision. The ideas and codes we present may easily be extended for other applications or higher-precision results.
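The second idea, Taylor-expanding an observable around a reference cosmology, can be sketched generically. The toy "power spectrum" below is an illustrative stand-in, not the two-loop EFTofLSS computation or the released codes:

```python
import numpy as np

def taylor_emulator(f, theta0, h=1e-4):
    """First-order Taylor emulator of f around the reference point theta0.

    f maps a parameter vector to an observable vector (e.g. P(k) values).
    Gradients use central finite differences, mirroring the idea of
    precomputing derivatives once for a reference cosmology."""
    theta0 = np.asarray(theta0, dtype=float)
    f0 = f(theta0)
    grad = np.empty((theta0.size, f0.size))
    for i in range(theta0.size):
        dp = theta0.copy(); dp[i] += h
        dm = theta0.copy(); dm[i] -= h
        grad[i] = (f(dp) - f(dm)) / (2.0 * h)
    return lambda theta: f0 + (np.asarray(theta, dtype=float) - theta0) @ grad

# Toy "power spectrum": a smooth function of two parameters on a k-grid.
k = np.linspace(0.01, 0.3, 50)

def pk(theta):
    amp, tilt = theta
    return amp * k ** (tilt - 2.0)

emu = taylor_emulator(pk, theta0=[2.1, 0.96])
exact = pk([2.15, 0.97])
approx = emu([2.15, 0.97])
print(float(np.max(np.abs(approx / exact - 1))))  # sub-percent error
```

For parameter shifts small compared to the curvature scale of the observable, the linear expansion reproduces the exact result at the percent level or better, which is the regime the abstract has in mind for cosmologies within 3σ of the reference.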
Autocalibration of high precision drift tubes
Bacci, C.; Bini, C.; Ciapetti, G.; De Zorzi, G.; Gauzzi, P.; Lacava, F.; Nisati, A.; Pontecorvo, L.; Rosati, S.; Veneziano, S.; Cambiaghi, M.; Casellotti, G.; Conta, C.; Fraternali, M.; Lanza, A.; Livan, M.; Polesello, G.; Rimoldi, A.; Vercesi, V.
1997-01-01
We present the results on MDT (monitored drift tubes) autocalibration studies obtained from the analysis of the data collected in Summer 1995 on the H8B Muon Test Beam. In particular we studied the possibility of autocalibration of the MDT using four or three layers of tubes, and we compared the calibration obtained using a precise external tracker with the output of the autocalibration procedure. Results show the feasibility of autocalibration with four and three tubes and the good accuracy of the autocalibration procedure. (orig.)
Steffen Hahn
2017-01-01
Presently, we are facing a 3σ tension in the most basic cosmological parameter, the Hubble constant H0. This tension arises when fitting the Lambda cold dark matter model (ΛCDM) to the high-precision temperature-temperature (TT) power spectrum of the Cosmic Microwave Background (CMB) and to local cosmological observations. We propose a resolution of this problem by postulating that the thermal photon gas of the CMB obeys an SU(2) rather than a U(1) gauge principle, suggesting a high-z cosmological model which is void of dark matter. Observationally, we rely on precise low-frequency intensity measurements in the CMB spectrum and on a recent model-independent (low-z) extraction of the relation between the comoving sound horizon rs at the end of the baryon drag epoch and H0 (rs·H0 = const). We point out that the commonly employed condition for baryon-velocity freeze-out is imprecise, judged by a careful inspection of the formal solution to the associated Euler equation. As a consequence, the above-mentioned 3σ tension actually transforms into a 5σ discrepancy. To make contact with successful low-z ΛCDM cosmology we propose an interpolation based on percolated/depercolated vortices of a Planck-scale axion condensate. For a first consistency test of such an all-z model we compute the angular scale of the sound horizon at photon decoupling.
Zeldovich, Ya.
1984-01-01
The contemporary cosmological knowledge of the universe and its development is summed up, resulting from a great number of highly sensitive observations and the application of contemporary physical theories to the entire universe. The questions assessed include the mass density in the universe, the structure and origin of the universe, its baryon asymmetry, and the quantum explanation of the origin of the universe. Physical problems are presented which should be resolved for the future development of cosmology. (Ha)
High precision target center determination from a point cloud
K. Kregar
2013-10-01
Many applications of terrestrial laser scanners (TLS) require the determination of a specific point from a point cloud. In this paper a procedure for high-precision planar target center acquisition from a point cloud is presented. The process is based on an image matching algorithm, but before we can deal with a raster image to fit a target on it, we need to properly determine the best-fitting plane and project the points onto it. The main emphasis of this paper is on the precision estimation and its propagation through the whole procedure, which allows us to obtain a precision assessment of the final results (target center coordinates). The theoretical precision estimates obtained through the procedure were rather high, so we compared them with empirical precision estimates obtained as standard deviations of the results of 60 independently scanned targets. A χ²-test confirmed that the theoretical precisions are overestimated. The problem most probably lies in the overestimated precisions of the plane parameters due to the vast redundancy of points. However, the empirical precisions also confirmed that the proposed procedure can ensure a submillimeter precision level. The algorithm can automatically detect grossly erroneous results to some extent. It can operate when the incidence angles of the laser beam are as high as 80°, which is a desirable property if one is going to use planar targets as tie points in scan registration. The proposed algorithm will also contribute to improved TLS calibration procedures.
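The first step of the procedure, fitting the best plane and projecting the points onto it, can be sketched with a standard PCA plane fit (a generic method, not necessarily the authors' exact implementation):

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through a point cloud via PCA.

    Returns (centroid, unit normal); the normal is the right-singular
    vector associated with the smallest singular value."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid)
    return centroid, vt[-1]

def project_to_plane(points, centroid, normal):
    """Orthogonally project 3D points onto the fitted plane."""
    d = (points - centroid) @ normal
    return points - np.outer(d, normal)

# Synthetic scan of a tilted planar target with small range noise.
rng = np.random.default_rng(1)
xy = rng.uniform(-0.1, 0.1, (500, 2))
true_n = np.array([0.2, 0.1, 1.0])
true_n /= np.linalg.norm(true_n)
z = -(0.2 * xy[:, 0] + 0.1 * xy[:, 1]) + rng.normal(0.0, 1e-4, 500)
pts = np.column_stack([xy, z])
c, n = fit_plane(pts)
proj = project_to_plane(pts, c, n)
print(abs(float(n @ true_n)))   # close to 1: the normal is recovered
```

With many redundant points the plane parameters come out very precise, which is consistent with the paper's remark that the formal plane precision can be overestimated relative to the target-center precision.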
Bernstein, J.
1984-01-01
These lectures offer a self-contained review of the role of neutrinos in cosmology. The first part deals with the question 'What is a neutrino?' and describes in a historical context the theoretical ideas and experimental discoveries related to the different types of neutrinos and their properties. The basic differences between the Dirac neutrino and the Majorana neutrino are pointed out and the evidence for different neutrino 'flavours', neutrino mass, and neutrino oscillations is discussed. The second part summarizes current views on cosmology, particularly as they are affected by recent theoretical and experimental advances in high-energy particle physics. Finally, the close relationship between neutrino physics and cosmology is brought out in more detail, to show how cosmological constraints can limit the various theoretical possibilities for neutrinos and, more particularly, how increasing knowledge of neutrino properties can contribute to our understanding of the origin, history, and future of the Universe. The level is that of the beginning graduate student. (orig.)
Active vibration isolation of high precision machines
Collette, C; Artoos, K; Hauviller, C
2010-01-01
This paper provides a review of active control strategies used to isolate high-precision machines (e.g. telescopes, particle colliders, interferometers, lithography machines or atomic force microscopes) from external disturbances. The objective of this review is to provide tools to develop the best strategy for a given application. Firstly, the main strategies are presented and compared, using single-degree-of-freedom models. Secondly, the case of huge structures composed of a large number of elements, like particle colliders or segmented telescopes, is considered.
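The single-degree-of-freedom comparison mentioned above is commonly made with the base-excitation transmissibility of a passive mount versus an ideal sky-hook damper. The textbook formulas below are a generic illustration, not the paper's specific models:

```python
import numpy as np

def passive_T(r, zeta):
    """Transmissibility of a passive spring-damper mount under base excitation."""
    num = 1.0 + (2.0 * zeta * r) ** 2
    den = (1.0 - r ** 2) ** 2 + (2.0 * zeta * r) ** 2
    return np.sqrt(num / den)

def skyhook_T(r, zeta):
    """Transmissibility with an ideal sky-hook damper: damping acts against
    an inertial reference, so the resonance is tamed without the
    high-frequency penalty of passive damping."""
    den = (1.0 - r ** 2) ** 2 + (2.0 * zeta * r) ** 2
    return np.sqrt(1.0 / den)

r = np.array([0.1, 1.0, 10.0])   # frequency ratio omega / omega_n
zeta = 0.5
print(passive_T(r, zeta))   # passive rolls off only as 1/r at high r
print(skyhook_T(r, zeta))   # sky-hook rolls off as 1/r^2
```

This is the basic trade-off active isolation exploits: passive damping suppresses the resonance but degrades high-frequency isolation, whereas sky-hook (active) damping does both.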
The study of high precision neutron moisture gauge
Liu Shengkang; Bao Guanxiong; Sang Hai; Zhu Yuzhen
1993-01-01
The principle, structure and calibration experiment of the high-precision neutron moisture gauge (insertion type) are described. The gauge has been appraised. The uncertainty in measuring the moisture of coke is below 0.5%, over a moisture range of 2%-12%. The economic benefit of applying the gauge is good
High-precision gauging of metal rings
Carlin, Mats; Lillekjendlie, Bjorn
1994-11-01
Raufoss AS designs and produces air brake fittings for trucks and buses on the international market. One of the critical components in the fittings is a small, circular metal ring, which goes through 100% dimensional control. This article describes a low-cost, high-accuracy solution developed at SINTEF Instrumentation based on image metrology and a subpixel resolution algorithm. The measurement system consists of a PC plug-in transputer video board, a CCD camera, telecentric optics and a machine vision strobe. We describe the measurement technique in some detail, as well as the robust statistical techniques found to be essential in the real-life environment.
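Subpixel resolution algorithms of the kind referred to above often refine an edge position by parabolic interpolation of the intensity gradient. The sketch below shows this generic trick (the SINTEF algorithm itself is not described in the abstract):

```python
import numpy as np

def subpixel_edge(profile):
    """Locate an edge in a 1D intensity profile with subpixel precision.

    Finds the pixel with the largest gradient magnitude, then refines it
    by fitting a parabola through that gradient sample and its two
    neighbours (three-point peak interpolation)."""
    g = np.abs(np.diff(np.asarray(profile, dtype=float)))
    i = int(np.argmax(g))
    pos = float(i)
    if 0 < i < len(g) - 1:
        a, b, c = g[i - 1], g[i], g[i + 1]
        denom = a - 2.0 * b + c
        if denom != 0.0:
            pos = i + 0.5 * (a - c) / denom
    return pos + 0.5          # diff() places the edge between two pixels

# Synthetic ramp edge whose true centre sits between pixels, near x = 20.3.
x = np.arange(40)
profile = np.clip(x - 19.8, 0.0, 1.0) * 200.0   # one-pixel-wide ramp
print(subpixel_edge(profile))   # lands within a fraction of a pixel of 20.3
```

Applied along many radial profiles of the ring image, estimates like this can be pooled with robust statistics to reach an accuracy well below the pixel pitch.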
Precision probes of QCD at high energies
Alioli, Simone; Farina, Marco; Pappadopulo, Duccio; Ruderman, Joshua T.
2017-07-01
New physics, that is too heavy to be produced directly, can leave measurable imprints on the tails of kinematic distributions at the LHC. We use energetic QCD processes to perform novel measurements of the Standard Model (SM) Effective Field Theory. We show that the dijet invariant mass spectrum, and the inclusive jet transverse momentum spectrum, are sensitive to a dimension 6 operator that modifies the gluon propagator at high energies. The dominant effect is constructive or destructive interference with SM jet production. We compare differential next-to-leading order predictions from POWHEG to public 7 TeV jet data, including scale, PDF, and experimental uncertainties and their respective correlations. We constrain a New Physics (NP) scale of 3.5 TeV with current data. We project the reach of future 13 and 100 TeV measurements, which we estimate to be sensitive to NP scales of 8 and 60 TeV, respectively. As an application, we apply our bounds to constrain heavy vector octet colorons that couple to the QCD current. We project that effective operators will surpass bump hunts, in terms of coloron mass reach, even for sequential couplings.
Zhang Yuanzhong
2002-06-21
This book is one of a series in the areas of high-energy physics, cosmology and gravitation published by the Institute of Physics. It includes courses given at a doctoral school on 'Relativistic Cosmology: Theory and Observation' held in Spring 2000 at the Centre for Scientific Culture 'Alessandro Volta', Italy, sponsored by SIGRAV-Societa Italiana di Relativita e Gravitazione (Italian Society of Relativity and Gravitation) and the University of Insubria. This book collects 15 review reports given by a number of outstanding scientists. They touch upon the main aspects of modern cosmology from observational matters to theoretical models, such as cosmological models, the early universe, dark matter and dark energy, modern observational cosmology, cosmic microwave background, gravitational lensing, and numerical simulations in cosmology. In particular, the introduction to the basics of cosmology includes the basic equations, covariant and tetrad descriptions, Friedmann models, observation and horizons, etc. The chapters on the early universe involve inflationary theories, particle physics in the early universe, and the creation of matter in the universe. The chapters on dark matter (DM) deal with experimental evidence of DM, neutrino oscillations, DM candidates in supersymmetry models and supergravity, structure formation in the universe, dark-matter search with innovative techniques, and dark energy (cosmological constant), etc. The chapters about structure in the universe consist of the basis for structure formation, quantifying large-scale structure, cosmic background fluctuation, galaxy space distribution, and the clustering of galaxies. In the field of modern observational cosmology, galaxy surveys and cluster surveys are given. The chapter on gravitational lensing describes the lens basics and models, galactic microlensing and galaxy clusters as lenses. The last chapter, 'Numerical simulations in cosmology', deals with spatial and
Parameterized post-Newtonian cosmology
Sanghai, Viraj A A; Clifton, Timothy
2017-01-01
Einstein’s theory of gravity has been extensively tested on solar system scales, and for isolated astrophysical systems, using the perturbative framework known as the parameterized post-Newtonian (PPN) formalism. This framework is designed for use in the weak-field and slow-motion limit of gravity, and can be used to constrain a large class of metric theories of gravity with data collected from the aforementioned systems. Given the potential of future surveys to probe cosmological scales to high precision, it is a topic of much contemporary interest to construct a similar framework to link Einstein’s theory of gravity and its alternatives to observations on cosmological scales. Our approach to this problem is to adapt and extend the existing PPN formalism for use in cosmology. We derive a set of equations that use the same parameters to consistently model both weak fields and cosmology. This allows us to parameterize a large class of modified theories of gravity and dark energy models on cosmological scales, using just four functions of time. These four functions can be directly linked to the background expansion of the universe, first-order cosmological perturbations, and the weak-field limit of the theory. They also reduce to the standard PPN parameters on solar system scales. We illustrate how dark energy models and scalar-tensor and vector-tensor theories of gravity fit into this framework, which we refer to as ‘parameterized post-Newtonian cosmology’ (PPNC). (paper)
System and method for high precision isotope ratio destructive analysis
Bushaw, Bruce A; Anheier, Norman C; Phillips, Jon R
2013-07-02
A system and process are disclosed that provide high accuracy and high precision destructive analysis measurements for isotope ratio determination of relative isotope abundance distributions in liquids, solids, and particulate samples. The invention utilizes a collinear probe beam to interrogate a laser ablated plume. This invention provides enhanced single-shot detection sensitivity approaching the femtogram range, and isotope ratios that can be determined at approximately 1% or better precision and accuracy (relative standard deviation).
High precision 3D coordinates location technology for pellet
Fan Yong; Zhang Jiacheng; Zhou Jingbin; Tang Jun; Xiao Decheng; Wang Chuanke; Dong Jianjun
2010-01-01
In inertial confinement fusion (ICF) systems, the pellet has traditionally been collimated manually, which is time-consuming and poorly automated. A new method based on binocular vision is proposed, in which the prospecting apparatus is placed on the public diagnosis platform to reach the relevant engineering target, using a high-precision two-dimensional calibration board. An iterative method is adopted to achieve 0.1-pixel corner-extraction precision. Furthermore, SVD decomposition is used to remove singular corners, and an improved Zhang's calibration method is applied to increase camera calibration precision. Experiments indicate that the RMS of the three-dimensional coordinate measurement precision is 25 μm, and the maximum system RMS of the distance measurement is better than 100 μm, satisfying the system index requirements. (authors)
High-precision thermal and electrical characterization of thermoelectric modules
Kolodner, Paul
2014-05-01
This paper describes an apparatus for performing high-precision electrical and thermal characterization of thermoelectric modules (TEMs). The apparatus is calibrated for operation between 20 °C and 80 °C and is normally used for measurements of heat currents in the range 0-10 W. Precision thermometry based on miniature thermistor probes enables an absolute temperature accuracy of better than 0.010 °C. The use of vacuum isolation, thermal guarding, and radiation shielding, augmented by a careful accounting of stray heat leaks and uncertainties, allows the heat current through the TEM under test to be determined with a precision of a few mW. The fractional precision of all measured parameters is approximately 0.1%.
Particle physics and cosmology
Turner, M.S.; Schramm, D.N.
1985-01-01
During the past year, the research of the members of our group has spanned virtually all the topics at the interface of cosmology and particle physics: inflationary Universe scenarios, astrophysical and cosmological constraints on particle properties, ultra-high energy cosmic ray physics, quantum field theory in curved space-time, cosmology with extra dimensions, superstring cosmology, neutrino astronomy with large, underground detectors, and the formation of structure in the Universe
High-precision performance testing of the LHC power converters
Bastos, M; Dreesen, P; Fernqvist, G; Fournier, O; Hudson, G
2007-01-01
The magnet power converters for the LHC were procured in three parts (power part, current transducers, and control electronics) to enable maximum industrial participation in the manufacturing while still guaranteeing the very high precision (a few parts in 10⁻⁶) required by the LHC. One consequence of this approach was several stages of system tests: factory reception tests, CERN reception tests, integration tests, short-circuit tests, and commissioning on the final load in the LHC tunnel. The majority of the power converters for the LHC have now been delivered and integrated into complete converters, and high-precision performance testing is well advanced. This paper presents the techniques used for high-precision testing and the results obtained.
High current precision long pulse electron beam position monitor
Nelson, S D; Fessenden, T J; Holmes, C
2000-01-01
Precision high-current long-pulse electron beam position monitoring has typically experienced problems with high-Q sensors, sensors damped to the point of lack of precision, or sensors that interact substantially with any beam halo, thus obscuring the desired signal. As part of the effort to develop a multi-axis electron beam transport system using transverse electromagnetic stripline kicker technology, it is necessary to precisely determine the position and extent of long high-energy beams for accurate beam position control (6-40 MeV, 1-4 kA, 2 μs beam pulse, sub-millimeter beam position accuracy). The kicker positioning system utilizes shot-to-shot adjustments for reduction of relatively slow (< 20 MHz) motion of the beam centroid. The electron beams passing through the diagnostic systems have the potential for large halo effects that tend to corrupt position measurements.
Sanders, Robert H
2016-01-01
The advent of sensitive high-resolution observations of the cosmic microwave background radiation and their successful interpretation in terms of the standard cosmological model has led to great confidence in this model's reality. The prevailing attitude is that we now understand the Universe and need only work out the details. In this book, Sanders traces the development and successes of Lambda-CDM, and argues that this triumphalism may be premature. The model's two major components, dark energy and dark matter, have the character of the pre-twentieth-century luminiferous aether. While there is astronomical evidence for these hypothetical fluids, their enigmatic properties call into question our assumptions of the universality of locally determined physical law. Sanders explains how modified Newtonian dynamics (MOND) is a significant challenge for cold dark matter. Overall, the message is hopeful: the field of cosmology has not become frozen, and there is much fundamental work ahead for tomorrow's cosmologists.
The acceptance of surface detector arrays for high energy cosmological muon neutrinos
Vo Van Thuan; Hoang Van Khanh
2011-01-01
In order to search for ultra-high energy cosmological earth-skimming muon neutrinos by the surface detector array (SD) similar to one of the Pierre Auger Observatory (PAO), we propose to use the transition electromagnetic radiation at the medium interface induced by earth-skimming muons for triggering a few of aligned neighboring Cherenkov SD stations. Simulations of the acceptance of a modeling SD array have been done to estimate the detection probability of earth-skimming muon neutrinos.
Application of high precision temperature control technology in infrared testing
Cao, Haiyuan; Cheng, Yong; Zhu, Mengzhen; Chu, Hua; Li, Wei
2017-11-01
To meet the demands of infrared system testing, the principle of the infrared target simulator and the function of its temperature control are presented. The key technologies of high-precision temperature control are discussed, including temperature acquisition, PID control, and power drive. A design scheme for temperature acquisition is put forward. To reduce measurement error, an intermittent excitation current and a four-wire connection for the platinum resistance thermometer are adopted, and a 24-bit ADC chip is used to improve acquisition precision. A fuzzy PID controller is designed to cope with the large time constant and the continuous disturbance of the environmental temperature, yielding little overshoot, rapid response, and high steady-state accuracy. Dual power operational amplifiers are used to drive the TEC. Experiments show that the key performance figures, such as temperature-control precision and response speed, meet the requirements.
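The control loop described in this abstract can be illustrated with a plain (non-fuzzy) discrete PID sketch. Everything below is an illustrative assumption: the gains, time step, setpoint, and the first-order thermal plant are invented for the example, not taken from the paper.

```python
# Minimal discrete PID temperature-control sketch (assumed parameters,
# not the paper's fuzzy-PID design or its hardware values).

class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

def simulate(steps=2000, dt=0.1, ambient=20.0, setpoint=35.0):
    """Toy first-order plant: temperature relaxes toward ambient and is
    driven by the controller output (e.g. a TEC drive power)."""
    pid = PID(kp=2.0, ki=0.5, kd=0.1, dt=dt)
    temp = ambient
    for _ in range(steps):
        power = pid.step(setpoint, temp)
        temp += dt * (0.05 * (ambient - temp) + 0.02 * power)
    return temp
```

The integral term is what removes the steady-state offset against the constant heat leak to ambient; a fuzzy PID, as in the paper, would additionally schedule the gains as the error evolves.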
Vernardos, G.; Fluke, C. J.; Croton, D.; Bate, N. F.
2014-01-01
As synoptic all-sky surveys begin to discover new multiply lensed quasars, the flow of data will enable statistical cosmological microlensing studies of sufficient size to constrain quasar accretion disk and supermassive black hole properties. In preparation for this new era, we are undertaking the GPU-Enabled, High Resolution cosmological MicroLensing parameter survey (GERLUMPH). We present here the GERLUMPH Data Release 1, which consists of 12,342 high resolution cosmological microlensing magnification maps and provides the first uniform coverage of the convergence, shear, and smooth matter fraction parameter space. We use these maps to perform a comprehensive numerical investigation of the mass-sheet degeneracy, finding excellent agreement with its predictions. We study the effect of smooth matter on microlensing induced magnification fluctuations. In particular, in the minima and saddle-point regions, fluctuations are enhanced only along the critical line, while in the maxima region they are always enhanced for high smooth matter fractions (≈0.9). We describe our approach to data management, including the use of an SQL database with a Web interface for data access and online analysis, obviating the need for individuals to download large volumes of data. In combination with existing observational databases and online applications, the GERLUMPH archive represents a fundamental component of a new microlensing eResearch cloud. Our maps and tools are publicly available at http://gerlumph.swin.edu.au/
Neutrino properties from cosmology
Hannestad, S.
2013-01-01
In recent years precision cosmology has become an increasingly powerful probe of particle physics. Perhaps the prime example of this is the very stringent cosmological upper bound on the neutrino mass. However, other aspects of neutrino physics, such as their decoupling history and possible non-standard interactions, can also be probed using observations of cosmic structure. Here, I review the current status of cosmological bounds on neutrino properties and discuss the potential of future observations, for example by the recently approved EUCLID mission, to precisely measure neutrino properties.
High precision pulsar timing and spin frequency second derivatives
Liu, X. J.; Bassa, C. G.; Stappers, B. W.
2018-05-01
We investigate the impact of intrinsic, kinematic and gravitational effects on high precision pulsar timing. We present an analytical derivation and a numerical computation of the impact of these effects on the first and second derivative of the pulsar spin frequency. In addition, in the presence of white noise, we derive an expression to determine the expected measurement uncertainty of a second derivative of the spin frequency for a given timing precision, observing cadence and timing baseline, and find that it strongly depends on the latter (∝ t^(-7/2)). We show that for pulsars with significant proper motion, the spin frequency second derivative is dominated by a term dependent on the radial velocity of the pulsar. Considering the data sets from three Pulsar Timing Arrays, we find that for PSR J0437-4715 a detectable spin frequency second derivative will be present if the absolute value of the radial velocity exceeds 33 km s⁻¹. Similarly, at the current timing precision and cadence, continued timing observations of PSR J1909-3744 for about another eleven years will allow the measurement of its frequency second derivative and determine the radial velocity with an accuracy better than 14 km s⁻¹. With ever increasing timing precision and observing baselines, the impact of the largely unknown radial velocities of pulsars on high precision pulsar timing cannot be neglected.
High precision mass measurements in Ψ and Υ families revisited
Artamonov, A.S.; Baru, S.E.; Blinov, A.E.
2000-01-01
High precision mass measurements in the Ψ and Υ families performed in 1980-1984 at the VEPP-4 collider with the OLYA and MD-1 detectors are revisited. The corrections for the new value of the electron mass are presented. The effect of the updated radiative corrections has been calculated for the J/Ψ(1S) and Ψ(2S) mass measurements.
Properties of the proton therapy. A high precision radiotherapy
Anon.
2005-01-01
Proton therapy is radiotherapy using proton beams. Protons present interesting characteristics, but they require heavy technologies, such as particle accelerators, radiation-protection walls, and sophisticated techniques to reach the high precision allowed by their ballistic qualities (treatment planning, beam conformation and patient positioning). (N.C.)
Layered compression for high-precision depth data.
Miao, Dan; Fu, Jingjing; Lu, Yan; Li, Shipeng; Chen, Chang Wen
2015-12-01
With the development of depth-data acquisition technologies, access to high-precision depth data with more than 8-bit depth has become much easier, and determining how to efficiently represent and compress high-precision depth is essential for practical depth storage and transmission systems. In this paper, we propose a layered high-precision depth compression framework based on an 8-bit image/video encoder to achieve efficient compression with low complexity. Within this framework, considering the characteristics of high-precision depth, a depth map is partitioned into two layers: 1) the most significant bits (MSBs) layer and 2) the least significant bits (LSBs) layer. The MSBs layer provides the rough depth-value distribution, while the LSBs layer records the details of the depth-value variation. For the MSBs layer, an error-controllable pixel-domain encoding scheme is proposed to exploit the data correlation of the general depth information with sharp edges and to guarantee that the data format of the LSBs layer remains 8 bits after absorbing the quantization error from the MSBs layer. For the LSBs layer, a standard 8-bit image/video codec is leveraged to perform the compression. The experimental results demonstrate that the proposed coding scheme can achieve real-time depth compression with satisfactory reconstruction quality. Moreover, the compressed depth data generated by this scheme achieve better performance in view synthesis and gesture recognition applications compared with conventional coding schemes, because of the error control algorithm.
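The MSBs/LSBs partition is easy to sketch. The split below is an illustrative assumption (a plain 16-bit depth value split at bit 8), not the paper's error-controlled scheme, which additionally shapes the MSBs-layer quantization error.

```python
# Sketch of layering a >8-bit depth value into two 8-bit layers so each
# fits a standard 8-bit image codec. Bit-8 split point is an assumption.

def split_layers(depth):
    """depth: list of 16-bit integer depth samples."""
    msb = [d >> 8 for d in depth]      # coarse depth distribution
    lsb = [d & 0xFF for d in depth]    # fine depth variation
    return msb, lsb

def merge_layers(msb, lsb):
    """Lossless reconstruction from the two layers."""
    return [(m << 8) | l for m, l in zip(msb, lsb)]

depth = [1000, 4095, 300, 65535]
msb, lsb = split_layers(depth)
restored = merge_layers(msb, lsb)
```

In the paper's framework the MSBs layer is coded with an error bound so that, after quantization, the residual still fits the 8-bit LSBs layer; the fixed bit split here is the simplest lossless special case.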
High-precision multi-node clock network distribution.
Chen, Xing; Cui, Yifan; Lu, Xing; Ci, Cheng; Zhang, Xuesong; Liu, Bo; Wu, Hong; Tang, Tingsong; Shi, Kebin; Zhang, Zhigang
2017-10-01
A high precision multi-node clock network for multiple users was built, based on precise frequency transfer and time synchronization over 120 km of fiber. The network topology adopts a simple star-shaped structure. The clock signal of a hydrogen maser (synchronized with UTC) was recovered from a 120 km telecommunication fiber link and then distributed to four sub-stations. The fractional frequency instability of all substations is at the level of 10⁻¹⁵ at 1 s, and the clock-offset instability is sub-ps in root-mean-square average.
High-precision ground-based photometry of exoplanets
de Mooij Ernst J.W.
2013-04-01
High-precision photometry of transiting exoplanet systems has contributed significantly to our understanding of the properties of their atmospheres. The best targets are the bright exoplanet systems, for which the high number of photons allows very high signal-to-noise ratios. Most current instruments are not optimised for these high-precision measurements: either they have a large read-out overhead to reduce the read noise, and/or their field of view is limited, preventing simultaneous observations of both the target and a reference star. Recently we have proposed a new wide-field imager for the Observatoire du Mont-Mégantic optimised for these bright systems (PI: Jayawardhana). The instrument has a dual-beam design and a field of view of 17' by 17'. The cameras have a read-out time of 2 seconds, significantly reducing read-out overheads. Over the past years we have gained significant experience with how to reach the high precision required for the characterisation of exoplanet atmospheres. Based on our experience we offer the following advice. Get the best calibrations possible; in the case of bad weather, characterise the instrument (e.g. non-linearity, dome flats, bias level), as this is vital for better understanding of the science data. Observe the target for as long as possible: the out-of-transit baseline is as important as the transit/eclipse itself, and a short baseline can lead to improperly corrected systematics and mis-estimation of the red noise. Keep everything (e.g. position on detector, exposure time) as stable as possible. Take care that the defocus is not too strong: for a large defocus, the contribution of the total flux from the sky background in the aperture could well exceed that of the target, resulting in very strict requirements on the precision with which the background is measured.
High Precision Edge Detection Algorithm for Mechanical Parts
Duan, Zhenyun; Wang, Ning; Fu, Jingshun; Zhao, Wenhui; Duan, Boqiang; Zhao, Jungui
2018-04-01
High precision and high efficiency measurement is becoming an imperative requirement for a lot of mechanical parts. So in this study, a subpixel-level edge detection algorithm based on the Gaussian integral model is proposed. For this purpose, the step edge normal section line Gaussian integral model of the backlight image is constructed, combined with the point spread function and the single step model. Then gray value of discrete points on the normal section line of pixel edge is calculated by surface interpolation, and the coordinate as well as gray information affected by noise is fitted in accordance with the Gaussian integral model. Therefore, a precise location of a subpixel edge was determined by searching the mean point. Finally, a gear tooth was measured by M&M3525 gear measurement center to verify the proposed algorithm. The theoretical analysis and experimental results show that the local edge fluctuation is reduced effectively by the proposed method in comparison with the existing subpixel edge detection algorithms. The subpixel edge location accuracy and computation speed are improved. And the maximum error of gear tooth profile total deviation is 1.9 μm compared with measurement result with gear measurement center. It indicates that the method has high reliability to meet the requirement of high precision measurement.
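As a rough illustration of subpixel edge location on a blurred step, the sketch below applies a gradient-centroid estimate to a synthetic erf (Gaussian-integral) profile. This is a simplification of the paper's model-fitting method; the edge position, blur width, and gray levels are all assumed values, and for an ideal noise-free erf step the gradient centroid coincides with the model's mean point.

```python
import math

def edge_profile(x, mu=10.3, sigma=1.2, lo=20.0, hi=200.0):
    """Synthetic step edge blurred by a Gaussian PSF (erf step model)."""
    t = (x - mu) / (sigma * math.sqrt(2))
    return lo + (hi - lo) * 0.5 * (1 + math.erf(t))

def subpixel_edge(profile):
    """Locate the edge as the centroid of the discrete gradient,
    giving a position well below one-pixel resolution."""
    grads = [profile[i + 1] - profile[i] for i in range(len(profile) - 1)]
    num = sum((i + 0.5) * g for i, g in enumerate(grads))
    den = sum(grads)
    return num / den

samples = [edge_profile(float(x)) for x in range(21)]
mu_est = subpixel_edge(samples)   # close to the true mu of 10.3
```

The paper instead fits the Gaussian-integral model to interpolated gray values along the edge normal, which is more robust to noise than the bare centroid shown here.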
Strategy for Realizing High-Precision VUV Spectro-Polarimeter
Ishikawa, R.; Narukage, N.; Kubo, M.; Ishikawa, S.; Kano, R.; Tsuneta, S.
2014-12-01
Spectro-polarimetric observations in the vacuum ultraviolet (VUV) range are currently the only means to measure magnetic fields in the upper chromosphere and transition region of the solar atmosphere. The Chromospheric Lyman-Alpha Spectro-Polarimeter (CLASP) aims to measure linear polarization at the hydrogen Lyman-α line (121.6 nm). This measurement requires a polarization sensitivity better than 0.1%, which is unprecedented in the VUV range. We here present a strategy with which to realize such high-precision spectro-polarimetry. This involves the optimization of instrument design, testing of optical components, extensive analyses of polarization errors, polarization calibration of the instrument, and calibration with onboard data. We expect that this strategy will aid the development of other advanced high-precision polarimeters in the UV as well as in other wavelength ranges.
Precision crystal alignment for high-resolution electron microscope imaging
Wood, G.J.; Beeching, M.J.
1990-01-01
One of the more difficult tasks in obtaining quality high-resolution electron micrographs is the precise alignment of a specimen into the required zone. The currently accepted procedure, which involves changing to diffraction mode and searching for a symmetric point diffraction pattern, is insensitive to small amounts of misalignment and at best qualitative. On-line analysis of the Fourier-space representation of the image, both for determining and correcting crystal tilt, is investigated. 8 refs., 42 figs
High-precision thickness measurements using beta backscatter
Heckman, R.V.
1978-11-01
A two-axis, automated fixture for use with a high-intensity Pm-147 source and a photomultiplier-scintillation beta-backscatter probe for making thickness measurements has been designed and built. A custom interface was built to connect the system to a minicomputer, and software was written to position the tables, control the probe, and make the measurements. Measurements can be made in less time with much greater precision than by the method previously used
High-precision reflectivity measurements: improvements in the calibration procedure
Jupe, Marco; Grossmann, Florian; Starke, Kai; Ristau, Detlev
2003-05-01
The development of high-quality optical components depends heavily on precise characterization procedures. The reflectance and transmittance of laser components are the most important parameters for advanced laser applications. In the industrial fabrication of optical coatings, quality management is generally ensured by spectrophotometric methods according to ISO/DIS 15386, at a medium level of accuracy. Especially for highly reflecting mirrors, a severe discrepancy in the determination of the absolute reflectivity can be found for spectrophotometric procedures. In the first part of the CHOCLAB project, a method for measuring reflectance and transmittance with enhanced precision was developed, which is described in ISO/WD 13697. In the second part of the CHOCLAB project, the evaluation and optimization of the presented method is scheduled, and within this framework an international Round-Robin experiment is currently in progress. During this Round-Robin experiment, distinct deviations were observed between the results of the high-precision measurement facilities of different partners. Based on extended experiments, the inhomogeneity of the sample reflectivity was identified as one important origin of the deviations. Consequently, this inhomogeneity also influences the calibration procedure. Therefore, a method was developed that allows calibration of the chopper blade using always the same position on the reference mirror. During the investigations, the homogeneity of several samples was characterized by a surface-mapping procedure at 1064 nm. The measurement facility was extended to the additional wavelength of 532 nm, and a similar set-up was assembled at 10.6 μm. The high-precision reflectivity procedure at the mentioned wavelengths is demonstrated with exemplary measurements.
High precision frequency estimation for harpsichord tuning classification
Tidhar, D.; Mauch, M.; Dixon, S.
2010-01-01
We present a novel music signal processing task of classifying the tuning of a harpsichord from audio recordings of standard musical works. We report the results of a classification experiment involving six different temperaments, using real harpsichord recordings as well as synthesised audio data. We introduce the concept of conservative transcription, and show that existing high-precision pitch estimation techniques are sufficient for our task if combined with conservative transcription. In...
High precision straw tube chamber with cathode readout
Bychkov, V.N.; Golutvin, I.A.; Ershov, Yu.V.
1992-01-01
The high precision straw chamber with cathode readout was constructed and investigated. The 10 mm straws were made of aluminized mylar strip with transparent longitudinal window. The X coordinate information has been taken from the cathode strips as induced charges and investigated via centroid method. The spatial resolution σ=120 μm has been obtained with signal/noise ratio about 60. The possible ways for improving the signal/noise ratio have been described. 7 refs.; 8 figs
A high precision straw tube chamber with cathode readout
Bychkov, V.N.; Golutvin, I.A.; Ershov, Yu.V.; Zubarev, E.V.; Ivanov, A.B.; Lysiakov, V.N.; Makhankov, A.V.; Movchan, S.A.; Peshekhonov, V.D.; Preda, T.
1993-01-01
The high precision straw chamber with cathode readout was constructed and investigated. The 10 mm diameter straws were made of aluminized Mylar with transparent longitudinal window. The X-coordinate information has been taken from cathode strips as induced charges and investigated with the centroid method. The spatial resolution σ x =103 μm was obtained at a signal-to-noise ratio of about 70. The possible ways to improve the signal-to-noise ratio are discussed. (orig.)
El-Khoury, P
1998-04-15
The study of exotic atoms, in which an orbiting electron of a normal atom is replaced by a negatively charged particle (π⁻, μ⁻, p̄, K⁻, Σ⁻, ...), may provide information on the orbiting particle and the atomic nucleus, as well as on their interaction. In this work, we were interested in pionic atoms (π⁻–¹⁴N) on the one hand, in order to determine the pion mass with high accuracy (4 ppm), and on the other hand in antiprotonic atoms (p̄p), in order to study the strong nucleon-antinucleon interaction at threshold. In this respect, a high-resolution crystal spectrometer was coupled to a cyclotron trap which provides a high stop density for particles in gas targets at low pressure. Using curved crystals, an extended X-ray source could be imaged onto the detector. Charge-coupled devices were used as position-sensitive detectors in order to measure the Bragg angle of the transition to high precision. The use of gas targets resolved the ambiguity owing to the number of K electrons in the value of the pion mass, and, for the first time, the strong-interaction shift and broadening of the 2p level in antiprotonic hydrogen were measured directly. (author)
High-Precision Computation: Mathematical Physics and Dynamics
Bailey, D.H.; Barrio, R.; Borwein, J.M.
2010-01-01
At the present time, IEEE 64-bit floating-point arithmetic is sufficiently accurate for most scientific applications. However, for a rapidly growing body of important scientific computing applications, a higher level of numeric precision is required. Such calculations are facilitated by high-precision software packages that include high-level language translation modules to minimize the conversion effort. This paper presents a survey of recent applications of these techniques and provides some analysis of their numerical requirements. These applications include supernova simulations, climate modeling, planetary orbit calculations, Coulomb n-body atomic systems, studies of the fine-structure constant, scattering amplitudes of quarks, gluons and bosons, nonlinear oscillator theory, experimental mathematics, evaluation of orthogonal polynomials, numerical integration of ODEs, computation of periodic orbits, studies of the splitting of separatrices, detection of strange nonchaotic attractors, Ising theory, quantum field theory, and discrete dynamical systems. We conclude that high-precision arithmetic facilities are now an indispensable component of a modern large-scale scientific computing environment.
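The kind of beyond-64-bit arithmetic surveyed here can be demonstrated with Python's standard `decimal` module (chosen for illustration; the paper discusses dedicated high-precision packages, not this module): computing √2 to 60 digits and checking that squaring it recovers 2 far beyond IEEE double precision, whose relative error floor is about 2.2 × 10⁻¹⁶.

```python
from decimal import Decimal, getcontext

# Working precision in decimal digits; IEEE 64-bit gives only ~16.
getcontext().prec = 60

root2 = Decimal(2).sqrt()
residual = abs(root2 * root2 - 2)   # ~1e-59, far below double precision
assert residual < Decimal(10) ** -55
```

The same pattern (set a context precision, then compute) is how most high-precision libraries are used; the translation modules mentioned in the abstract exist to retrofit such calls into existing Fortran or C codes.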
High precision electrostatic potential calculations for cylindrically symmetric lenses
Edwards, David Jr.
2007-01-01
A method is developed for potential calculation within cylindrically symmetric electrostatic lenses using mesh-relaxation techniques, capable of considerably higher accuracies than currently available. The method involves (i) creating very high-order algorithms (orders 6, 8, and 10) for determining the potentials at points in the net using surrounding point values, (ii) eliminating the effect of the large errors caused by singular points, and (iii) reducing gradients in the high-gradient regions of the geometry, thereby allowing the algorithms used in these regions to achieve greater precision, with (ii) and (iii) achieved by the use of telescopic multiregions. In addition, an algorithm for points one unit from a metal surface is developed, allowing general mesh-point algorithms to be used in these situations and thereby taking advantage of the enhanced precision of the latter. A maximum error function dependent on a sixth-order gradient of the potential is defined. With this, the single-point algorithmic errors can be viewed over the entire net. Finally, it is demonstrated that by utilizing the above concepts and procedures, the potential of a point in a reasonably high-gradient region of a test geometry can realize a precision of better than 10⁻¹⁰.
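The underlying mesh-relaxation idea can be illustrated with a basic second-order Jacobi sweep on a square grid with fixed boundary potentials; the paper's 6th-10th order stencils, singular-point handling, and telescopic multiregions are well beyond this sketch, and the grid size, boundary values, and iteration count below are assumed for the example.

```python
# Minimal Jacobi relaxation for Laplace's equation on an n-by-n grid:
# the top edge is held at potential 1, the other edges at 0, and each
# interior point is repeatedly replaced by the mean of its 4 neighbours.

def relax_laplace(n=21, iters=2000):
    grid = [[0.0] * n for _ in range(n)]
    for j in range(n):
        grid[0][j] = 1.0          # Dirichlet boundary: top edge at 1
    for _ in range(iters):
        new = [row[:] for row in grid]
        for i in range(1, n - 1):
            for j in range(1, n - 1):
                new[i][j] = 0.25 * (grid[i - 1][j] + grid[i + 1][j]
                                    + grid[i][j - 1] + grid[i][j + 1])
        grid = new
    return grid

g = relax_laplace()
# Maximum principle: interior values lie strictly between the boundary extremes.
assert all(0.0 < g[i][j] < 1.0 for i in range(1, 20) for j in range(1, 20))
```

By a four-fold rotation/superposition argument the converged centre value of this particular problem is exactly 0.25, which makes a convenient convergence check; the paper's higher-order stencils reduce the truncation error of the 4-neighbour average used here.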
High-Precision Computation: Mathematical Physics and Dynamics
Bailey, D. H.; Barrio, R.; Borwein, J. M.
2010-04-01
At the present time, IEEE 64-bit floating-point arithmetic is sufficiently accurate for most scientific applications. However, for a rapidly growing body of important scientific computing applications, a higher level of numeric precision is required. Such calculations are facilitated by high-precision software packages that include high-level language translation modules to minimize the conversion effort. This paper presents a survey of recent applications of these techniques and provides some analysis of their numerical requirements. These applications include supernova simulations, climate modeling, planetary orbit calculations, Coulomb n-body atomic systems, studies of the fine structure constant, scattering amplitudes of quarks, gluons and bosons, nonlinear oscillator theory, experimental mathematics, evaluation of orthogonal polynomials, numerical integration of ODEs, computation of periodic orbits, studies of the splitting of separatrices, detection of strange nonchaotic attractors, Ising theory, quantum field theory, and discrete dynamical systems. We conclude that high-precision arithmetic facilities are now an indispensable component of a modern large-scale scientific computing environment.
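Rump's classic example, often quoted in this literature, shows why: evaluated naively in IEEE double precision the expression loses every significant digit, while 50-digit arithmetic (here via Python's stdlib `decimal`, standing in for the dedicated packages the survey describes) recovers the true value f ≈ -0.8273960599...

```python
from decimal import Decimal, getcontext

# Rump's example: f(77617, 33096) has huge intermediate terms (~1e36)
# that cancel almost exactly; double precision keeps no correct digits.
def rump(a, b):
    return (Decimal("333.75") * b**6
            + a**2 * (11 * a**2 * b**2 - b**6 - 121 * b**4 - 2)
            + Decimal("5.5") * b**8 + a / (2 * b))

a_f, b_f = 77617.0, 33096.0
f_double = (333.75 * b_f**6
            + a_f**2 * (11 * a_f**2 * b_f**2 - b_f**6 - 121 * b_f**4 - 2)
            + 5.5 * b_f**8 + a_f / (2 * b_f))

getcontext().prec = 50                  # 50 significant decimal digits
f_exact = rump(Decimal(77617), Decimal(33096))

print(f_double)    # wildly wrong (magnitude ~1e20)
print(f_exact)     # -0.8273960599468213...
```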
High precision efficiency calibration of a HPGe detector
Nica, N.; Hardy, J.C.; Iacob, V.E.; Helmer, R.G.
2003-01-01
Many experiments involving measurements of γ rays require a very precise efficiency calibration. Since γ-ray detection and identification also requires good energy resolution, the most commonly used detectors are of the coaxial HPGe type. We have calibrated our 70% HPGe to ∼0.2% precision, motivated by the measurement of precise branching ratios (BR) in superallowed 0⁺ → 0⁺ β decays. These BRs are essential ingredients in extracting ft-values needed to test the Standard Model via the unitarity of the Cabibbo-Kobayashi-Maskawa matrix, a test which currently fails by more than two standard deviations. To achieve the required high precision in our efficiency calibration, we measured 17 radioactive sources at a source-detector distance of 15 cm. Some of these were commercial 'standard' sources but we achieved the highest relative precision with 'home-made' sources selected because they have simple decay schemes with negligible side feeding, thus providing exactly matched γ-ray intensities. These latter sources were produced by us at Texas A&M by n-activation or by nuclear reactions. Another critical source among the 17 was a ⁶⁰Co source produced by Physikalisch-Technische Bundesanstalt, Braunschweig, Germany: its absolute activity was quoted to better than 0.06%. We used it to establish our absolute efficiency, while all the other sources were used to determine relative efficiencies, extending our calibration over a large energy range (40-3500 keV). Efficiencies were also determined with Monte Carlo calculations performed with the CYLTRAN code. The physical parameters of the Ge crystal were independently determined and only two (unmeasurable) dead-layers were adjusted, within physically reasonable limits, to achieve precise absolute agreement with our measured efficiencies. The combination of measured efficiencies at more than 60 individual energies and Monte Carlo calculations to interpolate between them allows us to quote the efficiency of our
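The calibration logic described above, one absolutely calibrated ⁶⁰Co anchor point plus relative efficiencies from multi-line sources interpolated across the energy range, can be sketched as follows. All efficiencies and energies below are invented for illustration, not the paper's data:

```python
import math

# Hypothetical relative efficiencies at a few gamma-ray energies (keV),
# put on an absolute scale with one certified anchor point, then
# interpolated log-log (HPGe efficiency is near power-law above ~200 keV).
rel = {344.3: 1.00, 778.9: 0.523, 1332.5: 0.350, 2754.0: 0.196}
abs_1332 = 1.20e-3              # assumed absolute efficiency at 1332.5 keV
scale = abs_1332 / rel[1332.5]
abs_eff = {E: scale * r for E, r in rel.items()}

def interp_loglog(E, table):
    """Piecewise-linear interpolation in (log E, log efficiency)."""
    pts = sorted(table.items())
    for (E0, e0), (E1, e1) in zip(pts, pts[1:]):
        if E0 <= E <= E1:
            t = (math.log(E) - math.log(E0)) / (math.log(E1) - math.log(E0))
            return math.exp((1 - t) * math.log(e0) + t * math.log(e1))
    raise ValueError("energy outside calibrated range")

print(interp_loglog(1000.0, abs_eff))
```

In the paper's actual procedure the interpolation is backed by Monte Carlo (CYLTRAN) calculations rather than a bare log-log fit; the sketch only shows how a single absolute point propagates through a relative curve.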
Lesgourgues, Julien
2012-01-01
Neutrinos can play an important role in the evolution of the Universe, modifying some of the cosmological observables. In this contribution we summarize the main aspects of cosmological relic neutrinos and we describe how the precision of present cosmological data can be used to learn about neutrino properties, in particular their mass, providing complementary information to beta decay and neutrinoless double-beta decay experiments. We show how the analysis of current cosmological observations, such as the anisotropies of the cosmic microwave background or the distribution of large-scale structure, provides an upper bound on the sum of neutrino masses of order 1 eV or less, with very good perspectives from future cosmological measurements which are expected to be sensitive to neutrino masses well into the sub-eV range.
High precision capacitive beam phase probe for KHIMA project
Hwang, Ji-Gwang, E-mail: windy206@hanmail.net [Korea Institute of Radiological and Medical Sciences, 215–4, Gongneung-dong, Nowon-t, Seoul 139–706 (Korea, Republic of); Yang, Tae-Keun [Korea Institute of Radiological and Medical Sciences, 215–4, Gongneung-dong, Nowon-t, Seoul 139–706 (Korea, Republic of); Forck, Peter [GSI Helmholtz Centre for Ion Research, Darmstadt 64291 (Germany)
2016-11-21
In the medium energy beam transport (MEBT) line of the KHIMA project, a high precision beam phase probe monitor is required for precise tuning of the RF phase and amplitude of the Radio Frequency Quadrupole (RFQ) accelerator and the IH-DTL linac. It is also used to measure the kinetic energy of the ion beam by the time-of-flight (TOF) method using two phase probes. A capacitive beam phase probe has been developed. The electromagnetic design of the high precision phase probe was performed to satisfy a phase resolution of 1° (at 200 MHz), which was confirmed by tests on a wire test bench. The measured phase accuracy of the fabricated phase probe is 1.19 ps. Pre-amplifier electronics with a 0.125-1.61 GHz broad band were designed and fabricated to amplify the signal strength. The results of RF frequency and beam energy measurements using a proton beam from the cyclotron at KIRAMS are presented.
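The TOF energy measurement mentioned above reduces to simple relativistic kinematics: the flight time between the two phase probes gives β, hence γ and the kinetic energy. Probe spacing and timing below are assumed values for illustration:

```python
import math

# Kinetic energy of an ion beam from the time of flight between two
# phase probes.  Probe separation and flight time are invented numbers;
# M0C2 is the proton rest energy in MeV.
C = 299_792_458.0          # speed of light, m/s
M0C2 = 938.272             # proton rest energy, MeV

def kinetic_energy_mev(distance_m, tof_s, m0c2=M0C2):
    beta = distance_m / (tof_s * C)          # v/c from distance and TOF
    gamma = 1.0 / math.sqrt(1.0 - beta**2)
    return (gamma - 1.0) * m0c2              # T = (gamma - 1) m0 c^2

# e.g. beta ~ 0.122 (roughly a 7 MeV proton) over 1.5 m probe spacing:
beta = 0.1218
tof = 1.5 / (beta * C)
print(kinetic_energy_mev(1.5, tof))
```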
High precision ray tracing in cylindrically symmetric electrostatics
Edwards Jr, David, E-mail: dej122842@gmail.com
2015-11-15
Highlights: • High precision ray tracing is formulated using power series techniques. • Ray tracing is possible for fields generated by solution of Laplace's equation. • Spatial and temporal orders of 4-10 are included. • Precisions of ∼10⁻²⁰ have been obtained in test geometries of a hemispherical deflector analyzer. • This solution offers a considerable extension of ray tracing accuracy over the current state of the art. - Abstract: With the recent availability of a high order FDM solution to the curved boundary value problem, it is now possible to determine potentials in such geometries with considerably greater accuracy than had been available with the FDM method. In order for the algorithms used in the accurate potential calculations to be useful in ray tracing, those algorithms need to be integrated into the ray trace process itself. The object of this paper is to incorporate these algorithms into a solution of the equations of motion of the ray and, having done this, to demonstrate its efficacy. The algorithm incorporation has been accomplished by using power series techniques, and the solution constructed has been tested by tracing the medial ray through concentric sphere geometries. The testing has indicated that precisions of ray calculations of 10⁻²⁰ are now possible. This solution offers a considerable extension of ray tracing accuracy over the current state of the art.
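The power-series idea can be illustrated on a toy equation of motion, x'' = -x, where the Taylor coefficients of the solution are generated recursively and summed to high order at each step. This shows the technique class only, not the paper's electrostatic fields; and in plain double precision the accuracy floor is ~10⁻¹⁶ rather than the 10⁻²⁰ reachable with extended precision.

```python
import math

# One power-series (Taylor) time step for x'' = -x: the recurrence
# a[k+2] = -a[k] / ((k+1)(k+2)) follows directly from the ODE.
def taylor_step(x, v, h, order=10):
    a = [x, v]                                   # a[0] = x(t), a[1] = x'(t)
    for k in range(order - 1):
        a.append(-a[k] / ((k + 1) * (k + 2)))    # from x'' = -x
    xn = sum(c * h**k for k, c in enumerate(a))
    vn = sum(k * c * h**(k - 1) for k, c in enumerate(a) if k > 0)
    return xn, vn

x, v, h = 1.0, 0.0, 0.1          # x(0) = 1, x'(0) = 0  ->  x(t) = cos t
for _ in range(1000):            # integrate to t = 100
    x, v = taylor_step(x, v, h)
print(abs(x - math.cos(100.0)))  # tiny: near the double-precision floor
```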
The various correction methods to the high precision aeromagnetic data
Xu Guocang; Zhu Lin; Ning Yuanli; Meng Xiangbao; Zhang Hongjian
2014-01-01
In an airborne geophysical survey, an outstanding result depends first on the measurement precision of the instrument, the choice of measurement conditions and the reliability of data collection, and then on correct processing of the measurement data and a rational interpretation of it. Clearly, geophysical data processing is an important task for the comprehensive interpretation of the measurement results; whether the processing method is correct directly affects the quality of the final results. In recent years, in the course of actual production and scientific research, we have developed a set of personal computer software for processing aeromagnetic and radiometric survey data, and have successfully applied it to production. The processing methods and flowcharts for high precision aeromagnetic data are briefly introduced in this paper. The mathematical techniques of the various correction programs, for the IGRF, flying height and magnetic diurnal variation, are discussed in detail, and their effectiveness is illustrated with an example. (authors)
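A minimal sketch of the correction chain discussed (IGRF removal, diurnal removal, height reduction); all field values and the vertical gradient below are invented for illustration:

```python
# Reduce a measured total-field reading (nT) to a magnetic anomaly:
# subtract the IGRF reference value, subtract the diurnal variation
# recorded at a base station, and reduce to a nominal survey altitude
# using an assumed linear vertical gradient.  Illustrative only.
def correct_reading(measured_nt, igrf_nt, diurnal_nt,
                    flight_alt_m, nominal_alt_m, grad_nt_per_m=-0.025):
    anomaly = measured_nt - igrf_nt          # remove main (IGRF) field
    anomaly -= diurnal_nt                    # remove diurnal variation
    # continue the field to the nominal altitude (assumed linear gradient)
    anomaly -= grad_nt_per_m * (nominal_alt_m - flight_alt_m)
    return anomaly

print(correct_reading(48_512.4, 48_390.0, 12.3, 120.0, 100.0))
```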
Strategies for high-precision Global Positioning System orbit determination
Lichten, Stephen M.; Border, James S.
1987-01-01
Various strategies for the high-precision orbit determination of the GPS satellites are explored using data from the 1985 GPS field test. Several refinements to the orbit determination strategies were found to be crucial for achieving high levels of repeatability and accuracy. These include the fine tuning of the GPS solar radiation coefficients and the ground station zenith tropospheric delays. Multiday arcs of 3-6 days provided better orbits and baselines than the 8-hr arcs from single-day passes. Highest-quality orbits and baselines were obtained with combined carrier phase and pseudorange solutions.
International workshop on advanced materials for high precision detectors. Proceedings
Nicquevert, B.; Hauviller, C.
1994-01-01
These proceedings gather together the contributions to the Workshop on Advanced Materials for High Precision Detectors, which was held from 28-30 September 1994 in Archamps, Haute-Savoie, France. This meeting brought together international experts (researchers, physicists and engineers) in the field of advanced materials and their use in high energy physics detectors or spacecraft applications. Its purpose was to discuss the status of the different materials currently in use in the structures of detectors and spacecraft, together with their actual performances, technological implications and future prospects. Environmental effects, such as those of moisture and radiation, were discussed, as were design and manufacturing technologies. Some case studies were presented. (orig.)
Wainwright, J.
1990-01-01
The workshop on mathematical cosmology was devoted to four topics of current interest. This report contains a brief discussion of the historical background of each topic and a concise summary of the content of each talk. The topics were: the observational cosmology program, the cosmological perturbation program, isotropic singularities, and the evolution of Bianchi cosmologies. (author)
Precise muon drift tube detectors for high background rate conditions
Engl, Albert
2011-08-04
The muon spectrometer of the ATLAS experiment at the Large Hadron Collider consists of drift tube chambers, which provide the precise measurement of trajectories of traversing muons. In order to determine the momentum of the muons with high precision, the measurement of the position of the muon in a single tube has to be more accurate than σ ≤ 100 μm. The large cross section of proton-proton collisions and the high luminosity of the accelerator cause relevant background of neutrons and γs in the muon spectrometer. During the next decade a luminosity upgrade to 5×10³⁴ cm⁻²s⁻¹ is planned, which will increase the background counting rates considerably. In this context this work deals with the further development of the existing drift chamber technology to provide the required accuracy of the position measurement under high background conditions. Two approaches to improving the drift tube chambers are described: - In regions of moderate background rates a faster and more linear drift gas can provide precise position measurement without changing the existing hardware. - At very high background rates drift tube chambers consisting of tubes with a diameter of 15 mm are a valuable candidate to substitute the CSC muon chambers. The single tube resolution of the gas mixture Ar:CO₂:N₂ in the ratio 96:3:1 vol%, which is more linear and faster than the currently used drift gas Ar:CO₂ in the ratio 97:3 vol%, was determined at the Cosmic Ray Measurement Facility at Garching and at high γ-background counting rates at the Gamma Irradiation Facility at CERN. The alternative gas mixture shows similar resolution without background; at high background counting rates it shows better resolution than the standard gas. To analyse the data, the various parts of the setup have to be aligned precisely to each other. The change to an alternative gas mixture allows the use of the existing hardware. The second approach is drift tubes
Raychaudhuri, A.K.
1979-01-01
The subject is covered in chapters entitled: introduction; Newtonian gravitation and cosmology; general relativity and relativistic cosmology; analysis of observational data; relativistic models not obeying the cosmological principle; microwave radiation background; thermal history of the universe and nucleosynthesis; singularity of cosmological models; gravitational constant as a field variable; cosmological models based on Einstein-Cartan theory; cosmological singularity in two recent theories; fate of perturbations of isotropic universes; formation of galaxies; baryon symmetric cosmology; assorted topics (including extragalactic radio sources; Mach principle). (U.K.)
High precision and stable structures for particle detectors
Da Mota Silva, S; Hauviller, Claude
1999-01-01
The central detectors used in High Energy Physics experiments require the use of light and stable structures capable of supporting delicate and precise radiation detection elements. These structures need to be highly stable under environmental conditions in which external vibrations, high radiation levels, and temperature and humidity gradients must be taken into account. Their main design drivers are high dimensional and dynamic stability, a high stiffness to mass ratio and a large radiation length. For most applications, these constraints lead us to choose Carbon Fiber Reinforced Plastics (CFRP) as the structural element. The construction of light and stable structures with CFRP for these applications can be achieved by careful design engineering and further confirmation at the prototyping phase. However, the experimental environment can influence their characteristics and behavior. In this case, the use of adaptive structures could become a solution to this problem. We are studying structures in CFRP with bonded piezoel...
SKLUST device for high-precision gluing of MWPC
Amaglobeli, N.S.; Burov, R.V.; Sakandelidze, R.M.; Sakhelashvili, T.M.; Chiladze, B.G.; Glonti, G.L.; Glonti, L.N.
2005-01-01
The SKLUST device has been created for gluing precision plane-parallel anode and cathode spacer bars and integral anode and cathode frames of MWPCs, or the flat surfaces of large-area cathode planes for them in the case that thin copper-clad stesalit or glass-cloth-base laminate is used as the cathode, for example for CSC chambers. In contrast to usual gluing, in this device the glued components are not pressed against each other. SKLUST allows making high-precision products in laboratory conditions without preliminary machining of their components, producing a precision article of practically any area with a plane parallelism from ±0.030 down to ±0.006 mm, using a non-calibrated sheet of foiled (or unfoiled) stesalit, glass-cloth-base laminate or other flexible material with a thickness tolerance of ±0.2-0.5 mm or worse. On the biggest of the existing devices it is possible to fabricate an article with maximal dimensions of 2400×250 mm² at a thickness accuracy of (6±0.015) mm (maximum deviation). Since machining of blanks to thickness, or the use of exact blanks, is completely excluded from the technological cycle, the manufacturing process becomes simpler and the price of the articles is substantially reduced, especially for mass production
Stompor, R.; Abroe, M.; Ade, P.; Balbi, A.; Barbosa, D.; Bock, J.; Borrill, J.; Boscaleri, A.; de Bernardis, P.; Ferreira, P.G.; Hanany, S.; Hristov, V.; Jaffe, A.H.; Lee, A.T.; Pascale, E.; Rabii, B.; Richards, P.L.; Smoot, G.F.; Winant, C.D.; Wu, J.H.P.
2001-01-01
We discuss the cosmological implications of the new constraints on the power spectrum of the cosmic microwave background (CMB) anisotropy derived from a new high-resolution analysis of the MAXIMA-1 measurement. The power spectrum indicates excess power at l ∼ 860 over the average level of power at 411 ≤ l ≤ 785. This excess is statistically significant at the ∼95 percent confidence level. Its position coincides with that of the third acoustic peak, as predicted by generic inflationary models selected to fit the first acoustic peak as observed in the data. The height of the excess power agrees with the predictions of a family of inflationary models with cosmological parameters that are fixed to fit the CMB data previously provided by the BOOMERANG-LDB and MAXIMA-1 experiments. Our results therefore lend support to inflationary models and more generally to the dominance of adiabatic coherent perturbations in the structure formation of the universe. At the same time, they seem to disfavor a large variety of the nonstandard (but inflation-based) models that have been proposed to improve the quality of fits to the CMB data and the consistency with other cosmological observables. Within standard inflationary models, our results combined with the COBE/Differential Microwave Radiometer data give best-fit values and 95 percent confidence limits for the baryon density, Ω_b h² ≃ 0.033 ± 0.013, and the total density, Ω = 0.9 (+0.18/−0.16). The primordial spectrum slope (n_s) and the optical depth to the last scattering surface (τ_c) are found to be degenerate and to obey the relation n_s ≃ (0.99 ± 0.14) + 0.46 τ_c, for τ_c ≤ 0.5 (all at 95 percent confidence levels)
Precision Muon Tracking Detectors for High-Energy Hadron Colliders
Gadow, Philipp; Kroha, Hubert; Richter, Robert
2016-01-01
Small-diameter muon drift tube (sMDT) chambers with 15 mm tube diameter are a cost-effective technology for high-precision muon tracking over large areas at the high background rates expected at future high-energy hadron colliders including the HL-LHC. The chamber design and construction procedures have been optimized for mass production and provide sense wire positioning accuracy of better than 10 μm. The rate capability of the sMDT chambers has been extensively tested at the CERN Gamma Irradiation Facility. It exceeds that of the ATLAS muon drift tube (MDT) chambers, which are operated at unprecedentedly high background rates of neutrons and gamma-rays, by an order of magnitude, which is sufficient for almost the whole muon detector acceptance at FCC-hh at maximum luminosity. sMDT operational and construction experience exists from the ATLAS muon spectrometer upgrades which are in progress or under preparation for LHC Phase 1 and 2.
Partridge, R.B.
1977-01-01
Some sixty years after the development of relativistic cosmology by Einstein and his colleagues, observations are finally beginning to have an important impact on our views of the Universe. The available evidence seems to support one of the simplest cosmological models, the hot Big Bang model. The aim of this paper is to assess the observational support for certain assumptions underlying the hot Big Bang model. These are that the Universe is isotropic and homogeneous on a large scale; that it is expanding from an initial state of high density and temperature; and that the proper theory to describe the dynamics of the Universe is unmodified General Relativity. The properties of the cosmic microwave background radiation and recent observations of the abundance of light elements, in particular, support these assumptions. Also examined here are the data bearing on the related questions of the geometry and the future of the Universe (is it ever-expanding, or fated to recollapse?). Finally, some difficulties and faults of the standard model are discussed, particularly various aspects of the 'initial condition' problem. It appears that the simplest Big Bang cosmological model calls for a highly specific set of initial conditions to produce the presently observed properties of the Universe. (Auth.)
High-precision micro/nano-scale machining system
Kapoor, Shiv G.; Bourne, Keith Allen; DeVor, Richard E.
2014-08-19
A high precision micro/nanoscale machining system. A multi-axis movement machine provides relative movement along multiple axes between a workpiece and a tool holder. A cutting tool is disposed on a flexible cantilever held by the tool holder, the tool holder being movable to provide at least two of the axes to set the angle and distance of the cutting tool relative to the workpiece. A feedback control system uses measurement of deflection of the cantilever during cutting to maintain a desired cantilever deflection and hence a desired load on the cutting tool.
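The deflection-feedback idea can be sketched with a toy proportional-integral loop in which the commanded depth of cut is adjusted until a (linearized) cantilever deflection reaches its setpoint. The plant model and gains below are invented; the patent's actual controller is not specified here:

```python
# Toy PI loop for the deflection feedback described above: the cutting
# load, and hence cantilever deflection, is modeled as proportional to
# depth of cut; the controller drives the deflection to its setpoint.
def run_pi_loop(setpoint, steps=200, kp=0.4, ki=0.15, stiffness=2.0):
    depth, integ = 0.0, 0.0
    deflection = 0.0
    for _ in range(steps):
        deflection = stiffness * depth       # toy plant: load ∝ depth of cut
        error = setpoint - deflection
        integ += error                       # integral of the error
        depth += kp * error + ki * integ     # PI update of commanded depth
    return deflection

print(run_pi_loop(1.0))   # converges to the setpoint
```

With these particular gains the closed loop is stable (discrete-time eigenvalues 0.5 and 0.4), so the deflection settles on the setpoint well within 200 steps; real cutting dynamics would of course be nonlinear and noisy.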
Future high precision experiments and new physics beyond Standard Model
Luo, Mingxing.
1993-01-01
High precision (< 1%) electroweak experiments that have been done, or are likely to be done in this decade, are examined on the basis of Standard Model (SM) predictions of fourteen weak neutral current observables and fifteen W and Z properties at the one-loop level. The implications of the corresponding experimental measurements for various types of possible new physics that enter at tree or loop level are investigated. Certain experiments appear to have special promise as probes of the new physics considered here
Designing compensator of dual servo system for high precision positioning
Choi, Hyeun Seok; Song, Chi Woo; Han, Chang Soo; Choi, Tae Hoon; Lee, Nak Kyu; Na, Kyung Hwan
2003-01-01
The high precision positioning mechanism is used in various industrial fields: semiconductor manufacturing lines, test instruments, bioengineering, MEMS and so on. This paper presents a positioning mechanism with a dual servo system, consisting of a coarse stage and a fine motion stage. The coarse stage is driven by a VCM, and the actuator of the fine stage is a PZT. The purposes of the dual servo system are stability, higher bandwidth, and robustness. A lead compensator, designed by the PQ method, is applied to this control system. The designed compensator can improve the properties of the positioning mechanism
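Independent of the PQ design procedure itself, the reason a lead compensator helps is its positive phase contribution: C(s) = K(s+z)/(s+p) with p > z adds phase near the geometric mean of z and p, raising the phase margin. A sketch with assumed corner frequencies (z and p below are illustrative, not the paper's design values):

```python
import cmath
import math

# Phase added by a lead compensator C(s) = K (s + z)/(s + p), p > z.
# The gain K does not affect the phase, so it is omitted here.
def lead_phase_deg(w, z=10.0, p=100.0):
    s = 1j * w
    return math.degrees(cmath.phase((s + z) / (s + p)))

# maximum phase lead occurs at the geometric mean of the corners,
# with value arcsin((p - z)/(p + z))
w_mid = math.sqrt(10.0 * 100.0)
print(lead_phase_deg(w_mid))     # ~54.9 degrees for z=10, p=100
```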
Cosmological acceleration. Dark energy or modified gravity?
Bludman, S.
2006-05-01
We review the evidence for recently accelerating cosmological expansion or "dark energy", either a negative pressure constituent in General Relativity (Dark Energy) or modified gravity (Dark Gravity), without any constituent Dark Energy. If constituent Dark Energy does not exist, so that our universe is now dominated by pressure-free matter, Einstein gravity must be modified at low curvature. The vacuum symmetry of any Robertson-Walker universe then characterizes Dark Gravity as low- or high-curvature modifications of Einstein gravity. The dynamics of either kind of "dark energy" cannot be derived from the homogeneous expansion history alone, but requires also observing the growth of inhomogeneities. Present and projected observations are all consistent with a small fine-tuned cosmological constant, but also allow nearly static Dark Energy or gravity modified at cosmological scales. The growth of cosmological fluctuations will potentially distinguish between static and "dynamic" "dark energy". But, cosmologically distinguishing the Concordance Model ΛCDM from modified gravity will require a weak lensing shear survey more ambitious than any now projected. Dvali-Gabadadze-Porrati low-curvature modifications of Einstein gravity may also be detected in refined observations in the solar system (Lue and Starkman) or at the intermediate Vainstein scale (Iorio) in isolated galaxy clusters. Dark Energy's epicyclic character, failure to explain the original Cosmic Coincidence ("Why so small now?") without fine tuning, inaccessibility to laboratory or solar system tests, along with braneworld theories, now motivate future precision solar system, Vainstein-scale and cosmological-scale studies of Dark Gravity. (Orig.)
Electromagnetic Charge Radius of the Pion at High Precision
Ananthanarayan, B.; Caprini, Irinel; Das, Diganta
2017-09-01
We present a determination of the pion charge radius from high precision data on the pion vector form factor from both timelike and spacelike regions, using a novel formalism based on analyticity and unitarity. At low energies, instead of the poorly known modulus of the form factor, we use its phase, known with high accuracy from Roy equations for ππ elastic scattering via the Fermi-Watson theorem. We use also the values of the modulus at several higher timelike energies, where the data from e⁺e⁻ annihilation and τ decay are mutually consistent, as well as the most recent measurements at spacelike momenta. The experimental uncertainties are implemented by Monte Carlo simulations. The results, which do not rely on a specific parametrization, are optimal for the given input information and do not depend on the unknown phase of the form factor above the first inelastic threshold. Our prediction for the charge radius of the pion is r_π = (0.657 ± 0.003) fm, which amounts to an increase in precision by a factor of about 2.7 compared to the Particle Data Group average.
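The Monte Carlo propagation of experimental uncertainties mentioned above works, in miniature, like this: sample each input from its quoted Gaussian error, recompute the derived quantity, and read off the spread of the results. The model function and numbers below are invented stand-ins, not the actual form-factor analysis:

```python
import random
import statistics

# Toy Monte Carlo error propagation: a derived quantity depending on
# two measured inputs with Gaussian uncertainties.
def derived(x, y):
    return x * y          # stand-in for the real analysis chain

random.seed(1)            # fixed seed for reproducibility
samples = [derived(random.gauss(2.00, 0.03), random.gauss(0.50, 0.01))
           for _ in range(100_000)]
mean = statistics.fmean(samples)
sigma = statistics.stdev(samples)
print(mean, sigma)        # spread agrees with linear propagation (~0.025)
```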
Present status and future aspects of highly precise radiotherapy
Oita, Masataka; Takegawa, Yoshihiro; Maezawa, Hiroshi; Ikushima, Hitoshi; Osaki, Kyosuke; Nishitani, Hiromu
2006-01-01
This review describes therapeutic equipment, irradiation technology, the actual practice of highly precise radiotherapy (RT) and its future tasks. The development of radiation equipment has made the therapy highly precise. At present, there are reportedly 836 linacs and 23 microtrons in Japan (March 2005), most of which are computerized, new-generation machines. Image-guided RT, CT-linac systems, real-time tumor-tracking RT (RTRT), tomotherapy and the CyberKnife have been introduced owing to the development of the relevant devices and equipment. In addition, there are 7 facilities with proton and/or heavy ion beams. In parallel with the machine development above, irradiation has progressed from 2D to 3D with the multi-gate technique using multi-leaf collimators, and intensity-modulated RT has been introduced. RTRT is an example of 4D RT. In practice, stereotactic irradiation (STI) of brain tumors has resulted in a 1-year cumulative survival rate of 58% in 16 cases (23 foci, median size 1.2 cm and volume 0.57 ml) with a median dose of 21.0 Gy in the authors' hospital. STI of early-stage lung cancers is also performed without severe adverse effects. Future tasks involve the further development of irradiation techniques and RT planning, QA/QC systems, and the training of experts in related fields, which is a national problem. (T.I.)
Dynamics of High-Speed Precision Geared Rotor Systems
Lim Teik C.
2014-07-01
Gears are one of the most widely applied precision machine elements in power transmission systems employed in automotive, aerospace, marine, rail and industrial applications because of their reliability, precision, efficiency and versatility. Fundamentally, gears provide a very practical mechanism to transmit motion and mechanical power between two rotating shafts. However, their performance and accuracy are often hampered by tooth failure, vibrations and whine noise. This is most acute in high-speed, high power density geared rotor systems, which is the primary scope of this paper. The present study focuses on the development of a gear pair mathematical model for use in analyzing the dynamics of power transmission systems. The theory includes the gear mesh representation derived from results of the quasi-static tooth contact analysis. This proposed gear mesh theory, comprising transmission error, mesh point, mesh stiffness and line-of-action nonlinear, time-varying parameters, can be easily incorporated into a variety of transmission system models ranging from the lumped parameter type to detailed finite element representation. The gear dynamic analysis performed led to the discovery of the out-of-phase gear pair torsion modes that are responsible for much of the mechanical problems seen in gearing applications. The paper concludes with a discussion on effectual design approaches to minimize the influence of gear dynamics and to mitigate gear failure in practical power transmission systems.
Astroparticle physics and cosmology
Senjanovic, G.; Smirnov, A.Yu.; Thompson, G.
2001-01-01
In this volume a wide spectrum of topics of modern astroparticle physics, such as neutrino astrophysics, dark matter of the universe, high energy cosmic rays, topological defects in cosmology, γ-ray bursts, phase transitions at high temperatures, is covered. The articles written by top level experts in the field give a comprehensive view of the state-of-the-art of modern cosmology
BEAMGAA. A chance for high precision analysis of big samples
Goerner, W.; Berger, A.; Haase, O.; Segebade, Chr.; Alber, D.; Monse, G.
2005-01-01
In activation analysis of traces in small samples, the non-equivalence of the activating radiation doses received by sample and calibration material gives rise to systematic errors that are sometimes tolerable. Analysis of major components, by contrast, usually demands high trueness and precision. To meet this, beam-geometry activation analysis (BEAMGAA) procedures have been developed for instrumental photon (IPAA) and neutron activation analysis (INAA), in which the activating neutron/photon beam exhibits a broad, flat-topped profile. This results in a very low lateral activating-flux gradient compared to known radiation facilities, although at significantly lower flux density. The axial flux gradient can be accounted for by a monitor-sample-monitor assembly. As a first application, major components were determined in high-purity substances, as well as selenium in a cattle-fodder additive. (author)
The QCD coupling and parton distributions at high precision
Bluemlein, Johannes
2010-07-01
A survey is given of the present status of the nucleon parton distributions and of related precision calculations and precision measurements of the strong coupling constant α_s(M_Z²). We also discuss the impact of these quantities on precision observables at hadron colliders. (orig.)
Observing exoplanet populations with high-precision astrometry
Sahlmann, Johannes
2012-06-01
This thesis deals with the application of astrometry, which consists in measuring the position of a star in the plane of the sky, to the discovery and characterisation of extra-solar planets. It is feasible only with very high measurement precision, which motivates the use of space observatories, the development of new ground-based astronomical instrumentation, and innovative data-analysis methods. The study of Sun-like stars with substellar companions using CORALIE radial velocities and HIPPARCOS astrometry leads to the determination of the frequency of close brown-dwarf companions and to the discovery of a dividing line between massive planets and brown-dwarf companions. An observation campaign employing optical imaging with a very large telescope demonstrates astrometric precision sufficient to detect planets around ultra-cool dwarf stars, and the first results of the survey are presented. Finally, the design and initial astrometric performance of PRIMA, a new dual-feed near-infrared interferometric observing facility for relative astrometry, are presented.
High precision isotopic ratio analysis of volatile metal chelates
Hachey, D.L.; Blais, J.C.; Klein, P.D.
1980-01-01
High-precision isotope-ratio measurements have been made for a series of volatile alkaline-earth and transition-metal chelates using conventional GC/MS instrumentation. Electron ionization was used for the alkaline-earth chelates, whereas isobutane chemical ionization was used for the transition-metal studies. Natural isotopic abundances were determined for a series of Mg, Ca, Cr, Fe, Ni, Cu, Cd, and Zn chelates. Absolute accuracy ranged between 0.01 and 1.19 at.%. Absolute precision ranged between ±0.01 and ±0.27 at.% (RSD ±0.07-10.26%) for elements with as many as eight natural isotopes. Calibration curves were prepared using natural-abundance metals and their enriched ⁵⁰Cr, ⁶⁰Ni, and ⁶⁵Cu isotopes, covering the range 0.1-1010.7 at.% excess. A separate multiple-isotope calibration curve was similarly prepared using enriched ⁶⁰Ni (0.02-2.15 at.% excess) and ⁶²Ni (0.23-18.5 at.% excess). The samples were analyzed by GC/CI/MS. Human plasma, containing enriched ²⁶Mg and ⁴⁴Ca, was analyzed by EI/MS. 1 figure, 5 tables
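The calibration-curve procedure described in this abstract can be sketched in code. The following is a minimal illustration (not the authors' implementation, and with purely invented numbers): standards of known at.% excess are measured, a least-squares line is fitted, and the line is inverted to recover enrichment from a measured ion-intensity ratio.

```python
def fit_line(x, y):
    """Return slope and intercept of the least-squares line y = a*x + b."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    a = sxy / sxx
    b = my - a * mx
    return a, b

# Hypothetical standards: prepared 50Cr at.% excess vs. measured ion-intensity
# ratio. These values are illustrative, not data from the study.
prepared = [0.0, 1.0, 5.0, 20.0, 100.0]
measured = [0.0519, 0.0625, 0.1049, 0.2639, 1.1119]

slope, intercept = fit_line(prepared, measured)

def excess_from_ratio(r):
    """Invert the calibration line to recover at.% excess from a ratio."""
    return (r - intercept) / slope
```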
High-precision rovibrational spectroscopy of OH⁺
Markus, Charles R.; Hodges, James N.; Perry, Adam J.; Kocheril, G. Stephen; McCall, Benjamin J. [Department of Chemistry, University of Illinois, Urbana, IL 61801 (United States); Müller, Holger S. P., E-mail: bjmccall@illinois.edu [I. Physikalisches Institut, Universität zu Köln, Zülpicher Str. 77, D-50937 Köln (Germany)
2016-02-01
The molecular ion OH⁺ has long been known to be an important component of the interstellar medium. Its relative abundance can be used to indirectly measure cosmic-ray ionization rates of hydrogen, and it is the first intermediate in the interstellar formation of water. To date, only a limited number of pure rotational transitions have been observed in the laboratory, making it necessary to indirectly calculate rotational levels from high-precision rovibrational spectroscopy. We have remeasured 30 transitions in the fundamental band with MHz-level precision, in order to enable the prediction of a THz spectrum of OH⁺. The ions were produced in a water-cooled discharge of O₂, H₂, and He, and the rovibrational transitions were measured with the technique of noise-immune cavity-enhanced optical heterodyne velocity modulation spectroscopy. These values have been included in a global fit of field-free data to a ³Σ⁻ linear-molecule effective Hamiltonian to determine improved spectroscopic parameters, which were used to predict the pure rotational transition frequencies.
Ultracold Anions for High-Precision Antihydrogen Experiments.
Cerchiari, G; Kellerbauer, A; Safronova, M S; Safronova, U I; Yzombard, P
2018-03-30
Experiments with antihydrogen (H̄) for the study of matter-antimatter symmetry and antimatter gravity require ultracold H̄ to reach ultimate precision. A promising path towards antiatoms much colder than a few kelvin involves the precooling of antiprotons by laser-cooled anions. Because of the weak binding of the valence electron in anions, dominated by polarization and correlation effects, only a few candidate systems with suitable transitions exist. We report on a combination of experimental and theoretical studies to fully determine the relevant binding energies, transition rates, and branching ratios of the most promising candidate, La⁻. Using combined transverse and collinear laser spectroscopy, we determined the resonant frequency of the laser-cooling transition to be ν = 96.592 713(91) THz and its transition rate to be A = 4.90(50)×10⁴ s⁻¹. Using a novel high-precision theoretical treatment of La⁻, we calculated as-yet unmeasured energy levels, transition rates, branching ratios, and lifetimes to complement the experimental information on the laser-cooling cycle of La⁻. The new data establish the suitability of La⁻ for laser cooling and show that the cooling transition is significantly stronger than suggested by a previous theoretical study.
How does pressure gravitate? Cosmological constant problem confronts observational cosmology
Narimani, Ali; Afshordi, Niayesh; Scott, Douglas
2014-08-01
An important and long-standing puzzle in the history of modern physics is the gross inconsistency, by at least 60 orders of magnitude, between theoretical expectations and cosmological observations of the vacuum energy density, otherwise known as the cosmological constant problem. A characteristic feature of vacuum energy is that it has a pressure with the same amplitude, but opposite sign to, its energy density, while all the precision tests of General Relativity are either in vacuum or for media with negligible pressure. Therefore, one may wonder whether an anomalous coupling to pressure might be responsible for decoupling vacuum from gravity. We test this possibility in the context of the Gravitational Aether proposal, using current cosmological observations, which probe the gravity of relativistic pressure in the radiation era. Interestingly, we find that the best fit for anomalous pressure coupling is about halfway between General Relativity (GR) and Gravitational Aether (GA) if we include Planck together with WMAP and BICEP2 polarization cosmic microwave background (CMB) observations. Taken at face value, this data combination excludes both GR and GA at around the 3σ level. However, including higher-resolution CMB observations ("highL") or baryonic acoustic oscillations (BAO) pushes the best fit closer to GR, excluding the Gravitational Aether solution to the cosmological constant problem at the 4-5σ level. This constraint effectively places a limit on the anomalous coupling to pressure in the parametrized post-Newtonian (PPN) expansion: ζ₄ = 0.105 ± 0.049 (+highL CMB), or ζ₄ = 0.066 ± 0.039 (+BAO). These represent the most precise measurements of this parameter to date, indicating a mild tension with GR (for ΛCDM including tensors, which has ζ₄ = 0), and also among different data sets.
Developing and implementing a high precision setup system
Peng, Lee-Cheng
The demand for high-precision radiotherapy (HPRT) was first met in stereotactic radiosurgery using a rigid, invasive stereotactic head frame. Fractionated stereotactic radiotherapy (SRT) with a frameless device was developed alongside a growing interest in sophisticated treatment with tight margins and high dose gradients. This dissertation establishes the complete management for HPRT in the process of frameless SRT, including image-guided localization, immobilization, and dose evaluation. An ideal, precise positioning system allows ease of relocation, real-time assessment of patient movement, high accuracy, and no additional dose in daily use. A new image-guided stereotactic positioning system (IGSPS), the Align RT3C 3D surface camera system (ART, VisionRT), which combines 3D surface images and uses a real-time tracking technique, was developed to ensure accurate positioning in the first place. Uncertainties of the current optical tracking system, which causes patient discomfort because of the additional bite plates used with the dental-impression technique and external markers, are identified. The accuracy and feasibility of ART are validated by comparisons with the optical tracking and cone-beam computed tomography (CBCT) systems. Additionally, an effective daily quality assurance (QA) program for the linear accelerator and multiple IGSPSs is the most important factor in ensuring system performance in daily use. Systematic errors from the phantom variety and the long measurement time caused by switching phantoms were discovered. We investigated the use of a commercially available daily QA device to improve efficiency and thoroughness. A reasonable action level has been established by considering dosimetric relevance and clinic flow. As for intricate treatments, the effect of dose deviation caused by setup errors on tumor coverage and toxicity to OARs remains uncertain. The lack of adequate dosimetric simulations based on the true treatment coordinates from
A high-precision system for conformal intracranial radiotherapy
Tome, Wolfgang A.; Meeks, Sanford L.; Buatti, John M.; Bova, Francis J.; Friedman, William A.; Li Zuofeng
2000-01-01
Purpose: Currently, optimally precise delivery of intracranial radiotherapy is possible with stereotactic radiosurgery and fractionated stereotactic radiotherapy. We report on an optimally precise optically guided system for three-dimensional (3D) conformal radiotherapy using multiple noncoplanar fixed fields. Methods and Materials: The optically guided system detects infrared light emitting diodes (IRLEDs) attached to a custom bite plate linked to the patient's maxillary dentition. The IRLEDs are monitored by a commercially available stereo camera system, which is interfaced to a personal computer. An IRLED reference is established with the patient at the selected stereotactic isocenter, and the computer reports the patient's current position based on the location of the IRLEDs relative to this reference position. Using this readout from the computer, the patient may be dialed directly to the desired position in stereotactic space. The patient is localized on the first day and a reference file is established for 5 different couch positions. The patient's image data are then imported into a commercial convolution-based 3D radiotherapy planning system. The previously established isocenter and couch positions are then used as a template upon which to design a conformal 3D plan with maximum beam separation. Results: The use of the optically guided system in conjunction with noncoplanar radiotherapy treatment planning using fixed fields allows the generation of highly conformal treatment plans that exhibit a high degree of dose homogeneity and a steep dose gradient. To date, this approach has been used to treat 28 patients. Conclusion: Because IRLED technology improves the accuracy of patient localization relative to the linac isocenter and allows real-time monitoring of patient position, one can choose treatment-field margins that only account for beam penumbra and image resolution without adding margin to account for larger and poorly defined setup uncertainty. This
Precision Viticulture from Multitemporal, Multispectral Very High Resolution Satellite Data
Kandylakis, Z.; Karantzalos, K.
2016-06-01
In order to efficiently exploit very-high-resolution satellite multispectral data for precision-agriculture applications, validated methodologies should be established which link the observed reflectance spectra with certain crop/plant/fruit biophysical and biochemical quality parameters. To this end, based on concurrent satellite and field campaigns during the veraison period, satellite and in-situ data were collected, along with several grape samples at specific locations during the harvesting period. These data were collected over a period of three years in two viticultural areas in Northern Greece. After the required data pre-processing, canopy reflectance observations, through the combination of several vegetation indices, were correlated with the quantitative results of the grape/must analysis of the grape sampling. The results appear quite promising, indicating that certain key quality parameters (such as brix levels, total phenolic content, brix to total acidity, and anthocyanin levels), which describe the oenological potential, phenolic composition and chromatic characteristics, can be efficiently estimated from the satellite data.
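The index-correlation approach described in this abstract can be sketched as follows. This is a toy illustration under stated assumptions, not the study's pipeline: the well-known NDVI index is computed from red and near-infrared reflectance, and its Pearson correlation against a grape quality parameter (here, hypothetical brix values) is evaluated.

```python
import math

def ndvi(nir, red):
    """Normalized Difference Vegetation Index from NIR and red reflectance."""
    return (nir - red) / (nir + red)

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return sxy / (sx * sy)

# Illustrative per-plot canopy reflectances and grape-sampling brix levels;
# all numbers are invented for the sketch.
nir_refl = [0.42, 0.48, 0.51, 0.55, 0.60]
red_refl = [0.08, 0.07, 0.06, 0.05, 0.05]
brix = [21.0, 22.1, 22.8, 23.5, 24.0]

indices = [ndvi(n, r) for n, r in zip(nir_refl, red_refl)]
r = pearson(indices, brix)
```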
Thermal-mechanical behavior of high precision composite mirrors
Kuo, C. P.; Lou, M. C.; Rapp, D.
1993-01-01
Composite mirror panels were designed, constructed, analyzed, and tested in the framework of a NASA precision segmented reflector task. The deformations of the reflector surface during exposure to space environments were predicted using a finite element model. The composite mirror panels have graphite-epoxy or graphite-cyanate facesheets, separated by an aluminum or a composite honeycomb core. It is pointed out that, in order to carry out detailed modeling of composite mirrors with high accuracy, it is necessary to have temperature-dependent properties of the materials involved and the type and magnitude of manufacturing errors and material nonuniformities. The structural modeling and analysis efforts addressed the impact of key design and materials parameters on the performance of the mirrors.
High precision measurements of ²⁶Na β⁻ decay
Grinyer, G. F.; Svensson, C. E.; Andreoiu, C.; Andreyev, A. N.; Austin, R. A.; Ball, G. C.; Chakrawarthy, R. S.; Finlay, P.; Garrett, P. E.; Hackman, G.; Hardy, J. C.; Hyland, B.; Iacob, V. E.; Koopmans, K. A.; Kulp, W. D.; Leslie, J. R.; MacDonald, J. A.; Morton, A. C.; Ormand, W. E.; Osborne, C. J.; Pearson, C. J.; Phillips, A. A.; Sarazin, F.; Schumaker, M. A.; Scraggs, H. C.; Schwarzenberg, J.; Smith, M. B.; Valiente-Dobón, J. J.; Waddington, J. C.; Wood, J. L.; Zganjar, E. F.
2005-04-01
High-precision measurements of the half-life and β-branching ratios for the β⁻ decay of ²⁶Na to ²⁶Mg have been performed in β-counting and γ-decay experiments, respectively. A 4π proportional counter and a fast tape-transport system were employed for the half-life measurement, whereas the γ rays emitted by the daughter nucleus ²⁶Mg were detected with the 8π γ-ray spectrometer, both located at TRIUMF's isotope separator and accelerator radioactive beam facility. The half-life of ²⁶Na was determined to be T₁/₂ = 1.07128 ± 0.00013 ± 0.00021 s, where the first error is statistical and the second systematic. The log ft values derived from these experiments are compared with theoretical values from a full sd-shell-model calculation.
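The half-life extraction and the quoted two-part error can be illustrated with a minimal sketch (not the TRIUMF analysis; the decay data below are synthetic): a least-squares line through ln(counts) versus time yields the decay constant, and independent statistical and systematic uncertainties are combined in quadrature.

```python
import math

def half_life_from_counts(times, counts):
    """Estimate a half-life from a least-squares line through ln(counts) vs. t:
    ln N(t) = ln N0 - (ln 2 / T_half) * t, so T_half = -ln 2 / slope."""
    logs = [math.log(c) for c in counts]
    n = len(times)
    mt = sum(times) / n
    ml = sum(logs) / n
    stt = sum((t - mt) ** 2 for t in times)
    stl = sum((t - mt) * (l - ml) for t, l in zip(times, logs))
    slope = stl / stt
    return -math.log(2) / slope

def total_uncertainty(stat, syst):
    """Combine independent statistical and systematic errors in quadrature."""
    return math.hypot(stat, syst)

# Synthetic, noise-free decay data generated with T_half = 1.07128 s
# (the value quoted above), purely to exercise the estimator.
T = 1.07128
times = [0.0, 0.5, 1.0, 1.5, 2.0, 2.5]
counts = [1e6 * 2 ** (-t / T) for t in times]
```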
The SFD - 80 M high precision double axis facing lathe
Bran, T.; Dragomir, I.; Rusu, I.; Stanciu, S.; Niculceanu, F.; Nica, O.; Popescu, M.; Bailescu, V.; Burcea, Gh.; Turcanu, V.
2001-01-01
A high-precision double-axis facing lathe was designed for machining the 'final end-cup' by exterior conical turning. The lathe is semi-automatic and comprises two independent, identical units. The general constructive, dimensional and functional characteristics are presented, as well as the specific power consumption. Compared to other machines able to perform the same operations, this machine presents the following novel aspects: it is dedicated from the design stage to the workpiece to be machined; and the cutting speed is quasi-constant along the whole processing span (irrespective of the cutting diameter at which the tool is fixed along its trajectory generating the exterior cone). At 100% and 80% of nominal power the yield is 240 workpieces/hour and 192 workpieces/hour, respectively.
High Precision Renormalization Group Study of the Roughening Transition
Hasenbusch, M; Pinn, K
1994-01-01
We confirm the Kosterlitz-Thouless scenario of the roughening transition for three different Solid-On-Solid models: the Discrete Gaussian model, the Absolute-Value-Solid-On-Solid model and the dual transform of the XY model with standard (cosine) action. The method is based on a matching of the renormalization group flow of the candidate models with the flow of a bona fide KT model, the exactly solvable BCSOS model. The Monte Carlo simulations are performed using efficient cluster algorithms. We obtain high-precision estimates for the critical couplings and other non-universal quantities. For the XY model with cosine action our critical coupling estimate is $\beta_R^{XY}=1.1197(5)$. For the roughening couplings of the Discrete Gaussian and the Absolute-Value-Solid-On-Solid model we find $K_R^{DG}=0.6645(6)$ and $K_R^{ASOS}=0.8061(3)$, respectively.
Optimal dynamic performance for high-precision actuators/stages
Preissner, C.; Lee, S.-H.; Royston, T. J.; Shu, D.
2002-01-01
System dynamic performance of actuator/stage groups, such as those found in optical instrument positioning systems and other high-precision applications, depends upon both individual component behavior and the system configuration. Experimental modal analysis techniques were implemented to determine the six-degree-of-freedom stiffnesses and damping of individual actuator components. These experimental data were then used in a multibody dynamic computer model to investigate the effect of stage-group configuration. Running the computer model through the possible stage configurations and observing the predicted vibratory response determined the optimal stage-group configuration. Configuration optimization can be performed for any group of stages, provided stiffness and damping data are available for the constituent pieces.
High Precision Infrared Temperature Measurement System Based on Distance Compensation
Chen Jing
2017-01-01
To meet the need for real-time remote monitoring of human body-surface temperature in optical rehabilitation therapy, a non-contact, high-precision, real-time temperature measurement method based on distance compensation was proposed, and the system design was carried out. A microcontroller controls the infrared temperature-measurement module and the laser ranging module to collect temperature and distance data. The compensation formula of temperature versus distance was fitted by the least-squares method. Testing was performed on different individuals to verify the accuracy of the system. The results indicate that the designed non-contact infrared temperature-measurement system has a residual error of less than 0.2°C and a response time of less than 0.1 s in the range of 0 to 60 cm. This provides a reference for developing long-distance temperature-measurement equipment in optical rehabilitation therapy.
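The least-squares distance compensation described in this abstract can be sketched as follows. This is a hypothetical illustration, not the authors' implementation: the raw-minus-reference temperature offset is fitted against distance with a least-squares quadratic (via the normal equations), and the fitted offset is subtracted from each raw reading at run time.

```python
def polyfit2(xs, ys):
    """Least-squares quadratic y = a*x**2 + b*x + c via the normal equations."""
    S = [sum(x ** k for x in xs) for k in range(5)]  # sums of x^0..x^4
    r = [sum(y * x ** k for x, y in zip(xs, ys)) for k in (2, 1, 0)]
    A = [[S[4], S[3], S[2]],
         [S[3], S[2], S[1]],
         [S[2], S[1], S[0]]]
    # Gaussian elimination with partial pivoting on the 3x3 system.
    for i in range(3):
        p = max(range(i, 3), key=lambda k: abs(A[k][i]))
        A[i], A[p] = A[p], A[i]
        r[i], r[p] = r[p], r[i]
        for k in range(i + 1, 3):
            f = A[k][i] / A[i][i]
            for j in range(i, 3):
                A[k][j] -= f * A[i][j]
            r[k] -= f * r[i]
    coef = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):  # back substitution
        coef[i] = (r[i] - sum(A[i][j] * coef[j] for j in range(i + 1, 3))) / A[i][i]
    return coef  # (a, b, c)

def compensate(raw_temp, distance, coeffs):
    """Correct a raw IR reading by the fitted distance-dependent offset."""
    a, b, c = coeffs
    return raw_temp - (a * distance ** 2 + b * distance + c)

# Hypothetical calibration data: offset (raw - reference, deg C) vs distance (cm).
dist = [0, 10, 20, 30, 40, 50, 60]
offset = [0.0, -0.13, -0.32, -0.57, -0.88, -1.25, -1.68]
coeffs = polyfit2(dist, offset)
```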
Introduction to Cosmology
Ryden, Barbara
2017-01-01
This second edition of Introduction to Cosmology is an exciting update of an award-winning textbook. It is aimed primarily at advanced undergraduate students in physics and astronomy, but is also useful as a supplementary text at higher levels. It explains modern cosmological concepts, such as dark energy, in the context of the Big Bang theory. Its clear, lucid writing style, with a wealth of useful everyday analogies, makes it exceptionally engaging. Emphasis is placed on the links between theoretical concepts of cosmology and the observable properties of the universe, building deeper physical insights in the reader. The second edition includes recent observational results, fuller descriptions of special and general relativity, expanded discussions of dark energy, and a new chapter on baryonic matter that makes up stars and galaxies. It is an ideal textbook for the era of precision cosmology in the accelerating universe.
Neutrino flavor conversions in high-density astrophysical and cosmological environments
Saviano, Ninetta
2014-03-01
The topic of this thesis is the study of neutrino flavor conversions in high-density environments: supernovae and the early Universe. Remarkably, these are the only two cases in which neutrinos themselves contribute to the 'background medium' for their propagation, making their oscillations a non-linear phenomenon. In particular, in the dense supernova core, neutrino-neutrino interactions can in some situations lead to surprising and counterintuitive collective phenomena, in which the entire neutrino system oscillates coherently as a single collective mode. In this context, we have shown that during the early SN accretion phase (post-bounce times 10 -3 ) in order to suppress the sterile neutrino production and to find a better agreement between the cosmological and laboratory hints. Finally, we discuss the implications of our results for Big-Bang Nucleosynthesis and for the Cosmic Microwave Background data measured by the Planck experiment.
Landsberg, P.T.; Evans, D.A.
1977-01-01
The subject is dealt with in chapters entitled: cosmology - some fundamentals; Newtonian gravitation - some fundamentals; the cosmological differential equation - the particle model and the continuum model; some simple Friedmann models; the classification of the Friedmann models; the steady-state model; universes with pressure; optical effects of the expansion according to various theories of light; optical observations and cosmological models. (U.K.)
High precision measurements of the luminosity at LEP
Pietrzyk, B.
1994-01-01
The art of luminosity measurement at LEP is presented. First-generation LEP detectors measured the absolute luminosity with a precision of 0.3-0.5%. The most precise present detectors have reached 0.07% precision, and 0.05% is not excluded in the future. The centre-of-mass-energy-dependent relative precision of the luminosity detectors and the use of the theoretical cross-section in the LEP experiments are also discussed. (author). 18 refs., 6 figs., 6 tabs
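The role of the theoretical cross-section mentioned above can be made concrete with a minimal sketch (illustrative only, with invented numbers): the integrated luminosity follows from the number of small-angle Bhabha events counted in the luminometer acceptance divided by the theoretical cross-section there, and the statistical and theory uncertainties combine in quadrature.

```python
import math

def integrated_luminosity(n_events, sigma_nb):
    """L = N / sigma: Bhabha events counted in the luminometer acceptance
    divided by the theoretical cross-section (in nb) in that acceptance;
    the result is in nb^-1."""
    return n_events / sigma_nb

def relative_error(n_events, sigma_rel_theory):
    """Statistical (1/sqrt(N)) and relative theory errors in quadrature."""
    return math.hypot(1.0 / math.sqrt(n_events), sigma_rel_theory)

# Illustrative numbers: one million Bhabha events against a 100 nb
# theoretical cross-section, with a 0.1% theory uncertainty.
lumi = integrated_luminosity(1_000_000, 100.0)
err = relative_error(1_000_000, 0.001)
```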
Bardeen, J.M.
1986-01-01
The last several years have seen a tremendous ferment of activity in astrophysical cosmology. Much of the theoretical impetus has come from particle physics theories of the early universe and candidates for dark matter, but what promise to be even more significant are improved direct observations of high z galaxies and intergalactic matter, deeper and more comprehensive redshift surveys, and the increasing power of computer simulations of the dynamical evolution of large scale structure. Upper limits on the anisotropy of the microwave background radiation are gradually getting tighter and constraining more severely theoretical scenarios for the evolution of the universe. 47 refs
Electroweak precision tests in high-energy diboson processes
Franceschini, Roberto; Panico, Giuliano; Pomarol, Alex; Riva, Francesco; Wulzer, Andrea
2018-02-01
A promising avenue for performing precision tests of the SM at the LHC is to measure differential cross-sections at high invariant mass, exploiting in this way the growth with energy of the corrections induced by heavy new physics. We classify the leading growing-with-energy effects in longitudinal diboson and in associated Higgs production processes, showing that they can be encapsulated in four real "high-energy primary" parameters. We assess the reach on these parameters at the LHC and at future hadronic colliders, focusing in particular on the fully leptonic WZ channel, which appears particularly promising. The reach is found to be superior to existing constraints by one order of magnitude, providing a test of the SM electroweak sector at the per-mille level, in competition with LEP bounds. Unlike LHC run-1 bounds, which only apply to new physics effects that are much larger than the SM in the high-energy tail of the distributions, the probe we study applies to a wider class of new physics scenarios where such large departures are not expected.
Precision, high dose radiotherapy: helium ion treatment of uveal melanoma
Saunders, W.M.; Char, D.H.; Quivey, J.M.; Castro, J.R.; Chen, G.T.Y.; Collier, J.M.; Cartigny, A.; Blakely, E.A.; Lyman, J.T.; Zink, S.R.
1985-02-01
The authors report on 75 patients with uveal melanoma who were treated by placing the Bragg peak of a helium ion beam over the tumor volume. The technique localizes the high dose region very tightly around the tumor volume. This allows critical structures, such as the optic disc and the macula, to be excluded from the high dose region as long as they are 3 to 4 mm away from the edge of the tumor. Careful attention to tumor localization, treatment planning, patient immobilization and treatment verification is required. With a mean follow-up of 22 months (3 to 60 months) the authors have had only five patients with a local recurrence, all of whom were salvaged with another treatment. Pretreatment visual acuity has generally been preserved as long as the tumor edge is at least 4 mm away from the macula and optic disc. The only serious complication to date has been an 18% incidence of neovascular glaucoma in the patients treated at our highest dose level. Clinical results and details of the technique are presented to illustrate potential clinical precision in administering high dose radiotherapy with charged particles such as helium ions or protons.
Perturbations in loop quantum cosmology
Nelson, W; Agullo, I; Ashtekar, A
2014-01-01
The era of precision cosmology has allowed us to accurately determine many important cosmological parameters, in particular via the CMB. Confronting Loop Quantum Cosmology with these observations provides us with a powerful test of the theory. For this to be possible, we need a detailed understanding of the generation and evolution of inhomogeneous perturbations during the early, quantum-gravity phase of the universe. Here, we have described how Loop Quantum Cosmology provides a completion of the inflationary paradigm that is consistent with the observed power spectra of the CMB.
High-precision efficiency calibration of a high-purity co-axial germanium detector
Blank, B., E-mail: blank@cenbg.in2p3.fr [Centre d' Etudes Nucléaires de Bordeaux Gradignan, UMR 5797, CNRS/IN2P3, Université de Bordeaux, Chemin du Solarium, BP 120, 33175 Gradignan Cedex (France); Souin, J.; Ascher, P.; Audirac, L.; Canchel, G.; Gerbaux, M.; Grévy, S.; Giovinazzo, J.; Guérin, H.; Nieto, T. Kurtukian; Matea, I. [Centre d' Etudes Nucléaires de Bordeaux Gradignan, UMR 5797, CNRS/IN2P3, Université de Bordeaux, Chemin du Solarium, BP 120, 33175 Gradignan Cedex (France); Bouzomita, H.; Delahaye, P.; Grinyer, G.F.; Thomas, J.C. [Grand Accélérateur National d' Ions Lourds, CEA/DSM, CNRS/IN2P3, Bvd Henri Becquerel, BP 55027, F-14076 CAEN Cedex 5 (France)
2015-03-11
A high-purity co-axial germanium detector has been calibrated in efficiency to a precision of about 0.15% over a wide energy range. High-precision scans of the detector crystal and γ-ray source measurements have been compared to Monte-Carlo simulations to adjust the dimensions of a detector model. For this purpose, standard calibration sources and short-lived online sources have been used. The resulting efficiency calibration reaches the precision needed e.g. for branching ratio measurements of super-allowed β decays for tests of the weak-interaction standard model.
Bias-limited extraction of cosmological parameters
Shimon, Meir; Itzhaki, Nissan; Rephaeli, Yoel, E-mail: meirs@wise.tau.ac.il, E-mail: nitzhaki@post.tau.ac.il, E-mail: yoelr@wise.tau.ac.il [School of Physics and Astronomy, Tel Aviv University, Tel Aviv 69978 (Israel)
2013-03-01
It is known that modeling uncertainties and astrophysical foregrounds can potentially introduce appreciable bias in the deduced values of cosmological parameters. While it is commonly assumed that these uncertainties will be accounted for to a sufficient level of precision, the level of bias has not been properly quantified in most cases of interest. We show that the requirement that the bias in derived values of cosmological parameters not surpass the nominal statistical error translates into a maximal level of overall error O(N^−1/2) on |ΔP(k)|/P(k) and |ΔC_l|/C_l, where P(k), C_l, and N are the matter power spectrum, the angular power spectrum, and the number of (independent Fourier) modes at a given scale l or k probed by the cosmological survey, respectively. This required level has important consequences for the precision with which cosmological parameters can be determined by future surveys: in virtually all ongoing and near-future surveys N typically falls in the range 10^6−10^9, implying that the required overall theoretical modeling and numerical precision is already very high. Future redshifted-21-cm observations, projected to sample ∼10^14 modes, will require knowledge of the matter power spectrum to a fantastic 10^−7 precision level. We conclude that realizing the expected potential of future cosmological surveys, which aim at detecting 10^6−10^14 modes, sets the formidable challenge of reducing the overall level of uncertainty to 10^−3−10^−7.
Observable cosmology and cosmological models
Kardashev, N.S.; Lukash, V.N.; Novikov, I.D.
1987-01-01
The modern state of observational cosmology is briefly discussed. Among other things, the problem of determining the Hubble constant and the deceleration parameter is considered. Within the ''pancake'' theory, the hot (neutrino) cosmological model explains the large-scale structure of the Universe well, but does not explain galaxy formation. A cold cosmological model explains the formation of light objects well, but contradicts data on the large-scale structure
A Computer Controlled Precision High Pressure Measuring System
Sadana, S.; Yadav, S.; Jha, N.; Gupta, V. K.; Agarwal, R.; Bandyopadhyay, A. K.; Saxena, T. K.
2011-01-01
A microcontroller (AT89C51) based electronics has been designed and developed for a high-precision calibrator based on a Digiquartz pressure transducer (DQPT) for the measurement of high hydrostatic pressure up to 275 MPa. The input signal from the DQPT is converted into a square wave, and its frequency is multiplied by a factor of ten using a phase-locked loop. An octal buffer stores the measured frequency, which in turn is fed to the AT89C51 microcontroller interfaced with a liquid crystal display for the display of the frequency as well as the corresponding pressure in user-friendly units. The electronics is interfaced with a computer using RS232 for automatic data acquisition, computation and storage, making it a computer-controlled system; the data are acquired by a program written in Visual Basic 6.0. The system is capable of measuring frequency up to 4 MHz with a resolution of 0.01 Hz and pressure up to 275 MPa with a resolution of 0.001 MPa within a measurement uncertainty of 0.025%. The details of the hardware of the pressure measuring system, the associated electronics, the software and the calibration are discussed in this paper.
A high-precision synchronization circuit for clock distribution
Lu Chong; Tan Hongzhou; Duan Zhikui; Ding Yi
2015-01-01
In this paper, a novel structure of a high-precision synchronization circuit, HPSC, using interleaved delay units and a dynamic compensation circuit is proposed. HPSCs are designed for synchronization of clock distribution networks in large-scale integrated circuits, where high-quality clocks are required. The hybrid structure of a coarse delay line and a dynamic compensation circuit performs a rough alignment of the clock signal in two clock cycles, and finishes the fine tuning in the next three clock cycles with the phase error suppressed below 3.8 ps. The proposed circuit is implemented and fabricated using a SMIC 0.13 μm 1P6M process with a supply voltage of 1.2 V. The allowed operation frequency ranges from 200 to 800 MHz, and the duty cycle ranges between 20% and 80%. The active area of the core circuits is 245 × 134 μm², and the power consumption is 1.64 mW at 500 MHz. (paper)
Characterisation of work function fluctuations for high-precision experiments
Kahlenberg, Jan; Bickmann, Edward; Heil, Werner; Otten, Ernst W.; Schmidt, Christian; Wunderle, Alexander [Johannes Gutenberg-Universitaet Mainz (Germany); Babutzka, Martin; Schoenung, Kerstin [Karlsruher Institut fuer Technologie (Germany); Beck, Marcus [Johannes Gutenberg-Universitaet Mainz (Germany); Helmholtz-Institut Mainz (Germany)
2016-07-01
For a wide range of high-precision experiments in physics, well-defined electric potentials are required to achieve high measurement accuracies. An accurate determination of the electric potential is crucial for the measurement of the neutrino mass (KATRIN) as well as for the measurement of the e⁻ anti-ν_e correlation coefficient a in free neutron decay (aSPECT). Work function fluctuations on the electrodes lead to uncertainties in the distribution of the electric potential. For aSPECT, the electric potential has to be known to an accuracy of 10 mV; however, due to the patch effect of gold, work function fluctuations of several 100 meV can occur. The work function distributions of the gold-plated electrodes have therefore been measured using a Kelvin probe; fluctuations of up to 160 meV were found, which would lead to a significant uncertainty of the potential barrier. Furthermore, the change of the work function distributions over time as well as the influence of relative humidity on the work function measurement have been investigated.
Safarzadeh, Mohammadtaher; Ji, Alexander P.; Dooley, Gregory A.; Frebel, Anna; Scannapieco, Evan; Gómez, Facundo A.; O'Shea, Brian W.
2018-06-01
The smallest satellites of the Milky Way ceased forming stars during the epoch of reionization and thus provide archaeological access to galaxy formation at z > 6. Numerical studies of these ultrafaint dwarf galaxies (UFDs) require expensive cosmological simulations with high mass resolution that are carried out down to z = 0. However, if we are able to statistically identify UFD host progenitors at high redshifts with relatively high probabilities, we can avoid this high computational cost. To find such candidates, we analyse the merger trees of Milky Way type haloes from the high-resolution Caterpillar suite of dark matter only simulations. Satellite UFD hosts at z = 0 are identified based on four different abundance matching (AM) techniques. All the haloes at high redshifts are traced forward in time in order to compute the probability of surviving as satellite UFDs today. Our results show that selecting potential UFD progenitors based solely on their mass at z = 12 (8) results in a 10 per cent (20 per cent) chance of obtaining a surviving UFD at z = 0 in three of the AM techniques we adopted. We find that the progenitors of surviving satellite UFDs have lower virial ratios (η), and are preferentially located at large distances from the main MW progenitor, while they show no correlation with concentration parameter. Haloes with favorable locations and virial ratios are ≈3 times more likely to survive as satellite UFD candidates at z = 0.
High Precision Sunphotometer using Wide Dynamic Range (WDR) Camera Tracking
Liss, J.; Dunagan, S. E.; Johnson, R. R.; Chang, C. S.; LeBlanc, S. E.; Shinozuka, Y.; Redemann, J.; Flynn, C. J.; Segal-Rosenhaimer, M.; Pistone, K.; Kacenelenbogen, M. S.; Fahey, L.
2016-12-01
The NASA Ames Sun-photometer-Satellite Group, the DOE PNNL Atmospheric Sciences and Global Change Division, and NASA Goddard's AERONET (AErosol RObotic NETwork) team recently collaborated on the development of a new airborne sunphotometry instrument that provides information on gases and aerosols extending far beyond what can be derived from discrete-channel direct-beam measurements, while preserving or enhancing many of the desirable AATS features (e.g., compactness, versatility, automation, reliability). The enhanced instrument combines the sun-tracking ability of the current 14-channel NASA Ames AATS-14 with the sky-scanning ability of the ground-based AERONET Sun/sky photometers, while extending both AATS-14 and AERONET capabilities by providing full spectral information from the UV (350 nm) to the SWIR (1,700 nm). Strengths of this measurement approach include many more wavelengths (isolated from gas absorption features) that may be used to characterize aerosols, and detailed (oversampled) measurements of the absorption features of specific gas constituents. The Sky Scanning Sun Tracking Airborne Radiometer (3STAR) replicates the radiometer functionality of the AATS-14 instrument but incorporates modern COTS technologies for all instrument subsystems. A 19-channel radiometer bundle design is borrowed from a commercial water-column radiance instrument manufactured by Biospherical Instruments of San Diego, California (ref. Morrow and Hooker) and developed using NASA funds under the Small Business Innovative Research (SBIR) program. The 3STAR design also incorporates the latest in robotic motor technology, embodied in rotary actuators from Oriental Motor Corp. having better than 15 arc seconds of positioning accuracy. The control system was designed, tested and simulated using a hybrid-dynamical modeling methodology. The design also replaces the classic quadrant detector tracking sensor with a
Cosmology with clusters in the CMB
Majumdar, Subhabrata
2008-01-01
Ever since the seminal work by Sunyaev and Zel'dovich describing the distortion of the CMB spectrum due to photons passing through the hot intracluster gas on their way to us from the surface of last scattering (the so-called Sunyaev-Zel'dovich effect (SZE)), small-scale distortions of the CMB by clusters have been used to detect clusters as well as to do cosmology with clusters. Cosmology with clusters in the CMB can be divided into three distinct regimes: a) when the clusters are completely unresolved and contribute to the secondary CMB distortion power spectrum at small angular scales; b) when we can just about resolve the clusters, so as to detect them through their total SZE flux such that they can be tagged and counted for doing cosmology; and c) when we can completely resolve the clusters, so as to measure their sizes and other structural properties and their evolution with redshift. In this article, we take a look at these three aspects of SZE cluster studies and their implications for using clusters as cosmological probes. We show that clusters can be used as effective probes of cosmology when, in all three cases, one explores the synergy between cluster physics and cosmology and takes clues about cluster physics from the latest high-precision cluster observations (for example, from Chandra and XMM-Newton). As a specific case, we show how an observationally motivated cluster SZ template can explain the CBI excess without the need for a high σ_8. We also briefly discuss 'self-calibration' in cluster surveys and the prospect of using clusters as an ensemble of cosmic rulers to break degeneracies arising in cluster cosmology.
Zibner, F.; Fornaroli, C.; Holtkamp, J.; Shachaf, Lior; Kaplan, Natan; Gillner, A.
2017-08-01
High-precision laser micro-machining gains more importance in industrial applications every month. Optical systems like the helical optics offer the highest quality together with a controllable and adjustable drilling geometry, such as the taper angle, aspect ratio and heat-affected zone. The helical optics is based on a rotating Dove prism mounted in a hollow-shaft engine together with other optical elements like wedge prisms and plane plates. Although the achieved quality can be interpreted as extremely high, the low process efficiency is a main reason why this manufacturing technology has only limited demand within the industrial market. The objective of the research studies presented in this paper is to dramatically increase process efficiency as well as process flexibility. During the last years, the average power of commercial ultra-short pulsed laser sources has increased significantly. The efficient utilization of the high average laser power in the field of material processing requires an effective distribution of the laser power onto the workpiece. One approach to increase the efficiency is the application of beam-splitting devices to enable parallel processing. Multi-beam processing is used to parallelize the fabrication of periodic structures, as most applications require only a partial amount of the emitted ultra-short pulsed laser power. In order to achieve the highest flexibility while using multi-beam processing, the single beams are diverted and re-guided in a way that enables processing with each partial beam on spatially separated probes or semi-finished parts.
HIGH-PRECISION PREDICTIONS FOR THE ACOUSTIC SCALE IN THE NONLINEAR REGIME
Seo, Hee-Jong; Eckel, Jonathan; Eisenstein, Daniel J.; Mehta, Kushal; Metchnik, Marc; Pinto, Phillip; Xu Xiaoying; Padmanabhan, Nikhil; Takahashi, Ryuichi; White, Martin
2010-01-01
We measure shifts of the acoustic scale due to nonlinear growth and redshift distortions to a high precision using a very large volume of high-force-resolution simulations. We compare results from various sets of simulations that differ in their force, volume, and mass resolution. We find a consistency within 1.5σ for shift values from different simulations and derive a shift α(z) − 1 = (0.300 ± 0.015)% [D(z)/D(0)]^2 using our fiducial set. We find a strong correlation with a non-unity slope between shifts in real space and in redshift space, and a weak correlation between the initial redshift and low redshift. Density-field reconstruction not only removes the mean shifts and reduces errors on the mean, but also tightens the correlations. After reconstruction, we recover a slope of near unity for the correlation between the real and redshift space and restore a strong correlation between the initial and the low redshifts. We derive propagators and mode-coupling terms from our N-body simulations and compare with the Zel'dovich approximation and the shifts measured from the χ^2 fitting, respectively. We interpret the propagator and the mode-coupling term of a nonlinear density field in the context of an average and a dispersion of its complex Fourier coefficients relative to those of the linear density field; from these two terms, we derive a signal-to-noise ratio of the acoustic peak measurement. We attempt to improve our reconstruction method by implementing 2LPT and iterative operations, but we obtain little improvement. The Fisher matrix estimates of uncertainty in the acoustic scale are tested using 5000 h^−3 Gpc^3 of cosmological Particle-Mesh simulations from Takahashi et al. At an expected sample variance level of 1%, the agreement between the Fisher matrix estimates based on Seo and Eisenstein and the N-body results is better than 10%.
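For concreteness, the fitted shift formula can be evaluated numerically. In the sketch below the growth ratio D(z)/D(0) is approximated by 1/(1+z), which is exact only for a matter-dominated (Einstein-de Sitter) universe; the function name and that approximation are ours, not the authors':

```python
def acoustic_scale_shift(z: float) -> float:
    """alpha(z) - 1 = 0.300% * [D(z)/D(0)]^2 (fiducial fit from the
    abstract). D(z)/D(0) ~ 1/(1+z) is an Einstein-de Sitter
    approximation, used here only for illustration."""
    growth_ratio = 1.0 / (1.0 + z)
    return 0.003 * growth_ratio ** 2

print(f"shift at z=0: {acoustic_scale_shift(0.0):.3%}")  # 0.300%
print(f"shift at z=1: {acoustic_scale_shift(1.0):.3%}")
```

The shift is largest today and decays as the square of the growth factor toward higher redshift, which is why the nonlinear correction matters most for low-redshift BAO measurements.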
High Precision Current Control for the LHC Main Power Converters
Thiesen, H; Hudson, G; King, Q; Montabonnet, V; Nisbet, D; Page, S
2010-01-01
Since restarting at the end of 2009, the LHC reached a new energy record in March 2010 with two 3.5 TeV beams. To achieve the performance required for the proper functioning of the accelerator, the currents in the main circuits (Main Bends and Main Quadrupoles) must be controlled with a higher precision than ever previously requested for a particle accelerator at CERN: a few parts per million (ppm) of nominal current. This paper describes the different challenges that were overcome to achieve the required precision for the current control of the main circuits. Precision tests performed during the hardware commissioning of the LHC illustrate this paper.
High precision relocation of earthquakes at Iliamna Volcano, Alaska
Statz-Boyer, P.; Thurber, C.; Pesicek, J.; Prejean, S.
2009-01-01
In August 1996, a period of elevated seismicity commenced beneath Iliamna Volcano, Alaska. This activity lasted until early 1997, consisted of over 3000 earthquakes, and was accompanied by elevated emissions of volcanic gases. No eruption occurred and seismicity returned to background levels, where it has remained since. We use waveform alignment with bispectrum-verified cross-correlation and double-difference methods to relocate over 2000 earthquakes from 1996 to 2005 with high precision (~100 m). The results of this analysis greatly clarify the distribution of seismic activity, revealing distinct features previously hidden by location scatter. A set of linear earthquake clusters diverges upward and southward from the main group of earthquakes. The events in these linear clusters show a clear southward migration with time. We suggest that these earthquakes represent either a response to degassing of the magma body, circulation of fluids due to exsolution from magma or heating of ground water, or possibly the intrusion of new dikes beneath Iliamna's southern flank. In addition, we speculate that the deeper, somewhat diffuse cluster of seismicity near and south of Iliamna's summit indicates the presence of an underlying magma body between about 2 and 4 km depth below sea level, based on similar features found previously at several other Alaskan volcanoes. © 2009 Elsevier B.V.
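The double-difference idea the relocation rests on is easy to state in code: differencing arrival times of nearby event pairs at a common station cancels shared path and station errors, which is what allows relative locations far more precise than absolute ones. A minimal sketch (all names and numbers are illustrative, not from the study):

```python
def double_difference(t_obs_i, t_obs_j, t_calc_i, t_calc_j):
    """Double-difference residual for an event pair (i, j) at one
    station: (t_i - t_j)_obs - (t_i - t_j)_calc. Errors common to
    both events (station delay, shared path effects) cancel."""
    return (t_obs_i - t_obs_j) - (t_calc_i - t_calc_j)

# A shared, unmodeled station delay drops out of the residual:
delay = 0.25                       # s, common error (hypothetical)
t_true_i, t_true_j = 12.40, 12.10  # true travel times (hypothetical)
r = double_difference(t_true_i + delay, t_true_j + delay,
                      t_true_i, t_true_j)
print(abs(r) < 1e-9)  # True: the common delay cancels
```

In the full method these residuals, over many pairs and stations, drive an iterative least-squares adjustment of the relative hypocenters.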
High precision refractometry based on Fresnel diffraction from phase plates.
Tavassoly, M Taghi; Naraghi, Roxana Rezvani; Nahal, Arashmid; Hassani, Khosrow
2012-05-01
When a transparent plane-parallel plate is illuminated at a boundary region by a monochromatic parallel beam of light, Fresnel diffraction occurs because of the abrupt change in phase imposed by the finite change in refractive index at the plate boundary. The visibility of the diffraction fringes varies periodically with changes in incident angle. The visibility period depends on the plate thickness and the refractive indices of the plate and the surrounding medium. Plotting the phase change versus incident angle or counting the visibility repetition in an incident-angle interval provides, for a given plate thickness, the refractive index of the plate very accurately. It is shown here that the refractive index of a plate can be determined without knowing the plate thickness. Therefore, the technique can be utilized for measuring plate thickness with high precision. In addition, by installing a plate with known refractive index in a rectangular cell filled with a liquid and following the described procedures, the refractive index of the liquid is obtained. The technique is applied to measure the refractive indices of a glass slide, distilled water, and ethanol. The potential and merits of the technique are also discussed.
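The angle dependence underlying the periodic visibility can be illustrated with the standard optical path difference of a plane-parallel plate. This sketch is ours, not the authors' code, and assumes the surrounding medium is air (index 1):

```python
import math

def plate_opd(t_um: float, n: float, theta_deg: float) -> float:
    """Optical path difference (micrometres) introduced by a
    plane-parallel plate (thickness t_um, index n) in air at
    incidence angle theta_deg:
        OPD = t * (sqrt(n^2 - sin^2(theta)) - cos(theta)).
    The diffraction phase, and hence the fringe visibility,
    varies with incident angle through this quantity."""
    th = math.radians(theta_deg)
    return t_um * (math.sqrt(n * n - math.sin(th) ** 2) - math.cos(th))

# At normal incidence the familiar t*(n - 1) is recovered:
print(plate_opd(1000.0, 1.5, 0.0))  # 500.0 (a 1 mm plate, n = 1.5)
```

Counting how often the phase sweeps through 2π as the incidence angle varies is what ties the visibility period to the product of thickness and refractive index.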
High-Precision Direct Mass Determination of Unstable Isotopes
2002-01-01
The extension of systematic high-precision measurements of the nuclear mass to nuclei far from the valley of $\beta$ stability is of great interest in nuclear physics and astrophysics. The mass, or binding energy, is a fundamental gross property and a key input parameter for nuclear matter calculations. It is also a sensitive probe for collective and single-particle effects in nuclear structure. For such purposes, nuclear masses need to be known to an accuracy of about 10$^{-7}$ (i.e. $\Delta$M $\leq$ 10 keV for A = 100). To resolve a particular mass from its nuclear isomers and isobars, resolving powers of 10$^6$ are often required. To achieve this, the ions delivered by the on-line mass separator ISOLDE are confined in a Penning quadrupole trap. This trap is placed in the very homogeneous and stable magnetic field of a superconducting magnet. Here, the cyclotron frequency and hence the mass are determined. The first measurements using this new technique have been completed for a long chain of Cs ...
Software Development of High-Precision Ephemerides of Solar System
Jong-Seob Shin
1995-06-01
We solved the n-body problem for the 9 planets, the Moon, and 4 minor planets, with relativistic effects included in the basic equations of motion of the solar system. Perturbations including the figure potentials of the Earth and the Moon and the solid-Earth tidal effect were considered in this relativistic equation of motion. The orientations employed precession and nutation for the Earth, and Eckert's lunar libration model based on J2000.0 was used for the Moon. Finally, we computed the heliocentric ecliptic position and velocity of each planet using this software package, named the SSEG (Solar System Ephemerides Generator), by long-term (more than 100 years) simulation on a CRAY-2S supercomputer, after testing each subroutine on a personal computer and short-term (within 800 days) runs on a SUN3/280 workstation. The epoch of the input data, JD 2440400.5, was adopted in order to compare our results to the data archived from JPL's DE200 by Standish and Newhall. The above equations of motion were integrated numerically with a 1-day step size over 40,000 days (about 110 years) as the total computing interval. We obtained high-precision ephemerides of the planets with a maximum error of less than ~2 × 10^−8 AU (≈ ±3 km) compared with the DE200 data (except for Mars and the Moon).
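The kind of fixed-step numerical integration described above can be sketched with a toy two-body leapfrog integrator in heliocentric units (AU, years, GM_sun = 4π²). This is an illustration of the method only, not the SSEG code, and it omits every perturbation the paper models:

```python
import math

GM = 4.0 * math.pi ** 2  # heliocentric units: AU^3 / yr^2

def accel(x, y):
    """Point-mass solar gravity (relativistic and planetary
    perturbations from the paper are omitted in this toy model)."""
    r3 = (x * x + y * y) ** 1.5
    return -GM * x / r3, -GM * y / r3

def leapfrog_orbit(days, dt_days=1.0):
    """Integrate a circular Earth-like orbit with a 1-day step,
    mirroring the paper's step size."""
    dt = dt_days / 365.25
    x, y = 1.0, 0.0                 # 1 AU from the Sun
    vx, vy = 0.0, 2.0 * math.pi     # circular-orbit speed
    ax, ay = accel(x, y)
    for _ in range(int(days / dt_days)):
        vx += 0.5 * dt * ax; vy += 0.5 * dt * ay  # half kick
        x += dt * vx; y += dt * vy                # drift
        ax, ay = accel(x, y)
        vx += 0.5 * dt * ax; vy += 0.5 * dt * ay  # half kick
    return x, y

x, y = leapfrog_orbit(days=365)
print(math.hypot(x, y))  # stays very close to 1 AU over a full orbit
```

A symplectic scheme like leapfrog keeps the orbital radius bounded over long runs, which is the property that makes century-scale integrations like the one described feasible at a fixed 1-day step.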
Interferometric Star Tracker for High Precision Pointing, Phase I
National Aeronautics and Space Administration — Optical Physics Company (OPC) proposes to adapt the precision star tracker it is currently developing under several DoD contracts for deep space lasercom beam...
A Low-Cost, High-Precision Navigator, Phase II
National Aeronautics and Space Administration — Toyon Research Corporation proposes to develop and demonstrate a prototype low-cost precision navigation system using commercial-grade gyroscopes and accelerometers....
French Meteor Network for High Precision Orbits of Meteoroids
Atreya, P.; Vaubaillon, J.; Colas, F.; Bouley, S.; Gaillard, B.; Sauli, I.; Kwon, M. K.
2011-01-01
There is a lack of precise meteoroid orbits from video observations, as most meteor stations use off-the-shelf CCD cameras. Few meteoroid orbits with precise semi-major axes are available, obtained using the film photographic method. Precise orbits are necessary to compute the dust flux in the Earth's vicinity, and to estimate the ejection time of the meteoroids accurately by comparing them with theoretical evolution models. We investigate the use of large CCD sensors to observe multi-station meteors and to compute precise orbits of these meteoroids. An ideal spatial and temporal resolution to achieve an accuracy similar to that of photographic plates is discussed. Various problems arising from the use of large CCDs, such as increasing the spatial and the temporal resolution at the same time and computational problems in finding the meteor position, are illustrated.
Radio emission from Supernovae and High Precision Astrometry
Perez-Torres, M. A.
1999-11-01
The present thesis work makes contributions on two scientific fronts: differential astrometry over the largest angular scales ever attempted (approx. 15 degrees) and numerical simulations of radio emission from very young supernovae. In the first part, we describe the results of the use of very-long-baseline interferometry (VLBI) in one experiment designed to measure with very high precision the angular distance between the radio sources 1150+812 (QSO) and 1803+784 (BL Lac). We observed the radio sources on 19 November 1993 using an intercontinental array of radio telescopes, which simultaneously recorded at 2.3 and 8.4 GHz. VLBI differential astrometry is capable, Nature allowing, of yielding source positions with precisions well below the milliarcsecond level. To achieve this precision, we first had to accurately model the rotation of the interferometric fringes via the most precise models of Earth Orientation Parameters (EOP; precession, polar motion and UT1, nutation). With this model, we successfully connected our phase-delay data at both frequencies and, using difference astrometric techniques, determined the coordinates of 1803+784 relative to those of 1150+812 (within the IERS reference frame) with a standard error of about 0.6 mas in each coordinate. We then corrected for several effects, including the propagation medium (mainly the atmosphere and ionosphere), and opacity and source-structure effects within the radio sources. We stress that our dual-frequency measurements allowed us to accurately subtract the ionosphere contribution from our data. We also used GPS-based TEC measurements to independently find the ionosphere contribution, and showed that these contributions agree with our dual-frequency measurements within about 2 standard deviations in the less favorable cases (the longest baselines), but are usually well within one standard deviation. Our estimates of the relative positions, whether using dual-frequency-based or GPS-based ionosphere
On the cosmological propagation of high energy particles in magnetic fields
Alves Batista, Rafael
2015-04-01
In the present work the connection between high energy particles and cosmic magnetic fields is explored. In particular, the focus lies on the propagation of ultra-high energy cosmic rays (UHECRs) and very-high energy gamma rays (VHEGRs) over cosmological distances, under the influence of cosmic magnetic fields. The first part of this work concerns the propagation of UHECRs in the magnetized cosmic web, which was studied both analytically and numerically. A parametrization for the suppression of the UHECR flux at energies ∼10^18 eV due to diffusion in extragalactic magnetic fields was found, making it possible to set an upper limit on the energy at which this magnetic horizon effect sets in, which is
High precision patterning of ITO using femtosecond laser annealing process
Cheng, Chung-Wei; Lin, Cen-Ying
2014-01-01
Highlights: • We report a process for fabricating crystalline indium tin oxide (c-ITO) patterns using femtosecond laser-induced crystallization with a Gaussian beam profile followed by chemical etching. • The experimental results demonstrate that the ablation and crystallization threshold fluences of a-ITO thin film are well-defined and the line width of the c-ITO patterns is controllable. • Fast fabrication of two parallel sub-micron (∼0.5 μm) c-ITO line patterns using a single femtosecond laser beam and a single scanning path can be achieved. • A long sub-micron c-ITO line pattern is fabricated, and the feasibility of fabricating c-ITO patterns, which are expected to be used in micro-electronic devices, is confirmed. - Abstract: High-precision patterning of crystalline indium tin oxide (c-ITO) on amorphous ITO (a-ITO) thin films by femtosecond laser-induced crystallization with a Gaussian beam profile followed by chemical etching is demonstrated. In the proposed approach, the a-ITO thin film is selectively transformed into a c-ITO structure via a low heat-affected zone and the well-defined thresholds (ablation and crystallization) supplied by the femtosecond laser pulse. The experimental results show that by careful control of the laser fluence above the crystallization threshold, c-ITO patterns with controllable line widths and ridge-free characteristics can be accomplished. By careful control of the laser fluence above the ablation threshold, fast fabrication of two parallel sub-micron c-ITO line patterns using a single femtosecond laser beam and a single scanning path can be achieved. A long sub-micron c-ITO line pattern is fabricated, and the feasibility of fabricating c-ITO patterns, which are expected to be used in micro-electronic devices, is confirmed
A High Precision DEM Extraction Method Based on InSAR Data
Wang, Xinshuang; Liu, Lingling; Shi, Xiaoliang; Huang, Xitao; Geng, Wei
2018-04-01
In the 13th Five-Year Plan for Geoinformatics Business, it is proposed that the new InSAR technology should be applied to surveying and mapping production, which will become the innovation driving force of the geoinformatics industry. This paper works closely around the new outline of surveying and mapping, using X-band TerraSAR/TanDEM data of Bin County in Shaanxi Province. The processing steps are as follows: first, the baseline is estimated from the orbital data; second, the interferometric pairs of SAR images are accurately registered; third, the interferogram is generated; fourth, the interferometric correlation is estimated and the flat-earth phase is removed. To address the phase noise and phase discontinuities present in the interferometric phase image, a GAMMA adaptive filtering method is adopted. To deal with the "hole" problem of missing data in low-coherence areas, interpolation with a low-coherence mask is used to assist the phase unwrapping. Then, the accuracy of the interferometric baseline is estimated from the ground control points. Finally, a 1:50 000 DEM is generated, and existing DEM data are used to verify its accuracy through statistical analysis. The research results show that the improved InSAR data processing method in this paper can obtain a high-precision DEM of the study area, consistent with the topography of the reference DEM. The R² can reach 0.9648, showing a strong positive correlation.
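Once the phase is unwrapped, conversion to topographic height uses the standard repeat-pass relation h = λ R sinθ φ / (4π B⊥). The sketch below is ours, with hypothetical X-band geometry rather than values from the paper:

```python
import math

def phase_to_height(phi_unw, wavelength_m, slant_range_m,
                    incidence_deg, b_perp_m):
    """Height from unwrapped interferometric phase via the standard
    repeat-pass relation h = lambda*R*sin(theta)*phi/(4*pi*B_perp)."""
    theta = math.radians(incidence_deg)
    return (wavelength_m * slant_range_m * math.sin(theta) * phi_unw
            / (4.0 * math.pi * b_perp_m))

# Hypothetical X-band geometry: lambda ~ 3.1 cm, R ~ 600 km,
# theta ~ 35 deg, B_perp ~ 150 m. One fringe (2*pi) then maps to:
h_fringe = phase_to_height(2.0 * math.pi, 0.031, 600e3, 35.0, 150.0)
print(f"one fringe ~ {h_fringe:.1f} m of topography")
```

The "height of ambiguity" per fringe shrinks as the perpendicular baseline grows, which is why baseline accuracy (estimated here from ground control points) propagates directly into DEM accuracy.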
Open problems in string cosmology
Toumbas, N.
2010-01-01
Some of the open problems in string cosmology are highlighted within the context of the recently constructed thermal and quantum superstring cosmological solutions. Emphasis is given to the high-temperature cosmological regime, where it is argued that thermal string vacua in the presence of gravito-magnetic fluxes can be used to bypass the Hagedorn instabilities of string gas cosmology. This article is based on a talk given at the workshop "Cosmology and Strings", Corfu, September 6-13, 2009. (Abstract Copyright [2010], Wiley Periodicals, Inc.)
High Precision GNSS Guidance for Field Mobile Robots
Ladislav Jurišica
2012-11-01
In this paper, we discuss GNSS (Global Navigation Satellite System) guidance for field mobile robots. Several GNSS systems and receivers, as well as multiple measurement methods and principles of GNSS systems, are examined. We focus mainly on sources of error and investigate diverse approaches to precise measurement and effective use of GNSS systems for real-time robot localization. The main body of the article compares two GNSS receivers and their measurement methods. We design, implement and evaluate several mathematical methods for precise robot localization.
High precision ages from the Torres del Paine Intrusion, Chile
Michel, J.; Baumgartner, L.; Cosca, M.; Ovtcharova, M.; Putlitz, B.; Schaltegger, U.
2006-12-01
The upper-crustal bimodal Torres del Paine Intrusion, southern Chile, consists of the lower Paine Mafic Complex and the upper Paine Granite. Geochronologically this bimodal complex is not well studied, except for a few existing data from Halpern (1973) and Sanchez (2006). The aim of this study is to supplement the existing data and to constrain the age relations between the major magmatic pulses by applying high-precision U-Pb dating to accessory zircons and 40Ar/39Ar laser-step-heating ages to biotites from the Torres del Paine Intrusion. The magmatic rocks from the mafic complex are fine- to medium-grained and vary in composition from quartz-monzonites to granodiorites and gabbros. Coarse-grained olivine gabbros have intruded these rocks in the west. The granitic body is represented by a peraluminous biotite-orthoclase granite and a more evolved leucocratic granite in the outer parts towards the host rock. Field observations suggest a feeder zone for the granite in the west and that the granite postdates the mafic complex. Two granite samples from the outermost margins in the northeast and south were analyzed. The zircons were dated by precise isotope-dilution U-Pb techniques on chemically abraded single grains. The data are concordant within analytical error and define weighted mean 206Pb/238U ages of 12.59 ± 0.03 Ma and 12.58 ± 0.01 Ma for the two samples, respectively. A 40Ar/39Ar age for the second sample yields a date of 12.37 ± 0.11 Ma. Three 40Ar/39Ar ages of biotites were obtained for rocks belonging to the mafic complex. A hbl-bio-granodiorite from the central part, approximately 150 m below the subhorizontal contact with the granite, gives an age of 12.81 ± 0.11 Ma. A hbl-bio-granodiorite and an olivine gabbro west of the feeder zone date at 12.42 ± 0.14 Ma and 12.49 ± 0.11 Ma, respectively. The obtained older age of 12.81 Ma for the granodiorite in the central part is consistent with structural relationships of brittle fracturing of the mafic
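Weighted mean ages such as those quoted above are conventionally inverse-variance weighted means over the single-grain dates. A minimal sketch in Python (the sample values in the test are illustrative, not the paper's raw data):

```python
import math

def weighted_mean(values, sigmas):
    # Inverse-variance weighted mean and its 1-sigma uncertainty, as
    # commonly used for pooling single-grain U-Pb dates (illustrative).
    w = [1.0 / s ** 2 for s in sigmas]
    mean = sum(wi * v for wi, v in zip(w, values)) / sum(w)
    err = math.sqrt(1.0 / sum(w))
    return mean, err
```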
The High Road to Astronomical Photometric Precision : Differential Photometry
Milone, E. F.; Pel, Jan Willem
2011-01-01
Differential photometry offers the most precise method for measuring the brightness of astronomical objects. We attempt to demonstrate why this should be the case, and then describe how well it has been done through a review of the application of differential techniques from the earliest visual
An Elementary Algorithm to Evaluate Trigonometric Functions to High Precision
Johansson, B. Tomas
2018-01-01
Evaluation of the cosine function is done via a simple CORDIC-like algorithm, together with a package for handling arbitrary-precision arithmetic in the computer program Matlab. Approximations to the cosine function with hundreds of correct decimals are presented, with a discussion of errors and implementation.
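The paper's scheme is CORDIC-like and implemented in Matlab; as a rough stand-in, arbitrary-precision cosine evaluation can be sketched in Python with the standard decimal module and a plain Taylor series (the function name, guard-digit counts and tolerances are assumptions for illustration, not the paper's algorithm):

```python
from decimal import Decimal, getcontext

def cos_hp(x, digits=50):
    # Cosine of x to `digits` decimals via the Taylor series,
    # carried out with extra guard digits (illustrative sketch).
    getcontext().prec = digits + 10          # guard digits
    x = Decimal(str(x))
    term = Decimal(1)
    total = Decimal(1)
    n = 0
    while abs(term) > Decimal(10) ** -(digits + 5):
        n += 1
        # Next Taylor term: t_n = -t_{n-1} * x^2 / (2n(2n-1))
        term *= -x * x / (2 * n * (2 * n - 1))
        total += term
    getcontext().prec = digits
    return +total                            # unary + rounds to `digits`
```

For best accuracy at large arguments, an argument reduction to [0, π/2] would be applied first; it is omitted here for brevity.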
Cognition-Based Approaches for High-Precision Text Mining
Shannon, George John
2017-01-01
This research improves the precision of information extraction from free-form text via the use of cognitive-based approaches to natural language processing (NLP). Cognitive-based approaches are an important, and relatively new, area of research in NLP and search, as well as linguistics. Cognitive approaches enable significant improvements in both…
High Precision Clock Bias Prediction Model in Clock Synchronization System
Zan Liu
2016-01-01
Time synchronization is a fundamental requirement for many services provided by a distributed system. Clock calibration through the time signal is the usual way to realize synchronization among the clocks used in a distributed system. Interference with time signal transmission or equipment failures may cause synchronization to fail. To solve this problem, a clock bias prediction module is placed in parallel in the clock calibration system. To improve the precision of clock bias prediction, the first-order grey model with one variable (GM(1,1)) is proposed. In the traditional GM(1,1) model, the combination of parameters determined by the least-squares criterion is not optimal; therefore, particle swarm optimization (PSO) is used to optimize the GM(1,1) model. At the same time, to keep PSO from getting stuck in local optima and to improve its efficiency, mechanisms of double subgroups and nonlinearly decreasing inertia weight are proposed. To test the precision of the improved model, we designed clock calibration experiments in which the time signal is transferred via radio and wired channels, respectively. The improved model is built on the basis of the clock bias acquired in the experiments. The results show that the improved model is superior to other models in both precision and stability. The precision of the improved model increased by 66.4%-76.7%.
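The traditional GM(1,1) baseline that the paper improves on can be sketched as follows. This is the plain least-squares version in Python/NumPy; the PSO parameter tuning, double subgroups and nonlinear inertia weight are not reproduced, and the function name is an assumption:

```python
import numpy as np

def gm11_forecast(x0, steps=1):
    # Plain GM(1,1) forecast with least-squares parameters
    # (the paper's variant tunes these with PSO instead).
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                         # accumulated series (AGO)
    z1 = 0.5 * (x1[1:] + x1[:-1])              # background values
    B = np.column_stack([-z1, np.ones(len(z1))])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
    k = np.arange(len(x0) + steps)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    x0_hat = np.diff(x1_hat, prepend=0.0)      # inverse AGO
    return x0_hat[-steps:]                     # next `steps` predictions
```

GM(1,1) fits an exponential trend, so it tracks smooth clock bias drift well from only a handful of samples.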
Overview of the JYFLTRAP mass measurements and high-precision ...
nuclei, the mass difference can be determined with much higher precision than would normally be possible, since for mass doublets the systematic uncertainties become ... The two-neutron separation energies at N = 60 indicate the ... Masses of zinc isotopes (Z = 30) were measured up to 80Zn, providing valuable ...
High Energy Astrophysics and Cosmology from Space: NASA's Physics of the Cosmos Program
Hornschemeier, Ann
2016-03-01
We summarize currently funded NASA activities in high energy astrophysics and cosmology, embodied in the NASA Physics of the Cosmos program, including updates on technology development and mission studies. The portfolio includes development of a space mission for measuring gravitational waves from merging supermassive black holes, currently envisioned as a collaboration with the European Space Agency (ESA) on its L3 mission, and development of an X-ray observatory that will measure X-ray emission from the final stages of accretion onto black holes, currently envisioned as a NASA collaboration on ESA's Athena observatory. The portfolio also includes the study of cosmic rays and gamma-ray photons resulting from a range of processes, of the physical process of inflation associated with the birth of the universe, and of the nature of the dark energy that dominates the mass-energy of the modern universe. The program is supported by an analysis group called the PhysPAG, which serves as a forum for community input and analysis; the talk includes a description of the activities of this group.
The several faces of the cosmological principle
Beisbart, Claus [TU Dortmund (Germany). Fakultaet 14, Institut fuer Philosophie und Politikwissenschaft
2010-07-01
Much work in relativistic cosmology relies upon the cosmological principle. Very roughly, this principle has it that the universe is spatially homogeneous and isotropic. However, if the principle is to do some work, it has to be rendered more precise. The aim of this talk is to show that such a precisification depends significantly on the theoretical framework adopted and on its ontology. Moreover, it is shown that present-day cosmology uses the principle in different versions that do not fit together nicely. Whereas, in theoretical cosmology, the principle is spelt out as a requirement on space-time manifolds, observational cosmology cashes out the principle using the notion of a random process. I point out some philosophical problems that arise in this context. My conclusion is that the cosmological principle is not a very precise hypothesis, but rather a rough idea that has several faces in contemporary cosmology.
High precision measurement of the micro-imaging system to check repeatability of precision
Cheng Lin; Song Li; Ma Chuntao; Luo Hongxin; Wang Jie
2010-01-01
The beamline slits of the Shanghai Synchrotron Radiation Facility (SSRF) are required to have a repeatability of better than 1 μm. Before slit installation, off-line and/or on-line repeatability measurements must be conducted. A machine-vision measuring system based on a high-resolution CCD and an adjustable high-magnification lens was used for this purpose. A multi-level filtering method was used to treat the imaging data. After image binarization, imaging noise was suppressed effectively by arithmetic mean filtering, statistical median filtering, and least-squares filtering. Using the difference between the images before and after slit movement, the average displacement of the slit blades could be obtained and the repeatability of the slit measured, the measurement system having a resolution of 0.3 μm. The experimental results show that this measurement system meets the requirements for non-contact measurement of slit repeatability. (authors)
Sanders, RH; Papantonopoulos, E
2005-01-01
I discuss the classical cosmological tests, i.e., angular size-redshift, flux-redshift, and galaxy number counts, in the light of the cosmology prescribed by the interpretation of the CMB anisotropies. The discussion is somewhat of a primer for physicists, with emphasis upon the possible systematic
ADVANCED DESIGN SOLUTIONS FOR HIGH-PRECISION WOODWORKING MACHINES
Giuseppe Lucisano
2016-03-01
With the aim of achieving the highest precision in woodworking, a mix of alternative approaches, fruitfully integrated into a common design strategy, is essential. This paper gives an overview of technical solutions recently developed by the authors in the design of machine tools, and of their final effects on manufacturing. The most advanced solutions in machine design are reported side by side with common practices and little everyday expedients. These design actions are directly or indirectly related to the rational use of materials, sometimes very uncommon ones, as in the case of magnetorheological fluids chosen to implement active control of speed and force on the electro-spindle, permitting improvement of the quality of wood machining. Other actions are less unusual, as in the case of the adoption of innovative anti-vibration supports for the base. Tradition or innovation, all these technical solutions contribute to the final result: the highest precision in wood machining.
Gorbunov Dmitry
2017-01-01
The best-known example of cosmology testing particle physics is the number of relativistic particles (photons and three active neutrinos within the Standard Model) at primordial nucleosynthesis. These days, the earliest moment we can hope to probe with present cosmological data is early-time inflation. The particle physics conditions then and now are different, because of different energy scales and different values of the scalar fields, which usually prohibits a reliable connection between the particle physics parameters at the two epochs of interest. The physics at the highest energy scales may be probed with observations at the largest spatial scales (just somewhat smaller than the size of the visible Universe). However, we are not (yet) ready to make the tests realistic, for lack of a self-consistent theoretical description of the presently favoured cosmological models valid right after inflation.
Yale High Energy Physics Research: Precision Studies of Reactor Antineutrinos
Heeger, Karsten M. [Yale Univ., New Haven, CT (United States)
2014-09-13
This report presents experimental research at the intensity frontier of particle physics, with particular focus on the study of reactor antineutrinos and the precision measurement of neutrino oscillations. The experimental neutrino physics group of Professor Heeger and Senior Scientist Band at Yale University has had leading responsibilities in the construction and operation of the Daya Bay Reactor Antineutrino Experiment and made critical contributions to the discovery of non-zero $\theta_{13}$. Heeger and Band led the Daya Bay detector management team and are now overseeing the operations of the antineutrino detectors. Postdoctoral researchers and students in this group have made leading contributions to the Daya Bay analysis, including the prediction of the reactor antineutrino flux and spectrum, the analysis of the oscillation signal, and the precision determination of the target mass, yielding unprecedented precision in the relative detector uncertainty. Heeger's group is now leading an R&D effort towards a short-baseline oscillation experiment, called PROSPECT, at a US research reactor and the development of antineutrino detectors with advanced background discrimination.
Tumlinson, J.; Malec, A. L.; Murphy, M. T.; Carswell, R. F.; Jorgenson, R. A.; Buning, R.; Ubachs, W.; Milutinovic, N.; Ellison, S. L.; Prochaska, J. X.; Wolfe, A. M.
2010-01-01
We report two detections of deuterated molecular hydrogen (HD) in QSO absorption-line systems at z > 2. Toward J2123-0500, we find N(HD) = 13.84 ± 0.2 for a sub-damped Lyman-alpha system (DLA) with metallicity ≅ 0.5 Z_sun and N(H2) = 17.64 ± 0.15 at z = 2.0594. Toward FJ0812+32, we find N(HD) = 15.38 ± 0.3 for a solar-metallicity DLA with N(H2) = 19.88 ± 0.2 at z = 2.6265. These systems have ratios of HD to H2 above that observed in dense clouds within the Milky Way disk and apparently consistent with a simple conversion from the cosmological ratio of D/H. These ratios are not readily explained by any available model of HD chemistry, and there are no obvious trends with metallicity or molecular content. Taken together, these two systems and the two published z > 2 HD-bearing DLAs indicate that HD is either less effectively dissociated or more efficiently produced in high-redshift interstellar gas, even at low molecular fraction and/or solar metallicity. It is puzzling that such diverse systems should show such consistent HD/H2 ratios. Without clear knowledge of all the aspects of HD chemistry that may help determine the HD/H2 ratio, we conclude that these systems are potentially more revealing of gas chemistry than of D/H itself, and that it is premature to use such systems to constrain D/H at high redshift.
Cosmological evidence for leptonic asymmetry after Planck
Caramete, A.; Popa, L.A., E-mail: acaramete@spacescience.ro, E-mail: lpopa@spacescience.ro [Institute of Space Science, 409 Atomistilor Street, Magurele, Ilfov 077125 (Romania)
2014-02-01
Recently, the PLANCK satellite found a larger and more precise value of the matter energy density, which impacts the present values of other cosmological parameters such as the Hubble constant H_0, the present cluster abundance S_8, and the age of the Universe t_U. The existing tension between the PLANCK determination of these parameters in the frame of the base ΛCDM model and their determination from other measurements has generated lively discussions, one possible interpretation being that some sources of systematic error in cosmological measurements are not completely understood. An alternative interpretation is that the CMB observations, which probe the high-redshift Universe, are interpreted in terms of cosmological parameters at the present time by extrapolation within the base ΛCDM model, which may be inadequate or incomplete. In this paper we quantify this tension by exploring several extensions of the base ΛCDM model that include leptonic asymmetry. We set bounds on the radiation content of the Universe and on neutrino properties by using the latest cosmological measurements, also imposing self-consistent BBN constraints on the primordial helium abundance. For all asymmetric cosmological models we find a preference of the cosmological data for smaller values of active and sterile neutrino masses. This increases the tension between cosmological and short-baseline neutrino oscillation data, which favor a sterile neutrino with a mass of around 1 eV. For the case of degenerate massive neutrinos, we find that the discrepancies with the local determinations of H_0 and t_U are alleviated at the ∼ 1.3σ level, while S_8 is in agreement with its determination from CFHTLenS survey data at ∼ 1σ and with the prediction of the cluster mass-observable relation at ∼ 0.5σ. We also find a 2σ statistical preference of the cosmological data for leptonic asymmetric models involving three massive neutrino species and neutrino direct
Design of High Precise Focusing System in Laser Direct Writer
Liang, Y Y; Tian, F; Luo, J B; Yang, G G
2006-01-01
To improve the accuracy and efficiency of fabricating lines with a laser pattern generator, a novel focusing system was designed. The focusing system is based on the optical off-axis detection principle. The detector is a two-quadrant photocell, and the defocus signal is constructed by division. The focusing system has the character of a second-order system with overdamping. A new embedded PID controller improves the performance of the focusing system and raises the closed-loop precision to 0.2 μm. Furthermore, the focusing system can fabricate variable-width lines under various defocus amounts.
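The embedded PID controller mentioned above can be illustrated with a minimal discrete-time loop. The gains, timestep and the simple integrator plant used in the test are illustrative assumptions, not the system's actual parameters:

```python
class PID:
    # Minimal discrete PID controller of the kind used to close a focus
    # loop (illustrative sketch; gains and timestep are assumptions).
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def update(self, setpoint, measured):
        err = setpoint - measured
        self.integral += err * self.dt              # accumulate I term
        deriv = (err - self.prev_err) / self.dt     # finite-difference D term
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv
```

In the real system the measured value would be the division-derived defocus signal and the output would drive the focus actuator.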
High precision survey and alignment techniques in accelerator construction
Gervaise, J
1974-01-01
Basic concepts of precision surveying are briefly reviewed, and an historical account is given of instruments and techniques used during the construction of the Proton Synchrotron (1954-59), the Intersecting Storage Rings (1966-71), and the Super Proton Synchrotron (1971). A nylon-wire device, the distinvar, invar wire and tape, and the recent automation of the gyrotheodolite and distinvar, as well as auxiliary equipment (polyurethane jacks, Centipede), are discussed in detail. The paper ends by summarizing the present accuracy in accelerator metrology, giving an outlook on possible improvements, and some aspects of staffing for the CERN Survey Group.
A high precision method for normalization of cross sections
Aguilera R, E.F.; Vega C, J.J.; Martinez Q, E.; Kolata, J.J.
1988-08-01
A system of four monitors and a program were developed to eliminate, in the process of normalizing cross sections, the dependence on equipment alignment and beam centering. A series of experiments was carried out with the systems 27Al + 70,72,74,76Ge, 35Cl + 58Ni, 37Cl + 58,60,62,64Ni and (81Br, 109Rh) + 60Ni. For these experiments a typical precision of 1% was obtained in the normalization. The advantage of this method over those using one or two monitors is demonstrated theoretically and experimentally. (Author)
Planck 2013 results. XVI. Cosmological parameters
Ade, P.A.R.; Armitage-Caplan, C.; Arnaud, M.; Ashdown, M.; Atrio-Barandela, F.; Aumont, J.; Baccigalupi, C.; Banday, A.J.; Barreiro, R.B.; Bartlett, J.G.; Battaner, E.; Benabed, K.; Benoit, A.; Benoit-Levy, A.; Bernard, J.P.; Bersanelli, M.; Bielewicz, P.; Bobin, J.; Bock, J.J.; Bonaldi, A.; Bond, J.R.; Borrill, J.; Bouchet, F.R.; Bridges, M.; Bucher, M.; Burigana, C.; Butler, R.C.; Calabrese, E.; Cappellini, B.; Cardoso, J.F.; Catalano, A.; Challinor, A.; Chamballu, A.; Chary, R.R.; Chen, X.; Chiang, L.Y.; Chiang, H.C.; Christensen, P.R.; Church, S.; Clements, D.L.; Colombi, S.; Colombo, L.P.L.; Couchot, F.; Coulais, A.; Crill, B.P.; Curto, A.; Cuttaia, F.; Danese, L.; Davies, R.D.; Davis, R.J.; de Bernardis, P.; de Rosa, A.; de Zotti, G.; Delabrouille, J.; Delouis, J.M.; Desert, F.X.; Dickinson, C.; Diego, J.M.; Dolag, K.; Dole, H.; Donzelli, S.; Dore, O.; Douspis, M.; Dunkley, J.; Dupac, X.; Efstathiou, G.; Elsner, F.; Ensslin, T.A.; Eriksen, H.K.; Finelli, F.; Forni, O.; Frailis, M.; Fraisse, A.A.; Franceschi, E.; Gaier, T.C.; Galeotta, S.; Galli, S.; Ganga, K.; Giard, M.; Giardino, G.; Giraud-Heraud, Y.; Gjerlow, E.; Gonzalez-Nuevo, J.; Gorski, K.M.; Gratton, S.; Gregorio, A.; Gruppuso, A.; Gudmundsson, J.E.; Haissinski, J.; Hamann, J.; Hansen, F.K.; Hanson, D.; Harrison, D.; Henrot-Versille, S.; Hernandez-Monteagudo, C.; Herranz, D.; Hildebrandt, S.R.; Hivon, E.; Hobson, M.; Holmes, W.A.; Hornstrup, A.; Hou, Z.; Hovest, W.; Huffenberger, K.M.; Jaffe, T.R.; Jaffe, A.H.; Jewell, J.; Jones, W.C.; Juvela, M.; Keihanen, E.; Keskitalo, R.; Kisner, T.S.; Kneissl, R.; Knoche, J.; Knox, L.; Kunz, M.; Kurki-Suonio, H.; Lagache, G.; Lahteenmaki, A.; Lamarre, J.M.; Lasenby, A.; Lattanzi, M.; Laureijs, R.J.; Lawrence, C.R.; Leach, S.; Leahy, J.P.; Leonardi, R.; Leon-Tavares, J.; Lesgourgues, J.; Lewis, A.; Liguori, M.; Lilje, P.B.; Linden-Vornle, M.; Lopez-Caniego, M.; Lubin, P.M.; Macias-Perez, J.F.; Maffei, B.; Maino, D.; Mandolesi, N.; Maris, M.; Marshall, D.J.; 
Martin, P.G.; Martinez-Gonzalez, E.; Masi, S.; Matarrese, S.; Matthai, F.; Mazzotta, P.; Meinhold, P.R.; Melchiorri, A.; Melin, J.B.; Mendes, L.; Menegoni, E.; Mennella, A.; Migliaccio, M.; Millea, M.; Mitra, S.; Miville-Deschenes, M.A.; Moneti, A.; Montier, L.; Morgante, G.; Mortlock, D.; Moss, A.; Munshi, D.; Naselsky, P.; Nati, F.; Natoli, P.; Netterfield, C.B.; Norgaard-Nielsen, H.U.; Noviello, F.; Novikov, D.; Novikov, I.; O'Dwyer, I.J.; Osborne, S.; Oxborrow, C.A.; Paci, F.; Pagano, L.; Pajot, F.; Paoletti, D.; Partridge, B.; Pasian, F.; Patanchon, G.; Pearson, D.; Pearson, T.J.; Peiris, H.V.; Perdereau, O.; Perotto, L.; Perrotta, F.; Pettorino, V.; Piacentini, F.; Piat, M.; Pierpaoli, E.; Pietrobon, D.; Plaszczynski, S.; Platania, P.; Pointecouteau, E.; Polenta, G.; Ponthieu, N.; Popa, L.; Poutanen, T.; Pratt, G.W.; Prezeau, G.; Prunet, S.; Puget, J.L.; Rachen, J.P.; Reach, W.T.; Rebolo, R.; Reinecke, M.; Remazeilles, M.; Renault, C.; Ricciardi, S.; Riller, T.; Ristorcelli, I.; Rocha, G.; Rosset, C.; Roudier, G.; Rowan-Robinson, M.; Rubino-Martin, J.A.; Rusholme, B.; Sandri, M.; Santos, D.; Savelainen, M.; Savini, G.; Scott, D.; Seiffert, M.D.; Shellard, E.P.S.; Spencer, L.D.; Starck, J.L.; Stolyarov, V.; Stompor, R.; Sudiwala, R.; Sunyaev, R.; Sureau, F.; Sutton, D.; Suur-Uski, A.S.; Sygnet, J.F.; Tauber, J.A.; Tavagnacco, D.; Terenzi, L.; Toffolatti, L.; Tomasi, M.; Tristram, M.; Tucci, M.; Tuovinen, J.; Turler, M.; Umana, G.; Valenziano, L.; Valiviita, J.; Van Tent, B.; Vielva, P.; Villa, F.; Vittorio, N.; Wade, L.A.; Wandelt, B.D.; Wehus, I.K.; White, M.; White, S.D.M.; Wilkinson, A.; Yvon, D.; Zacchei, A.; Zonca, A.
2014-10-29
We present the first results based on Planck measurements of the CMB temperature and lensing-potential power spectra. The Planck spectra at high multipoles are extremely well described by the standard spatially-flat six-parameter LCDM cosmology. In this model Planck data determine the cosmological parameters to high precision. We find a low value of the Hubble constant, H0=67.3+/-1.2 km/s/Mpc and a high value of the matter density parameter, Omega_m=0.315+/-0.017 (+/-1 sigma errors) in excellent agreement with constraints from baryon acoustic oscillation (BAO) surveys. Including curvature, we find that the Universe is consistent with spatial flatness to percent-level precision using Planck CMB data alone. We present results from an analysis of extensions to the standard cosmology, using astrophysical data sets in addition to Planck and high-resolution CMB data. None of these models are favoured significantly over standard LCDM. The deviation of the scalar spectral index from unity is insensitive to the additi...
High precision determination of 16O in high Tc superconductors by DIGME
Vickridge, I.; Tallon, J.; Presland, M.
1994-01-01
A method is described for measuring the 16O content of high-Tc superconductors with better than 1% precision by exploiting the detection of gamma rays emitted when they are irradiated by an MeV deuterium beam. The method is presently less accurate than the widely used titration and thermogravimetric methods; however, it is rapid, and may be applied to materials such as Tl-containing high-Tc superconductors, which pose serious problems for the usual analytical methods. (orig.)
Leibundgut, B.
2005-01-01
Supernovae have developed into a versatile tool for cosmology. Their impact on the cosmological model has been profound and led to the discovery of the accelerated expansion. The current status of the cosmological model as perceived through supernova observations will be presented. Supernovae are currently the only astrophysical objects that can measure the dynamics of the cosmic expansion during the past eight billion years. Ongoing experiments are trying to determine the characteristics of the accelerated expansion and give insight into what might be the physical explanation for the acceleration. (author)
Turner, Michael S.
1999-01-01
For two decades the hot big-bang model has been referred to as the standard cosmology - and for good reason. For just as long, cosmologists have known that there are fundamental questions that are not answered by the standard cosmology and point to a grander theory. The best candidate for that grander theory is inflation + cold dark matter. It holds that the Universe is flat, that slowly moving elementary particles left over from the earliest moments provide the cosmic infrastructure, and that the primeval density inhomogeneities that seed all the structure arose from quantum fluctuations. There is now prima facie evidence that supports two basic tenets of this paradigm. An avalanche of high-quality cosmological observations will soon make this case stronger or will break it. Key questions remain to be answered; foremost among them are: identification and detection of the cold dark matter particles and elucidation of the dark-energy component. These are exciting times in cosmology!
Khalatnikov, I.M.; Belinskij, V.A.
1984-01-01
The application of the qualitative theory of dynamical systems to the analysis of homogeneous cosmological models is described. Together with the well-known cases requiring an ideal liquid, the properties of the cosmological evolution of matter with dissipative processes due to viscosity are considered. New cosmological effects occur when the viscosity terms are of the same order as the remaining terms in the equations of gravitation, or even exceed them. In these cases the description of the dissipative process by means of only two viscosity coefficients (bulk and shear) may become inapplicable, because all the other terms in the decomposition of the dissipative addition to the energy-momentum tensor in velocity gradients can be large. The application of equations with hydrodynamic viscosity should then be considered as a model of dissipative effects in cosmology.
Lesgourgues, Julien; Miele, Gennaro; Pastor, Sergio
2013-01-01
The role that neutrinos have played in the evolution of the Universe is the focus of one of the most fascinating research areas that has stemmed from the interplay between cosmology, astrophysics and particle physics. In this self-contained book, the authors bring together all aspects of the role of neutrinos in cosmology, spanning from leptogenesis to primordial nucleosynthesis, their role in CMB and structure formation, to the problem of their direct detection. The book starts by guiding the reader through aspects of fundamental neutrino physics, such as the standard cosmological model and the statistical mechanics in the expanding Universe, before discussing the history of neutrinos in chronological order from the very early stages until today. This timely book will interest graduate students and researchers in astrophysics, cosmology and particle physics, who work with either a theoretical or experimental focus.
Zeldovich, Y.B.
1983-01-01
This paper gives a general review of modern cosmology. The following subjects are discussed: the hot big bang and periodization of the evolution; Hubble expansion; the structure of the universe (pancake theory); baryon asymmetry; the inflationary universe. (Auth.)
CERN. Geneva
2007-01-01
The understanding of the Universe at the largest and smallest scales traditionally has been the subject of cosmology and particle physics, respectively. Studying the evolution of the Universe connects today's large scales with the tiny scales in the very early Universe and provides the link between the physics of particles and of the cosmos. This series of five lectures aims at a modern and critical presentation of the basic ideas, methods, models and observations in today's particle cosmology.
High-precision x-ray spectroscopy of highly charged ions with microcalorimeters
Kraft-Bermuth, S; Andrianov, V; Bleile, A; Echler, A; Egelhof, P; Grabitz, P; Ilieva, S; Kiselev, O; Meier, J; Kilbourne, C; McCammon, D
2013-01-01
The precise determination of the energy of the Lyman α1 and α2 lines in hydrogen-like heavy ions provides a sensitive test of quantum electrodynamics in very strong Coulomb fields. To improve the experimental precision, the new detector concept of microcalorimeters is now being exploited for such measurements. Such detectors consist of compensated-doped silicon thermistors and Pb or Sn absorbers, giving high quantum efficiency in the energy range of 40-70 keV, where the Doppler-shifted Lyman lines are located. For the first time, a microcalorimeter was applied in an experiment to precisely determine the transition energy of the Lyman lines of lead ions at the experimental storage ring at GSI. The energy of the Ly-α1 line, E(Ly-α1, 207Pb81+) = (77937 ± 12(stat) ± 25(syst)) eV, agrees within error bars with theoretical predictions. To improve the experimental precision further, a new detector array with more pixels and better energy resolution was assembled and successfully applied in an experiment to determine the Lyman-α lines of gold ions, 197Au78+. (paper)
Trial of accelerator cells machining with high precision and high efficiency at Okayama region
Yoshikawa, Mitsuo; Yoden, Hiroyuki; Yokomizo, Seiichi; Sumida, Tsuneto; Kunishida, Jun; Oshita, Isao
2005-01-01
In the framework of the project 'Promotion of Science and Technology in Regional Areas' of the Ministry of Education, Culture, Sports, Science and Technology, we have prepared a special apparatus for machining accelerator cells with high precision and high efficiency for the future linear collider. Machining with an error as small as 2 micrometers has been achieved. The time needed to finish one accelerator cell has been reduced from 128 minutes to 34 minutes by suppressing the heating of the workpiece during machining. If the newly developed one-chuck method were employed, the precision and efficiency would be further improved, and by cutting at both sides of the spindle the machining time would be halved. (author)
Dornfeld, David
2008-01-01
Today there is a high demand for high-precision products. The manufacturing processes are now highly sophisticated and derive from a specialized genre called precision engineering. Precision Manufacturing provides an introduction to precision engineering and manufacturing with an emphasis on the design and performance of precision machines and machine tools, metrology, tooling elements, machine structures, sources of error, precision machining processes and precision process planning. It also discusses the critical role that precision machine design for manufacturing has played in technological developments over the last few hundred years. In addition, the influence of sustainable manufacturing requirements on precision processes is introduced. Drawing upon years of practical experience and using numerous examples and illustrative applications, David Dornfeld and Dae-Eun Lee cover precision manufacturing as it applies to: The importance of measurement and metrology in the context of Precision Manufacturing. Th...
Futility of high-precision SO(10) calculations
Dixit, V.V.; Sher, M.
1989-01-01
In grand unified models, there are a large number of scalar bosons with masses of the order of the unification scale. Since these masses could be an order of magnitude or so above or below the vector-boson masses, they will affect the beta functions and thus the low-energy predictions; the lack of knowledge of the masses translates into an uncertainty in these predictions. Although the effect is very small for a single scalar field, SO(10) models have hundreds of such fields, leading to very large uncertainties. We analyze this effect in SO(10) models with intermediate scales, and show that all such models have an additional uncertainty which can be as large as 4 orders of magnitude in the proton lifetime and as large as 0.02 in sin²θ_W. In models with 210-dimensional representations, the weak mixing angle is uncertain by as much as 0.06. As a result, we argue that precise calculations in SO(10) models with intermediate scales may not be possible
High precision silicon piezo resistive SMART pressure sensor
Brown, Rod
2005-01-01
Instruments for test and calibration require a pressure sensor that is precise and stable. Market forces also dictate a move away from single-measurand test equipment and, certainly in the case of pressure, away from single-range equipment. A pressure 'module' is required which excels in pressure measurement but is interchangeable with sensors for other measurands. A communications interface for such a sensor, the Instrument Digital Output Sensor (IDOS), has been specified; it permits this interchangeability and allows the sensor to be inside or outside the measuring instrument. This paper covers the design and specification of a silicon-diaphragm piezoresistive SMART sensor using this interface. A brief history of instrument sensors will be given to establish the background to this development. Design choices for the silicon doping, bridge energisation method, temperature sensing, signal conversion, data processing, compensation method and communications interface will be discussed. The physical format of the 'in-instrument' version will be shown and then extended to the packaging design for the external version. Test results will show that the accuracy achieved exceeds the target of 0.01% FS over a range of temperatures
Application of GPS in a high precision engineering survey network
Ruland, R.; Leick, A.
1985-04-01
A GPS satellite survey was carried out with the Macrometer to support construction at the Stanford Linear Accelerator Center (SLAC). The network consists of 16 stations, of which 9 were part of the Macrometer network. The horizontal and vertical accuracy of the GPS survey is estimated to be 1 to 2 mm and 2 to 3 mm, respectively. The horizontal accuracy of the terrestrial survey, consisting of angles and distances, equals that of the GPS survey only in the 'loop' portion of the network. All stations are part of a precise level network. The ellipsoidal heights obtained from the GPS survey and the orthometric heights of the level network are used to compute geoid undulations. A geoid profile along the linac was computed by the National Geodetic Survey in 1963. This profile agreed with the observed geoid within the standard deviation of the GPS survey. Angles and distances were adjusted together (TERRA), and all terrestrial observations were combined with the GPS vector observations in a combination adjustment (COMB). A comparison of COMB and TERRA revealed systematic errors in the terrestrial solution. A scale factor of 1.5 ± 0.8 ppm was estimated. This value is of the same magnitude as the overall horizontal accuracy of both networks. 10 refs., 3 figs., 5 tabs
Big-bang nucleosynthesis in the new cosmology
Fields, B.D.
2005-01-01
Big bang nucleosynthesis (BBN) describes the production of the lightest elements in the first minutes of cosmic time. I will review the physics of cosmological element production, and the observations of the primordial element abundances. The comparison between theory and observation has heretofore provided our earliest probe of the universe, and given the best measure of the cosmic baryon content. However, BBN has now taken a new role in cosmology, in light of new precision measurements of the cosmic microwave background (CMB). Recent CMB anisotropy data yield a wealth of cosmological parameters; in particular, the baryon-to-photon ratio η = n_B/n_γ is measured to high precision. The confrontation between the BBN and CMB 'baryometers' poses a new and stringent test of the standard cosmology; the status of this test will be discussed. Moreover, it is now possible to recast the role of BBN by using the CMB to fix the baryon density and even some light element abundances. This strategy sharpens BBN into a more powerful probe of early universe physics, and of galactic nucleosynthesis processes. The impact of the CMB results on particle physics beyond the Standard Model, and on non-standard cosmology, will be illustrated. Prospects for improvement of these bounds via additional astronomical observations and nuclear experiments will be discussed, as will the lingering 'lithium problem.' (author)
High-speed precision weighing of pharmaceutical capsules
Bürmen, Miran; Pernuš, Franjo; Likar, Boštjan
2009-01-01
In this paper, we present a cost-effective method for fast and accurate in-line weighing of hard gelatin capsules based on the optimized capacitance sensor and real-time processing of the capsule capacitance profile resulting from 5000 capacitance measurements per second. First, the effect of the shape and size of the capacitive sensor on the sensitivity and stability of the measurements was investigated in order to optimize the performance of the system. The method was tested on two types of hard gelatin capsules weighing from 50 mg to 650 mg. The results showed that the capacitance profile was exceptionally well correlated with the capsule weight with the correlation coefficient exceeding 0.999. The mean precision of the measurements was in the range from 1 mg to 3 mg, depending on the size of the capsule and was significantly lower than the 5% weight tolerances usually used by the pharmaceutical industry. Therefore, the method was found feasible for weighing pharmaceutical hard gelatin capsules as long as certain conditions are met regarding the capsule fill properties and environment stability. The proposed measurement system can be calibrated by using only two or three sets of capsules with known weight. However, for most applications it is sufficient to use only empty and nominally filled capsules for calibration. Finally, a practical application of the proposed method showed that a single system is capable of weighing around 75 000 capsules per hour, while using multiple systems could easily increase the inspection rate to meet almost any requirements
Parton distributions from high-precision collider data
Ball, Richard D.; Del Debbio, Luigi; Groth-Merrild, Patrick [University of Edinburgh, The Higgs Centre for Theoretical Physics, Edinburgh (United Kingdom); Bertone, Valerio; Hartland, Nathan P.; Rojo, Juan [VU University, Department of Physics and Astronomy, Amsterdam (Netherlands); Nikhef Theory Group, Amsterdam (Netherlands); Carrazza, Stefano [CERN, Theoretical Physics Department, Geneva (Switzerland); Forte, Stefano [Universita di Milano, Tif Lab, Dipartimento di Fisica, Milano (Italy); INFN, Sezione di Milano, Milano (Italy); Guffanti, Alberto [Universita di Torino, Dipartimento di Fisica, Turin (Italy); INFN, Sezione di Torino, Turin (Italy); Kassabov, Zahari [Universita di Milano, Tif Lab, Dipartimento di Fisica, Milano (Italy); INFN, Sezione di Milano, Milano (Italy); Universita di Torino, Dipartimento di Fisica, Turin (Italy); INFN, Sezione di Torino, Turin (Italy); Latorre, Jose I. [Universitat de Barcelona, Departament de Fisica Quantica i Astrofisica, Barcelona (Spain); National University of Singapore, Center for Quantum Technologies, Singapore (Singapore); Nocera, Emanuele R.; Rottoli, Luca; Slade, Emma [University of Oxford, Rudolf Peierls Centre for Theoretical Physics, Oxford (United Kingdom); Ubiali, Maria [University of Cambridge, Cavendish Laboratory, HEP Group, Cambridge (United Kingdom); Collaboration: NNPDF Collaboration
2017-10-15
We present a new set of parton distributions, NNPDF3.1, which updates NNPDF3.0, the first global set of PDFs determined using a methodology validated by a closure test. The update is motivated by recent progress in methodology and available data, and involves both. On the methodological side, we now parametrize and determine the charm PDF alongside the light-quark and gluon ones, thereby increasing from seven to eight the number of independent PDFs. On the data side, we now include the D0 electron and muon W asymmetries from the final Tevatron dataset, the complete LHCb measurements of W and Z production in the forward region at 7 and 8 TeV, and new ATLAS and CMS measurements of inclusive jet and electroweak boson production. We also include for the first time top-quark pair differential distributions and the transverse momentum of the Z bosons from ATLAS and CMS. We investigate the impact of parametrizing charm and provide evidence that the accuracy and stability of the PDFs are thereby improved. We study the impact of the new data by producing a variety of determinations based on reduced datasets. We find that both improvements have a significant impact on the PDFs, with some substantial reductions in uncertainties, but with the new PDFs generally in agreement with the previous set at the one-sigma level. The most significant changes are seen in the light-quark flavor separation, and in increased precision in the determination of the gluon. We explore the implications of NNPDF3.1 for LHC phenomenology at Run II, compare with recent LHC measurements at 13 TeV, provide updated predictions for Higgs production cross-sections and discuss the strangeness and charm content of the proton in light of our improved dataset and methodology. The NNPDF3.1 PDFs are delivered for the first time both as Hessian sets, and as optimized Monte Carlo sets with a compressed number of replicas. (orig.)
A novel power source for high-precision, highly efficient micro w-EDM
Chen, Shun-Tong; Chen, Chi-Hung
2015-01-01
The study presents the development of a novel power source for high-precision, highly efficient machining of micropart microstructures using micro wire electrical discharge machining (w-EDM). A novel power source based on a pluri resistance–capacitance (pRC) circuit that can generate a high-frequency, high-peak current with a short pulse train is proposed and designed to enhance the performance of micro w-EDM processes. Switching between transistors is precisely controlled in the designed power source to create a high-frequency, short-pulse-train current. Various microslot cutting tests in both aluminum and copper alloys are conducted. Experimental results demonstrate that the pRC power source creates instant spark erosion, resulting in markedly less material removed per discharge, a smaller discharge crater size and, consequently, an improved surface finish. A new evaluation approach for spark erosion ability (SEA) to assess the merits of micro EDM power sources is also proposed. In addition to increasing the speed of micro w-EDM by raising the wire feed rate to 1.6 times the original, the power source is more appropriate for machining micropart microstructures since there is less thermal breaking of the wire. Satisfactory cutting of an elaborate miniature hook-shaped structure and a high-aspect-ratio microstructure with a squared-pillar array also reveals that the developed pRC power source is effective, and should be very useful in the manufacture of intricate microparts. (paper)
Marzocchi, Badder
2017-01-01
The CMS Electromagnetic Calorimeter is made of scintillating lead tungstate crystals, using avalanche photodiodes (APDs) as photo-detectors in the barrel part. The high voltage system, consisting of 1224 channels, biases groups of 50 APD pairs, each at a voltage of about 380 V. The APD gain dependence on the voltage is 3%/V. A stability of better than 60 mV is needed to have negligible impact on the calorimeter energy resolution. Until 2015, manual calibrations were performed yearly. A new calibration system was deployed recently, which satisfies the requirements of low disturbance and high precision. The system is discussed in detail and first operational experience is presented.
Rajantie, Arttu
2018-03-06
The discovery of the Higgs boson in 2012 and other results from the Large Hadron Collider have confirmed the standard model of particle physics as the correct theory of elementary particles and their interactions up to energies of several TeV. Remarkably, the theory may even remain valid all the way to the Planck scale of quantum gravity, and therefore it provides a solid theoretical basis for describing the early Universe. Furthermore, the Higgs field itself has unique properties that may have allowed it to play a central role in the evolution of the Universe, from inflation to cosmological phase transitions and the origin of both baryonic and dark matter, and possibly to determine its ultimate fate through the electroweak vacuum instability. These connections between particle physics and cosmology have given rise to a new and growing field of Higgs cosmology, which promises to shed new light on some of the most puzzling questions about the Universe as new data from particle physics experiments and cosmological observations become available. This article is part of the Theo Murphy meeting issue 'Higgs cosmology'. © 2018 The Author(s).
Wesson, P.S.
1979-01-01
The Cosmological Principle states: the universe looks the same to all observers regardless of where they are located. To most astronomers today the Cosmological Principle means the universe looks the same to all observers because the density of the galaxies is the same in all places. A new Cosmological Principle is proposed, called the Dimensional Cosmological Principle. It uses the properties of matter in the universe: density (ρ), pressure (p), and mass (m) within some region of space of length (l). The laws of physics require incorporation of the constants for gravity (G) and the speed of light (c). After combining the six parameters into dimensionless numbers, the best choices are: 8πGl²ρ/c², 8πGl²p/c⁴, and 2Gm/c²l (the Schwarzschild factor). The Dimensional Cosmological Principle came about because old ideas conflicted with the rapidly growing body of observational evidence indicating that galaxies in the universe have a clumpy rather than uniform distribution
SynUTC - high precision time synchronization over ethernet networks
Höller, R; Horauer, M; Kerö, N; Schmid, U; Schossmaier, K
2002-01-01
This article describes our SynUTC (Synchronized Universal Time Coordinated) technology, which enables high-accuracy distribution of GPS time and time synchronization of network nodes connected via standard Ethernet LANs. By means of exchanging data packets in conjunction with moderate hardware support at nodes and switches, an overall worst-case accuracy in the range of some 100 ns can be achieved, with negligible communication overhead. Our technology thus improves the 1 ms-range accuracy achievable by conventional, software-based approaches like NTP by 4 orders of magnitude. Applications can use the high-accuracy global time provided by SynUTC for event timestamping and event generation both at hardware and software level. SynUTC is based upon inserting highly accurate time information into dedicated data packets at the media-independent interface (MII) between the physical layer transceiver and the network controller upon packet transmission and reception, respectively. As a consequence, every node has acc...
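The offset estimate underlying two-way time transfer over a network can be sketched in a few lines. This is the classic four-timestamp calculation used by NTP-style protocols, not SynUTC's hardware-assisted scheme; the timestamps below are illustrative values.

```python
def ntp_offset_delay(t1, t2, t3, t4):
    """Classic two-way time-transfer estimate from four timestamps:
    t1 = client send, t2 = server receive, t3 = server send, t4 = client receive.
    Assumes symmetric path delay; returns (clock offset, round-trip delay)."""
    offset = ((t2 - t1) + (t3 - t4)) / 2
    delay = (t4 - t1) - (t3 - t2)
    return offset, delay

# Example: server clock 250 us ahead, 100 us path delay each way,
# 10 us server processing time (all values hypothetical)
off, d = ntp_offset_delay(0.0, 0.000350, 0.000360, 0.000210)
# off -> 250 us, d -> 200 us
```

Hardware timestamping at the MII, as described above, improves on this by removing the software-stack latency jitter from t1..t4, not by changing the arithmetic.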
High precision 16K, 16 channel peak sensing CAMAC ADC
Jain, Mamta; Subramaniam, E.T
2013-01-01
A high-density, peak-sensing analog-to-digital converter (ADC) module of double CAMAC width has been developed for nuclear physics experiments with a large number of detectors. The module provides sixteen independent channels, implemented as plug-in daughter cards on a mother board
In-plane laser forming for high precision alignment
Folkersma, Ger; Römer, Gerardus Richardus, Bernardus, Engelina; Brouwer, Dannis Michel; Huis in 't Veld, Bert
2014-01-01
Laser microforming is extensively used to align components with submicrometer accuracy, often after assembly. While laser bending of sheet metal is the most common laser-forming mechanism, the in-plane upsetting mechanism is preferred when a high actuator stiffness is required. A three-bridge planar
Precision High-Voltage DC Dividers and Their Calibration
Dragounová, Naděžda
2005-01-01
Vol. 54, No. 5 (2005), pp. 1911-1915, ISSN 0018-9456. R&D Projects: GA AV ČR KSK1048102; GA ČR GA202/03/0889. Keywords: calibration; dc voltage; high voltage (HV). Subject RIV: JA - Electronics; Optoelectronics, Electrical Engineering. Impact factor: 0.665, year: 2005
Various high precision measurements of pressure in atomic energy industry
Aritomi, Masanori; Inoue, Akira; Hosoma, Takashi; Tanaka, Izumi; Gabane, Tsunemichi.
1987-01-01
Pressure measurement in the atomic energy industry mostly relies on differential-pressure and pressure transmitters for process measurement, with a typical measurement accuracy of 0.2 - 0.5% FS/year. Recently, however, the development of nuclear fusion reactors and the establishment of the nuclear fuel cycle accompanying new atomic energy technology have created a need for pressure measurement with higher accuracy, 0.01% FS/year, and high resolution, and quartz-vibration-type pressure sensors have appeared. New high-accuracy pressure measurement techniques were developed through advances in data processing and the rationalization of data transmission. As a result, the following became feasible: measurement of the differential pressure of helium-lithium two-phase flow in the cooling system of nuclear fusion reactors; high-accuracy measurement of the level of plutonium nitrate and other fuel substances in tanks during fuel reprocessing and conversion; and high-accuracy measurement of atmospheric pressure and wind velocity in ducts, chimneys and tunnels in nuclear facilities. The principle and measured data of quartz-vibration-type pressure sensors are shown. (Kako, I.)
High-Precision Registration of Point Clouds Based on Sphere Feature Constraints
Junhui Huang
2016-12-01
Point cloud registration is a key process in multi-view 3D measurements, and its precision directly affects the measurement precision. However, for point clouds with non-overlapping areas or curvature-invariant surfaces, it is difficult to achieve high precision. A high-precision registration method based on sphere feature constraints is presented in this paper to overcome this difficulty. Known sphere features with constraints are used to construct virtual overlapping areas, which provide more accurate corresponding point pairs and reduce the influence of noise. The transformation parameters between the registered point clouds are then solved by an optimization method with a weight function. In this way, the impact of large noise in the point clouds is reduced and high-precision registration is achieved. Simulations and experiments validate the proposed method.
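Once corresponding point pairs (e.g. fitted sphere centers in the two views) are available, the core rigid-transform estimate can be sketched as follows. This is the standard SVD-based (Kabsch) least-squares solution, shown here as a generic illustration rather than the paper's weighted optimization; the synthetic rotation and translation are made-up test values.

```python
import numpy as np

def rigid_transform(P, Q):
    """Least-squares rigid transform (R, t) mapping points P onto Q,
    via the SVD-based Kabsch algorithm. P, Q: (N, 3) matched points."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                      # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    # correction term guards against a reflection (det = -1) solution
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t

# Synthetic check: recover a known rotation/translation from 4 sphere centers
rng = np.random.default_rng(1)
P = rng.random((4, 3))
theta = 0.7
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.1, -0.2, 0.05])
Q = P @ R_true.T + t_true
R, t = rigid_transform(P, Q)
```

The sphere-constraint idea in the abstract supplies the matched pairs P, Q with better accuracy than raw surface points; the solver step itself stays the same.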
Ultrasmooth, Highly Spherical Monocrystalline Gold Particles for Precision Plasmonics
Lee, You-Jin
2013-12-23
Ultrasmooth, highly spherical monocrystalline gold particles were prepared by a cyclic process of slow growth followed by slow chemical etching, which selectively removes edges and vertices. The etching process effectively makes the surface tension isotropic, so that spheres are favored under quasi-static conditions. It is scalable up to particle sizes of 200 nm or more. The resulting spherical crystals display uniform scattering spectra and consistent optical coupling at small separations, even showing Fano-like resonances in small clusters. The high monodispersity of the particles we demonstrate should facilitate the self-assembly of nanoparticle clusters with uniform optical resonances, which could in turn be used to fabricate optical metafluids. Narrow size distributions are required to control not only the spectral features but also the morphology and yield of clusters in certain assembly schemes. © 2013 American Chemical Society.
New high-precision deep concave optical surface manufacturing capability
Piché, François; Maloney, Chris; VanKerkhove, Steve; Supranowicz, Chris; Dumas, Paul; Donohue, Keith
2017-10-01
This paper describes the manufacturing steps necessary to manufacture hemispherical concave aspheric mirrors for high-NA systems. The process chain is considered from generation to final figuring and includes metrology testing during the various manufacturing steps. Corning Incorporated has developed this process by taking advantage of recent advances in commercially available Satisloh and QED Technologies equipment. Results are presented on a 100 mm concave radius, nearly hemispherical (NA = 0.94) fused silica sphere with a better than 5 nm RMS figure. Part interferometric metrology was obtained on a QED stitching interferometer. Final figure was made possible by the implementation of a high-NA rotational MRF mode recently developed by QED Technologies, which is used at Corning Incorporated for production. We also present results from a 75 mm concave radius (NA = 0.88) Corning ULE sphere that was produced using sub-aperture tools from generation to final figuring. This part demonstrates the production chain from blank to finished optics for high-NA concave aspheres.
A high-precision algorithm for axisymmetric flow
A. Gokhman
1995-01-01
We present a new algorithm for highly accurate computation of axisymmetric potential flow. The principal feature of the algorithm is the use of orthogonal curvilinear coordinates, which are used both to write down the equations and to specify quadrilateral elements that follow the boundary. In particular, boundary conditions for the Stokes stream-function are satisfied exactly. The velocity field is determined by differentiating the stream-function. We avoid the use of quadratures in the evaluation of the Galerkin integrals and instead use splining of the element boundaries to take the double integrals of the shape functions in closed form. This is very accurate and not time consuming.
High precision neutron interferometer setup S18b
Hasegawa, Y.; Lemmel, H.
2011-01-01
The present setup at S18 is a multi-purpose instrument. It is used both for interferometry and as a Bonse-Hart camera for USANS (Ultra-Small-Angle Neutron Scattering) spectroscopy with wide-range tunability of the wavelength. Some recent measurements demand higher stability of the instrument, which led us to propose a new setup dedicated particularly to neutron interferometer experiments requiring high phase stability. To keep both options available, we suggest building the new setup in addition to the old one. By extending the space of the present setup by 1.5 m upstream, both setups can be accommodated side by side. (authors)
Combination spindle-drive system for high precision machining
Gerth, Howard L.
1977-07-26
A combination spindle-drive is provided for the fabrication of optical-quality surface finishes. Both the spindle and the drive utilize the spindle bearings for support, thereby removing the conventional drive-means bearings as a source of vibration. An air-bearing spindle is modified to carry at the drive end a highly conductive cup-shaped rotor which is aligned with a stationary stator to produce torque in the rotor through the reaction of eddy currents induced in it. This arrangement eliminates magnetic attraction forces, and all force is in the form of torque on the cup-shaped rotor.
A high precision radiation-tolerant LVDT conditioning module
Masi, A. [EN/STI Group, CERN - European Organization for Nuclear Research, CH-1211 Geneva 23 (Switzerland); Danzeca, S. [EN/STI Group, CERN - European Organization for Nuclear Research, CH-1211 Geneva 23 (Switzerland); IES, F-34000 Montpellier (France); Losito, R.; Peronnard, P. [EN/STI Group, CERN - European Organization for Nuclear Research, CH-1211 Geneva 23 (Switzerland); Secondo, R., E-mail: raffaello.secondo@cern.ch [EN/STI Group, CERN - European Organization for Nuclear Research, CH-1211 Geneva 23 (Switzerland); Spiezia, G. [EN/STI Group, CERN - European Organization for Nuclear Research, CH-1211 Geneva 23 (Switzerland)
2014-05-01
Linear variable differential transformer (LVDT) position sensors are widely used in particle accelerators and nuclear plants, thanks to their properties of contact-less sensing, radiation tolerance, infinite resolution, good linearity and cost efficiency. Many applications require high reading accuracy, even in environments with high radiation levels, where the conditioning electronics must be located several hundred meters away from the sensor. Sometimes even at long distances the conditioning module is still exposed to ionizing radiation. Standard off-the-shelf electronic conditioning modules offer limited performances in terms of reading accuracy and long term stability already with short cables. A radiation tolerant stand-alone LVDT conditioning module has been developed using Commercial Off-The-Shelf (COTS) components. The reading of the sensor output voltages is based on a sine-fit algorithm digitally implemented on an FPGA ensuring few micrometers reading accuracy even with low signal-to-noise ratios. The algorithm validation and board architecture are described. A full metrological characterization of the module is reported and radiation tests results are discussed.
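The sine-fit step at the heart of such a readout can be illustrated with a generic three-parameter least-squares fit at the known excitation frequency. This is a textbook sketch of the technique, not the paper's FPGA implementation; the 5 kHz excitation, amplitude and noise level below are invented for the demonstration.

```python
import numpy as np

def sine_fit(t, y, omega):
    """Three-parameter sine fit at known angular frequency omega:
    least squares for A, B, C in y = A*cos(w t) + B*sin(w t) + C."""
    M = np.column_stack([np.cos(omega * t), np.sin(omega * t), np.ones_like(t)])
    (a, b, c), *_ = np.linalg.lstsq(M, y, rcond=None)
    amplitude = np.hypot(a, b)        # recovered sine amplitude
    phase = np.arctan2(a, b)          # phase relative to sin(w t)
    return amplitude, phase, c

# Noisy synthetic LVDT secondary signal (illustrative values)
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1e-3, 500)                 # 1 ms of samples
omega = 2 * np.pi * 5e3                         # 5 kHz excitation
y = 0.42 * np.sin(omega * t + 0.3) + 0.05 * rng.standard_normal(t.size)
amp, ph, off = sine_fit(t, y, omega)            # amp recovers ~0.42 despite noise
```

Because the fit is linear in A, B, C it averages over many samples per excitation period, which is why the amplitude estimate stays accurate at low signal-to-noise ratios.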
Dickau, Jonathan J.
2009-01-01
The use of fractals and fractal-like forms to describe or model the universe has had a long and varied history, which begins long before the word fractal was actually coined. Since the introduction of mathematical rigor to the subject of fractals, by Mandelbrot and others, there have been numerous cosmological theories and analyses of astronomical observations which suggest that the universe exhibits fractality or is by nature fractal. In recent years, the term fractal cosmology has come into usage, as a description for those theories and methods of analysis whereby a fractal nature of the cosmos is shown.
Fasanella, Giuseppe
2017-01-01
The CMS Electromagnetic Calorimeter utilizes scintillating lead tungstate crystals, with avalanche photodiodes (APDs) as photo-detectors in the barrel part. 1224 HV channels bias groups of 50 APD pairs, each at a voltage of about 380 V. The APD gain dependence on the voltage is 3%/V. A stability of better than 60 mV is needed to have negligible impact on the calorimeter energy resolution. Until 2015 manual calibrations were performed yearly. A new calibration system was deployed recently, which satisfies the requirements of low disturbance and high precision. The system is discussed in detail and first operational experience is presented.
High-precision numerical integration of equations in dynamics
Alesova, I. M.; Babadzanjanz, L. K.; Pototskaya, I. Yu.; Pupysheva, Yu. Yu.; Saakyan, A. T.
2018-05-01
An important requirement when solving the differential equations of Dynamics, such as the equations of motion of celestial bodies and, in particular, of cosmic robotic systems, is high accuracy over large time intervals. One of the most effective tools for obtaining such solutions is the Taylor series method. In this connection, we note that it is very advantageous to reduce the given equations of Dynamics to systems with polynomial (in the unknowns) right-hand sides. This allows us to obtain effective algorithms for finding the Taylor coefficients, a priori error estimates at each step of integration, and an optimal choice of the order of the approximation used. In the paper, these questions are discussed and appropriate algorithms are considered.
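To illustrate why polynomial right-hand sides are advantageous, consider the scalar equation x' = x²: its Taylor coefficients follow from a cheap Cauchy-product recurrence, with no symbolic differentiation. This is a minimal sketch of the general idea, not the authors' algorithm:

```python
def taylor_coeffs_x2(x0, order):
    """Taylor coefficients c[0..order] of the solution of x' = x**2
    with x(0) = x0.  Since the right-hand side is polynomial, each new
    coefficient is a finite convolution of the previous ones:
        (k+1)*c[k+1] = sum_{j=0..k} c[j]*c[k-j].
    """
    c = [float(x0)]
    for k in range(order):
        conv = sum(c[j] * c[k - j] for j in range(k + 1))
        c.append(conv / (k + 1))
    return c
```

The exact solution x(t) = x0/(1 - x0*t) has coefficients x0^(k+1), so the recurrence can be checked term by term.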
High precision stress measurements in semiconductor structures by Raman microscopy
Uhlig, Benjamin
2009-07-01
Stress in silicon structures plays an essential role in modern semiconductor technology. This stress has to be measured and due to the ongoing miniaturization in today's semiconductor industry, the measuring method has to meet certain requirements. The present thesis deals with the question how Raman spectroscopy can be used to measure the state of stress in semiconductor structures. In the first chapter the relation between Raman peakshift and stress in the material is explained. It is shown that detailed stress maps with a spatial resolution close to the diffraction limit can be obtained in structured semiconductor samples. Furthermore a novel procedure, the so called Stokes-AntiStokes-Difference method is introduced. With this method, topography, tool or drift effects can be distinguished from stress related influences in the sample. In the next chapter Tip-enhanced Raman Scattering (TERS) and its application for an improvement in lateral resolution is discussed. For this, a study is presented, which shows the influence of metal particles on the intensity and localization of the Raman signal. A method to attach metal particles to scannable tips is successfully applied. First TERS scans are shown and their impact on and challenges for high resolution stress measurements on semiconductor structures is explained. (orig.)
A research of a high precision multichannel data acquisition system
Zhong, Ling-na; Tang, Xiao-ping; Yan, Wei
2013-08-01
The output signals of the focusing system in lithography are analog. To convert the analog signals into digital ones, which are more flexible and stable to process, a suitable data acquisition system is required. The resolution of the data acquisition, to some extent, affects the accuracy of focusing. In this article, we first compared the performance of the various analog-to-digital converters (ADCs) currently available on the market. Combined with the specific requirements (sampling frequency, conversion accuracy, number of channels, etc.) and the characteristics (polarization, amplitude range, etc.) of the analog signals, the ADC model to be used as the core chip in our hardware design was determined. On this basis, we chose the other chips needed in the hardware circuit to match the ADC well, yielding the overall hardware design. The data acquisition system was validated through experiments, demonstrating that it can effectively realize high-resolution conversion of the multi-channel analog signals and give accurate focusing information in lithography.
A precision timing discriminator for high density detector systems
Turko, B.T.; Smith, R.C.
1992-01-01
Most high resolution time measurement techniques require discriminators that accurately mark the arrival time of events regardless of their intensity. Constant fraction discriminators or zero-crossing discriminators are generally used. In this paper, the authors describe a zero-crossing discriminator that accurately determines the peak of a quasi-Gaussian waveform by differentiating it and detecting the resulting zero crossing. Basically, it consists of a fast voltage comparator and two integrating networks: an RC section and an LR section used in a way that keeps the input impedance purely resistive. A time walk of 100 ps over an amplitude range exceeding 100:1 has been achieved for waveforms from 1.5 ns to 15 ns FWHM. An arming level discriminator is added to eliminate triggering by noise. Easily implemented in either monolithic or hybrid technology, the circuit is suitable for large multichannel detector systems where size and power dissipation are crucial. Circuit diagrams and typical measured data are also presented.
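The peak-finding principle can be checked numerically: differentiate a quasi-Gaussian pulse and interpolate the zero crossing of the derivative. Because scaling the amplitude scales the derivative uniformly, the recovered time does not move, which is the walk-free property the circuit exploits. This is a numerical sketch of the principle, not the analogue implementation:

```python
import numpy as np

def zero_crossing_time(t, pulse):
    """Peak time of a quasi-Gaussian pulse: differentiate, then linearly
    interpolate the positive-to-negative zero crossing of the derivative."""
    d = np.gradient(pulse, t)
    # first index where the derivative goes from positive to non-positive
    i = np.where((d[:-1] > 0) & (d[1:] <= 0))[0][0]
    # linear interpolation between samples i and i+1
    return t[i] - d[i] * (t[i + 1] - t[i]) / (d[i + 1] - d[i])
```

Running this on pulses whose amplitudes differ by 100:1 returns the same time, mirroring the sub-100 ps walk quoted in the abstract.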
Intelligent technologies in process of highly-precise products manufacturing
Vakhidova, K. L.; Khakimov, Z. L.; Isaeva, M. R.; Shukhin, V. V.; Labazanov, M. A.; Ignatiev, S. A.
2017-10-01
One of the main control methods for the surface layer of bearing parts is the eddy current testing method. Surface layer defects of bearing parts, like burns, cracks and some others, are reflected in the results of the rolling surface scans. The previously developed method for detecting defects from the image of the raceway was quite effective, but its processing algorithm is complicated and takes about 12-16 s. The real non-stationary signals from an eddy current transducer (ECT) consist of short-time high-frequency and long-time low-frequency components; therefore a transformation is used for their analysis which provides different windows for different frequencies. The wavelet transform meets these conditions. On this basis, a methodology for automatically detecting and recognizing local defects in the surface layer of bearing parts has been developed, using wavelet analysis with integral estimates. Some of the defects are recognized by the amplitude component; otherwise an automatic transition to recognition by the phase component of the information signals (IS) is carried out. The use of intelligent technologies in the manufacture of bearing parts will, firstly, significantly improve the quality of bearings and, secondly, significantly improve production efficiency by reducing (eliminating) rejections in the manufacture of products, increasing the period of normal operation of the technological equipment (inter-adjustment period), implementing a system of flexible facilities maintenance, and reducing production costs.
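The idea of flagging short-time high-frequency defect signatures through wavelet detail coefficients can be sketched with a single Haar decomposition level and an integral (energy) estimate. The score below is an assumed stand-in for the paper's actual decision rule, which also uses the phase component:

```python
import numpy as np

def haar_detail(signal):
    """One level of the Haar wavelet transform: the detail coefficients
    capture short-time high-frequency content, such as the signature of
    a burn or crack in an eddy-current scan."""
    s = np.asarray(signal, dtype=float)
    if len(s) % 2:
        s = s[:-1]                       # Haar pairs need an even length
    return (s[0::2] - s[1::2]) / np.sqrt(2)

def defect_score(signal):
    """Integral estimate of the detail-coefficient energy -- the kind of
    scalar feature that can be thresholded to flag a local defect."""
    return float(np.sum(haar_detail(signal) ** 2))
```

A smooth baseline scan yields a near-zero score, while a localized spike raises it sharply, so a fixed threshold separates the two.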
Enqvist, K
2012-01-01
The very basics of cosmological inflation are discussed. We derive the equations of motion for the inflaton field, introduce the slow-roll parameters, and present the computation of the inflationary perturbations and their connection to the temperature fluctuations of the cosmic microwave background.
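The equations referred to, the inflaton equation of motion and the slow-roll parameters, take the standard textbook form (generic definitions, not necessarily the lectures' exact notation):

```latex
\ddot\phi + 3H\dot\phi + V'(\phi) = 0, \qquad
H^2 = \frac{1}{3M_{\rm Pl}^2}\left(\tfrac{1}{2}\dot\phi^2 + V(\phi)\right),
```

with the potential slow-roll parameters and their connection to the spectral tilt of the CMB temperature fluctuations:

```latex
\epsilon \equiv \frac{M_{\rm Pl}^2}{2}\left(\frac{V'}{V}\right)^2, \qquad
\eta \equiv M_{\rm Pl}^2\,\frac{V''}{V}, \qquad
n_s - 1 \simeq 2\eta - 6\epsilon .
```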
Ellis, G F R
1993-01-01
Many topics were covered in the submitted papers, showing much life in this subject at present. They ranged from conventional calculations in specific cosmological models to provocatively speculative work. Space and time restrictions required selecting from them, for summarisation here; the book of Abstracts should be consulted for a full overview.
Chow, Nathan; Khoury, Justin
2009-01-01
We study the cosmology of a galileon scalar-tensor theory, obtained by covariantizing the decoupling Lagrangian of the Dvali-Gabadadze-Porrati (DGP) model. Despite being local in 3+1 dimensions, the resulting cosmological evolution is remarkably similar to that of the full 4+1-dimensional DGP framework, both for the expansion history and the evolution of density perturbations. As in the DGP model, the covariant galileon theory yields two branches of solutions, depending on the sign of the galileon velocity. Perturbations are stable on one branch and ghostlike on the other. An interesting effect uncovered in our analysis is a cosmological version of the Vainshtein screening mechanism: at early times, the galileon dynamics are dominated by self-interaction terms, resulting in its energy density being suppressed compared to matter or radiation; once the matter density has redshifted sufficiently, the galileon becomes an important component of the energy density and contributes to dark energy. We estimate conservatively that the resulting expansion history is consistent with the observed late-time cosmology, provided that the scale of modification satisfies r_c ≳ 15 Gpc.
Single Crystal Piezomotor for Large Stroke, High Precision and Cryogenic Actuations, Phase I
National Aeronautics and Space Administration — TRS Technologies proposes a novel single crystal piezomotor for large stroke, high precision, and cryogenic actuations with capability of position set-hold with...
Drift chambers for a large-area, high-precision muon spectrometer
Alberini, C.; Bari, G.; Cara Romeo, G.; Cifarelli, L.; Del Papa, C.; Iacobucci, G.; Laurenti, G.; Maccarrone, G.; Massam, T.; Motta, F.; Nania, R.; Perotto, E.; Prisco, G.; Willutsky, M.; Basile, M.; Contin, A.; Palmonari, F.; Sartorelli, G.
1987-01-01
We have tested two prototypes of high-precision drift chamber for a magnetic muon spectrometer. Results of the tests are presented, with special emphasis on their efficiency and spatial resolution as a function of particle rate. (orig.)
High-precision analogue peak detector for X-ray imaging applications
Dlugosz, Rafal Tomasz; Iniewski, Kris
2007-01-01
A new analogue high-precision peak detector is presented. Owing to its very low power consumption the circuit is particularly well suited for photon energy detection in multichannel receiver integrated circuits used in nuclear medicine.
A simulation of driven reconnection by a high precision MHD code
Kusano, Kanya; Ouchi, Yasuo; Hayashi, Takaya; Horiuchi, Ritoku; Watanabe, Kunihiko; Sato, Tetsuya.
1988-01-01
A high precision MHD code, which has fourth-order accuracy in both the spatial and time steps, is developed and applied to simulation studies of two-dimensional driven reconnection. It is confirmed that the numerical dissipation of this new scheme is much less than that of the two-step Lax-Wendroff scheme. The effect of plasma compressibility on the reconnection dynamics is investigated by means of this high precision code. (author)
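A scheme that is fourth-order in both space and time typically combines fourth-order centred spatial differences with classical fourth-order Runge-Kutta time stepping; the sketch below shows both building blocks as a generic illustration, not the paper's actual code:

```python
import numpy as np

def ddx4(f, dx):
    """Fourth-order centred finite difference on a periodic grid:
    f'(x_i) ~ [8(f_{i+1} - f_{i-1}) - (f_{i+2} - f_{i-2})] / (12 dx)."""
    return (8.0 * (np.roll(f, -1) - np.roll(f, 1))
            - (np.roll(f, -2) - np.roll(f, 2))) / (12.0 * dx)

def rk4_step(rhs, u, dt):
    """Classical fourth-order Runge-Kutta time step for du/dt = rhs(u)."""
    k1 = rhs(u)
    k2 = rhs(u + 0.5 * dt * k1)
    k3 = rhs(u + 0.5 * dt * k2)
    k4 = rhs(u + dt * k3)
    return u + dt * (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0
```

Both pieces have truncation error O(h⁴), which is what keeps the numerical dissipation well below that of a second-order Lax-Wendroff scheme.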
Research on the high-precision non-contact optical detection technology for banknotes
Jin, Xiaofeng; Liang, Tiancai; Luo, Pengfeng; Sun, Jianfeng
2015-09-01
The technology of high-precision laser interferometry is introduced for optical measurement of banknotes in this paper. Taking advantage of the laser's short wavelength and high sensitivity, information about adhesive tape and cavities on banknotes can be checked efficiently. Compared with current measurement devices, including mechanical wheel, infrared and ultrasonic measurement devices, laser interferometry measurement has higher precision and reliability. This will improve the detection of banknote feature information in financial electronic equipment.
High precision electron beam diagnostic system for high current long pulse beams
Chen, Y J; Fessenden, T; Holmes, C; Nelson, S D; Selchow, N.
1999-01-01
As part of the effort to develop a multi-axis electron beam transport system using stripline kicker technology for DARHT II applications, it is necessary to precisely determine the position and extent of long high energy beams (6-40 MeV, 1-4 kA, 2 microseconds) for accurate position control. The kicker positioning system utilizes shot-to-shot adjustments for reduction of relatively slow (<20 MHz) motion of the beam centroid. The electron beams passing through the diagnostic systems have the potential for large halo effects that tend to corrupt measurements performed using capacitive pick-off probes. Likewise, transmission line traveling wave probes have problems with multi-bounce effects due to these longer pulse widths. Finally, the high energy densities experienced in these applications distort typical foil beam position measurements.
Method of high precision interval measurement in pulse laser ranging system
Wang, Zhen; Lv, Xin-yuan; Mao, Jin-jin; Liu, Wei; Yang, Dong
2013-09-01
Laser ranging offers high measurement precision, fast measurement speed, no need for cooperative targets and strong resistance to electromagnetic interference; its time-interval measurement is the key parameter affecting the performance of the whole system. The precision of a pulsed laser ranging system is determined by the precision of its time interval measurement. This paper introduces the structure of the laser ranging system and establishes a method of high-precision time interval measurement for pulsed laser ranging. Based on an analysis of the factors affecting range precision, a pulse rising-edge discriminator is adopted to produce the timing mark for start-stop time discrimination, and a TDC-GP2 high-precision interval measurement system based on a TMS320F2812 DSP is designed to improve the measurement precision. Experimental results indicate that this time interval measurement method achieves higher range accuracy. Compared with traditional time interval measurement systems, it simplifies the system design, reduces the influence of bad weather conditions, and satisfies the requirements of low cost and miniaturization.
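The range computation itself is simple: the distance is half the round-trip time of flight times the speed of light, so the TDC resolution directly sets the range resolution. The 65 ps LSB used below for the TDC-GP2 is an assumed typical figure, not a value quoted by the paper:

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def range_from_interval(dt_seconds):
    """Pulse laser ranging: target distance is half the round-trip
    time of flight times the speed of light."""
    return C * dt_seconds / 2.0

def range_resolution(tdc_lsb_seconds):
    """Distance resolution implied by one LSB of the time-to-digital
    converter (e.g. ~65 ps, an assumed figure for a TDC-GP2)."""
    return C * tdc_lsb_seconds / 2.0
```

A 1 microsecond round trip corresponds to roughly 150 m of range, and a 65 ps time resolution to roughly 1 cm.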
M. Sakamoto
2012-07-01
In this paper, a novel technique to generate high resolution and high precision Orthorectified Road Imagery (ORI) using spatial information acquired from a Mobile Mapping System (MMS) is introduced. The MMS was equipped with multiple sensors such as GPS, IMU, an odometer, 2-6 digital cameras and 2-4 laser scanners. In this study, a Triangulated Irregular Network (TIN) based approach, similar to general aerial photogrammetry, was adopted to build a terrain model in order to generate ORI with high resolution and high geometric precision. Compared to aerial photogrammetry, there are several issues that need to be addressed. ORI is generated by merging multiple time-sequence images of a short section. Hence, the influence of occlusion due to stationary objects, such as telephone poles, trees and footbridges, or moving objects, such as vehicles and pedestrians, is very significant. Moreover, the influences of light falloff at the edges of cameras, tone adjustment among images captured from different cameras or from a round-trip data acquisition of the same path, and the time lag between image exposure and laser point acquisition also need to be addressed properly. The proposed method was applied to generate ORI with 1 cm resolution from actual MMS data sets. The ORI generated by the proposed technique was clearer, occlusion free and of higher resolution compared to conventional orthorectified coloured point cloud imagery. Moreover, the visual interpretation of road features from the ORI was much easier. In addition, the experimental results also validated the effectiveness of the proposed radiometric corrections. In occluded regions, the ORI was compensated by using other images captured from different angles. The validity of the image masking process in the occluded regions was also ascertained.
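The core operation when draping imagery over a TIN terrain model is interpolating a height inside a triangle via barycentric coordinates. A minimal sketch of that step, illustrative rather than the authors' pipeline:

```python
import numpy as np

def barycentric_interp(tri, z, p):
    """Interpolate the height at planar point p inside a TIN triangle.

    tri : three triangle vertices as (x, y) pairs
    z   : the three vertex heights
    p   : query point (x, y)
    """
    a, b, c = (np.asarray(v, dtype=float) for v in tri)
    # Solve p - a = l1*(b - a) + l2*(c - a) for the barycentric weights
    T = np.column_stack([b - a, c - a])
    l1, l2 = np.linalg.solve(T, np.asarray(p, dtype=float) - a)
    l0 = 1.0 - l1 - l2
    return l0 * z[0] + l1 * z[1] + l2 * z[2]
```

The same weights also serve to resample pixel colours when merging overlapping time-sequence images onto the terrain model.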
Silk, Joseph
2011-01-01
Horizons of Cosmology: Exploring Worlds Seen and Unseen is the fourth title published in the Templeton Science and Religion Series, in which scientists from a wide range of fields distill their experience and knowledge into brief tours of their respective specialties. In this volume, highly esteemed astrophysicist Joseph Silk explores the vast mysteries and speculations of the field of cosmology in a way that balances an accessible style for the general reader and enough technical detail for advanced students and professionals. Indeed, while the p
Cosmological models without singularities
Petry, W.
1981-01-01
A previously studied theory of gravitation in flat space-time is applied to homogeneous and isotropic cosmological models. There exist two different classes of models without singularities: (i) ever-expanding models, (ii) oscillating models. The first class contains models with hot big bang. For these models there exist at the beginning of the universe-in contrast to Einstein's theory-very high but finite densities of matter and radiation with a big bang of very short duration. After short time these models pass into the homogeneous and isotropic models of Einstein's theory with spatial curvature equal to zero and cosmological constant ALPHA >= O. (author)
Li, Yaqiong; Choi, Steve; Ho, Shuay-Pwu; Crowley, Kevin T.; Salatino, Maria; Simon, Sara M.; Staggs, Suzanne T.; Nati, Federico; Wollack, Edward J.
2016-01-01
The Advanced ACTPol (AdvACT) upgrade on the Atacama Cosmology Telescope (ACT) consists of multichroic Transition Edge Sensor (TES) detector arrays to measure the Cosmic Microwave Background (CMB) polarization anisotropies in multiple frequency bands. The first AdvACT detector array, sensitive to both 150 and 230 GHz, is fabricated on a 150 mm diameter wafer and read out with a completely different scheme compared to ACTPol. Approximately 2000 TES bolometers are packed into the wafer, leading to both a much denser detector density and denser readout circuitry. The demonstration of the assembly and integration of the AdvACT arrays is important for the next generation of CMB experiments, which will continue to increase the pixel number and density. We present the detailed assembly process of the first AdvACT detector array.
Hyperbolic geometry of cosmological attractors
Carrasco, John Joseph M.; Kallosh, Renata; Linde, Andrei; Roest, Diederik
2015-01-01
Cosmological alpha attractors give a natural explanation for the spectral index n(s) of inflation as measured by Planck while predicting a range for the tensor-to-scalar ratio r, consistent with all observations, to be measured more precisely in future B-mode experiments. We highlight the crucial
High-redshift post-reionization cosmology with 21cm intensity mapping
Obuljen, Andrej; Castorina, Emanuele; Villaescusa-Navarro, Francisco; Viel, Matteo
2018-05-01
We investigate the possibility of performing cosmological studies in the redshift range 2.5 < z < 5 with 21cm intensity mapping, and the constraints that such surveys can place on the growth rate, the BAO distance scale parameters, the sum of the neutrino masses and the number of relativistic degrees of freedom at decoupling, Neff. We point out that quantities that depend on the amplitude of the 21cm power spectrum, like fσ8, are completely degenerate with ΩHI and bHI, and propose several strategies to independently constrain them through cross-correlations with other probes. Assuming 5% priors on ΩHI and bHI, kmax = 0.2 h Mpc⁻¹ and the primary beam wedge, we find that a HIRAX extension can constrain, within bins of Δz = 0.1: 1) the value of fσ8 at ≃4%, 2) the values of DA and H at ≃1%. In combination with data from Euclid-like galaxy surveys and CMB S4, the sum of the neutrino masses can be constrained with an error equal to 23 meV (1σ), while Neff can be constrained within 0.02 (1σ). We derive similar constraints for the extensions of the other instruments. We study in detail the dependence of our results on the instrument, the amplitude of the HI bias, the foreground wedge coverage, the nonlinear scale used in the analysis, uncertainties in the theoretical modeling and the priors on bHI and ΩHI. We conclude that 21cm intensity mapping surveys operating in this redshift range can provide extremely competitive constraints on key cosmological parameters.
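Forecast errors of the kind quoted (e.g. ≃4% on fσ8) typically come from a Fisher-matrix analysis; the sketch below shows that generic machinery, not the paper's actual pipeline:

```python
import numpy as np

def fisher_matrix(dmu_dtheta, cov):
    """Fisher matrix F = D C^{-1} D^T for a Gaussian likelihood.

    dmu_dtheta : derivatives of the model observables w.r.t. each
                 parameter, shape (n_params, n_data)
    cov        : data covariance, shape (n_data, n_data)
    """
    D = np.atleast_2d(np.asarray(dmu_dtheta, dtype=float))
    Cinv = np.linalg.inv(np.atleast_2d(np.asarray(cov, dtype=float)))
    return D @ Cinv @ D.T

def marginalised_errors(F):
    """1-sigma marginalised parameter errors: sqrt(diag(F^{-1}))."""
    return np.sqrt(np.diag(np.linalg.inv(F)))
```

Priors (such as the 5% priors on ΩHI and bHI) enter by adding a diagonal term 1/σ_prior² to the corresponding entry of F before inverting.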
Classification of LIDAR Data for Generating a High-Precision Roadway Map
Jeong, J.; Lee, I.
2016-06-01
The generation of highly precise maps is growing in importance with the development of autonomous driving vehicles. A highly precise map has centimetre-level precision, unlike existing commercial maps with metre-level precision. It is important to understand road environments and make decisions for autonomous driving, since robust localization is one of the critical challenges for the autonomous driving car. One source of such data is a Lidar, because it provides highly dense point cloud data with three-dimensional positions, intensities and ranges from the sensor to the target. In this paper, we focus on how to segment point cloud data from a Lidar on a vehicle and classify objects on the road for the highly precise map. In particular, we propose the combination of a feature descriptor and a classification algorithm from machine learning. Objects can be distinguished by geometrical features based on the surface normal of each point. To achieve correct classification using limited point cloud data sets, a Support Vector Machine algorithm is used. The final step is to evaluate the accuracy of the obtained results by comparing them to reference data. The results show sufficient accuracy, and the output will be utilized to generate a highly precise road map.
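A minimal sketch of the normal-based feature extraction: the surface normal of a local point neighbourhood is the smallest-eigenvalue eigenvector of its covariance matrix. The paper feeds such features to an SVM; the verticality threshold below is only an illustrative stand-in for that trained classifier:

```python
import numpy as np

def estimate_normal(neighbors):
    """Surface normal of a local point neighbourhood via PCA: the
    eigenvector of the 3x3 covariance matrix with the smallest
    eigenvalue is perpendicular to the local surface."""
    pts = np.asarray(neighbors, dtype=float)
    cov = np.cov(pts - pts.mean(axis=0), rowvar=False)
    w, v = np.linalg.eigh(cov)      # eigenvalues in ascending order
    return v[:, 0]                  # eigenvector of smallest eigenvalue

def looks_like_road(neighbors, z_thresh=0.9):
    """Toy rule: a near-vertical normal suggests a road-surface patch
    (stand-in for the SVM classifier used in the paper)."""
    return abs(estimate_normal(neighbors)[2]) > z_thresh
```

A horizontal patch of points yields a normal along z, while a wall-like patch yields a horizontal normal, so even this toy rule separates the two classes.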
High-precision multiband spectroscopy of ultracold fermions in a nonseparable optical lattice
Fläschner, Nick; Tarnowski, Matthias; Rem, Benno S.; Vogel, Dominik; Sengstock, Klaus; Weitenberg, Christof
2018-05-01
Spectroscopic tools are fundamental for the understanding of complex quantum systems. Here, we demonstrate high-precision multiband spectroscopy in a graphenelike lattice using ultracold fermionic atoms. From the measured band structure, we characterize the underlying lattice potential with a relative error of 1.2 ×10-3 . Such a precise characterization of complex lattice potentials is an important step towards precision measurements of quantum many-body systems. Furthermore, we explain the excitation strengths into different bands with a model and experimentally study their dependency on the symmetry of the perturbation operator. This insight suggests the excitation strengths as a suitable observable for interaction effects on the eigenstates.
Reference satellite selection method for GNSS high-precision relative positioning
Xiao Gao
2017-03-01
Selecting the optimal reference satellite is an important component of high-precision relative positioning because the reference satellite directly influences the strength of the normal equation. Reference satellite selection methods based on elevation and on the positional dilution of precision (PDOP) value were compared. Results show that neither of these methods reliably selects the optimal reference satellite. We introduce the condition number of the design matrix into the reference satellite selection method to improve the structure of the normal equation, because the condition number indicates the ill-conditioning of the normal equation. The experimental results show that the new method can improve positioning accuracy and reliability in precise relative positioning.
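The proposed criterion can be sketched directly: compute the condition number of each candidate's single-differenced design matrix and keep the reference satellite that minimises it (the interface below is illustrative; building the design matrices themselves is survey-specific):

```python
import numpy as np

def pick_reference_satellite(design_matrices):
    """Return the index of the candidate reference satellite whose
    design matrix has the smallest condition number, i.e. the
    best-conditioned normal equation."""
    conds = [np.linalg.cond(A) for A in design_matrices]
    return int(np.argmin(conds))
```

A large condition number means small observation errors are amplified into large parameter errors, which is why elevation or PDOP alone can pick a poor reference satellite.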
High-precision high-sensitivity clock recovery circuit for a mobile payment application
Sun Lichong; Yan Na; Min Hao; Ren Wenliang
2011-01-01
This paper presents a fully integrated carrier clock recovery circuit for a mobile payment application. The architecture is based on a sampling-detection module and a charge pump phase-locked loop. Compared with clock recovery in conventional 13.56 MHz transponders, this circuit can recover a high-precision consecutive carrier clock from the on/off keying (OOK) signal sent by interrogators. Fabricated in a SMIC 0.18-μm EEPROM CMOS process, this chip works from a single power supply as low as 1.5 V. Measurement results show that this circuit provides 0.34% frequency deviation and 8 mV sensitivity. (semiconductor integrated circuits)
Precision ring rolling technique and application in high-performance bearing manufacturing
Hua Lin
2015-01-01
High-performance bearings have significant applications in many important industry fields, like automobiles, precision machine tools, wind power, etc. Precision ring rolling is an advanced rotary forming technique to manufacture high-performance seamless bearing rings and thus can improve the working life of bearings. In this paper, three kinds of precision ring rolling techniques adapted to different dimensional ranges of bearings are introduced: cold ring rolling for small-scale bearings, hot radial ring rolling for medium-scale bearings and hot radial-axial ring rolling for large-scale bearings. The forming principles, technological features and forming equipment for the three kinds of precision ring rolling techniques are summarized, the technological development and industrial application in China are introduced, and the main technological development trend is described.
MRPC-PET: A new technique for high precision time and position measurements
Doroud, K.; Hatzifotiadou, D.; Li, S.; Williams, M.C.S.; Zichichi, A.; Zuyeuski, R.
2011-01-01
The purpose of this paper is to consider a new technology for medical diagnosis: the MRPC-PET. This technology allows excellent time resolution together with 2-D position information thus providing a fundamental step in this field. The principle of this method is based on the Multigap Resistive Plate Chamber (MRPC) capable of high precision time measurements. We have previously found that the route to precise timing is differential readout (this requires matching anode and cathode strips); thus crossed strip readout schemes traditionally used for 2-D readout cannot be exploited. In this paper we consider the time difference from the two ends of the strip to provide a high precision measurement along the strip; the average time gives precise timing. The MRPC-PET thus provides a basic step in the field of medical technology: excellent time resolution together with 2-D position measurement.
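The two-ended strip readout reduces to two lines of arithmetic: the time difference gives the position along the strip and the average gives the hit time. Here v, the signal propagation speed along the strip, is an assumed parameter:

```python
def strip_hit(t_left, t_right, v):
    """Position and time of a hit on an MRPC strip read out at both ends.

    t_left, t_right : arrival times at the two strip ends (s)
    v               : signal propagation speed along the strip (m/s)
    Returns (x, t): position relative to the strip centre, and the hit
    time up to a fixed offset of half the strip's propagation delay.
    """
    x = 0.5 * v * (t_left - t_right)   # difference -> position
    t = 0.5 * (t_left + t_right)       # average    -> timing
    return x, t
```

Because the fixed half-strip delay cancels in coincidences between detector modules, the averaged time retains the full MRPC timing precision.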
High-Precision Half-Life Measurement for the Superallowed β+ Emitter 26mAl
Finlay, P.; Ettenauer, S.; Ball, G. C.; Leslie, J. R.; Svensson, C. E.; Andreoiu, C.; Austin, R. A. E.; Bandyopadhyay, D.; Cross, D. S.; Demand, G.; Djongolov, M.; Garrett, P. E.; Green, K. L.; Grinyer, G. F.; Hackman, G.; Leach, K. G.; Pearson, C. J.; Phillips, A. A.; Sumithrarachchi, C. S.; Triambak, S.; Williams, S. J.
2011-01-01
A high-precision half-life measurement for the superallowed β+ emitter 26mAl was performed at the TRIUMF-ISAC radioactive ion beam facility, yielding T1/2 = 6346.54 ± 0.46(stat) ± 0.60(syst) ms, consistent with, but 2.5 times more precise than, the previous world average. The 26mAl half-life and ft value, 3037.53(61) s, are now the most precisely determined for any superallowed β decay. Combined with recent theoretical corrections for isospin-symmetry-breaking and radiative effects, the corrected Ft value for 26mAl, 3073.0(12) s, sets a new benchmark for the high-precision superallowed Fermi β-decay studies used to test the conserved vector current hypothesis and determine the Vud element of the Cabibbo-Kobayashi-Maskawa quark mixing matrix.
Neutrino mass constraints from joint cosmological probes.
Kwan, Juliana
2018-01-01
One of the most promising avenues to come from precision cosmology is the measurement of the sum of neutrino masses in the next 5-10 years. Ongoing imaging surveys, such as the Dark Energy Survey and the Hyper Suprime Cam survey, will cover a substantial volume of the sky and when combined with existing spectroscopic data, are expected to deliver a definitive measurement in the near future. But it is important that the accuracy of theoretical predictions matches the precision of the observational data so that the neutrino mass signal can be properly detected without systematic error. To this end, we have run a suite of high precision, large volume cosmological N-body simulations containing massive neutrinos to quantify their effect on probes of large scale structure such as weak lensing and galaxy clustering. In this talk, I will describe the analytical tools that we have developed to extract the neutrino mass that are capable of fully utilizing the non-linear regime of structure formation. These include predictions for the bias in the clustering of dark matter halos (one of the fundamental ingredients of the halo model) with an error of only a few percent.
Ray tracing and Hubble diagrams in post-Newtonian cosmology
Sanghai, Viraj A.A.; Clifton, Timothy [School of Physics and Astronomy, Queen Mary University of London, 327 Mile End Road, London E1 4NS (United Kingdom); Fleury, Pierre, E-mail: v.a.a.sanghai@qmul.ac.uk, E-mail: pierre.fleury@unige.ch, E-mail: t.clifton@qmul.ac.uk [Départment de Physique Théorique, Université de Genève, 24 quai Ernest-Ansermet, 1211 Genève 4 (Switzerland)
2017-07-01
On small scales the observable Universe is highly inhomogeneous, with galaxies and clusters forming a complex web of voids and filaments. The optical properties of such configurations can be quite different from the perfectly smooth Friedmann-Lemaître-Robertson-Walker (FLRW) solutions that are frequently used in cosmology, and must be well understood if we are to make precise inferences about fundamental physics from cosmological observations. We investigate this problem by calculating redshifts and luminosity distances within a class of cosmological models that are constructed explicitly in order to allow for large density contrasts on small scales. Our study of optics is then achieved by propagating one hundred thousand null geodesics through such space-times, with matter arranged in either compact opaque objects or diffuse transparent haloes. We find that in the absence of opaque objects, the mean of our ray tracing results faithfully reproduces the expectations from FLRW cosmology. When opaque objects with sizes similar to those of galactic bulges are introduced, however, we find that the mean of distance measures can be shifted up from FLRW predictions by as much as 10%. This bias is due to the viable photon trajectories being restricted by the presence of the opaque objects, which means that they cannot probe the regions of space-time with the highest curvature. It corresponds to a positive bias of order 10% in the estimation of Ω_Λ and highlights the important consequences that astronomical selection effects can have on cosmological observables.
Grant, E.; Murdin, P.
2000-11-01
During the early Middle Ages (ca 500 to ca 1130) scholars with an interest in cosmology had little useful and dependable literature. They relied heavily on a partial Latin translation of PLATO's Timaeus by Chalcidius (4th century AD), and on a series of encyclopedic treatises associated with the names of Pliny the Elder (ca AD 23-79), Seneca (4 BC-AD 65), Macrobius (fl 5th century AD), Martianus ...
High Precision Fast Projective Synchronization for Chaotic Systems with Unknown Parameters
Nian, Fuzhong; Wang, Xingyuan; Lin, Da; Niu, Yujun
2013-08-01
A high precision fast projective synchronization method for chaotic systems with unknown parameters was proposed by introducing an optimal matrix. Numerical simulations indicate that the precision is improved by about three orders of magnitude compared with other common methods under the same software and hardware conditions. Moreover, when the average error is less than 10⁻³, synchronization is 6500 times faster than with common methods, requiring only 4 iterations. The unknown parameters were also identified rapidly. Theoretical analysis and proofs are also given.
High precision analysis of trace lithium isotope by thermal ionization mass spectrometry
Tang Lei; Liu Xuemei; Long Kaiming; Liu Zhao; Yang Tianli
2010-01-01
A high precision method for analysing ng-level lithium by thermal ionization mass spectrometry is developed. Through double-filament measurement, a phosphine acid ion enhancer, and a sample pre-baking technique, the precision of trace lithium analysis is improved. For a 100 ng lithium isotope standard sample, the relative standard deviation is better than 0.086%; for a 10 ng lithium isotope standard sample, it is better than 0.90%. (authors)
The πNN coupling from high precision np charge exchange at 162 MeV
Nilsson, J.; Blomgren, J.; Conde, H.; Elmgren, K.; Olsson, N.; Ericson, T.E.O.; Uppsala Univ.; Jonsson, O.; Nilsson, L.; Loiseau, B.; Ringbom, A.
1995-02-01
Differential cross sections for unpolarized neutrons of 162 MeV have been measured to high precision, with particular attention to the absolute normalisation. These data can be extrapolated precisely and model-independently to the pion pole, giving a πNN coupling constant g² = 14.6 ± 0.3 or f² = 0.0808 ± 0.0017. This is higher than recently suggested values. (author) 24 refs.; 3 figs.; 1 tab
Research on Ship Trajectory Tracking with High Precision Based on LOS
Hengzhi Liu
2018-01-01
Aiming at high-precision ship trajectory tracking based on line-of-sight (LOS) guidance, a method is proposed. It combines the advantages of LOS guidance (simplicity, intuitive behaviour, easy parameter setting and good convergence) with the features of generalized predictive control (GPC): reference softening, multi-step prediction, rolling optimization, and excellent controllability and robustness. To verify the effectiveness of the method, it is simulated in Matlab. The simulation results show that it tracks the ship trajectory with high precision.
Bradshaw, R.C.; Schmidt, D.P.; Rogers, J.R.; Kelton, K.F.; Hyers, R.W.
2005-01-01
By combining the best practices in optical dilatometry with numerical methods, a high-speed and high-precision technique has been developed to measure the volume of levitated, containerlessly processed samples with subpixel resolution. Containerless processing provides the ability to study highly reactive materials without the possibility of contamination affecting thermophysical properties. Levitation is a common technique used to isolate a sample as it is being processed. Noncontact optical measurement of thermophysical properties is very important as traditional measuring methods cannot be used. Modern, digitally recorded images require advanced numerical routines to recover the subpixel locations of sample edges and, in turn, produce high-precision measurements
Design and algorithm research of high precision airborne infrared touch screen
Zhang, Xiao-Bing; Wang, Shuang-Jie; Fu, Yan; Chen, Zhao-Quan
2016-10-01
Infrared touch screens suffer from low precision, touch jitter, and a sharp decrease in touch precision when emitting or receiving tubes fail. A high precision positioning algorithm based on an extended axis is proposed to solve these problems. First, the unimpeded state of the beam between an emitting and a receiving tube is recorded as 0, and the impeded state as 1. Then an oblique scan is used, in which the light of one emitting tube is received by five receiving tubes, and the impeded information of all emitting and receiving tubes is collected as a matrix. Finally, the position of the touch object is calculated as an arithmetic average. The extended-axis positioning algorithm retains high precision when individual infrared tubes fail, with only a slight effect on accuracy. Experimental results show that over 90% of the display area the touch error is less than 0.25D, where D is the distance between adjacent emitting tubes. The algorithm based on the extended axis thus offers high precision, little impact from the failure of individual infrared tubes, and ease of use.
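The averaging step described in this abstract can be sketched in a few lines. This is an illustrative reconstruction, not the paper's full oblique-scan algorithm: beam states are assumed to arrive as a map from (emitter, receiver) index pairs to 0/1, and each blocked ray contributes the midpoint of its endpoints to the average.

```python
def touch_position(blocked, pitch=1.0):
    """Estimate the touch centre as the arithmetic mean of blocked-beam
    midpoints along one axis.

    `blocked` maps (emitter_index, receiver_index) -> 0 (clear) or 1
    (impeded); tube positions are index * pitch. Returns None when no
    beam is impeded.
    """
    xs = []
    for (e, r), hit in blocked.items():
        if hit:
            xs.append(0.5 * (e + r) * pitch)  # midpoint of the blocked ray
    if not xs:
        return None
    return sum(xs) / len(xs)
```

Averaging over all impeded rays, including the oblique ones, is what keeps the estimate stable when a single tube fails: the remaining rays still bracket the touch point.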
2006-01-01
This year's Nobel prize is welcome recognition for cosmology. Back in the 1960s, according to Paul Davies' new book The Goldilocks Enigma (see 'Seeking anthropic answers' in this issue), cynics used to quip that there is 'speculation, speculation squared - and cosmology'. Anyone trying to understand the origin and fate of the universe was, in other words, dealing with questions that were simply impractical - or even impossible - to answer. But that has all changed with the development of new telescopes, satellites and data-processing techniques - to the extent that cosmology is now generally viewed as a perfectly acceptable branch of science. If anyone was in any doubt of cosmology's new status, the Royal Swedish Academy of Sciences last month gave the subject welcome recognition with the award of this year's Nobel prize to John Mather and George Smoot (see pp6-7; print version only). The pair were the driving force behind the COBE satellite that in 1992 produced the now famous image of the cosmic microwave background. The mission's data almost certainly proved that the universe started with a Big Bang, while tiny fluctuations in the temperature signal between different parts of the sky were shown to be the seeds of the stars and galaxies we see today. These results are regarded by many as the start of a new era of 'precision cosmology'. But for cosmologists, the job is far from over. There are still massive holes in our understanding of the cosmos, notably the nature of dark matter and dark energy, which together account for over 95% of the total universe. Indeed, some regard dark energy and matter as just ad hoc assumptions needed to fit the data. (Hypothetical particles called 'axions' are one possible contender for dark matter (see pp20-23; print version only), but don't bet your house on it.) Some physicists even think it makes more sense to adjust Newtonian gravity rather than invoke dark matter. But the notion that cosmology is in crisis, as argued by some
Design of a self-calibration high precision micro-angle deformation optical monitoring scheme
Gu, Yingying; Wang, Li; Guo, Shaogang; Wu, Yun; Liu, Da
2018-03-01
In order to meet the requirement of high precision micro-angle measurement on orbit, a self-calibrated, optical, non-contact real-time monitoring device is designed. Within three metres, the micro-angle of a target relative to the measurement datum can be measured in real time. The angle measurement range is ±50'' and the measurement accuracy is better than 2''. The equipment realizes high precision real-time monitoring of the micro-angle deformation caused by the high-intensity vibration and shock of rocket launch, and by solar radiation and heat conduction on orbit.
High Throughput, High Precision Hot Testing Tool for HBLED Wafer Level Testing
Solarz, Richard [KLA-Tencor Corporation, Milpitas, CA (United States); McCord, Mark [KLA-Tencor Corporation, Milpitas, CA (United States)
2015-12-31
The Socrates research effort developed an in-depth understanding of, and demonstrated in a prototype tool, new precise methods for the characterization of the color characteristics and flux of individual LEDs for the production of uniform-quality lighting. The effort focused on improving the color quality and consistency of solid-state lighting and potentially reducing characterization costs for all LED product types. The patented laser hot-testing method was demonstrated to be far more accurate than all color and flux characterization methods currently in use by the solid-state lighting industry. A separately patented LED grouping method (statistical binning) was demonstrated to be a useful approach to improving the utilization of entire lots of manufactured LEDs with broad color and flux distributions for high quality color solid-state lighting. At the conclusion of the research in late 2015, however, the solid-state lighting industry was generally satisfied with its existing production methods for high quality color products for the small segment of customers that demand them, albeit with added costs.
High precision locating control system based on VCM for Talbot lithography
Yao, Jingwei; Zhao, Lixin; Deng, Qian; Hu, Song
2016-10-01
Aiming at the high precision and efficiency requirements of Z-direction positioning in Talbot lithography, a control system based on a Voice Coil Motor (VCM) was designed. In this paper, we build a mathematical model of the VCM and analyse its motion characteristics. A double closed-loop control strategy comprising a position loop and a current loop was implemented. The current loop is handled by the driver, so that the system current follows rapidly. The position loop is executed by a digital signal processor (DSP), with position feedback provided by high precision linear scales. Feed-forward control and Proportion Integration Differentiation (PID) position-feedback control are applied to compensate for dynamic lag and improve the response speed of the system. The high precision and efficiency of the system were verified by simulation and experiment. The results demonstrate that the performance of the Z-direction gantry is markedly improved: it offers high precision, quick response, strong real-time behaviour, and is easily extended to higher precision.
Video-rate or high-precision: a flexible range imaging camera
Dorrington, Adrian A.; Cree, Michael J.; Carnegie, Dale A.; Payne, Andrew D.; Conroy, Richard M.; Godbaz, John P.; Jongenelen, Adrian P. P.
2008-02-01
A range imaging camera produces an output similar to a digital photograph, but every pixel in the image contains distance information as well as intensity. This is useful for measuring the shape, size and location of objects in a scene, hence is well suited to certain machine vision applications. Previously we demonstrated a heterodyne range imaging system operating in a relatively high resolution (512-by-512 pixels) and high precision (0.4 mm best case) configuration, but with a slow measurement rate (one every 10 s). Although this high precision range imaging is useful for some applications, the low acquisition speed is limiting in many situations. The system's frame rate and length of acquisition are fully configurable in software, which means the measurement rate can be increased by compromising precision and image resolution. In this paper we demonstrate the flexibility of our range imaging system by showing examples of high precision ranging at slow acquisition speeds and video-rate ranging with reduced ranging precision and image resolution. We also show that the heterodyne approach and the use of more than four samples per beat cycle provide better linearity than the traditional homodyne quadrature detection approach. Finally, we comment on practical issues of frame rate and beat signal frequency selection.
Recent results and perspectives on cosmology and fundamental physics from microwave surveys
Burigana, Carlo; Battistelli, Elia Stefano; Benetti, Micol
2016-01-01
Recent cosmic microwave background (CMB) data in temperature and polarization have reached high precision in estimating all the parameters that describe the current so-called standard cosmological model. Recent results about the integrated Sachs-Wolfe (ISW) effect from CMB anisotropies, galaxy su...
Pavluchenko, Sergey A. [Universidade Federal do Maranhao (UFMA), Programa de Pos-Graduacao em Fisica, Sao Luis, Maranhao (Brazil)
2017-08-15
In this paper we perform a systematic study of spatially flat [(3+D)+1]-dimensional Einstein-Gauss-Bonnet cosmological models with Λ-term. We consider models that topologically are the product of two flat isotropic subspaces with different scale factors. One of these subspaces is three-dimensional and represents our space and the other is D-dimensional and represents extra dimensions. We consider no ansatz of the scale factors, which makes our results quite general. With both Einstein-Hilbert and Gauss-Bonnet contributions in play, D = 3 and the general D ≥ 4 cases have slightly different dynamics due to the different structure of the equations of motion. We analytically study the equations of motion in both cases and describe all possible regimes with special interest on the realistic regimes. Our analysis suggests that the only realistic regime is the transition from high-energy (Gauss-Bonnet) Kasner regime, which is the standard cosmological singularity in that case, to the anisotropic exponential regime with expanding three and contracting extra dimensions. Availability of this regime allows us to put a constraint on the value of Gauss-Bonnet coupling α and the Λ-term - this regime appears in two regions on the (α, Λ) plane: α < 0, Λ > 0, αΛ ≤ -3/2 and α > 0, αΛ ≤ (3D² - 7D + 6)/(4D(D-1)), including the entire Λ < 0 region. The obtained bounds are confronted with the restrictions on α and Λ from other considerations, like causality, entropy-to-viscosity ratio in AdS/CFT and others. Joint analysis constrains (α, Λ) even further: α > 0, D ≥ 2 with (3D² - 7D + 6)/(4D(D-1)) ≥ αΛ ≥ -(D+2)(D+3)(D² + 5D + 12)/(8(D² + 3D + 6)²). (orig.)
An Ultra-low Frequency Modal Testing Suspension System for High Precision Air Pressure Control
Qiaoling YUAN
2014-05-01
As a resolution to the air pressure control challenges in ultra-low frequency modal testing suspension systems, an incremental PID control algorithm with a dead band is applied to achieve high-precision pressure control. We also develop a set of independent hardware and software systems for high-precision pressure control. Taking the control system's versatility, scalability, reliability and other aspects into consideration, two-level communication over Ethernet and CAN bus is adopted to complete tasks such as data exchange between the IPC, the main board and the control board, and the pressure control itself. Furthermore, we build a single set of the ultra-low frequency modal testing suspension system and carry out pressure control experiments, which achieve the desired results and confirm that the high-precision pressure control subsystem is reasonable and reliable.
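The controller named in this abstract, an incremental (velocity-form) PID with a dead band, has a standard textbook form. A minimal sketch follows; the gains and dead-band width are illustrative assumptions, not values from the paper:

```python
class IncrementalPID:
    """Incremental PID with a dead band.

    Emits a *change* in actuator command each sample:
      du_k = Kp*(e_k - e_{k-1}) + Ki*e_k + Kd*(e_k - 2*e_{k-1} + e_{k-2})
    and du_k = 0 whenever |e_k| lies inside the dead band, so small
    pressure fluctuations do not cause actuator chatter.
    """
    def __init__(self, kp, ki, kd, dead_band=0.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.dead_band = dead_band
        self.e1 = 0.0  # e_{k-1}
        self.e2 = 0.0  # e_{k-2}

    def step(self, error):
        if abs(error) <= self.dead_band:
            du = 0.0
        else:
            du = (self.kp * (error - self.e1)
                  + self.ki * error
                  + self.kd * (error - 2.0 * self.e1 + self.e2))
        # shift the error history either way, so the difference terms
        # stay consistent when the dead band is left again
        self.e2, self.e1 = self.e1, error
        return du
```

The incremental form suits pressure valves well: because the output is a command increment, switching the loop in or out (as the dead band does) cannot cause the command jump that a positional PID with a large integral state would produce.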
High-Precision Mass Measurements of Exotic Nuclei with the Triple-Trap Mass Spectrometer Isoltrap
Blaum, K; Zuber, K T; Stanja, J
2002-01-01
The masses of close to 200 short-lived nuclides have already been measured with the mass spectrometer ISOLTRAP, with a relative precision between 1×10⁻⁷ and 1×10⁻⁸. The installation of a radio-frequency quadrupole trap increased the overall efficiency by two orders of magnitude, to about 1% at present. In a recent upgrade, we installed a carbon cluster laser ion source, which will allow us to use carbon clusters as mass references for absolute mass measurements. Due to these improvements and the high reliability of ISOLTRAP, we are now able to perform accurate high-precision mass measurements all over the nuclear chart. We therefore propose mass measurements on light, medium and heavy nuclides on both sides of the valley of stability in the coming four years. ISOLTRAP is presently the only instrument capable of the high precision required for many of the proposed studies.
Lee, Taehwa; Luo, Wei; Li, Qiaochu; Demirci, Hakan; Guo, L Jay
2017-10-01
Beyond the implementation of the photoacoustic effect to photoacoustic imaging and laser ultrasonics, this study demonstrates a novel application of the photoacoustic effect for high-precision cavitation treatment of tissue using laser-induced focused ultrasound. The focused ultrasound is generated by pulsed optical excitation of an efficient photoacoustic film coated on a concave surface, and its amplitude is high enough to produce controllable microcavitation within the focal region (lateral focus <100 µm). Such microcavitation is used to cut or ablate soft tissue in a highly precise manner. This work demonstrates precise cutting of tissue-mimicking gels as well as accurate ablation of gels and animal eye tissues. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Brandenberger, Robert H.
2008-01-01
String gas cosmology is a string theory-based approach to early universe cosmology which is based on making use of robust features of string theory such as the existence of new states and new symmetries. A first goal of string gas cosmology is to understand how string theory can effect the earliest moments of cosmology before the effective field theory approach which underlies standard and inflationary cosmology becomes valid. String gas cosmology may also provide an alternative to the curren...
Turner, Michael S
1999-03-01
For two decades the hot big-bang model has been referred to as the standard cosmology - and for good reason. For just as long, cosmologists have known that there are fundamental questions that are not answered by the standard cosmology and point to a grander theory. The best candidate for that grander theory is inflation + cold dark matter. It holds that the Universe is flat, that slowly moving elementary particles left over from the earliest moments provide the cosmic infrastructure, and that the primeval density inhomogeneities that seed all the structure arose from quantum fluctuations. There is now prima facie evidence that supports two basic tenets of this paradigm. An avalanche of high-quality cosmological observations will soon make this case stronger or will break it. Key questions remain to be answered; foremost among them are the identification and detection of the cold dark matter particles and the elucidation of the dark-energy component. These are exciting times in cosmology!
Probing early universe cosmology and high energy physics through space-borne interferometers
Ungarelli, C.; Vecchio, A.
2001-01-01
We discuss the impact of space-borne laser interferometric experiments operating in the low-frequency window (∼1 μHz - 1 Hz), with the goal of identifying the fundamental issues regarding the detection of a primordial background of gravitational waves predicted by slow-roll inflationary models, corresponding to h₁₀₀²Ω ∼ 10⁻¹⁶ - 10⁻¹⁵. We analyse the capabilities of the planned single-instrument LISA mission and the sensitivity improvements that could be achieved by cross-correlating the data streams from a pair of detectors of the LISA class. We show that the two-detector configuration is extremely powerful, and leads to the detection of a stochastic background as weak as h₁₀₀²Ω ∼ 10⁻¹⁴. However, such instrumental sensitivity cannot be exploited to achieve a comparable performance for the detection of the primordial component of the background, due to the overwhelming power of the stochastic signal produced by short-period solar-mass binary systems of compact objects, which cannot be resolved as individual sources. We estimate that the primordial background can be detected only if its fractional energy density h₁₀₀²Ω is greater than a few times 10⁻¹². The key conclusion of our analysis is that the typical mHz frequency band, regardless of the instrumental noise level, is the wrong observational window to probe slow-roll inflationary models. We discuss possible follow-on missions with optimal sensitivity in the ∼μHz regime and/or the ∼0.1 Hz band specifically aimed at gravitational-wave cosmology. (author)
Magnetohydrodynamic cosmologies
Portugal, R.; Soares, I.D.
1991-01-01
We analyse a class of cosmological models in magnetohydrodynamic regime extending and completing the results of a previous paper. The material content of the models is a perfect fluid plus electromagnetic fields. The fluid is neutral in average but admits an electrical current which satisfies Ohm's law. All models fulfil the physical requirements of near equilibrium thermodynamics and can be favourably used as a more realistic description of the interior of a collapsing star in a magnetohydrodynamic regime with or without a magnetic field. (author)
Boeyens, Jan CA
2010-01-01
The composition of the most remote objects brought into view by the Hubble telescope can no longer be reconciled with the nucleogenesis of standard cosmology and the alternative explanation, in terms of the LAMBDA-Cold-Dark-Matter model, has no recognizable chemical basis. A more rational scheme, based on the chemistry and periodicity of atomic matter, opens up an exciting new interpretation of the cosmos in terms of projective geometry and general relativity. The response of atomic structure to environmental pressure predicts non-Doppler cosmical redshifts and equilibrium nucleogenesis by alp
Page, Don N.
2006-01-01
A complete model of the universe needs at least three parts: (1) a complete set of physical variables and dynamical laws for them, (2) the correct solution of the dynamical laws, and (3) the connection with conscious experience. In quantum cosmology, item (2) is the quantum state of the cosmos. Hartle and Hawking have made the `no-boundary' proposal, that the wavefunction of the universe is given by a path integral over all compact Euclidean 4-dimensional geometries and matter fields that hav...
Religion, theology and cosmology
John T. Fitzgerald
2013-10-01
Cosmology is one of the predominant research areas of the contemporary world. Advances in modern cosmology have prompted renewed interest in the intersections between religion, theology and cosmology. This article, which is intended as a brief introduction to the series of studies on theological cosmology in this journal, identifies three general areas of theological interest stemming from the modern scientific study of cosmology: contemporary theology and ethics; cosmology and world religions; and ancient cosmologies. These intersections raise important questions about the relationship of religion and cosmology, which has recently been addressed by William Scott Green and is the focus of the final portion of the article.
A High Precision Laser-Based Autofocus Method Using Biased Image Plane for Microscopy
Chao-Chen Gu
2018-01-01
This study designs and implements a high precision and robust laser-based autofocusing system in which a biased image plane is applied. In accordance with the designed optics, a cluster-based circle fitting algorithm is proposed to calculate the radius of the detected spot from the reflected laser beam, an essential factor in obtaining the defocus value. Experiments on the prototype device demonstrated high precision and robustness. Furthermore, the low demands on assembly accuracy make the proposed method a low-cost and practical autofocusing solution.
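The core numerical step here, fitting a circle to detected spot-edge points to recover its radius, is commonly done with the algebraic (Kåsa) least-squares fit. A stdlib-only sketch, offered as a stand-in for the paper's cluster-based fitting rather than its actual algorithm:

```python
def fit_circle(points):
    """Least-squares (Kasa) circle fit: returns (cx, cy, r).

    Writes the circle as x^2 + y^2 = u1*x + u2*y + u3 (linear in the
    unknowns u), solves the 3x3 normal equations, then recovers
    cx = u1/2, cy = u2/2, r = sqrt(u3 + cx^2 + cy^2).
    """
    n = float(len(points))
    sx = sum(x for x, y in points);  sy = sum(y for x, y in points)
    sxx = sum(x * x for x, y in points); syy = sum(y * y for x, y in points)
    sxy = sum(x * y for x, y in points)
    sz = sum(x * x + y * y for x, y in points)
    szx = sum((x * x + y * y) * x for x, y in points)
    szy = sum((x * x + y * y) * y for x, y in points)
    A = [[sxx, sxy, sx], [sxy, syy, sy], [sx, sy, n]]
    b = [szx, szy, sz]
    # Gaussian elimination with partial pivoting on the 3x3 system
    for i in range(3):
        p = max(range(i, 3), key=lambda r: abs(A[r][i]))
        A[i], A[p] = A[p], A[i]
        b[i], b[p] = b[p], b[i]
        for r in range(i + 1, 3):
            f = A[r][i] / A[i][i]
            for c in range(i, 3):
                A[r][c] -= f * A[i][c]
            b[r] -= f * b[i]
    u = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):  # back substitution
        u[i] = (b[i] - sum(A[i][j] * u[j] for j in range(i + 1, 3))) / A[i][i]
    cx, cy = u[0] / 2.0, u[1] / 2.0
    return cx, cy, (u[2] + cx * cx + cy * cy) ** 0.5
```

Because the fit is linear, it needs no initial guess and is cheap enough to run per frame, which matters for a real-time defocus signal.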
Optimization of the data taking strategy for a high precision τ mass measurement
Wang, Y.K.; Mo, X.H.; Yuan, C.Z.; Liu, J.P.
2007-01-01
To achieve a high precision τ mass (m_τ) measurement at the forthcoming high luminosity experiment, Monte Carlo simulation and sampling techniques are adopted to simulate various data taking scenarios, from which the optimal scheme is determined. The study indicates that when m_τ is the sole parameter to be fitted, the optimal energy for data taking is located near the τ⁺τ⁻ production threshold, in the vicinity of the largest derivative of the cross-section with respect to energy; one point at the optimal position with a luminosity of around 63 pb⁻¹ is sufficient for a statistical precision of 0.1 MeV/c² or better
High precision NC lathe feeding system rigid-flexible coupling model reduction technology
Xuan, He; Hua, Qingsong; Cheng, Lianjun; Zhang, Hongxin; Zhao, Qinghai; Mao, Xinkai
2017-08-01
This paper proposes a dynamic substructure model-order-reduction method for the rigid-flexible coupling model of a high precision NC lathe feed system. ADAMS is used to establish the rigid-flexible coupling simulation model of the high precision NC lathe, and vibration simulation with the FD 3D damper shows that the reduction of the multi-degree-of-freedom model of the feed system's bolted connections is very effective. The vibration simulation is thereby made both more accurate and faster.
Design of high precision temperature control system for TO packaged LD
Liang, Enji; Luo, Baoke; Zhuang, Bin; He, Zhengquan
2017-10-01
Temperature is an important factor affecting the performance of TO packaged LDs. In order to ensure the safe and stable operation of the LD, a temperature control circuit based on PID technology is designed. The MAX1978 and an external PID circuit form a control circuit that drives a thermoelectric cooler (TEC) to control the temperature, and the external load can be changed. The circuit has low power consumption, high integration and high precision, and achieves precise control of the LD temperature. Experimental results show that the circuit provides effective and stable control of the laser temperature.
Super high precision 200 ppi liquid crystal display series; Chokoseido 200 ppi ekisho display series
NONE
2000-03-01
In mobile equipment, a high precision liquid crystal display (LCD) with the expressive power of printed materials such as magazines is in demand, because a large amount of information must be displayed on an easily portable small screen. In addition, with the spread and improving image quality of digital still cameras, there is a strong desire to display photographed digital image data in high quality. Using low temperature polysilicon (p-Si) technology, Toshiba Corp. commercialized a liquid crystal display series with 200 ppi (pixels per inch) precision, addressing the rise of the high-precision, high-image-quality LCD market. The super high precision of 200 ppi enables the display of smooth, beautiful animation comparable to printed magazine pages and photographs. The display series is suitable for various information services such as electronic books and electronic photo viewers, including internet use. The screen sizes lined up are the 4-type VGA (640x480 pixels) of a small pocket-notebook size and the 6.3-type XGA (1,024x768 pixels) of a paperback size, with larger screens to follow. (translated by NEDO)
High Precision Measurement of the differential W and Z boson cross-sections
Gasnikova, Ksenia; The ATLAS collaboration
2017-01-01
Measurements of the Drell-Yan production of W and Z/γ* bosons at the LHC provide a benchmark of our understanding of perturbative QCD and probe the proton structure in a unique way. The ATLAS collaboration has performed new high precision measurements at a center-of-mass energy of 7 TeV. The measurements are performed for W+, W- and Z/γ* bosons, both integrated and as a function of the boson or lepton rapidity and the Z/γ* mass. Unprecedented precision is reached and strong constraints on parton distribution functions, in particular the strange density, are found. Z cross sections are also measured at center-of-mass energies of 8 TeV and 13 TeV, and cross-section ratios to top-quark pair production have been derived. The ratio measurement leads to a cancellation of several systematic effects and therefore allows a high precision comparison to theory predictions.
Lee, Taehwa; Luo, Wei; Li, Qiaochu; Guo, L. Jay
2017-03-01
Laser-generated focused ultrasound has shown great promise for precisely treating cells and tissues by producing controlled micro-cavitation within the acoustic focal volume (30 MPa negative pressure amplitude). By moving cavitation spots along pre-defined paths with a motorized stage, tissue-mimicking gels of different elastic moduli were cut into different shapes (rectangle, triangle, and circle), leaving behind holes of the same shape less than 1 mm in size. The cut line width is estimated to be less than 50 µm (corresponding to the localized cavitation region), allowing accurate cutting. This novel approach could open new possibilities for in-vivo treatment of diseased tissues in a high-precision manner (i.e., a high-precision invisible sonic scalpel).
A High-precision Motion Compensation Method for SAR Based on Image Intensity Optimization
Hu Ke-bin
2015-02-01
Owing to platform instability and the precision limitations of motion sensors, motion errors negatively affect the quality of synthetic aperture radar (SAR) images. The autofocus Back Projection (BP) algorithm, based on the optimization of image sharpness, compensates for motion errors through phase error estimation. This method attains relatively good performance while assuming the same phase error for all pixels, i.e., it ignores the spatial variance of motion errors. To overcome this drawback, a high-precision motion error compensation method is presented in this study. In the proposed method, the Antenna Phase Centers (APC) are estimated via optimization using the criterion of maximum image intensity. The estimated APCs are then applied for BP imaging. Because APC estimation is equivalent to estimating the range history of each pixel, high-precision phase compensation for every pixel can be achieved. Point-target simulations and the processing of experimental data validate the effectiveness of the proposed method.
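The maximum-image-intensity criterion can be illustrated with a toy one-parameter version: echoes carry an unknown linear phase ramp, and the compensating phase slope is chosen to maximize the intensity of the coherent sum. This is only a sketch of the criterion, not the paper's APC estimator, and the grid-search and names are assumptions:

```python
import cmath

def estimate_phase_slope(samples, candidates):
    """Grid-search the phase slope theta that maximizes the intensity
    |sum_k s_k * exp(-1j * theta * k)|^2 of the coherently summed
    echoes -- a toy version of the maximum-image-intensity criterion.

    `samples` are complex echoes; `candidates` are trial slopes (rad
    per sample index).
    """
    def intensity(theta):
        return abs(sum(s * cmath.exp(-1j * theta * k)
                       for k, s in enumerate(samples))) ** 2
    return max(candidates, key=intensity)
```

When the trial slope matches the true phase ramp, all echoes add in phase and the intensity peaks; any residual phase error spreads energy and lowers it, which is exactly why sharpness/intensity works as an autofocus objective.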
High-precision mass measurements for the rp-process at JYFLTRAP
Canete Laetitia
2017-01-01
Full Text Available The double Penning trap JYFLTRAP at the University of Jyväskylä has been successfully used to achieve high-precision mass measurements of nuclei involved in the rapid proton-capture (rp) process. A precise mass measurement of 31Cl is essential to estimate the waiting-point condition of 30S in the rp-process occurring in type I x-ray bursts (XRBs). The mass excess of 31Cl measured at JYFLTRAP, -7034.7(3.4) keV, is 15 times more precise than the value given in the Atomic Mass Evaluation 2012 (AME2012). The proton separation energy Sp determined from the new mass-excess value confirmed that 30S is a waiting point, with a lower temperature limit of 0.44 GK. The mass of 52Co affects both the 51Fe(p,γ)52Co and 52Co(p,γ)53Ni reactions. The measured mass-excess value, -34331.6(6.6) keV, is 30 times more precise than the value given in AME2012. The Q values for the 51Fe(p,γ)52Co and 52Co(p,γ)53Ni reactions are now known with high precision, 1418(11) keV and 2588(26) keV respectively. The results show that 52Co is more proton bound and 53Ni less proton bound than expected from the extrapolated values.
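The link between a mass excess and the quoted Q values is simple arithmetic: for a (p,γ) capture the Q value follows from the mass excesses of target, proton and product, and it equals the proton separation energy Sp of the product. A minimal sketch (the numerical inputs below are illustrative, not the paper's values):

```python
ME_PROTON = 7288.97  # mass excess of 1H in keV (AME value, rounded)

def q_pgamma(me_target, me_product):
    """Q value of target(p, gamma)product in keV:
    Q = ME(target) + ME(p) - ME(product).
    This also equals the proton separation energy Sp of the product."""
    return me_target + ME_PROTON - me_product

# Hypothetical mass excesses (keV), chosen only to show the arithmetic:
print(q_pgamma(-24000.0, -18500.0))  # about 1.79 MeV: a proton-bound product
```

A more negative (better-bound) product mass excess raises Q, which is why the new 52Co mass directly shifts the 51Fe(p,γ)52Co and 52Co(p,γ)53Ni Q values.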
Chimento, L P; Forte, M; Devecchi, F P; Kremer, G M; Ribas, M O; Samojeden, L L
2011-01-01
In this work we review whether fermionic sources could be responsible for accelerated periods during the evolution of a FRW universe. In a first approach, besides the fermionic source, a matter constituent accounts for the decelerated periods. The coupled differential equations that emerge from the field equations are integrated numerically. The self-interaction potential of the fermionic field is considered as a function of the scalar and pseudo-scalar invariants. It is shown that the fermionic field can behave like an inflaton field in the early universe, giving way to a transition to a matter-dominated (decelerated) period. In a second formulation we turn our attention to analytical results, specifically using the idea of form-invariance transformations. These transformations can be used to obtain accelerated cosmologies starting from conventional cosmological models. Here we reconsider the scalar-field case and extend the discussion to fermionic fields. Finally, we investigate the role of a Dirac field in a Brans-Dicke (BD) context. The results show that this source, in combination with the BD scalar, promotes a final eternal accelerated era after a matter-dominated period.
Kim, Ji-hoon; Ma, Xiangcheng; Grudić, Michael Y.; Hopkins, Philip F.; Hayward, Christopher C.; Wetzel, Andrew; Faucher-Giguère, Claude-André; Kereš, Dušan; Garrison-Kimmel, Shea; Murray, Norman
2018-03-01
Using a state-of-the-art cosmological simulation of merging proto-galaxies at high redshift from the FIRE project, with explicit treatments of star formation and stellar feedback in the interstellar medium, we investigate the formation of star clusters and examine one of the formation hypotheses of present-day metal-poor globular clusters. We find that frequent mergers in high-redshift proto-galaxies could provide a fertile environment to produce long-lasting bound star clusters. The violent merger event disturbs the gravitational potential and pushes a large gas mass of ≳ 105-6 M⊙ collectively to high density, at which point it rapidly turns into stars before stellar feedback can stop star formation. The high dynamic range of the reported simulation is critical in realizing such dense star-forming clouds with a small dynamical time-scale, tff ≲ 3 Myr, shorter than most stellar feedback time-scales. Our simulation then allows us to trace how clusters could become virialized and tightly bound to survive for up to ˜420 Myr till the end of the simulation. Because the cluster's tightly bound core was formed in one short burst, and the nearby older stars originally grouped with the cluster tend to be preferentially removed, at the end of the simulation the cluster has a small age spread.
Neutrino mass from cosmology: impact of high-accuracy measurement of the Hubble constant
Sekiguchi, Toyokazu [Institute for Cosmic Ray Research, University of Tokyo, Kashiwa 277-8582 (Japan); Ichikawa, Kazuhide [Department of Micro Engineering, Kyoto University, Kyoto 606-8501 (Japan); Takahashi, Tomo [Department of Physics, Saga University, Saga 840-8502 (Japan); Greenhill, Lincoln, E-mail: sekiguti@icrr.u-tokyo.ac.jp, E-mail: kazuhide@me.kyoto-u.ac.jp, E-mail: tomot@cc.saga-u.ac.jp, E-mail: greenhill@cfa.harvard.edu [Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138 (United States)
2010-03-01
Non-zero neutrino mass would affect the evolution of the Universe in observable ways, and a strong constraint on the mass can be achieved using combinations of cosmological data sets. We focus on the power spectrum of cosmic microwave background (CMB) anisotropies, the Hubble constant H{sub 0}, and the length scale for baryon acoustic oscillations (BAO) to investigate the constraint on the neutrino mass, m{sub ν}. We analyze data from multiple existing CMB studies (WMAP5, ACBAR, CBI, BOOMERANG, and QUAD), recent measurement of H{sub 0} (SHOES), with about two times lower uncertainty (5 %) than previous estimates, and recent treatments of BAO from the Sloan Digital Sky Survey (SDSS). We obtained an upper limit of m{sub ν} < 0.2eV (95 % C.L.), for a flat ΛCDM model. This is a 40 % reduction in the limit derived from previous H{sub 0} estimates and one-third lower than can be achieved with extant CMB and BAO data. We also analyze the impact of smaller uncertainty on measurements of H{sub 0} as may be anticipated in the near term, in combination with CMB data from the Planck mission, and BAO data from the SDSS/BOSS program. We demonstrate the possibility of a 5σ detection for a fiducial neutrino mass of 0.1 eV or a 95 % upper limit of 0.04 eV for a fiducial of m{sub ν} = 0 eV. These constraints are about 50 % better than those achieved without external constraint. We further investigate the impact on modeling where the dark-energy equation of state is constant but not necessarily -1, or where a non-flat universe is allowed. In these cases, the next-generation accuracies of Planck, BOSS, and 1 % measurement of H{sub 0} would all be required to obtain the limit m{sub ν} < 0.05−0.06 eV (95 % C.L.) for the fiducial of m{sub ν} = 0 eV. The independence of systematics argues for pursuit of both BAO and H{sub 0} measurements.
High-resolution SMA imaging of bright submillimetre sources from the SCUBA-2 Cosmology Legacy Survey
Hill, Ryley; Chapman, Scott C.; Scott, Douglas; Petitpas, Glen; Smail, Ian; Chapin, Edward L.; Gurwell, Mark A.; Perry, Ryan; Blain, Andrew W.; Bremer, Malcolm N.; Chen, Chian-Chou; Dunlop, James S.; Farrah, Duncan; Fazio, Giovanni G.; Geach, James E.; Howson, Paul; Ivison, R. J.; Lacaille, Kevin; Michałowski, Michał J.; Simpson, James M.; Swinbank, A. M.; van der Werf, Paul P.; Wilner, David J.
2018-06-01
We have used the Submillimeter Array (SMA) at 860 μm to observe the brightest sources in the Submillimeter Common User Bolometer Array-2 (SCUBA-2) Cosmology Legacy Survey (S2CLS). The goal of this survey is to exploit the large field of the S2CLS along with the resolution and sensitivity of the SMA to construct a large sample of these rare sources and to study their statistical properties. We have targeted 70 of the brightest single-dish SCUBA-2 850 μm sources down to S850 ≈ 8 mJy, achieving an average synthesized beam of 2.4 arcsec and an average rms of σ860 = 1.5 mJy beam^-1 in our primary beam-corrected maps. We searched our SMA maps for 4σ peaks, corresponding to S860 ≳ 6 mJy sources, and detected 62 galaxies, including three pairs. We include in our study 35 archival observations, bringing our sample size to 105 bright single-dish submillimetre sources with interferometric follow-up. We compute the cumulative and differential number counts, finding them to overlap with previous single-dish survey number counts within the uncertainties, although our cumulative number count is systematically lower than the parent S2CLS cumulative number count by 14 ± 6 per cent between 11 and 15 mJy. We estimate the probability that a ≳10 mJy single-dish submillimetre source resolves into two or more galaxies with similar flux densities to be less than 15 per cent. Assuming the remaining 85 per cent of the targets are ultraluminous starburst galaxies between z = 2 and 3, we find a likely volume density of ≳400 M⊙ yr^-1 sources to be ∼3^{+0.7}_{-0.6} × 10^{-7} Mpc^-3. We show that the descendants of these galaxies could be ≳4 × 10^11 M⊙ local quiescent galaxies, and that about 10 per cent of their total stellar mass would have formed during these short bursts of star formation.
Zucker, M. H.
This paper is a critical analysis and reassessment of entropic functioning as it applies to the question of whether the ultimate fate of the universe will be determined in the future to be "open" (expanding forever to expire in a big chill), "closed" (collapsing to a big crunch), or "flat" (balanced forever between the two). The second law of thermodynamics declares that entropy can only increase and that this principle extends, inevitably, to the universe as a whole. This paper takes the position that this extension is an unwarranted projection based neither on experience nor fact - an extrapolation that ignores the powerful effect of a gravitational force acting within a closed system. Since it was originally presented by Clausius, the thermodynamic concept of entropy has been redefined in terms of "order" and "disorder" - order being equated with a low degree of entropy and disorder with a high degree. This revised terminology, more subjective than precise, has generated considerable confusion in cosmology in several critical instances. For example, the chaotic fireball of the big bang, interpreted by Stephen Hawking as a state of disorder (high entropy), is infinitely hot and, thermally, represents zero entropy (order). Hawking, apparently focusing on the disorderly "chaotic" aspect, equated it with a high degree of entropy - overlooking the fact that the universe is a thermodynamic system and that the key factor in evaluating the big-bang phenomenon is the infinitely high temperature of the early universe, which can only be equated with zero entropy. This analysis resolves this confusion and reestablishes entropy as a cosmological function integrally linked to temperature. The paper goes on to show that, while all subsystems contained within the universe require external sources of energization to have their temperatures raised, this requirement does not apply to the universe as a whole. The universe is the only system that, by itself, can raise its own
Brandriss, Mark E.
2010-01-01
This article describes ways to incorporate high-precision measurements of the specific gravities of minerals into undergraduate courses in mineralogy and physical geology. Most traditional undergraduate laboratory methods of measuring specific gravity are suitable only for unusually large samples, which severely limits their usefulness for student…
MiniDSS: a low-power and high-precision miniaturized digital sun sensor
Boer, B.M. de; Durkut, M.; Laan, E.; Hakkesteegt, H.; Theuwissen, A.; Xie, N.; Leijtens, J.L.; Urquijo, E.; Bruins, P.
2012-01-01
A high-precision and low-power miniaturized digital sun sensor has been developed at TNO. The single-chip sun sensor comprises an application specific integrated circuit (ASIC) on which an active pixel sensor (APS), read-out and processing circuitry as well as communication circuitry are combined.
High Precision Optical Observations of Space Debris in the Geo Ring from Venezuela
Lacruz, E.; Abad, C.; Downes, J. J.; Casanova, D.; Tresaco, E.
2018-01-01
We present preliminary results demonstrating that our method for the detection and location of space debris (SD) in the geostationary Earth orbit (GEO) ring, based on observations at the OAN of Venezuela, achieves high astrometric precision. A detailed explanation of the method, its validation and first results is given in Lacruz et al. (2017).
High-precision photometry by telescope defocusing - I. The transiting planetary system WASP-5
Southworth, J.; Hinse, T. C.; Jørgensen, U. G.
2009-01-01
We present high-precision photometry of two transit events of the extrasolar planetary system WASP-5, obtained with the Danish 1.54-m telescope at European Southern Observatory La Silla. In order to minimize both random and flat-fielding errors, we defocused the telescope so its point spread...
Active-passive hybrid piezoelectric actuators for high-precision hard disk drive servo systems
Chan, Kwong Wah; Liao, Wei-Hsin
2006-03-01
Positioning precision is crucial to today's increasingly high-speed, high-capacity, high-data-density, and miniaturized hard disk drives (HDDs). The demand for higher-bandwidth servo systems that can quickly and precisely position the read/write head on a high track density becomes more pressing. Recently, the idea of applying dual-stage actuators to track servo systems has been studied. Push-pull piezoelectric actuated devices have been developed as micro actuators for fine and fast positioning, while the voice coil motor functions as a large-stroke but coarse actuator for track seeking. However, the current dual-stage actuator design uses piezoelectric patches only, without passive damping. In this paper, we propose a dual-stage servo system using enhanced active-passive hybrid piezoelectric actuators. The proposed actuators will improve the existing dual-stage actuators for higher precision and shock resistance, owing to the incorporation of passive damping in the design. We aim to develop this hybrid servo system not only to increase the speed of track seeking but also to improve the precision of track-following servos in HDDs. New piezoelectrically actuated suspensions with passive damping have been designed and fabricated. In order to evaluate positioning and track-following performance of the dual-stage track servo systems, experimental efforts are carried out to implement the synthesized active-passive suspension structure with enhanced piezoelectric actuators using a composite nonlinear feedback controller.
Self-tuning in master-slave synchronization of high-precision stage systems
Heertjes, M.F.; Temizer, B.; Schneiders, M.G.E.
2013-01-01
For synchronization of high-precision stage systems, in particular the synchronization between a wafer and a reticle stage system of a wafer scanner, a master–slave controller design is presented. The design consists of a synchronization controller based on FIR filters and a data-driven self-tuning
Local high precision 3D measurement based on line laser measuring instrument
Zhang, Renwei; Liu, Wei; Lu, Yongkang; Zhang, Yang; Ma, Jianwei; Jia, Zhenyuan
2018-03-01
In order to realize precision machining and assembly of parts, the geometrical dimensions of local assembly surfaces must be strictly guaranteed. In this paper, a local high-precision three-dimensional measurement method based on a line laser measuring instrument is proposed to achieve highly accurate three-dimensional reconstruction of the surface. To address the problem that a two-dimensional line laser measuring instrument lacks high-precision information in the third dimension, a local three-dimensional profile measuring system based on an accurate single-axis controller is proposed. First, a three-dimensional data compensation method based on a spatial multi-angle line laser measuring instrument is proposed to achieve high-precision measurement along the missing axis. Through pretreatment of the 3D point cloud information, the measurement points can be restored accurately. Finally, a target spherical surface is scanned in local three-dimensional measurements for accuracy verification. The experimental results show that this scheme can obtain the local three-dimensional information of the target quickly and accurately, compensates the errors in the laser scanner information, and improves the local measurement accuracy.
Investigation of the proton-neutron interaction by high-precision nuclear mass measurements
Savreux, R P; Akkus, B
2007-01-01
We propose to measure the atomic masses of a series of short-lived nuclides, including $^{70}$Ni, $^{122-130}$Cd, $^{134}$Sn, $^{138,140}$Xe, $^{207-210}$Hg, and $^{223-225}$Rn, that contribute to the investigation of the proton-neutron interaction and its role in nuclear structure. The high-precision mass measurements are planned for the Penning trap mass spectrometer ISOLTRAP that reaches the required precision of 10 keV in the nuclear mass determination.
Status and outlook of CHIP-TRAP: The Central Michigan University high precision Penning trap
Redshaw, M.; Bryce, R. A.; Hawks, P.; Gamage, N. D.; Hunt, C.; Kandegedara, R. M. E. B.; Ratnayake, I. S.; Sharp, L.
2016-06-01
At Central Michigan University we are developing a high-precision Penning trap mass spectrometer (CHIP-TRAP) that will focus on measurements with long-lived radioactive isotopes. CHIP-TRAP will consist of a pair of hyperbolic precision-measurement Penning traps, and a cylindrical capture/filter trap in a 12 T magnetic field. Ions will be produced by external ion sources, including a laser ablation source, and transported to the capture trap at low energies enabling ions of a given m / q ratio to be selected via their time-of-flight. In the capture trap, contaminant ions will be removed with a mass-selective rf dipole excitation and the ion of interest will be transported to the measurement traps. A phase-sensitive image charge detection technique will be used for simultaneous cyclotron frequency measurements on single ions in the two precision traps, resulting in a reduction in statistical uncertainty due to magnetic field fluctuations.
Accurate and emergent applications for high precision light small aerial remote sensing system
Pei, Liu; Yingcheng, Li; Yanli, Xue; Xiaofeng, Sun; Qingwu, Hu
2014-01-01
In this paper, we focus on the successful applications of accurate and emergent surveying and mapping for a high-precision light small aerial remote sensing system. First, the remote sensing system structure and its three integrated operation modes are introduced; the system can be combined into three operation modes depending on the application requirements. Second, we describe the preliminary results of a precision validation method for POS direct orientation in 1:500 mapping. Third, we present two fast-response mapping products, a regional continuous three-dimensional model and a digital surface model, taking the efficiency and accuracy evaluation of the two products as an important point. The precision of both products meets the 1:2 000 topographic map accuracy specifications in the Pingdingshan area. In the end, conclusions and future work are summarized.
Yin Xuebing; Zhao Huijie; Zeng Junyu; Qu Yufu
2007-01-01
A new acoustic grating fringe projector (AGFP) was developed for high-speed and high-precision 3D measurement. A new acoustic grating fringe projection theory is also proposed to describe the optical system. The AGFP instrument can adjust the spatial phase and period of fringes with unprecedented speed and accuracy. Using rf power proportional-integral-derivative (PID) control and CCD synchronous control, we obtain fringes with fine sinusoidal characteristics and realize high-speed acquisition of image data. Using the device, we obtained a precise phase map for a 3D profile. In addition, the AGFP can work in running fringe mode, which could be applied in other measurement fields
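The phase map mentioned above is typically recovered from phase-shifted sinusoidal fringes. As a hedged illustration, here is the standard four-step phase-shifting calculation (a generic textbook method, not necessarily the exact algorithm used with the AGFP):

```python
import numpy as np

# Four-step phase-shifting: fringe frames I_k = A + B*cos(phi + k*pi/2),
# k = 0..3, yield the wrapped phase as phi = atan2(I3 - I1, I0 - I2).
def wrapped_phase(i0, i1, i2, i3):
    return np.arctan2(i3 - i1, i0 - i2)

# Toy surface: phase varies linearly across the field, within (-pi, pi).
x = np.linspace(0, 2, 200)
phi_true = 1.2 * x                       # "height"-induced phase
frames = [5.0 + 2.0 * np.cos(phi_true + k * np.pi / 2) for k in range(4)]
phi = wrapped_phase(*frames)
print(np.max(np.abs(phi - phi_true)))    # recovered phase matches phi_true
```

Because I3 - I1 = 2B·sin(phi) and I0 - I2 = 2B·cos(phi), the arctangent cancels both the background A and the modulation B, which is why precise, fast phase stepping of the projected fringes matters so much.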
Scalar-tensor cosmology with cosmological constant
Maslanka, K.
1983-01-01
The equations of the scalar-tensor theory of gravitation with a cosmological constant, in the case of a homogeneous and isotropic cosmological model, can be reduced to a dynamical system of three differential equations in the unknown functions H = Ṙ/R, Θ = φ̇/φ, S = e/φ. When new variables are introduced, the system becomes more symmetrical and cosmological solutions R(t), φ(t), e(t) are found. It is shown that when a cosmological constant is introduced, a large class of solutions that also depend on the Dicke-Brans parameter can be obtained. Investigation of these solutions gives general limits for the cosmological constant and the mean density of matter in the flat model. (author)
High-precision relative position and attitude measurement for on-orbit maintenance of spacecraft
Zhu, Bing; Chen, Feng; Li, Dongdong; Wang, Ying
2018-02-01
In order to realize long-term on-orbit operation of spacecraft such as satellites and space stations, the life of the spacecraft can be extended not only by long-life design of devices but also by on-orbit servicing and maintenance. Therefore, precise and detailed maintenance of key components is necessary. In this paper, a high-precision relative position and attitude measurement method used in the maintenance of key components is given. This method mainly considers the design of the passive cooperative marker, light-emitting device, and high-resolution camera in the presence of spatial stray light and noise. By using a series of algorithms, such as background elimination, feature extraction, and position and attitude calculation, the high-precision relative pose parameters between key operation parts and maintenance equipment are obtained as the input to the control system. The simulation results show that the algorithm is accurate and effective, satisfying the requirements of the precision operation technique.
High-precision branching ratio measurement for the superallowed β+ emitter Ga62
Finlay, P.; Ball, G. C.; Leslie, J. R.; Svensson, C. E.; Towner, I. S.; Austin, R. A. E.; Bandyopadhyay, D.; Chaffey, A.; Chakrawarthy, R. S.; Garrett, P. E.; Grinyer, G. F.; Hackman, G.; Hyland, B.; Kanungo, R.; Leach, K. G.; Mattoon, C. M.; Morton, A. C.; Pearson, C. J.; Phillips, A. A.; Ressler, J. J.; Sarazin, F.; Savajols, H.; Schumaker, M. A.; Wong, J.
2008-08-01
A high-precision branching ratio measurement for the superallowed β+ decay of Ga62 was performed at the Isotope Separator and Accelerator (ISAC) radioactive ion beam facility. The 8π spectrometer, an array of 20 high-purity germanium detectors, was employed to detect the γ rays emitted following Gamow-Teller and nonanalog Fermi β+ decays of Ga62, and the SCEPTAR plastic scintillator array was used to detect the emitted β particles. Thirty γ rays were identified following Ga62 decay, establishing the superallowed branching ratio to be 99.858(8)%. Combined with the world-average half-life and a recent high-precision Q-value measurement for Ga62, this branching ratio yields an ft value of 3074.3±1.1 s, making Ga62 among the most precisely determined superallowed ft values. Comparison between the superallowed ft value determined in this work and the world-average corrected 𝓕t value allows the large nuclear-structure-dependent correction for Ga62 decay to be experimentally determined from the CVC hypothesis to better than 7% of its own value, the most precise experimental determination for any superallowed emitter. These results provide a benchmark for the refinement of the theoretical description of isospin-symmetry breaking in A⩾62 superallowed decays.
Rigorous high-precision enclosures of fixed points and their invariant manifolds
Wittig, Alexander N.
The well established concept of Taylor Models is introduced, which offer highly accurate C0 enclosures of functional dependencies, combining high-order polynomial approximation of functions and rigorous estimates of the truncation error, performed using verified arithmetic. The focus of this work is on the application of Taylor Models in algorithms for strongly non-linear dynamical systems. A method is proposed to extend the existing implementation of Taylor Models in COSY INFINITY from double precision coefficients to arbitrary precision coefficients. Great care is taken to maintain the highest efficiency possible by adaptively adjusting the precision of higher order coefficients in the polynomial expansion. High precision operations are based on clever combinations of elementary floating point operations yielding exact values for round-off errors. An experimental high precision interval data type is developed and implemented. Algorithms for the verified computation of intrinsic functions based on the High Precision Interval datatype are developed and described in detail. The application of these operations in the implementation of High Precision Taylor Models is discussed. An application of Taylor Model methods to the verification of fixed points is presented by verifying the existence of a period 15 fixed point in a near standard Henon map. Verification is performed using different verified methods such as double precision Taylor Models, High Precision intervals and High Precision Taylor Models. Results and performance of each method are compared. An automated rigorous fixed point finder is implemented, allowing the fully automated search for all fixed points of a function within a given domain. It returns a list of verified enclosures of each fixed point, optionally verifying uniqueness within these enclosures. An application of the fixed point finder to the rigorous analysis of beam transfer maps in accelerator physics is presented. Previous work done by
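The "clever combinations of elementary floating point operations yielding exact values for round-off errors" mentioned above are known as error-free transformations. A minimal sketch of one such building block, Knuth's TwoSum (an illustration of the general idea, not COSY INFINITY's implementation):

```python
def two_sum(a, b):
    """Error-free transformation of a sum: returns (s, e) where
    s = fl(a + b) is the rounded floating-point sum and e is the exact
    round-off error, so that a + b = s + e holds exactly."""
    s = a + b
    bv = s - a                       # the part of b that made it into s
    e = (a - (s - bv)) + (b - bv)    # exact error, computed in ordinary FP
    return s, e

s, e = two_sum(1.0, 1e-20)
print(s, e)  # 1.0 1e-20: the rounded-away addend is recovered exactly
```

Chaining such transformations lets a sequence of double-precision operations carry its own exact error terms, which is the basis for extending Taylor Model coefficients beyond double precision.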
Hawking, S.W.
1984-01-01
The subject of these lectures is quantum effects in cosmology. The author deals first with situations in which the gravitational field can be treated as a classical, unquantized background on which the quantum matter fields propagate. This is the case with inflation at the GUT era. Nevertheless the curvature of spacetime can have important effects on the behaviour of the quantum fields and on the development of long-range correlations. He then turns to the question of the quantization of the gravitational field itself. The plan of these lectures is as follows: Euclidean approach to quantum field theory in flat space; the extension of techniques to quantum fields on a curved background with the four-sphere, the Euclidean version of De Sitter space as a particular example; the GUT era; quantization of the gravitational field by Euclidean path integrals; mini superspace model. (Auth.)
Krioukov, Dmitri; Kitsak, Maksim; Sinkovits, Robert S; Rideout, David; Meyer, David; Boguñá, Marián
2012-01-01
Prediction and control of the dynamics of complex networks is a central problem in network science. Structural and dynamical similarities of different real networks suggest that some universal laws might accurately describe the dynamics of these networks, albeit the nature and common origin of such laws remain elusive. Here we show that the causal network representing the large-scale structure of spacetime in our accelerating universe is a power-law graph with strong clustering, similar to many complex networks such as the Internet, social, or biological networks. We prove that this structural similarity is a consequence of the asymptotic equivalence between the large-scale growth dynamics of complex networks and causal networks. This equivalence suggests that unexpectedly similar laws govern the dynamics of complex networks and spacetime in the universe, with implications to network science and cosmology.
Narlikar, Jayant Vishnu
2002-01-01
The third edition of this successful textbook is fully updated and includes important recent developments in cosmology. It begins with an introduction to cosmology and general relativity, and goes on to cover the mathematical models of standard cosmology. The physical aspects of cosmology, including primordial nucleosynthesis, the astroparticle physics of inflation, and the current ideas on structure formation are discussed. Alternative models of cosmology are reviewed, including the model of Quasi-Steady State Cosmology, which has recently been proposed as an alternative to Big Bang Cosmology.
Application of MCU to intelligent interface of high precision magnet power supply
Xu Ruinian; Li Deming
2004-01-01
The application of a high-capability MCU in an intelligent interface is introduced in this paper. A prototype intelligent interface for a high-precision large magnet power supply was developed successfully. This intelligent interface is composed of two parts, an operation panel and a main board, each of which adopts a PIC16F877 MCU. The interface has many advantages, such as small size, low cost and good interference immunity. (authors)
Cosmological phase transitions
Kolb, E.W.
1987-01-01
If the universe started from conditions of high temperature and density, there should have been a series of phase transitions associated with spontaneous symmetry breaking. The cosmological phase transitions could have observable consequences in the present Universe. Some of the consequences, including the formation of topological defects and cosmological inflation, are reviewed here. One of the most important tools in building particle physics models is the use of spontaneous symmetry breaking (SSB). The proposal that there are underlying symmetries of nature that are not manifest in the vacuum is a crucial link in the unification of forces. Of particular interest for cosmology is the expectation that at the high temperatures of the big bang, symmetries broken today will be restored, and that there are phase transitions to the broken state. The possibility that topological defects were produced in the transition is the subject of this section. The possibility that the Universe underwent inflation in a phase transition is the subject of the next section. Before discussing the creation of topological defects in the phase transition, some general aspects of high-temperature restoration of symmetry and the development of the phase transition are reviewed. 29 references, 1 figure, 1 table
Winrow, Edward G.; Chavez, Victor H.
2011-09-01
High-precision opto-mechanical structures have historically been plagued by high costs for both hardware and the associated alignment and assembly process. This problem is especially true for space applications where only a few production units are produced. A methodology for optical alignment and optical structure design is presented which shifts the mechanism of maintaining precision from tightly toleranced, machined flight hardware to reusable, modular tooling. Using the proposed methodology, optical alignment error sources are reduced by the direct alignment of optics through their surface retroreflections (pips) as seen through a theodolite. Optical alignment adjustments are actualized through motorized, sub-micron precision actuators in 5 degrees of freedom. Optical structure hardware costs are reduced through the use of simple shapes (tubes, plates) and repeated components. This approach produces significantly cheaper hardware and more efficient assembly without sacrificing alignment precision or optical structure stability. The design, alignment plan and assembly of a 4" aperture, carbon fiber composite, Schmidt-Cassegrain concept telescope is presented.
Development and simulation of microfluidic Wheatstone bridge for high-precision sensor
Shipulya, N D; Konakov, S A; Krzhizhanovskaya, V V
2016-01-01
In this work we present the results of analytical modeling and 3D computer simulation of a microfluidic Wheatstone bridge, which is used for high-accuracy measurements and precision instruments. We propose and simulate a new method for the bridge balancing process by changing the microchannel geometry. This process is based on the “etching in microchannel” technology we developed earlier (doi:10.1088/1742-6596/681/1/012035). Our method ensures precise control of the flow rate and flow direction in the bridge microchannel. The advantage of our approach is the ability to work without any control valves or other active electronic systems, which are usually used for bridge balancing. The geometrical configuration of the microchannels was selected based on analytical estimations. A detailed 3D numerical model was based on the Navier-Stokes equations for laminar fluid flow at low Reynolds numbers. We investigated the behavior of the Wheatstone bridge under different process conditions; found a relation between the channel resistance and the flow rate through the bridge; and calculated the pressure drop across the system under different total flow rates and viscosities. Finally, we describe a high-precision microfluidic pressure sensor that employs the Wheatstone bridge and discuss other applications in complex precision microfluidic systems. (paper)
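The electrical analogy behind a microfluidic Wheatstone bridge (pressure ↔ voltage, volumetric flow ↔ current) can be sketched in a few lines. This is a minimal model, not the paper's 3D Navier-Stokes simulation: it assumes laminar flow in circular channels so each arm reduces to a Hagen-Poiseuille resistance, and the bridge topology and variable names are illustrative:

```python
import math

def hp_resistance(mu, length, radius):
    """Hagen-Poiseuille hydraulic resistance of a circular channel:
    pressure drop = R * flow rate, with R = 8*mu*L / (pi * r**4)."""
    return 8.0 * mu * length / (math.pi * radius ** 4)

def bridge_flow(R1, R2, R3, R4, R5, p_in, p_out=0.0):
    """Volumetric flow through the bridge channel (resistance R5).

    Topology: the inlet splits into R1 -> node B and R3 -> node C;
    R2 and R4 drain B and C to the outlet; R5 joins B and C.
    """
    # Mass balance at nodes B and C gives a 2x2 linear system for pB, pC.
    a11 = 1.0 / R1 + 1.0 / R2 + 1.0 / R5
    a12 = -1.0 / R5
    a21 = -1.0 / R5
    a22 = 1.0 / R3 + 1.0 / R4 + 1.0 / R5
    b1 = p_in / R1 + p_out / R2
    b2 = p_in / R3 + p_out / R4
    det = a11 * a22 - a12 * a21
    pB = (b1 * a22 - b2 * a12) / det
    pC = (a11 * b2 - a21 * b1) / det
    return (pB - pC) / R5  # zero when R1/R2 == R3/R4 (balanced bridge)
```

Balancing by "etching in microchannel" then amounts to adjusting one arm's length or radius until `bridge_flow` crosses zero, with no valves or active electronics involved.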
High Precision Measurement of the differential vector boson cross-sections with the ATLAS detector
Armbruster, Aaron James; The ATLAS collaboration
2017-01-01
Measurements of the Drell-Yan production of W and Z/gamma* bosons at the LHC provide a benchmark of our understanding of perturbative QCD and probe the proton structure in a unique way. The ATLAS collaboration has performed new high precision measurements at a center-of-mass energy of 7 TeV. The measurements are performed for W+, W- and Z/gamma* bosons, integrated and as a function of the boson or lepton rapidity and the Z/gamma* mass. Unprecedented precision is reached and strong constraints on parton distribution functions, in particular on the strange density, are found. Z cross sections are also measured at center-of-mass energies of 8 TeV and 13 TeV, and cross-section ratios to the top-quark pair production have been derived. This ratio measurement leads to a cancellation of systematic effects and allows for a high precision comparison to the theory predictions. The cross section of single W events has also been measured precisely at center-of-mass energies of 8 TeV and 13 TeV and the W charge asymmetry has been determ...
A near infrared laser frequency comb for high precision Doppler planet surveys
Bally J.
2011-07-01
Perhaps the most exciting area of astronomical research today is the study of exoplanets and exoplanetary systems, engaging the imagination not just of the astronomical community, but of the general population. Astronomical instrumentation has matured to the level where it is possible to detect terrestrial planets orbiting distant stars via radial velocity (RV) measurements, with the most stable visible light spectrographs reporting RV results on the order of 1 m/s. This, however, is an order of magnitude away from the precision needed to detect an Earth analog orbiting a star such as our sun, the Holy Grail of these efforts. By performing these observations in the near infrared (NIR) there is the potential to simplify the search for distant terrestrial planets by studying cooler, less massive, much more numerous class M stars, with a tighter habitable zone and correspondingly larger RV signal. This NIR advantage is undone by the lack of a suitable high precision, high stability wavelength standard, limiting NIR RV measurements to tens or hundreds of m/s [1, 2]. With the improved spectroscopic precision provided by a laser frequency comb based wavelength reference producing a set of bright, densely and uniformly spaced lines, it will be possible to achieve up to two orders of magnitude improvement in RV precision, limited only by the precision and sensitivity of existing spectrographs, enabling the observation of Earth analogs through RV measurements. We discuss the laser frequency comb as an astronomical wavelength reference, and describe progress towards a near infrared laser frequency comb at the National Institute of Standards and Technology and at the University of Colorado where we are operating a laser frequency comb suitable for use with a high resolution H band astronomical spectrograph.
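The two relations underlying the comb-calibrated RV measurement described above fit in a few lines. This is a sketch under round-number assumptions (the 10 GHz spacing and H-band frequency below are illustrative, not this comb's actual parameters):

```python
C = 299_792_458.0  # speed of light, m/s

def comb_line(n, f_rep, f_ceo):
    """Frequency of comb mode n: f_n = f_ceo + n * f_rep, the defining
    relation of a self-referenced laser frequency comb."""
    return f_ceo + n * f_rep

def doppler_shift_hz(f0, v_radial):
    """Non-relativistic Doppler shift of a line at frequency f0 (Hz)
    for a radial velocity v_radial (m/s): df = f0 * v / c."""
    return f0 * v_radial / C
```

For an H-band line near 1.6 µm (f0 ≈ 1.87e14 Hz), a 1 m/s reflex motion shifts the line by only ~0.6 MHz against a mode spacing of several GHz, which is why a dense, uniformly spaced and highly stable calibration grid is needed to push RV precision toward the Earth-analog regime.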
Khim, Dongyoon; Ryu, Gi-Seong; Park, Won-Tae; Kim, Hyunchul; Lee, Myungwon; Noh, Yong-Young
2016-04-13
A uniform ultrathin polymer film is deposited over a large area with molecular-level precision by the simple wire-wound bar-coating method. The bar-coated ultrathin films exhibit not only high transparency of up to 90% in the visible wavelength range but also high charge carrier mobility with a high degree of percolation through the uniformly covered polymer nanofibrils. They are capable of realizing highly sensitive multigas sensors and represent the first successful report of ethylene detection using a sensor based on organic field-effect transistors. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
A Police and Insurance Joint Management System Based on High Precision BDS/GPS Positioning
Zuo, Wenwei; Guo, Chi; Liu, Jingnan; Peng, Xuan; Yang, Min
2018-01-01
Car ownership in China reached 194 million vehicles at the end of 2016. The traffic congestion index (TCI) exceeds 2.0 during rush hour in some cities. Inefficient processing for minor traffic accidents is considered to be one of the leading causes for road traffic jams. Meanwhile, the process after an accident is quite troublesome. The main reason is that it is almost always impossible to get the complete chain of evidence when the accident happens. Accordingly, a police and insurance joint management system is developed which is based on high precision BeiDou Navigation Satellite System (BDS)/Global Positioning System (GPS) positioning to process traffic accidents. First of all, an intelligent vehicle rearview mirror terminal is developed. The terminal applies a commonly used consumer electronic device with single frequency navigation. Based on the high precision BDS/GPS positioning algorithm, its accuracy can reach sub-meter level in the urban areas. More specifically, a kernel driver is built to realize the high precision positioning algorithm in an Android HAL layer. Thus the third-party application developers can call the general location Application Programming Interface (API) of the original standard Global Navigation Satellite System (GNSS) to get high precision positioning results. Therefore, the terminal can provide lane level positioning service for car users. Next, a remote traffic accident processing platform is built to provide big data analysis and management. According to the big data analysis of information collected by BDS high precision intelligent sense service, vehicle behaviors can be obtained. The platform can also automatically match and screen the data that uploads after an accident to achieve accurate reproduction of the scene. Thus, it helps traffic police and insurance personnel to complete remote responsibility identification and survey for the accident. Thirdly, a rapid processing flow is established in this article to meet the
High-precision two-dimensional atom localization via quantum interference in a tripod-type system
Wang, Zhiping; Yu, Benli
2014-01-01
A scheme is proposed for high-precision two-dimensional atom localization in a four-level tripod-type atomic system via measurement of the excited state population. It is found that because of the position-dependent atom–field interaction, the precision of 2D atom localization can be significantly improved by appropriately adjusting the system parameters. Our scheme may be helpful in laser cooling or atom nanolithography via high-precision and high-resolution atom localization. (letter)
Bluemlein, Johannes
2012-05-15
Precision measurements together with exact theoretical calculations have led to steady progress in fundamental physics. A brief survey is given on recent developments and current achievements in the field of perturbative precision calculations in the Standard Model of the Elementary Particles and their application in current high energy collider data analyses.
A high-precision instrument for analyzing nonlinear dynamic behavior of bearing cage
Yang, Z., E-mail: zhaohui@nwpu.edu.cn; Yu, T. [School of Aeronautics, Northwestern Polytechnical University, Xi’an 710072 (China); Chen, H. [Xi’an Aerospace Propulsion Institute, Xi’an 710100 (China); Li, B. [State Key Laboratory for Manufacturing and Systems Engineering, Xi’an Jiaotong University, Xi’an 710054 (China)
2016-08-15
The high-precision ball bearing is fundamental to the performance of complex mechanical systems. As the speed increases, the cage behavior becomes a key factor influencing the bearing performance, especially life and reliability. This paper develops a high-precision instrument for analyzing the nonlinear dynamic behavior of the bearing cage. The trajectory of the rotational center and the non-repetitive run-out (NRRO) of the cage are used to evaluate the instability of the cage motion. The instrument uses an aerostatic spindle to support and spin the test bearing, decreasing the influence of system error. A high-speed camera then captures images while the bearing runs at high speed, and the 3D trajectory tracking software TEMA Motion tracks a spot marked on the cage surface. Finally, a MATLAB program uses Lissajous figures to evaluate the nonlinear dynamic behavior of the cage at different speeds. The trajectory of the rotational center and the NRRO of the cage at various speeds are analyzed. The results can be used to predict initial failure and optimize cage structural parameters. In addition, the repeatability precision of the instrument is validated. In the future, a motorized spindle will be applied to increase the testing speed, and image processing algorithms will be developed to analyze the trajectory of the cage.
A Study of Particle Beam Spin Dynamics for High Precision Experiments
Fiedler, Andrew J. [Northern Illinois Univ., DeKalb, IL (United States)
2017-05-01
In the search for physics beyond the Standard Model, high precision experiments to measure fundamental properties of particles are an important frontier. One group of such measurements involves magnetic dipole moment (MDM) values as well as searching for an electric dipole moment (EDM), both of which could provide insights about how particles interact with their environment at the quantum level and whether there are undiscovered new particles. For these types of high precision experiments, minimizing statistical uncertainties in the measurements plays a critical role. This work leverages computer simulations to quantify the effects of statistical uncertainty for experiments investigating spin dynamics. In it, analysis of beam properties and lattice design effects on the polarization of the beam is performed. As a case study, the beam lines that will provide polarized muon beams to the Fermilab Muon g-2 experiment are analyzed to determine the effects of correlations between the phase space variables and the overall polarization of the muon beam.
Advances in the Control System for a High Precision Dissolved Organic Carbon Analyzer
Liao, M.; Stubbins, A.; Haidekker, M.
2017-12-01
Dissolved organic carbon (DOC) is a master variable in aquatic ecosystems. DOC in the ocean is one of the largest carbon stores on earth. Studies of the dynamics of DOC in the ocean and other low DOC systems (e.g. groundwater) are hindered by the lack of high precision (sub-micromolar) analytical techniques. Results are presented from efforts to construct and optimize a flow-through, wet chemical DOC analyzer. This study focused on the design, integration and optimization of high precision components and control systems required for such a system (mass flow controller, syringe pumps, gas extraction, reactor chamber with controlled UV and temperature). Results of the approaches developed are presented.
Averaging in spherically symmetric cosmology
Coley, A. A.; Pelavas, N.
2007-01-01
The averaging problem in cosmology is of fundamental importance. When applied to study cosmological evolution, the theory of macroscopic gravity (MG) can be regarded as a long-distance modification of general relativity. In the MG approach to the averaging problem in cosmology, the Einstein field equations on cosmological scales are modified by appropriate gravitational correlation terms. We study the averaging problem within the class of spherically symmetric cosmological models. That is, we shall take the microscopic equations and effect the averaging procedure to determine the precise form of the correlation tensor in this case. In particular, by working in volume-preserving coordinates, we calculate the form of the correlation tensor under some reasonable assumptions on the form of the inhomogeneous gravitational field and matter distribution. We find that the correlation tensor in a Friedmann-Lemaitre-Robertson-Walker (FLRW) background must be of the form of a spatial curvature. Inhomogeneities and spatial averaging, through this spatial curvature correction term, can have a very significant effect on the dynamics of the Universe and on cosmological observations; in particular, we discuss whether spatial averaging might lead to a more conservative explanation of the observed acceleration of the Universe (without the introduction of exotic dark matter fields). We also find that the correlation tensor for a non-FLRW background can be interpreted as the sum of a spatial curvature and an anisotropic fluid. This may lead to interesting effects of averaging on astrophysical scales. We also discuss the results of averaging an inhomogeneous Lemaitre-Tolman-Bondi solution as well as calculations of linear perturbations (that is, the backreaction) in an FLRW background, which support the main conclusions of the analysis.
Physics of Eclipsing Binaries: Modelling in the new era of ultra-high precision photometry
Pavlovski, K.; Bloemen, S.; Degroote, P.; Conroy, K.; Hambleton, Kelly; Giammarco, J.M.; Pablo, H.; Prša, A.; Tkachenko, A.; Torres, G.
2013-01-01
Recent ultra-high precision observations of eclipsing binaries, especially data acquired by the Kepler satellite, have made accurate light curve modelling increasingly challenging but also more rewarding. In this contribution, we discuss low-amplitude signals in light curves that can now be used to derive physical information about eclipsing binaries but that were inaccessible before the Kepler era. A notable example is the detection of Doppler beaming, which leads to an increase in flux when...
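The size of the Doppler-beaming signal mentioned above can be estimated with a one-line formula. This is a sketch: the beaming factor B depends on the spectral slope of the star in the observed band, and B = 4.0 below is an assumed round value for a Sun-like star, not a number from this contribution:

```python
C = 299_792_458.0  # speed of light, m/s

def beaming_amplitude(v_radial, B=4.0):
    """Fractional photometric signal dF/F from Doppler beaming,
    F = F0 * (1 - B * v_r / c).  B ~ 3 minus the spectral index;
    B = 4.0 is an illustrative assumption for a Sun-like star."""
    return B * v_radial / C
```

For an orbital radial velocity of 50 km/s this gives a modulation of roughly 7 parts in 10^4, comfortably above Kepler's photometric precision, which is why beaming became detectable only in the ultra-high precision era.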
The Multi-energy High precision Data Processor Based on AD7606
Zhao, Chen; Zhang, Yanchi; Xie, Da
2017-11-01
This paper designs an information collector based on the AD7606 to realize high-precision simultaneous acquisition of multi-source information in multi-energy systems, forming the information platform of the energy Internet at Laogang, with electricity as its major energy source. Combined with information fusion technologies, this paper analyzes the data to improve the overall energy system scheduling capability and reliability.
A Miniaturized Colorimeter with a Novel Design and High Precision for Photometric Detection
Jun-Chao Yan; Yan Chen; Yu Pang; Jan Slavik; Yun-Fei Zhao; Xiao-Ming Wu; Yi Yang; Si-Fan Yang; Tian-Ling Ren
2018-01-01
Water quality detection plays an increasingly important role in environmental protection. In this work, a novel colorimeter based on the Beer-Lambert law was designed for chemical element detection in water with high precision and a miniaturized structure. As an example, phosphorus detection was carried out to evaluate the performance. Simultaneously, a modified algorithm was applied to extend the linear measurable range. The colorimeter encompassed a ne...
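The Beer-Lambert law at the core of the colorimeter reduces to two small formulas. This is a generic sketch of the law, not the paper's modified extended-range algorithm; the numeric values in the usage note are illustrative:

```python
import math

def absorbance(I, I0):
    """Absorbance from transmitted (I) and incident (I0) intensity:
    A = -log10(I / I0)."""
    return -math.log10(I / I0)

def concentration(A, epsilon, path_length):
    """Beer-Lambert law A = epsilon * l * c, solved for concentration c.
    epsilon is the molar absorptivity, path_length the optical path l."""
    return A / (epsilon * path_length)
```

For instance, if the cell transmits 10% of the incident light (A = 1) through a 1 cm path and the absorbing complex has epsilon = 2.0e4 L/(mol*cm), the implied concentration is 5.0e-5 mol/L; the linear range of such a device is bounded by stray light and detector noise, which is what extended-range algorithms compensate for.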
The honeycomb strip chamber: A two coordinate and high precision muon detector
Tolsma, H.P.T.
1996-01-01
This thesis describes the construction and performance of the Honeycomb Strip Chamber (HSC). The HSC offers several advantages with respect to classical drift chambers and drift tubes. The main features of the HSC are: - The detector offers the possibility of simultaneous readout of two orthogonal coordinates with approximately the same precision. - The HSC technology is optimised for mass production. This means that the design is modular (monolayers) and automation of most of the production steps is possible (folding and welding machines). - The technology is flexible. The cell diameter can easily be changed from a few millimetres to at least 20 mm by changing the parameters in the computer programme of the folding machine. The number of monolayers per station can be chosen freely according to the demands of the experiment. - The honeycomb structure gives the detector stiffness and makes it self-supporting. This makes the technology a very transparent one in terms of radiation length, which is important to prevent multiple scattering of highly energetic muons. - The dimensions of the detector are defined by high precision templates. Those templates constrain, for example, the overall tolerance on the wire positions to 20 μm rms. Reproduction of the high precision assembly of the detector is thus guaranteed. (orig.)
Using cold deformation methods in flow-production of steel high precision shaped sections
Zajtsev, M.L.; Makhnev, I.F.; Shkurko, I.I.
1975-01-01
A final size with a preset tolerance and a required surface finish of steel high-precision sections can be achieved by cold deformation of hot-rolled ingots: by drawing through dismountable, monolithic or roller-type drawing tools, or by cold rolling in roller dies. The particularities of both techniques are compared for a number of complicated shaped sections, and the advantages of cold rolling are shown: a more uniform distribution of deformations (strain hardening) across the section, that is, a greater margin of plasticity at the same reductions, and fewer operations required. Rolling is recommended in all cases where the section shape and the production volume permit. The rolling mill for the calibration of high-precision sections should have no fewer than two shafts (so that the size can be controlled in both directions) and arrangements to withstand the high axial stresses on the rollers (stresses appearing during rolling in skew dies). When manufacturing precise shaped sections by the cold rolling method, fewer operations are required than in cold drawing manufacturing.
High-Precision Phenotyping of Grape Bunch Architecture Using Fast 3D Sensor and Automation
Florian Rist
2018-03-01
Wine growers prefer cultivars with looser bunch architecture because of the decreased risk of bunch rot. As a consequence, grapevine breeders have to select seedlings and new cultivars with regard to appropriate bunch traits. Bunch architecture is a mosaic of different single traits, which makes phenotyping labor-intensive and time-consuming. In the present study, a fast and high-precision phenotyping pipeline was developed. The optical sensor Artec Spider 3D scanner (Artec 3D, L-1466, Luxembourg) was used to generate dense 3D point clouds of grapevine bunches under lab conditions, and an automated analysis software called 3D-Bunch-Tool was developed to extract different single 3D bunch traits, i.e., the number of berries, berry diameter, single berry volume, total volume of berries, convex hull volume of grapes, bunch width and bunch length. The method was validated on whole bunches of different grapevine cultivars and phenotypically variable breeding material. Reliable phenotypic data were obtained which show highly significant correlations (up to r² = 0.95 for berry number) compared to ground truth data. Moreover, it was shown that the Artec Spider can be used directly in the field, where the achieved data show comparable precision with regard to the lab application. This non-invasive and non-contact field application facilitates the first high-precision phenotyping pipeline based on 3D bunch traits in large plant sets.
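The per-berry traits listed above reduce to simple geometry once the berries have been segmented from the point cloud. This sketch only covers that last arithmetic step (spherical-berry assumption, hypothetical diameter values); it is not the 3D-Bunch-Tool itself, which works directly on dense 3D scans:

```python
import math

def berry_volume(d):
    """Volume of a single berry modelled as a sphere of diameter d."""
    return math.pi * d ** 3 / 6.0

def bunch_traits(diameters):
    """Aggregate simple bunch traits from a list of per-berry diameters
    (all in the same unit, e.g. mm)."""
    vols = [berry_volume(d) for d in diameters]
    return {
        "berry_count": len(vols),
        "mean_diameter": sum(diameters) / len(diameters),
        "total_berry_volume": sum(vols),
    }
```

Traits like convex hull volume, bunch width and bunch length require the full point cloud rather than per-berry diameters, which is where the 3D scanner and automated segmentation earn their keep.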
Concept of modular flexure-based mechanisms for ultra-high precision robot design
M. Richard
2011-05-01
This paper introduces a new concept of modular flexure-based mechanisms to design industrial ultra-high precision robots, which aims at significantly reducing both the complexity of their design and their development time. This modular concept can be considered as a robotic Lego, where a finite number of building bricks is used to quickly build a high-precision robot. The core of the concept is the transformation of a 3-D design problem into several 2-D ones, which are simpler and well-mastered. This paper will first briefly present the theoretical bases of this methodology and the requirements of both types of building bricks: the active and the passive bricks. The section dedicated to the design of the active bricks will detail the current research directions, mainly the maximisation of the strokes and the development of an actuation sub-brick. As for the passive bricks, some examples will be presented, and a discussion regarding the establishment of a mechanical solution catalogue will conclude the section. Last, this modular concept will be illustrated with a practical example, consisting of the design of a 5-degree-of-freedom ultra-high precision robot.
Recent developments for high-precision mass measurements of the heaviest elements at SHIPTRAP
Minaya Ramirez, E.; Ackermann, D.; Blaum, K.; Block, M.; Droese, C.; Düllmann, Ch. E.; Eibach, M.; Eliseev, S.; Haettner, E.; Herfurth, F.; Heßberger, F.P.
2013-01-01
Highlights: • Direct high-precision mass measurements of No and Lr isotopes performed. • High-precision mass measurements with a count rate of 1 ion/hour demonstrated. • The results provide anchor points for a large region connected by alpha-decay chains. • The binding energies determine the strength of the deformed shell closure N = 152. • Technical developments and new techniques will pave the way towards heavier elements. -- Abstract: Atomic nuclei far from stability continue to challenge our understanding. For example, theoretical models have predicted an “island of stability” in the region of the superheavy elements due to the closure of spherical proton and neutron shells. Depending on the model, these are expected at Z = 114, 120 or even 126 and N = 172 or 184. Valuable information on the road to the island of stability is derived from high-precision mass measurements, which give direct access to binding energies of short-lived trans-uranium nuclei. Recently, direct mass measurements at SHIPTRAP have been extended to nobelium and lawrencium isotopes around the deformed shell gap N = 152. In order to further extend mass measurements to the region of superheavy elements, new technical developments are required to increase the performance of our setup. The sensitivity will increase through the implementation of a new detection method, where observation of one single ion is sufficient. Together with the use of a more efficient gas stopping cell, this will allow us to significantly enhance the overall efficiency of SHIPTRAP
High-precision comparison of the antiproton-to-proton charge-to-mass ratio.
Ulmer, S; Smorra, C; Mooser, A; Franke, K; Nagahama, H; Schneider, G; Higuchi, T; Van Gorp, S; Blaum, K; Matsuda, Y; Quint, W; Walz, J; Yamazaki, Y
2015-08-13
Invariance under the charge, parity, time-reversal (CPT) transformation is one of the fundamental symmetries of the standard model of particle physics. This CPT invariance implies that the fundamental properties of antiparticles and their matter conjugates are identical, apart from signs. There is a deep, although model-dependent, link between CPT invariance and Lorentz symmetry, that is, the apparent invariance of the laws of nature under symmetry transformations of spacetime. A number of high-precision CPT and Lorentz invariance tests, using a co-magnetometer, a torsion pendulum and a maser, among others, have been performed, but only a few direct high-precision CPT tests that compare the fundamental properties of matter and antimatter are available. Here we report high-precision cyclotron frequency comparisons of a single antiproton and a negatively charged hydrogen ion (H⁻) carried out in a Penning trap system. From 13,000 frequency measurements we compare the charge-to-mass ratio of the antiproton, (q/m)p̄, to that of the proton, (q/m)p, and obtain (q/m)p̄/(q/m)p − 1 = 1(69) × 10⁻¹². The measurements were performed at cyclotron frequencies of 29.6 megahertz, so our result shows that the CPT theorem holds at the atto-electronvolt scale. Our precision of 69 parts per trillion exceeds the energy resolution of previous antiproton-to-proton mass comparisons, as well as the respective figure of merit of the standard-model extension, by a factor of four. In addition, we give a limit on sidereal variations in the measured ratio, and for baryonic antimatter we set a new limit on the gravitational anomaly parameter of |α − 1| < 8.7 × 10⁻⁷.
Progress Towards a High-Precision Infrared Spectroscopic Survey of the H_3^+ Ion
Perry, Adam J.; Hodges, James N.; Markus, Charles R.; Kocheril, G. Stephen; Jenkins, Paul A., II; McCall, Benjamin J.
2015-06-01
The trihydrogen cation, H_3^+, represents one of the most important and fundamental molecular systems. Having only two electrons and three nuclei, H_3^+ is the simplest polyatomic system and is a key testing ground for the development of new techniques for calculating potential energy surfaces and predicting molecular spectra. Corrections that go beyond the Born-Oppenheimer approximation, including adiabatic, non-adiabatic, relativistic, and quantum electrodynamic corrections, are becoming more feasible to calculate. As a result, experimental measurements performed on the H_3^+ ion serve as important benchmarks which are used to test the predictive power of new computational methods. By measuring many infrared transitions with precision at the sub-MHz level it is possible to construct a list of the most highly precise experimental rovibrational energy levels for this molecule. Until recently, only a select handful of infrared transitions of this molecule had been measured with high precision (~1 MHz). Using the technique of Noise Immune Cavity Enhanced Optical Heterodyne Velocity Modulation Spectroscopy, we are aiming to produce the largest high-precision spectroscopic dataset for this molecule to date. Presented here are the current results from our survey, along with a discussion of the combination-differences analysis used to extract the experimentally determined rovibrational energy levels. O. Polyansky, et al., Phil. Trans. R. Soc. A (2012), 370, 5014; M. Pavanello, et al., J. Chem. Phys. (2012), 136, 184303; L. Diniz, et al., Phys. Rev. A (2013), 88, 032506; L. Lodi, et al., Phys. Rev. A (2014), 89, 032505; J. Hodges, et al., J. Chem. Phys. (2013), 139, 164201.
Dimensional cosmological principles
Chi, L.K.
1985-01-01
The dimensional cosmological principles proposed by Wesson require that the density, pressure, and mass of cosmological models be functions of the dimensionless variables which are themselves combinations of the gravitational constant, the speed of light, and the spacetime coordinates. The space coordinate is not the comoving coordinate. In this paper, the dimensional cosmological principle and the dimensional perfect cosmological principle are reformulated by using the comoving coordinate. The dimensional perfect cosmological principle is further modified to allow the possibility that mass creation may occur. Self-similar spacetimes are found to be models obeying the new dimensional cosmological principle
Cosmology and particle physics
Turner, M.S.
1985-01-01
The author reviews the standard cosmology, focusing on primordial nucleosynthesis, and discusses how the standard cosmology has been used to place constraints on the properties of various particles. Baryogenesis is examined, in which the B, C, and CP violating interactions in GUTs provide a dynamical explanation for the predominance of matter over antimatter and the present baryon-to-photon ratio. Monopoles, cosmology and astrophysics are reviewed. The author also discusses supersymmetry/supergravity and cosmology, superstrings and cosmology in extra dimensions, and axions, astrophysics, and cosmology.
Calibration of the precision high voltage dividers of the KATRIN experiment
Rest, Oliver [Institut fuer Kernphysik, Westfaelische Wilhelms-Universitaet Muenster (Germany); Collaboration: KATRIN-Collaboration
2016-07-01
The KATRIN (KArlsruhe TRItium Neutrino) experiment will measure the endpoint region of the tritium β decay spectrum to determine the neutrino mass with a sensitivity of 200 meV/c². To achieve this sub-eV sensitivity the energy of the decay electrons will be analyzed using a MAC-E type spectrometer. The retarding potential of the MAC-E filter (up to −35 kV) has to be monitored with a relative precision of 3 × 10⁻⁶. For this purpose the potential will be measured directly via two custom-made precision high voltage dividers, which were developed and constructed in cooperation with the Physikalisch-Technische Bundesanstalt Braunschweig. In order to determine the absolute values and the stability of the scale factors of the voltage dividers, regular calibration measurements are essential. Such measurements have been performed during the last years using several different methods. The poster gives an overview of the methods and results of the calibration of the precision high voltage dividers.
A six-bank multi-leaf system for high precision shaping of large fields
Topolnjak, R; Heide, U A van der; Raaymakers, B W; Kotte, A N T J; Welleweerd, J; Lagendijk, J J W
2004-01-01
In this study, we present the design for an alternative MLC system that allows high-precision shaping of large fields. The MLC system consists of three layers of two opposing leaf banks. The layers are rotated 60° relative to each other. The leaves in each bank have a standard width of 1 cm projected at the isocentre. Because of the symmetry of the collimator set-up it is expected that collimator rotation will not be required, thus simplifying the construction considerably. A 3D ray-tracing computer program was developed in order to simulate the fluence profile for a given collimator, and was used to optimize the design and investigate its performance. The simulations show that a six-bank collimator will afford shaping of fields of about 40 cm diameter with a precision comparable to that of existing mini-MLCs with a leaf width of 4 mm.
High-Precision Measurements of the Bound Electron’s Magnetic Moment
Sven Sturm
2017-01-01
Highly charged ions represent environments that allow one to study precisely one or more bound electrons subjected to unsurpassed electromagnetic fields. Under such conditions, the magnetic moment (g-factor) of a bound electron changes significantly, to a large extent due to contributions from quantum electrodynamics. We present three Penning-trap experiments, which allow us to measure magnetic moments with ppb precision and better, serving as stringent tests of the corresponding calculations and also yielding access to fundamental quantities like the fine structure constant α and the atomic mass of the electron. Additionally, the bound electrons can be used as sensitive probes for properties of the ionic nuclei. We summarize the measurements performed so far, discuss their significance, and give a detailed account of the experimental setups, procedures and the foreseen measurements.
A High-Precision Branching-Ratio Measurement for the Superallowed β+ Emitter 74Rb
Dunlop, R.; Chagnon-Lessard, S.; Finlay, P.; Garrett, P. E.; Hadinia, B.; Leach, K. G.; Svensson, C. E.; Wong, J.; Ball, G.; Garnsworthy, A. B.; Glister, J.; Hackman, G.; Tardiff, E. R.; Triambak, S.; Williams, S. J.; Leslie, J. R.; Andreoiu, C.; Chester, A.; Cross, D.; Starosta, K.; Yates, S. W.; Zganjar, E. F.
2013-03-01
Precision measurements of superallowed Fermi beta decay allow for tests of the unitarity of the Cabibbo-Kobayashi-Maskawa (CKM) matrix, the conserved vector current hypothesis, and the magnitude of isospin-symmetry-breaking effects in nuclei. A high-precision measurement of the branching ratio for the β+ decay of 74Rb has been performed at the Isotope Separator and ACcelerator (ISAC) facility at TRIUMF. The 8π spectrometer, an array of 20 close-packed HPGe detectors, was used to detect gamma rays emitted following the decay of 74Rb. PACES, an array of 5 Si(Li) detectors, was used to detect emitted conversion electrons, while SCEPTAR, an array of plastic scintillators, was used to detect emitted beta particles. A total of 51 γ rays have been identified following the decay of 21 excited states in the daughter nucleus 74Kr.
Computer-controlled detection system for high-precision isotope ratio measurements
McCord, B.R.; Taylor, J.W.
1986-01-01
In this paper the authors describe a detection system for high-precision isotope ratio measurements. In this new system, the requirement for a ratioing digital voltmeter has been eliminated, and a standard digital voltmeter interfaced to a computer is employed. Instead of measuring the ratio of the two steadily increasing output voltages simultaneously, the digital voltmeter alternately samples the outputs at a precise rate over a certain period of time. The data are sent to the computer which calculates the rate of charge of each amplifier and divides the two rates to obtain the isotopic ratio. These results simulate a coincident measurement of the output of both integrators. The charge rate is calculated by using a linear regression method, and the standard error of the slope gives a measure of the stability of the system at the time the measurement was taken
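The slope-ratio scheme described above lends itself to a compact numerical sketch. The following is a minimal illustration (not the authors' code; the sample times, ramp rates, and channel names are assumptions for demonstration): each channel's charge rate is recovered by an ordinary least-squares fit, and the two slopes are divided to simulate a coincident ratio measurement.

```python
def slope(xs, ys):
    """Ordinary least-squares slope of ys against xs."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

def isotope_ratio(times, v_minor, v_major):
    """Divide the two fitted charge rates, simulating a coincident
    measurement of both integrator outputs."""
    return slope(times, v_minor) / slope(times, v_major)

# Synthetic samples of two linearly ramping integrator outputs
t = [0.05 * i for i in range(200)]
v_major = [2.0 * x for x in t]       # major-isotope channel, V
v_minor = [0.0224 * x for x in t]    # minor-isotope channel, V
print(round(isotope_ratio(t, v_minor, v_major), 6))  # 0.0112
```

The standard error of each fitted slope could likewise be propagated to give the stability figure mentioned in the abstract.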
Françoise Benz
2002-01-01
17, 18, 19 June LECTURE SERIES from 11.00 to 12.00 hrs - Auditorium, bldg. 500 Probing nature with high precision: particle traps, laser spectroscopy and optical combs by G. GABRIELSE / Harvard University, USA Experiments with atomic energy scales probe nature and its symmetries with exquisite precision. Particle traps allow the manipulation of single charged particles for months at a time, allow the most accurate comparison of theory and experiment, and promise to allow better measurement of fundamental quantities like the fine structure constant. Ions and atoms can be probed with lasers that are phase locked to microwave frequency standards via optical combs, thus calibrating optical sources in terms of the official cesium second. A series of three lectures will illustrate what can be measured and discuss key techniques. ACADEMIC TRAINING Françoise Benz Tel. 73127 francoise.benz@cern.ch
Design of High-Precision Infrared Multi-Touch Screen Based on the EFM32
Zhong XIAOLING
2014-07-01
Due to the low accuracy of traditional infrared multi-touch screens, it is difficult to locate the touch point precisely. We put forward a design scheme for a high-precision infrared multi-touch screen based on an ARM Cortex-M3 EFM32 processor. A tracking scanning-area algorithm, applied from the first scan after power-up, greatly improves the scanning efficiency and response speed. Based on differences in infrared characteristics, we propose a data-fitting algorithm: the subtraction relationship between the covered area and the sampled value is used for curve fitting, the characteristic curve of the sampled-value difference is derived, and a differential table of sampled values is established, finally yielding the precise location of the touch point. Practice has shown that the accuracy of the infrared touch screen can reach 0.5 mm. The design uses a standard USB port to connect to a PC and can be widely used in various terminals.
Proposal for the determination of nuclear masses by high-precision spectroscopy of Rydberg states
Wundt, B J; Jentschura, U D
2010-01-01
The theoretical treatment of Rydberg states in one-electron ions is facilitated by the virtual absence of the nuclear-size correction, and fundamental constants like the Rydberg constant may be in the reach of planned high-precision spectroscopic experiments. The dominant nuclear effect that shifts transition energies among Rydberg states is therefore due to the nuclear mass. As a consequence, spectroscopic measurements of Rydberg transitions can be used in order to precisely deduce nuclear masses. A possible application of this approach to hydrogen and deuterium, and to hydrogen-like lithium and carbon, is explored in detail. In order to complete the analysis, numerical and analytic calculations of the quantum electrodynamic self-energy remainder function for states with principal quantum number n = 5, ..., 8 and with angular momentum l = n − 1 and l = n − 2 (j = l ± 1/2) are described.
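The nuclear-mass dependence exploited in this proposal enters through the reduced-mass scaling of the Rydberg constant. A minimal sketch (illustrative only; the rounded constants and the chosen transition are assumptions, not values from the paper):

```python
R_INF = 10973731.568     # Rydberg constant, m^-1
M_E_U = 5.48579909e-4    # electron mass in atomic mass units

def rydberg_transition(Z, M_u, n_low, n_high):
    """Leading-order wavenumber (m^-1) of an n_high -> n_low transition
    in a one-electron ion of charge Z and nuclear mass M_u (in u); the
    reduced-mass factor carries the nuclear-mass sensitivity."""
    R_M = R_INF / (1.0 + M_E_U / M_u)
    return R_M * Z ** 2 * (1.0 / n_low ** 2 - 1.0 / n_high ** 2)

# Fractional shift of the same transition between H and D: the isotope
# shift from which a nuclear mass can, in principle, be deduced
nu_h = rydberg_transition(1, 1.00783, 5, 6)
nu_d = rydberg_transition(1, 2.01410, 5, 6)
print((nu_d - nu_h) / nu_h)  # ~2.7e-4
```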
Design and Manufacturing of a High-Precision Sun Tracking System Based on Image Processing
Kianoosh Azizi
2013-01-01
Concentration solar arrays require greater solar tracking precision than conventional photovoltaic arrays. This paper presents a high-precision, low-cost, dual-axis sun tracking system based on image processing for concentration photovoltaic applications. An imaging device is designed according to the principle of pinhole imaging: sun rays pass through the pinhole and form a sun spot on a screen. The location of the spot is used to adjust the orientation of the solar panel, and a fuzzy logic controller is developed to achieve this goal. A prototype was built, and experimental results have proven the good performance of the proposed system and its low tracking error. The operation of this system is independent of geographical location, initial calibration, and periodical regulations.
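The pinhole geometry above maps a spot displacement on the screen directly to a pointing error. A minimal sketch (the distances are illustrative assumptions, not dimensions from the paper):

```python
import math

def misalignment_deg(spot_x_mm, spot_y_mm, pinhole_to_screen_mm):
    """Two-axis pointing error of the panel from the sun-spot offset on
    the screen, using simple pinhole-camera geometry."""
    az = math.degrees(math.atan2(spot_x_mm, pinhole_to_screen_mm))
    el = math.degrees(math.atan2(spot_y_mm, pinhole_to_screen_mm))
    return az, el

# A 1 mm spot offset at a 100 mm pinhole-to-screen distance
print(misalignment_deg(1.0, 0.0, 100.0))  # about (0.573, 0.0) degrees
```

A fuzzy (or even plain proportional) controller would then drive the two axes until both angles return to zero.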
High spatial precision nano-imaging of polarization-sensitive plasmonic particles
Liu, Yunbo; Wang, Yipei; Lee, Somin Eunice
2018-02-01
Precise polarimetric imaging of polarization-sensitive nanoparticles is essential for resolving their accurate spatial positions beyond the diffraction limit. However, conventional technologies currently suffer from beam deviation errors which cannot be corrected beyond the diffraction limit. To overcome this issue, we experimentally demonstrate a spatially stable nano-imaging system for polarization-sensitive nanoparticles. In this study, we show that by integrating a voltage-tunable imaging variable polarizer with optical microscopy, we are able to suppress beam deviation errors. We expect that this nano-imaging system should allow for acquisition of accurate positional and polarization information from individual nanoparticles in applications where real-time, high precision spatial information is required.
Coded aperture detector for high precision gamma-ray burst source locations
Helmken, H.; Gorenstein, P.
1977-01-01
Coded aperture collimators in conjunction with position-sensitive detectors are very useful in the study of transient phenomena because they combine broad field of view, high sensitivity, and an ability for precise source locations. Since the preceding conference, a series of computer simulations of various detector designs has been carried out with the aid of a CDC 6400. Particular emphasis was placed on the development of a unit consisting of a one-dimensional random or periodic collimator in conjunction with a two-dimensional position-sensitive xenon proportional counter. A configuration involving four of these units has been incorporated into the preliminary design study of the Transient Explorer (ATREX) satellite and is applicable to any SAS or HEAO type satellite mission. Results of this study, including detector response, fields of view, and source location precision, will be presented.
Fabrication of high precision metallic freeform mirrors with magnetorheological finishing (MRF)
Beier, Matthias; Scheiding, Sebastian; Gebhardt, Andreas; Loose, Roman; Risse, Stefan; Eberhardt, Ramona; Tünnermann, Andreas
2013-09-01
The fabrication of complex shaped metal mirrors for optical imaging is a classical application area of diamond machining techniques. Aspherical and freeform shaped optical components up to several hundred millimetres in diameter can be manufactured with high precision in an acceptable amount of time. However, applications are naturally limited to the infrared spectral region due to scatter losses at shorter wavelengths caused by the remaining periodic diamond turning structure. Achieving diffraction-limited performance in the visible spectrum demands the application of additional polishing steps. Magnetorheological Finishing (MRF) is a powerful tool to improve figure and finish of complex shaped optics at the same time in a single processing step. The application of MRF as a figuring tool for precise metal mirrors is a nontrivial task, since the technology was primarily developed for figuring and finishing a variety of other optical materials, such as glasses or glass ceramics. In the presented work, MRF is used as a figuring tool for diamond turned aluminum lightweight mirrors with electroless nickel plating. It is applied as a direct follow-up process after diamond machining of the mirrors. A high precision measurement setup, composed of an interferometer and an advanced Computer Generated Hologram with additional alignment features, allows for precise metrology of the freeform shaped optics in short measuring cycles. Shape deviations of less than 150 nm PV / 20 nm rms are achieved reliably for freeform mirrors with apertures of more than 300 mm. Characterization of removable and induced spatial frequencies is carried out by investigating the Power Spectral Density.
Computational Calorimetry: High-Precision Calculation of Host–Guest Binding Thermodynamics
2015-01-01
We present a strategy for carrying out high-precision calculations of binding free energy and binding enthalpy values from molecular dynamics simulations with explicit solvent. The approach is used to calculate the thermodynamic profiles for binding of nine small molecule guests to either the cucurbit[7]uril (CB7) or β-cyclodextrin (βCD) host. For these systems, calculations using commodity hardware can yield binding free energy and binding enthalpy values with a precision of ∼0.5 kcal/mol (95% CI) in a matter of days. Crucially, the self-consistency of the approach is established by calculating the binding enthalpy directly, via end point potential energy calculations, and indirectly, via the temperature dependence of the binding free energy, i.e., by the van’t Hoff equation. Excellent agreement between the direct and van’t Hoff methods is demonstrated for both host–guest systems and an ion-pair model system for which particularly well-converged results are attainable. Additionally, we find that hydrogen mass repartitioning allows marked acceleration of the calculations with no discernible cost in precision or accuracy. Finally, we provide guidance for accurately assessing numerical uncertainty of the results in settings where complex correlations in the time series can pose challenges to statistical analysis. The routine nature and high precision of these binding calculations opens the possibility of including measured binding thermodynamics as target data in force field optimization so that simulations may be used to reliably interpret experimental data and guide molecular design. PMID:26523125
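The van't Hoff consistency check described above can be sketched numerically. A minimal illustration (not the paper's code; the ΔG values are synthetic, generated under the stated assumption that ΔH and ΔS are constant over the temperature range):

```python
def vant_hoff_enthalpy(temps_K, dG):
    """Fit dG(T) = dH - T*dS by least squares; the intercept is the
    binding enthalpy dH (assumes dH and dS constant over the range)."""
    n = len(temps_K)
    mt = sum(temps_K) / n
    mg = sum(dG) / n
    s = sum((t - mt) * (g - mg) for t, g in zip(temps_K, dG)) \
        / sum((t - mt) ** 2 for t in temps_K)
    return mg - s * mt   # intercept = dH

# Synthetic free energies from dH = -10.0 kcal/mol, dS = -0.02 kcal/mol/K
temps = [280.0, 290.0, 300.0, 310.0, 320.0]
dG = [-10.0 + 0.02 * t for t in temps]   # dG = dH - T*dS
print(round(vant_hoff_enthalpy(temps, dG), 6))  # -10.0
```

Agreement between this indirect estimate and a direct end-point enthalpy calculation is the self-consistency test the abstract describes.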
Calaon, Matteo; Tosello, Guido; Elsborg, René
2016-01-01
The mass-replication nature of the process calls for fast monitoring of process parameters and product geometrical characteristics. In this direction, the present study addresses the possibility to develop a micro manufacturing platform for micro assembly injection moulding with real-time process/product monitoring and metrology. The study represents a new concept yet to be developed with great potential for high-precision mass-manufacturing of highly functional 3D multi-material (i.e. including metal/soft polymer) micro components. The activities related to HINMICO project objectives prove the importance...
High precision wavelength measurements of X-ray lines emitted from TS-Tokamak plasmas
Platz, P. [Association Euratom-CEA, Centre d'Etudes de Cadarache, 13 - Saint-Paul-lez-Durance (France). Dept. de Recherches sur la Fusion Controlee; Cornille, M.; Dubau, J. [Observatoire de Paris, 92 - Meudon (France)
1996-01-01
X-ray line spectra from highly charged impurity ions have been taken with a high-resolution Bragg-crystal spectrometer on the Tore Supra (TS) tokamak. By cross-checking the wavelengths of reference lines from the heliumlike ions Ti20+ (2.6 Å) and Ar16+ (3.95 Å) we first demonstrate that it is possible to measure wavelengths with a precision, λ/δλ, of better than 50000. We then determine the wavelengths of n=3 to n=2 transitions of neonlike Ag37+ in the 4 Å spectral range. (authors). 16 refs., 7 figs., 3 tabs.
Studies Of Submicron 3He Slabs Using A High Precision Torsional Oscillator
Corcoles, Antonio; Casey, Andrew; Cowan, Brian; Saunders, John; Parpia, Jeevak; Bowley, Roger
2006-01-01
A high precision torsional oscillator has been used to study 3He films of thickness in the range 100 to 350 nm. In previous work we found that the films decoupled from the oscillator motion below 60 mK, in the Knudsen limit. This precluded observation of the superfluid transition. Here we report measurements using a torsional oscillator whose highly polished inner surfaces have been decorated with a low density of silver particles to act as random elastic scattering centres. This modification locks the normal film to the surface. A superfluid transition of the film is observed
Measurement of high-mass dilepton production with the CMS-TOTEM Precision Proton Spectrometer
Shchelina, Ksenia
2017-01-01
The measurements of dilepton production in photon-photon fusion with the CMS-TOTEM Precision Proton Spectrometer (CT-PPS) are presented. For the first time, exclusive dilepton production at high masses has been observed in the CMS detector while one or two outgoing protons are measured in CT-PPS, using around 10 fb⁻¹ of data accumulated in 2016 during high-luminosity LHC operation. These first results show a good understanding, calibration and alignment of the new CT-PPS detectors installed in 2016.
The Megamaser Cosmology Project. X. High-resolution Maps and Mass Constraints for SMBHs
Zhao, W.; Braatz, J. A.; Condon, J. J.; Lo, K. Y.; Reid, M. J.; Henkel, C.; Pesce, D. W.; Greene, J. E.; Gao, F.; Kuo, C. Y.; Impellizzeri, C. M. V.
2018-02-01
We present high-resolution (sub-mas) Very Long Baseline Interferometry maps of nuclear H2O megamasers for seven galaxies. In UGC 6093, the well-aligned systemic masers and high-velocity masers originate in an edge-on, flat disk and we determine the mass of the central supermassive black hole (SMBH) to be M_SMBH = 2.58 × 10⁷ M_⊙ (±7%). For J1346+5228, the distribution of masers is consistent with a disk, but the faint high-velocity masers are only marginally detected, and we constrain the mass of the SMBH to be in the range (1.5–2.0) × 10⁷ M_⊙. The origin of the masers in Mrk 1210 is less clear, as the systemic and high-velocity masers are misaligned and show a disorganized velocity structure. We present one possible model in which the masers originate in a tilted, warped disk, but we do not rule out the possibility of other explanations including outflow masers. In NGC 6926, we detect a set of redshifted masers, clustered within a parsec of each other, and a single blueshifted maser about 4.4 pc away, an offset that would be unusually large for a maser disk system. Nevertheless, if it is a disk system, we estimate the enclosed mass to be M_SMBH < 4.8 × 10⁷ M_⊙. For NGC 5793, we detect redshifted masers spaced about 1.4 pc from a clustered set of blueshifted features. The orientation of the structure supports a disk scenario as suggested by Hagiwara et al. We estimate the enclosed mass to be M_SMBH < 1.3 × 10⁷ M_⊙. For NGC 2824 and J0350‑0127, the masers may be associated with parsec- or subparsec-scale jets or outflows.
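The SMBH masses quoted above follow from Keplerian rotation of the maser disk. A minimal sketch (the orbital speed and radius are illustrative assumptions, not values from the paper):

```python
G_PC = 4.30091e-3  # Newton's constant in pc * (km/s)^2 / Msun

def enclosed_mass_msun(v_km_s, r_pc):
    """Keplerian enclosed mass M = v^2 r / G from the orbital speed of a
    high-velocity maser feature at radius r in an edge-on disk."""
    return v_km_s ** 2 * r_pc / G_PC

# A maser feature orbiting at 700 km/s at 0.2 pc implies roughly 2.3e7 Msun,
# the same order as the disk systems mapped in this survey
print(f"{enclosed_mass_msun(700.0, 0.2):.3g}")
```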
Evaluating Galactic Habitability Using High Resolution Cosmological Simulations of Galaxy Formation
Forgan, Duncan; Dayal, Pratika; Cockell, Charles; Libeskind, Noam
2015-01-01
We present the first model that couples high-resolution simulations of the formation of local group galaxies with calculations of the galactic habitable zone (GHZ), a region of space which...
A New High-Precision Correction Method of Temperature Distribution in Model Stellar Atmospheres
Sapar A.
2013-06-01
The main features of the temperature correction methods suggested and used in the modeling of plane-parallel stellar atmospheres are discussed, and the main features of the new method are described. Derivation of the formulae for a version of the Unsöld-Lucy method, used by us in the SMART (Stellar Model Atmospheres and Radiative Transport) software for modeling stellar atmospheres, is presented. The method is based on a correction of the model temperature distribution that minimizes the differences of the flux from its accepted constant value and enforces the absence of a flux gradient, meaning that local source and sink terms of radiation must be equal. The final relative flux constancy obtainable by the method with the SMART code turned out to have a precision of the order of 0.5%. Some of the rapidly converging iteration steps can be useful before starting the high-precision model correction. Corrections of both the flux value and of its gradient, as in the Unsöld-Lucy method, are unavoidably needed to obtain high-precision flux constancy. A new temperature correction method to obtain high-precision flux constancy for plane-parallel LTE model stellar atmospheres is proposed and studied. The non-linear optimization is carried out by least squares, in which the Levenberg-Marquardt correction method and thereafter an additional correction by a Broyden iteration loop are applied. Small finite differences of temperature (δT/T = 10⁻³) are used in the computations. A single Jacobian step appears to be mostly sufficient to get flux constancy of the order of 10⁻² %. Dual numbers and their generalization, the dual complex (duplex) numbers, automatically yield the derivatives in the nilpotent part of the dual numbers. A version of the SMART software is being refactored to dual and duplex numbers, which makes it possible to get rid of the finite differences as an additional source of lowering precision.
Forecast and analysis of the cosmological redshift drift
Lazkoz, Ruth; Leanizbarrutia, Iker [University of the Basque Country UPV/EHU, Department of Theoretical Physics, Bilbao (Spain); Salzano, Vincenzo [University of Szczecin, Institute of Physics, Szczecin (Poland)]
2018-01-15
The cosmological redshift drift could lead to the next step in high-precision cosmic geometric observations, becoming a direct and irrefutable test for cosmic acceleration. In order to test the viability and possible properties of this effect, also called the Sandage-Loeb (SL) test, we generate a model-independent mock data set in order to compare its constraining power with that of the future mock data sets of Type Ia Supernovae (SNe) and Baryon Acoustic Oscillations (BAO). The performance of those data sets is analyzed by testing several cosmological models with the Markov chain Monte Carlo (MCMC) method, both independently as well as combining all data sets. Final results show that, in general, SL data sets allow for remarkable constraints on the matter density parameter today Ω_m on every tested model, showing also a great complementarity with SNe and BAO data regarding dark energy parameters. (orig.)
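The observable behind the SL test is simple enough to sketch directly. A minimal illustration (an assumed fiducial flat ΛCDM with H0 = 70 km/s/Mpc and Ωm = 0.3, not the parameters of the paper) of the spectroscopic velocity drift Δv = c H0 Δt [1 − E(z)/(1+z)]:

```python
import math

C_KM_S = 299792.458   # speed of light, km/s
H0 = 70.0             # assumed fiducial Hubble constant, km/s/Mpc
OMEGA_M = 0.3         # assumed fiducial matter density parameter

def E(z):
    """Dimensionless expansion rate H(z)/H0 for flat LambdaCDM."""
    return math.sqrt(OMEGA_M * (1.0 + z) ** 3 + (1.0 - OMEGA_M))

def velocity_drift_cm_s(z, years):
    """Sandage-Loeb spectroscopic velocity shift accumulated over an
    observing baseline: dv = c * H0 * dt * [1 - E(z)/(1+z)], in cm/s."""
    km_per_mpc = 3.0857e19
    s_per_yr = 3.1557e7
    dv_km_s = C_KM_S * (H0 / km_per_mpc) * (years * s_per_yr) \
              * (1.0 - E(z) / (1.0 + z))
    return dv_km_s * 1e5

# The drift is positive at low z (acceleration) and changes sign at
# higher z, which is why it is a direct test of cosmic acceleration
for z in (1.0, 3.0, 5.0):
    print(z, velocity_drift_cm_s(z, 20.0))
```

The cm/s-per-decade magnitudes this yields are what make the SL test an extreme-precision spectroscopy challenge.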
Schorb, Martin; Briggs, John A.G.
2014-01-01
Performing fluorescence microscopy and electron microscopy on the same sample allows fluorescent signals to be used to identify and locate features of interest for subsequent imaging by electron microscopy. To carry out such correlative microscopy on vitrified samples appropriate for structural cryo-electron microscopy it is necessary to perform fluorescence microscopy at liquid-nitrogen temperatures. Here we describe an adaptation of a cryo-light microscopy stage to permit use of high-numerical aperture objectives. This allows high-sensitivity and high-resolution fluorescence microscopy of vitrified samples. We describe and apply a correlative cryo-fluorescence and cryo-electron microscopy workflow together with a fiducial bead-based image correlation procedure. This procedure allows us to locate fluorescent bacteriophages in cryo-electron microscopy images with an accuracy on the order of 50 nm, based on their fluorescent signal. It will allow the user to precisely and unambiguously identify and locate objects and events for subsequent high-resolution structural study, based on fluorescent signals. - Highlights: • Workflow for correlated cryo-fluorescence and cryo-electron microscopy. • Cryo-fluorescence microscopy setup incorporating a high numerical aperture objective. • Fluorescent signals located in cryo-electron micrographs with 50 nm spatial precision
Axion-like particle imprint in cosmological very-high-energy sources
Domínguez, A.; Sánchez-Conde, M.A.; Prada, F.
2011-01-01
Discoveries of very high energy (VHE) photons from distant blazars suggest that, after correction for extragalactic background light (EBL) absorption, there is a flattening or even a turn-up in their spectra at the highest energies that cannot be easily explained within the standard framework. Here it is shown that a possible solution to this problem is obtained by assuming the existence of axion-like particles (ALPs) with masses ∼ 1 neV. The ALP scenario is tested using observations of the highest-redshift blazars known in the VHE regime, namely 3C 279, 3C 66A, PKS 1222+216 and PG 1553+113. In all cases, better fits to the observed spectra are found when including ALPs than when considering EBL absorption only. Interestingly, quite similar critical energies for photon/ALP conversion are derived, independently of the source considered.
Schellenberger, Pascale [Oxford Particle Imaging Centre, Division of Structural Biology, Wellcome Trust Centre for Human Genetics, University of Oxford, Roosevelt Drive, Oxford OX3 7BN (United Kingdom); Kaufmann, Rainer [Oxford Particle Imaging Centre, Division of Structural Biology, Wellcome Trust Centre for Human Genetics, University of Oxford, Roosevelt Drive, Oxford OX3 7BN (United Kingdom); Department of Biochemistry, University of Oxford, South Parks Road, Oxford OX1 3QU (United Kingdom); Siebert, C. Alistair; Hagen, Christoph [Oxford Particle Imaging Centre, Division of Structural Biology, Wellcome Trust Centre for Human Genetics, University of Oxford, Roosevelt Drive, Oxford OX3 7BN (United Kingdom); Wodrich, Harald [Microbiologie Fondamentale et Pathogénicité, MFP CNRS UMR 5234, University of Bordeaux SEGALEN, 146 rue Leo Seignat, 33076 Bordeaux (France); Grünewald, Kay, E-mail: kay@strubi.ox.ac.uk [Oxford Particle Imaging Centre, Division of Structural Biology, Wellcome Trust Centre for Human Genetics, University of Oxford, Roosevelt Drive, Oxford OX3 7BN (United Kingdom)
2014-08-01
Correlative light and electron microscopy (CLEM) is an emerging technique which combines functional information provided by fluorescence microscopy (FM) with the high-resolution structural information of electron microscopy (EM). So far, correlative cryo microscopy of frozen-hydrated samples has not reached better than micrometre-range accuracy. Here, a method is presented that enables the correlation between fluorescently tagged proteins and electron cryo tomography (cryoET) data with nanometre-range precision. Specifically, thin areas of vitrified whole cells are examined by correlative fluorescence cryo microscopy (cryoFM) and cryoET. Novel aspects of the presented cryoCLEM workflow include not only the implementation of two independent electron-dense fluorescent markers to improve the precision of the alignment, but also the ability to obtain an estimate of the correlation accuracy for each individual object of interest. The correlative workflow from plunge-freezing to cryoET is detailed step-by-step for the example of locating fluorescence-labelled adenovirus particles trafficking inside a cell. - Highlights: • Vitrified mammalian cells were imaged by fluorescence and electron cryo microscopy. • TetraSpeck fluorescence markers were added to correct shifts between cryo fluorescence channels. • FluoSpheres fiducials were used as reference points to assign new coordinates to cryoEM images. • Adenovirus particles were localised with an average correlation precision of 63 nm.
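The per-object accuracy estimate mentioned above can be approximated with a leave-one-out cross-validation of the fiducial registration: fit the transform on all beads except one, then measure how far the held-out bead lands from its true position. The affine model and all coordinates below are illustrative assumptions, not the published workflow:

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares 2D affine transform mapping src -> dst."""
    A = np.hstack([src, np.ones((len(src), 1))])   # (N, 3) design matrix
    M, *_ = np.linalg.lstsq(A, dst, rcond=None)    # (3, 2) coefficients
    return M

def loo_errors(src, dst):
    """Leave-one-out registration error for each fiducial: fit on the
    other beads, predict the held-out one, report the residual distance."""
    errs = []
    for k in range(len(src)):
        keep = np.arange(len(src)) != k
        M = fit_affine(src[keep], dst[keep])
        pred = np.append(src[k], 1.0) @ M
        errs.append(np.linalg.norm(pred - dst[k]))
    return np.array(errs)

# synthetic fiducial sets: FM coordinates and noisy EM counterparts
rng = np.random.default_rng(2)
fm = rng.uniform(0, 50, (8, 2))
em = 10.0 * fm + np.array([100.0, 200.0]) + rng.normal(0, 0.2, (8, 2))
errs = loo_errors(fm, em)
```

The distribution of `errs` gives an empirical accuracy estimate that varies across the field of view, in the spirit of the per-object figure quoted in the highlights.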
A High Precision 3D Magnetic Field Scanner for Small to Medium Size Magnets
Bergsma, F; Garnier, F; Giudici, P A
2016-01-01
A bench to measure the magnetic field of small to medium-sized magnets with high precision was built. It uses a small head with three orthogonal Hall probes, supported on a long pole and moved continuously during measurement. The head is calibrated in three dimensions by rotation over the full solid angle in a special device. From 0 to 2.5 T, the precision is ±0.2 mT in all components. The spatial range is 1 × 1 × 2 m with a precision of ±0.02 mm. The bench and its controls are lightweight and easy to transport. The head can penetrate through small apertures and measure as close as 0.5 mm from the surface of a magnet. The bench can scan complicated grids in Cartesian or cylindrical coordinates, steered by a simple text file on an accompanying PC. The raw data are converted online to magnetic units and stored in a text file.
CERN. Geneva
2018-01-01
The Baryon Antibaryon Symmetry Experiment (BASE-CERN) at CERN's antiproton decelerator facility aims at high-precision comparisons of the fundamental properties of protons and antiprotons, such as charge-to-mass ratios, magnetic moments and lifetimes. Such experiments provide sensitive tests of the fundamental charge-parity-time (CPT) invariance in the baryon sector. BASE was approved in 2013 and has since measured, utilizing single-particle multi-Penning-trap techniques, the antiproton-to-proton charge-to-mass ratio with a fractional precision of 69 p.p.t. [1], as well as the antiproton magnetic moment with fractional precisions of 0.8 p.p.m. and 1.5 p.p.b. [2]. At our matter companion experiment BASE-Mainz, we have performed proton magnetic moment measurements with fractional uncertainties of 3.3 p.p.b. [3] and 0.3 p.p.b. [4]. By combining the data of both experiments we provide a baryon-magnetic-moment based CPT test, gpbar/gp = 1.000 000 000 2(15), which improves the uncertainty of p...
Gaisser, T.K.; Shafi, Q.; Barr, S.M.; Seckel, D.; Rusjan, E.; Fletcher, R.S.
1991-01-01
This report discusses research performed by professors at the Bartol Research Institute in the following general areas: particle phenomenology and non-accelerator physics; particle physics and cosmology; theories with higher symmetry; and particle astrophysics and cosmology.
Heller, M.
1985-01-01
Friedman's two cosmological papers (1922, 1924) and his own interpretation of the results obtained are briefly reviewed, followed by a discussion of Friedman's role in the early development of relativistic cosmology. 18 refs. (author)
Kunze, Kerstin E.
2016-12-20
Cosmology is becoming an important tool to test particle physics models. We provide an overview of the standard model of cosmology with an emphasis on the observations relevant for testing fundamental physics.
Proceedings, High-Precision $\alpha_s$ Measurements from LHC to FCC-ee
d'Enterria, David [CERN]; Skands, Peter Z. [Monash U.]
2015-01-01
This document provides a writeup of all contributions to the workshop on "High precision measurements of $\alpha_s$: From LHC to FCC-ee" held at CERN, Oct. 12--13, 2015. The workshop explored in depth the latest developments on the determination of the QCD coupling $\alpha_s$ from 15 methods where high precision measurements are (or will be) available. Those include low-energy observables: (i) lattice QCD, (ii) pion decay factor, (iii) quarkonia and (iv) $\tau$ decays, (v) soft parton-to-hadron fragmentation functions, as well as high-energy observables: (vi) global fits of parton distribution functions, (vii) hard parton-to-hadron fragmentation functions, (viii) jets in $e^\pm$p DIS and $\gamma$-p photoproduction, (ix) photon structure function in $\gamma$-$\gamma$, (x) event shapes and (xi) jet cross sections in $e^+e^-$ collisions, (xii) W boson and (xiii) Z boson decays, and (xiv) jets and (xv) top-quark cross sections in proton-(anti)proton collisions. The current status of the theoretical and experimental uncertainties associated to each extraction method, the improvements expected from LHC data in the coming years, and future perspectives achievable in $e^+e^-$ collisions at the Future Circular Collider (FCC-ee) with $\cal{O}$(1--100 ab$^{-1}$) integrated luminosities yielding 10$^{12}$ Z bosons and jets, and 10$^{8}$ W bosons and $\tau$ leptons, are thoroughly reviewed. The current uncertainty of the (preliminary) 2015 strong coupling world-average value, $\alpha_s(m_Z)$ = 0.1177 $\pm$ 0.0013, is about 1\%. Some participants believed this may be reduced by a factor of three in the near future by including novel high-precision observables, although this opinion was not universally shared. At the FCC-ee facility, a factor of ten reduction in the $\alpha_s$ uncertainty should be possible, mostly thanks to the huge Z and W data samples available.
A high precision method for quantitative measurements of reactive oxygen species in frozen biopsies.
Kirsti Berg
OBJECTIVE: An electron paramagnetic resonance (EPR) technique using the spin probe cyclic hydroxylamine 1-hydroxy-3-methoxycarbonyl-2,2,5,5-tetramethylpyrrolidine (CMH) was introduced as a versatile method for high-precision quantification of reactive oxygen species, including the superoxide radical, in frozen biological samples such as cell suspensions, blood or biopsies. MATERIALS AND METHODS: Loss of measurement precision and accuracy due to variations in sample size and shape was minimized by assembling the sample in a well-defined volume. Measurement was carried out at low temperature (150 K) using a nitrogen-flow Dewar. The signal intensity was measured from the EPR 1st-derivative amplitude and related to a sample of 3-carboxy-proxyl (CP•) with known spin concentration. RESULTS: The absolute spin concentration could be quantified with a precision and accuracy better than ±10 µM (k = 1). The spin concentration of samples stored at -80°C could be reproduced after 6 months of storage well within the same error estimate. CONCLUSION: The absolute spin concentration in wet biological samples such as biopsies, water solutions and cell cultures could be quantified with higher precision and accuracy than normally achievable using common techniques such as flat cells, tissue cells and various capillary tubes. In addition, biological samples could be collected and stored for future incubation with the spin probe, and further stored for up to at least six months before EPR analysis, without loss of signal intensity. This opens the possibility to store and transport incubated biological samples with known accuracy of the spin concentration over time.
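The comparison against a reference of known spin concentration reduces to a simple amplitude ratio of the 1st-derivative spectra. A minimal sketch, using synthetic line shapes and a hypothetical 100 µM CP• reference (not the paper's data):

```python
import numpy as np

def pp_amplitude(spectrum):
    """Peak-to-peak amplitude of a 1st-derivative EPR spectrum."""
    return spectrum.max() - spectrum.min()

def spin_concentration(sample, reference, c_ref_uM):
    """Absolute spin concentration by comparison with a reference of
    known concentration measured under identical instrument settings."""
    return pp_amplitude(sample) / pp_amplitude(reference) * c_ref_uM

# synthetic 1st-derivative Lorentzian lines (illustrative shapes only)
B = np.linspace(-5, 5, 1001)          # field axis, arbitrary units
def deriv_line(amp, width):
    return -amp * 2 * B / width**2 / (1 + (B / width) ** 2) ** 2

ref = deriv_line(1.0, 1.0)            # hypothetical 100 uM CP* reference
smp = deriv_line(0.42, 1.0)           # sample with identical linewidth
c = spin_concentration(smp, ref, 100.0)
```

This linear-ratio approach is only valid when the sample and reference share geometry, linewidth and acquisition settings, which is exactly what the fixed-volume sample assembly described above is designed to guarantee.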
Theoretical Research at the High Energy Frontier: Cosmology, Neutrinos, and Beyond
Krauss, Lawrence M; Vachaspati, Tanmay; Parikh, Maulik
2013-03-06
The DOE theory group grew between 2009 and 2012 from a single investigator, Lawrence Krauss, the PI on the grant, to three faculty (with the addition of Maulik Parikh and Tanmay Vachaspati) and a postdoc covered by the grant, as well as partial support for a graduate student. The group has explored issues ranging from gravity and quantum field theory to topological defects, energy conditions in general relativity, primordial magnetic fields, neutrino astrophysics, quantum phases, gravitational waves from the early universe, dark matter detection schemes, signatures for dark matter at the LHC, and indirect astrophysical signatures for dark matter. In addition, we have run active international workshops each year, as well as a regular visitor program. The PI's outreach activities, including popular books and articles, columns for newspapers and magazines, and television and radio appearances, have helped raise the profile of high energy physics internationally. The postdocs supported by the grant, James Dent and Roman Buniy, have moved on successfully to faculty positions in Louisiana and California.
Cosmological adaptive mesh refinement magnetohydrodynamics with Enzo
Collins, David C.; Xu Hao; Norman, Michael L.; Li Hui; Li Shengtai
2010-01-01
In this work, we present EnzoMHD, the extension of the cosmological code Enzo to include the effects of magnetic fields through the ideal magnetohydrodynamics approximation. We use a higher-order Godunov method for the computation of interface fluxes. We use two constrained transport methods to compute the electric field from those interface fluxes, which simultaneously advances the induction equation and preserves the divergence-free constraint on the magnetic field. A second-order divergence-free reconstruction technique is used to interpolate the magnetic fields in the block-structured adaptive mesh refinement framework already extant in Enzo. This reconstruction also preserves the divergence of the magnetic field to machine precision. We use operator splitting to include gravity and cosmological expansion. We then present a series of cosmological and non-cosmological test problems to demonstrate the quality of solution resulting from this combination of solvers.
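The divergence-preserving property of constrained transport can be demonstrated on a 2D staggered (Yee) mesh: updating face-centred fields from corner EMFs leaves the cell-centred divergence unchanged to machine precision, whatever the EMF values. This is a generic CT sketch, not Enzo's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(3)
N, dx, dy, dt = 32, 1.0, 1.0, 0.1

# staggered mesh: Bx on x-faces, By on y-faces, Ez at cell corners
Bx = rng.normal(size=(N + 1, N))
By = rng.normal(size=(N, N + 1))
Ez = rng.normal(size=(N + 1, N + 1))   # arbitrary corner EMF for the demo

def divB(Bx, By):
    """Cell-centred divergence on the staggered mesh."""
    return (Bx[1:, :] - Bx[:-1, :]) / dx + (By[:, 1:] - By[:, :-1]) / dy

div0 = divB(Bx, By)

# constrained-transport update of the induction equation:
# dBx/dt = -dEz/dy on x-faces, dBy/dt = +dEz/dx on y-faces
Bx = Bx - dt * (Ez[:, 1:] - Ez[:, :-1]) / dy
By = By + dt * (Ez[1:, :] - Ez[:-1, :]) / dx

# the four corner EMFs around each cell cancel in the divergence,
# so the change is zero up to floating-point round-off
drift = np.abs(divB(Bx, By) - div0).max()
```

Algebraically, each cell's divergence update is a telescoping sum of the same four corner EMFs with cancelling signs, which is why the constraint survives any choice of flux-derived electric field.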
Development of the Universe and New Cosmology
Sakharov, Alexander S
2003-01-01
Cosmology is undergoing an explosive period of activity, fueled both by new, accurate astrophysical data and by innovative theoretical developments. Cosmological parameters such as the total density of the Universe and the rate of cosmological expansion are being precisely measured for the first time, and a consistent standard picture of the Universe is beginning to emerge. Recent developments in cosmology give rise to the intriguing possibility that all structures in the Universe, from superclusters to planets, had a quantum-mechanical origin in its earliest moments. Furthermore, these ideas are not idle theorizing, but predictive, and subject to meaningful experimental test. We review the concordance model of the development of the Universe, as well as evidence for the observational revolution that this field is going through. This already provides us with important information on particle physics, which is inaccessible to accelerators.
Maintaining high precision of isotope ratio analysis over extended periods of time.
Brand, Willi A
2009-06-01
Stable isotope ratios are reliable and long-lasting process tracers. In order to compare data from different locations or different sampling times at a high level of precision, a measurement strategy must include reliable traceability to an international stable isotope scale via a reference material (RM). Since these international RMs are available only in low quantities, we have developed our own analysis schemes involving laboratory working RMs. In addition, quality assurance RMs are used to control the long-term performance of the delta-value assignments. The analysis schemes allow the construction of quality assurance performance charts over years of operation. In this contribution, the performance of three typical techniques established in IsoLab at the MPI-BGC in Jena is discussed. The techniques are (1) isotope ratio mass spectrometry with an elemental analyser for δ¹⁵N and δ¹³C analysis of bulk (organic) material, (2) high-precision δ¹³C and δ¹⁸O analysis of CO₂ in clean-air samples, and (3) stable isotope analysis of water samples using a high-temperature reaction with carbon. In addition, reference strategies on a laser ablation system for high-spatial-resolution δ¹³C analysis in tree rings are exemplified briefly.
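The traceability scheme via working reference materials amounts, in its simplest form, to a two-point normalisation onto the international scale plus a tolerance check on quality-assurance analyses. A minimal sketch with hypothetical assigned δ¹³C values and a hypothetical performance-chart tolerance:

```python
def two_point_calibration(meas1, true1, meas2, true2):
    """Map measured delta values onto the international scale using two
    reference materials with assigned values (two-point normalisation)."""
    a = (true2 - true1) / (meas2 - meas1)
    b = true1 - a * meas1
    return lambda d: a * d + b

def qa_ok(values, assigned, tol=0.1):
    """Simple performance-chart criterion: all QA-RM analyses must stay
    within a tolerance of the assigned value (tolerance is hypothetical)."""
    return all(abs(v - assigned) <= tol for v in values)

# hypothetical working standards with assigned d13C values (permil)
to_scale = two_point_calibration(-46.1, -46.6, -10.9, -11.4)
sample_d13c = to_scale(-25.7)          # measured sample onto the scale
ok = qa_ok([-26.15, -26.22, -26.18], sample_d13c)
```

Plotting the QA-RM residuals against time yields the long-term performance chart described above; a drift outside the tolerance band signals that the scale assignment needs re-anchoring.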
Roos, Matts
2015-01-01
The Fourth Edition of Introduction to Cosmology provides a concise, authoritative study of cosmology at an introductory level. Starting from elementary principles and the early history of cosmology, the text carefully guides the student on to curved spacetimes, special and general relativity, gravitational lensing, the thermal history of the Universe, and cosmological models, including extended gravity models, black holes and Hawking's recent conjectures on the not-so-black holes.
Compendium of Neutron Beam Facilities for High Precision Nuclear Data Measurements
2014-07-01
Recent advances in nuclear science and technology, reflecting the growing global economy, require highly accurate, powerful simulations and precise analysis of experimental results. Confidence in these results is still determined by the accuracy of the atomic and nuclear input data. For studying material response, neutron beams produced from accelerators and research reactors over broad energy spectra are reliable and indispensable tools for obtaining high-accuracy experimental results for neutron-induced reactions. The IAEA supports the attainment of high-precision nuclear data using nuclear facilities around the world, in particular those based on particle accelerators and research reactors. Such data are essential for numerous applications in various industries and research institutions, including the safe and economical operation of nuclear power plants, future fusion reactors, nuclear medicine and non-destructive testing technologies. The IAEA organized and coordinated the technical meeting Use of Neutron Beams for High Precision Nuclear Data Measurements in Budapest, Hungary, 10–14 December 2012. The meeting was attended by participants from 25 Member States and three international organizations — the European Organization for Nuclear Research (CERN), the Joint Research Centre (JRC) and the Organisation for Economic Co-operation and Development (OECD) Nuclear Energy Agency (OECD/NEA). The objectives of the meeting were to provide a forum to exchange existing know-how and to share practical experience of neutron beam facilities and associated instrumentation with regard to the measurement of high-precision nuclear data using both accelerators and research reactors. Furthermore, the present status and future developments of worldwide accelerator- and research-reactor-based neutron beam facilities were discussed. This publication is a summary of the technical meeting and additional materials supplied by the international
Cosmology with the Large Synoptic Survey Telescope: an overview
Zhan, Hu; Tyson, J. Anthony
2018-06-01
The Large Synoptic Survey Telescope (LSST) is a high-étendue imaging facility that is being constructed atop Cerro Pachón in northern Chile. It is scheduled to begin science operations in 2022. With an (effective) aperture, a novel three-mirror design achieving a seeing-limited field of view, and a 3.2 gigapixel camera, the LSST has the deep-wide-fast imaging capability necessary to carry out a survey in six passbands (ugrizy) to a coadded depth of over 10 years using of its observational time. The remaining of the time will be devoted to considerably deeper and faster time-domain observations and smaller surveys. In total, each patch of the sky in the main survey will receive 800 visits allocated across the six passbands with exposure visits. The huge volume of high-quality LSST data will provide a wide range of science opportunities and, in particular, open a new era of precision cosmology with unprecedented statistical power and tight control of systematic errors. In this review, we give a brief account of the LSST cosmology program with an emphasis on dark energy investigations. The LSST will address dark energy physics and cosmology in general by exploiting diverse precision probes including large-scale structure, weak lensing, type Ia supernovae, galaxy clusters, and strong lensing. Combined with the cosmic microwave background data, these probes form interlocking tests on the cosmological model and the nature of dark energy in the presence of various systematics. The LSST data products will be made available to the US and Chilean scientific communities and to international partners with no proprietary period. Close collaborations with contemporaneous imaging and spectroscopy surveys observing at a variety of wavelengths, resolutions, depths, and timescales will be a vital part of the LSST science program, which will not only enhance specific studies but, more importantly, also allow a more complete understanding of the Universe through different windows.
Precision tracking at high background rates with the ATLAS muon spectrometer
Hertenberger, Ralf; The ATLAS collaboration
2012-01-01
Since the start of data taking, the ATLAS muon spectrometer has performed according to specification. At the end of this decade, after the luminosity upgrade of the LHC by a factor of ten, the proportionally increasing background rates will require the replacement of the detectors in the most forward part of the muon spectrometer to ensure high-quality muon triggering and tracking at background hit rates of up to 15 kHz/cm$^2$. Square-metre-sized micromegas detectors together with improved thin gap trigger detectors are suggested as a replacement. Micromegas detectors are intrinsically high-rate capable. A single-hit spatial resolution below 40 $\mu$m has been shown for 250 $\mu$m anode strip pitch and perpendicular incidence of high-energy muons or pions. The ongoing development of large micromegas structures and their investigation under non-perpendicular incidence or in high-background environments requires precise and reliable monitoring of muon tracks. A muon telescope consisting of six small micromegas works reliably and is presently ...
Chen, Xiaoxiao; Liu, Yang; Xu, QianFeng; Zhu, Jing; Poget, Sébastien F; Lyons, Alan M
2016-05-04
Precise dispensing of nanoliter droplets is necessary for the development of sensitive and accurate assays, especially when the availability of the source solution is limited. Conventional approaches are limited by imprecise positioning, large shear forces, surface tension effects, and high costs. To address the need for precise and economical dispensing of nanoliter volumes, we developed a new approach where the dispensed volume is dependent on the size and shape of defined surface features, thus freeing the dispensing process from pumps and fine-gauge needles requiring accurate positioning. The surface we fabricated, called a nanoliter droplet virtual well microplate (nVWP), achieves high-precision dispensing (better than ±0.5 nL or ±1.6% at 32 nL) of 20-40 nL droplets using a small source drop (3-10 μL) on isolated hydrophilic glass pedestals (500 μm on a side) bonded to arrays of polydimethylsiloxane conical posts. The sharp 90° edge of the glass pedestal pins the solid-liquid-vapor triple contact line (TCL), averting the wetting of the glass sidewalls while the fluid is prevented from receding from the edge. This edge creates a sufficiently large energy barrier that microliter water droplets can be poised on the glass pedestals, exhibiting contact angles greater than 150°. This approach relieves the stringent mechanical alignment tolerances required for conventional dispensing techniques, shifting the control of dispensed volume to the area circumscribed by the glass edge. The effects of glass surface chemistry and dispense velocity on droplet volume were studied using optical microscopy and high-speed video. Functionalization of the glass pedestal surface enabled the selective adsorption of specific peptides and proteins from synthetic and natural biomolecule mixtures, such as venom. We further demonstrate how the nVWP dispensing platform can be used for a variety of assays, including sensitive detection of proteins and peptides by fluorescence
A novel approach for high precision rapid potentiometric titrations: application to hydrazine assay.
Sahoo, P; Malathi, N; Ananthanarayanan, R; Praveen, K; Murali, N
2011-11-01
We propose a high-precision, rapid, personal computer (PC)-based potentiometric titration technique using a specially designed mini-cell to carry out redox titrations for the assay of chemicals in quality control laboratories attached to industrial, R&D and nuclear establishments. Using this technique, a few microlitres of sample (50-100 μl) in a total volume of ~2 ml of solution can be titrated, and the waste generated after titration is extremely low compared with that from the conventional titration technique. The entire titration, including online data acquisition followed by immediate offline analysis of the data to determine the concentration of the unknown sample, is completed within about 2 min. This facility was created using a new class of sensors, viz., pulsating sensors developed in-house. The basic concept in designing such an instrument and the salient features of the titration device are presented in this paper. The performance of the titration facility was examined by conducting some high-resolution redox titrations using dilute solutions: hydrazine against KIO3 in HCl medium, Fe(II) against Ce(IV), and uranium using the Davies-Gray method. The precision of titrations using this approach lies between 0.048% and 1.0% relative standard deviation for the different redox titrations. With this rapid PC-based titrator it was possible to develop a simple but high-precision potentiometric titration technique for the quick determination of hydrazine in nuclear fuel dissolver solution in the context of reprocessing of spent nuclear fuel in fast breeder reactors. © 2011 American Institute of Physics
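A common way to extract the equivalence point from acquired potentiometric data is the first-derivative criterion, taking the titrant volume at which |dE/dV| is maximal. A sketch on a synthetic sigmoidal titration curve; the curve parameters are illustrative, not the authors' algorithm or data:

```python
import numpy as np

def equivalence_point(v, e):
    """Estimate the equivalence volume as the point of steepest potential
    change (maximum |dE/dV|), the usual first-derivative criterion."""
    dedv = np.gradient(e, v)          # numerical dE/dV
    k = np.argmax(np.abs(dedv))
    return v[k]

# synthetic titration curve: potential jump centred at 1.00 ml (illustrative)
v = np.linspace(0.0, 2.0, 401)                   # titrant volume [ml]
e = 300.0 + 250.0 * np.tanh((v - 1.0) / 0.02)    # electrode potential [mV]
v_eq = equivalence_point(v, e)
```

With online data acquisition, such an offline pass over the recorded (V, E) pairs takes negligible time, consistent with the ~2 min total turnaround quoted above.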
High precision instrumentation for measuring the true exposure time in diagnostic X-ray examinations
Silva, Danubia B.; Santos, Marcus A.P.; Barros, Fabio R.; Santos, Luiz A.P.
2013-01-01
One of the most important physical quantities to be evaluated in diagnostic radiology is the radiation exposure time experienced by the patient during an X-ray examination. The IAEA and WHO have suggested that every country create a quality surveillance program to verify that each type of ionizing radiation equipment used in hospitals and medical clinics conforms to the accepted uncertainties following international standards. The purpose of this work is to present a new high-precision methodology for measuring the true exposure time in diagnostic X-ray examinations, whether pulsed, continuous or digital. An electronic system named CronoX, which will soon be registered at the Brazilian Patent Office (INPI), is the equipment that provides such high-precision measurement. The principle of measurement is based on the electrical signal captured by a sensor, which enters a regeneration amplifier that transforms it into a digital signal treated by a microprocessor (uP). The signal treatment yields two measured times: 1) T_rx, the true X-ray exposure time; 2) T_nx, the time during which the X-ray machine is repeatedly cut off during pulsed irradiation and no dose is delivered to the patient. Conventional Polymat X-ray equipment and dental X-ray machines were used to generate X-ray photons and take measurements with the electronic system. The results show that this high-precision instrumentation displays the true exposure time in diagnostic X-ray examinations and indicates a new method to be proposed for quality surveillance programs in radiology. (author)
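The separation of the on-time and cut-off-time components from a digitised sensor signal can be sketched by thresholding the sampled waveform and counting samples inside the exposure window. The pulse pattern, threshold and sampling rate below are illustrative assumptions, not CronoX's internals:

```python
import numpy as np

def exposure_times(signal, threshold, dt):
    """Split a sampled detector signal into true exposure time (T_rx:
    signal above threshold) and cut-off time within the exposure
    window (T_nx: signal below threshold between first and last pulse)."""
    on = signal > threshold
    idx = np.flatnonzero(on)
    if idx.size == 0:
        return 0.0, 0.0
    window = on[idx[0]:idx[-1] + 1]   # from first to last on-sample
    t_rx = window.sum() * dt
    t_nx = (~window).sum() * dt
    return t_rx, t_nx

# synthetic pulsed exposure: three 20 ms pulses separated by 10 ms gaps
dt = 0.001                            # 1 ms sampling interval
sig = np.zeros(200)
for start in (50, 80, 110):
    sig[start:start + 20] = 1.0
t_rx, t_nx = exposure_times(sig, 0.5, dt)
```

Here `t_rx` comes out as 60 ms of actual irradiation and `t_nx` as 20 ms of cut-off time, the two quantities the instrument reports; a real device would of course sample far faster than 1 kHz.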
Phantom cosmologies and fermions
Chimento, Luis P; Forte, Monica; Devecchi, Fernando P; Kremer, Gilberto M
2008-01-01
Form invariance transformations can be used for constructing phantom cosmologies starting with conventional cosmological models. In this work we reconsider the scalar field case and extend the discussion to fermionic fields, where the 'phantomization' process exhibits a new class of possible accelerated regimes. As an application we analyze the cosmological constant group for a fermionic seed fluid
Particle physics and cosmology
Schramm, D.N.; Turner, M.S.
1982-06-01
Work is described in the following areas: cosmological baryon production; cosmological production of free quarks and other exotic particle species; the quark-hadron transition in the early universe; astrophysical and cosmological constraints on particle properties; massive neutrinos; phase transitions in the early universe; and astrophysical implications of an axion-like particle.
Weinberg, S.
1989-01-01
The cosmological constant problem is discussed and its history is briefly reviewed. Five different approaches to the solution of the problem are described: supersymmetry, supergravity and superstrings; the anthropic approach; the mechanism of Lagrangian alignment; modification of gravitation theory; and quantum cosmology. It is noted that the approach based on quantum cosmology is the most promising one.
Nobelist George Smoot to Direct Korean Cosmology Institute. Nobel Laureate George Smoot has been appointed director of a new cosmology institute in South Korea that will work closely with the year-old Berkeley ... the Early Universe (IEU) at EWHA Womans University in Seoul, Korea, will provide cosmology education
Davies, P.
1991-01-01
The main concepts of cosmology are discussed, and some of the misconceptions are clarified. The features of big bang cosmology are examined, and it is noted that the existence of the cosmic background radiation provides welcome confirmation of the big bang theory. Calculations of relative abundances of the elements conform with observations, further strengthening the confidence in the basic ideas of big bang cosmology
CERN. Geneva. Audiovisual Unit
2001-01-01
Cosmology and particle physics have enjoyed a useful relationship over the entire histories of both subjects. Today, ideas and techniques in cosmology are frequently used to elucidate and constrain theories of elementary particles. These lectures give an elementary overview of the essential elements of cosmology, which is necessary to understand this relationship.
CERN. Geneva
1999-01-01
Cosmology and particle physics have enjoyed a useful relationship over the entire histories of both subjects. Today, ideas and techniques in cosmology are frequently used to elucidate and constrain theories of elementary particles. These lectures give an elementary overview of the essential elements of cosmology, which is necessary to understand this relationship.
Langer, M.
2007-01-01
This is a very concise introductory lecture to Cosmology. We start by reviewing the basics of homogeneous and isotropic cosmology. We then spend some time on the description of the Cosmic Microwave Background. Finally, a small section is devoted to the discussion of the cosmological constant and of some of the related problems
Drag-Free Motion Control of Satellite for High-Precision Gravity Field Mapping
Ziegler, Bent Lindvig; Blanke, Mogens
2002-01-01
High-precision mapping of the geoid and the Earth's gravity field is of importance to a wide range of ongoing studies in areas like ocean circulation, solid Earth physics and ice sheet dynamics. Using a satellite in orbit around the Earth gives the opportunity to map the Earth's gravity field in 3...... will compromise measurement accuracy, unless they are accurately compensated by on-board thrusters. The paper concerns the design of a control system to perform such delicate drag compensation. A six degrees-of-freedom model for the satellite is developed, with the model including dynamics of the satellite...
Tests of a Fast Plastic Scintillator for High-Precision Half-Life Measurements
Laffoley, A. T.; Dunlop, R.; Finlay, P.; Leach, K. G.; Michetti-Wilson, J.; Rand, E. T.; Svensson, C. E.; Grinyer, G. F.; Thomas, J. C.; Ball, G.; Garnsworthy, A. B.; Hackman, G.; Orce, J. N.; Triambak, S.; Williams, S. J.; Andreoiu, C.; Cross, D.
2013-03-01
A fast plastic scintillator detector is evaluated for possible use in an ongoing program of high-precision half-life measurements of short-lived β emitters. Using data taken at TRIUMF's Isotope Separator and Accelerator facility with a radioactive 26Na beam, a detailed investigation of potential systematic effects with this new detector setup is being performed. The technique will then be applied to other β-decay half-life measurements, including the superallowed Fermi β emitters 10C and 14O, and the T = 1/2 decay of 15O.
High-precision branching-ratio measurement for the superallowed β+ emitter 26Alm
Finlay, P.; Ball, G. C.; Leslie, J. R.; Svensson, C. E.; Andreoiu, C.; Austin, R. A. E.; Bandyopadhyay, D.; Cross, D. S.; Demand, G.; Djongolov, M.; Ettenauer, S.; Garrett, P. E.; Green, K. L.; Grinyer, G. F.; Hackman, G.; Leach, K. G.; Pearson, C. J.; Phillips, A. A.; Rand, E. T.; Sumithrarachchi, C. S.; Triambak, S.; Williams, S. J.
2012-05-01
A high-precision branching-ratio measurement for the superallowed β+ emitter 26Alm was performed at the TRIUMF-ISAC radioactive ion beam facility. An upper limit of ⩽12 ppm at 90% confidence level was found for the second-forbidden β+ decay of 26Alm to the first 2+ state at 1809 keV in 26Mg. An inclusive upper limit of ⩽15 ppm at 90% confidence level was found when considering all possible nonanalog β+/EC decay branches of 26Alm, resulting in a superallowed branching ratio of 100.0000 (+0, −0.0015)%.
Design and implementation of high-precision and low-jitter programmable delay circuitry
Gao Yuan; Cui Ke; Zhang Hongfei; Luo Chunli; Yang Dongxu; Liang Hao; Wang Jian
2011-01-01
A programmable delay circuit design with high precision, low jitter, a wide programmable range and low power consumption is introduced. The delay circuitry uses a scheme with two separately controlled parts: a coarse delay and a fine delay. Using different coarse-delay chips, different maximum programmable ranges can be reached, while the fine-delay chip has a minimum step as small as 10 ps. The overall jitter of the circuitry is less than 100 ps. The design has been successfully applied in a quantum key distribution experiment. (authors)
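The coarse/fine splitting can be sketched as follows (a minimal illustration; the 10 ns coarse step is our assumption, only the 10 ps fine step comes from the abstract): a target delay is decomposed into a coarse register setting plus a fine setting, and the residual error is bounded by the fine step.

```python
# Decompose a target delay into coarse and fine settings.
COARSE_STEP_PS = 10_000   # assumed 10 ns step of the coarse-delay chip
FINE_STEP_PS = 10         # 10 ps fine step, as stated in the abstract

def delay_settings(target_ps):
    """Return (coarse steps, fine steps, achieved delay in ps)."""
    coarse, remainder = divmod(target_ps, COARSE_STEP_PS)
    fine = round(remainder / FINE_STEP_PS)
    achieved = coarse * COARSE_STEP_PS + fine * FINE_STEP_PS
    return coarse, fine, achieved

coarse, fine, achieved = delay_settings(123_456)
# coarse=12 (120 ns), fine=346 (3460 ps), achieved=123460 ps: error 4 ps
```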
A new approach to the BFKL mechanism. Application to high-precision HERA data
Kowalski, H.; Lipatov, L.N.; Ross, D.A.
2017-07-01
We analyse here in NLO the physical properties of the discrete eigenvalue solution for the BFKL equation. We show that a set of positive-ω eigenfunctions, together with a small contribution from a continuum of negative ω's, provides an excellent description of high-precision HERA F2 data in the small-x region with Q² > 6 GeV². The phases of the eigenfunctions can be obtained from a simple parametrisation of the pomeron spectrum, which has a natural motivation within BFKL. The data analysis shows that the first eigenfunction decouples, or nearly decouples, from the proton. This suggests that there exists an additional ground state, which has no nodes.
Mechanical optimisation of a high-precision fast wire scanner at CERN
Samuelsson, Sebastian; Veness, Raymond
Wire scanners are instruments used to measure the transverse beam profile in particle accelerators by passing a thin wire through the particle beam. To avoid the issues of vacuum leakage through the bellows and wire failure related to current designs of wire scanners, a new concept for a wire scanner has been developed at CERN. This design has all moving parts inside the beam vacuum and has a nominal wire scanning speed of 20 m/s. The demands on the design associated with this, together with the high precision requirements, create a need for ...
Dimethyl ether reviewed: New results on using this gas in a high-precision drift chamber
Basile, M.; Bonvicini, G.; Cara Romeo, G.; Cifarelli, L.; Contin, A.; D'Ali, G.; Del Papa, C.; Maccarrone, G.; Massam, T.; Motta, F.; Nania, R.; Palmonari, F.; Rinaldi, G.; Sartorelli, G.; Spinetti, M.; Susinno, G.; Villa, F.; Voltano, L.; Zichichi, A.
1985-01-01
Two years ago, dimethyl ether (DME) was presented, for the first time, as a suitable gas for high-precision drift chambers. In fact our tests show that resolutions can be obtained which are better by at least a factor of 2 compared to what one can get with conventional gases. Moreover, DME is very well quenched. The feared formation of whiskers on the wires has not occurred, at least after months of use with a 10 μCi 106Ru source. (orig.)
Meyer, Steffen
2017-01-01
and frequency resolved optical gating (FROG) are used, and the two frequency comb systems used for the experiments are thoroughly characterized, a Coherent Mira Ti:sapph oscillator and a MenloSystems fiber based frequency comb system. The potential of frequency comb driven Raman transitions is shown...... transition frequencies typically are on the order of a few THz. High precision measurements on these ions have many intriguing applications, for example the test of time-variations of fundamental constants, ultracold chemistry on the quantum level, and quantum information and computing, to name just a few...
MiniDSS: a low-power and high-precision miniaturized digital sun sensor
de Boer, B. M.; Durkut, M.; Laan, E.; Hakkesteegt, H.; Theuwissen, A.; Xie, N.; Leijtens, J. L.; Urquijo, E.; Bruins, P.
2017-11-01
A high-precision and low-power miniaturized digital sun sensor has been developed at TNO. The single-chip sun sensor comprises an application specific integrated circuit (ASIC) on which an active pixel sensor (APS), read-out and processing circuitry as well as communication circuitry are combined. The design was optimized for low recurrent cost. The sensor is albedo insensitive and the prototype combines an accuracy in the order of 0.03° with a mass of just 72 g and a power consumption of only 65 mW.
High Astrometric Precision in the Calculation of the Coordinates of Orbiters in the GEO Ring
Lacruz, E.; Abad, C.; Downes, J. J.; Hernández-Pérez, F.; Casanova, D.; Tresaco, E.
2018-04-01
We present an astrometric method for the calculation of the positions of orbiters in the GEO ring with high precision, through a rigorous astrometric treatment of observations with a 1-m class telescope, which are part of the CIDA survey of the GEO ring. We compute the distortion pattern to correct for the systematic errors introduced by the optics and electronics of the telescope, resulting in absolute mean errors of 0.16″ and 0.12″ in right ascension and declination, respectively. These correspond to ≈25 m at the mean distance of the GEO ring, and are thus good-quality results.
High precision simple interpolation asynchronous FIFO based on ACEX1K30 for HIRFL-CSRe
Li Guihua; Qiao Weimin; Jing Lan
2008-01-01
A high-precision, simple-interpolation asynchronous FIFO for HIRFL-CSRe was developed on the ACEX1K30 FPGA in the VHDL hardware description language. The FIFO runs in the FPGA of the DSP module of HIRFL-CSRe. The input data of the FIFO come from the DSP data bus and the output data go to the DAC data bus. Its kernel adopts a double-buffer ping-pong mode, and it implements simple interpolation inside the FPGA. The module can control the output data time delay to within 40 ns. The experimental results indicate that this module is practical and accurate for HIRFL-CSRe. (authors)
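The "simple interpolation" idea can be illustrated behaviourally in Python (the interpolation factor and sample values are invented; the real design is VHDL inside the FPGA): between two successive DSP samples the FIFO emits linearly interpolated values towards the DAC, smoothing the output ramp.

```python
# Behavioural sketch of linear interpolation between successive samples.
def interpolate_stream(samples, factor):
    """Emit `factor` linearly interpolated values per input interval."""
    out = []
    for a, b in zip(samples, samples[1:]):
        for k in range(factor):
            out.append(a + (b - a) * k / factor)
    out.append(samples[-1])   # close the stream on the final sample
    return out

dac_words = interpolate_stream([0, 100, 50], 4)
# -> [0, 25, 50, 75, 100, 87.5, 75, 62.5, 50]
```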
Cristiano, Barbara Fernandes G.; Dias, Fabio C.; Barros, Pedro D. de; Araujo, Radier Mario S. de; Delgado, Jose Ubiratan; Silva, Jose Wanderley S. da; Lopes, Ricardo T.
2011-01-01
The method of high-precision potentiometric titration is widely used in the certification and characterization of uranium compounds. In order to reduce the analysis time and diminish the influence of the analyst, a semi-automatic version of the method was developed at the safeguards laboratory of CNEN-RJ, Brazil. The method was applied with traceability guaranteed by use of a primary standard of potassium dichromate. The combined standard uncertainty in the determination of the concentration of total uranium was of the order of 0.01%, which is better than the 0.1% typical of the methods traditionally used by nuclear installations.
Quantum gravity and quantum cosmology
Papantonopoulos, Lefteris; Siopsis, George; Tsamis, Nikos
2013-01-01
Quantum gravity has developed into a fast-growing subject in physics and it is expected that probing the high-energy and high-curvature regimes of gravitating systems will shed some light on how to eventually achieve an ultraviolet complete quantum theory of gravity. Such a theory would provide the much needed information about fundamental problems of classical gravity, such as the initial big-bang singularity, the cosmological constant problem, Planck scale physics and the early-time inflationary evolution of our Universe. While in the first part of this book concepts of quantum gravity are introduced and approached from different angles, the second part discusses these theories in connection with cosmological models and observations, thereby exploring which types of signatures of modern and mathematically rigorous frameworks can be detected by experiments. The third and final part briefly reviews the observational status of dark matter and dark energy, and introduces alternative cosmological models. ...
Frontend electronics for high-precision single photo-electron timing using FPGA-TDCs
Cardinali, M., E-mail: cardinal@kph.uni-mainz.de [Institut für Kernphysik, Johannes Gutenberg-University Mainz, Mainz (Germany); Helmholtz Institut Mainz, Mainz (Germany); Dzyhgadlo, R.; Gerhardt, A.; Götzen, K.; Hohler, R.; Kalicy, G.; Kumawat, H.; Lehmann, D.; Lewandowski, B.; Patsyuk, M.; Peters, K.; Schepers, G.; Schmitt, L.; Schwarz, C.; Schwiening, J.; Traxler, M.; Ugur, C.; Zühlsdorf, M. [GSI Helmholtzzentrum für Schwerionenforschung GmbH, Darmstadt (Germany); Dodokhov, V.Kh. [Joint Institute for Nuclear Research, Dubna (Russian Federation); Britting, A. [Friedrich Alexander-University of Erlangen-Nuremberg, Erlangen (Germany); and others
2014-12-01
The next generation of high-luminosity experiments requires excellent particle identification detectors which calls for Imaging Cherenkov counters with fast electronics to cope with the expected hit rates. A Barrel DIRC will be used in the central region of the Target Spectrometer of the planned PANDA experiment at FAIR. A single photo-electron timing resolution of better than 100 ps is required by the Barrel DIRC to disentangle the complicated patterns created on the image plane. R and D studies have been performed to provide a design based on the TRB3 readout using FPGA-TDCs with a precision better than 20 ps RMS and custom frontend electronics with high-bandwidth pre-amplifiers and fast discriminators. The discriminators also provide time-over-threshold information thus enabling walk corrections to improve the timing resolution. Two types of frontend electronics cards optimised for reading out 64-channel PHOTONIS Planacon MCP-PMTs were tested: one based on the NINO ASIC and the other, called PADIWA, on FPGA discriminators. Promising results were obtained in a full characterisation using a fast laser setup and in a test experiment at MAMI, Mainz, with a small scale DIRC prototype. - Highlights: • Frontend electronics for Cherenkov detectors have been developed. • FPGA-TDCs have been used for high precision timing. • Time over threshold has been utilised for walk correction. • Single photo-electron timing resolution less than 100 ps has been achieved.
A Fast and High-precision Orientation Algorithm for BeiDou Based on Dimensionality Reduction
ZHAO Jiaojiao
2015-05-01
A fast and high-precision orientation algorithm for BeiDou is proposed based on a deep analysis of the constellation characteristics of BeiDou and the features of its GEO satellites. Taking advantage of the good east-west geometry, candidate values of the baseline vector are first solved from the GEO satellite observations combined with dimensionality reduction theory. An ambiguity function is then used to judge the candidates in order to obtain the optimal baseline vector and the wide-lane integer ambiguities, on the basis of which the B1 ambiguities are solved. Finally, the high-precision orientation is estimated from the determined B1 ambiguities. The new algorithm not only improves the ill-conditioning of the traditional algorithm, but also greatly reduces the ambiguity search region, so the integer ambiguities can be calculated in a single epoch. The algorithm is simulated with an actual BeiDou ephemeris, and the results show that the method is efficient and fast for orientation. It is capable of a very high single-epoch success rate (99.31%) and accurate attitude angles (standard deviations of pitch and heading of 0.07° and 0.13°, respectively) in a real-time, dynamic environment.
A high precision extrapolation method in multiphase-field model for simulating dendrite growth
Yang, Cong; Xu, Qingyan; Liu, Baicheng
2018-05-01
The phase-field method coupled with thermodynamic data has become a trend for predicting microstructure formation in technical alloys. Nevertheless, the frequent access to the thermodynamic database and the calculation of local equilibrium conditions can be time-intensive. Extrapolation methods, which are derived from Taylor expansions, can provide approximate results with high computational efficiency, and have proven successful in applications. This paper presents a high-precision second-order extrapolation method for calculating the driving force in phase transformation. To obtain the phase compositions, different methods of solving the quasi-equilibrium condition are tested, and the M-slope approach is chosen for its best accuracy. The developed second-order extrapolation method, along with the M-slope approach and the first-order extrapolation method, is applied to simulate dendrite growth in a Ni-Al-Cr ternary alloy. The results of the extrapolation methods are compared with the exact solution with respect to the composition profile and dendrite tip position, which demonstrates the high precision and efficiency of the newly developed algorithm. To accelerate the phase-field and extrapolation computation, a graphics processing unit (GPU) based parallel computing scheme is developed. The application to a large-scale simulation of multi-dendrite growth in an isothermal cross-section demonstrates the ability of the developed GPU-accelerated second-order extrapolation approach for the multiphase-field model.
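The extrapolation idea itself is generic and can be sketched for a one-dimensional function (our illustration, not the authors' thermodynamic coupling): the expensive function is replaced near a reference point by a second-order Taylor expansion whose derivatives are built once from finite differences, so subsequent evaluations avoid the costly call.

```python
import math

# Generic second-order Taylor extrapolation around a reference point x0.
def taylor2(f, x0, h=1e-3):
    """Return a cheap quadratic surrogate for f near x0 (central differences)."""
    f0 = f(x0)
    d1 = (f(x0 + h) - f(x0 - h)) / (2 * h)          # first derivative
    d2 = (f(x0 + h) - 2 * f0 + f(x0 - h)) / h**2    # second derivative
    return lambda x: f0 + d1 * (x - x0) + 0.5 * d2 * (x - x0) ** 2

approx = taylor2(math.exp, 1.0)
err = abs(approx(1.1) - math.exp(1.1))   # third-order error, O((0.1)^3)
```

In the paper the expensive call is the local quasi-equilibrium calculation against the thermodynamic database, and the expansion is multivariate in the phase compositions, but the accuracy/cost trade-off is of this form.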
A high precision mass spectrometer for hydrogen isotopic analysis of water samples
Murthy, M.S.; Prahallada Rao, B.S.; Handu, V.K.; Satam, J.V.
1979-01-01
A high-precision mass spectrometer with two ion collector assemblies and a direct on-line reduction facility (with uranium at 700 °C) for hydrogen isotopic analysis of water samples has been designed and developed. The ion source gives particularly high sensitivity and at the same time keeps H3+ ions to a minimum. A digital ratiometer with an H2+ compensator has also been developed. The overall precision obtained on the spectrometer is 0.07% (2σ). Typical results on the performance of the spectrometer, which has been working for a year and a half, are given. Possible methods of extending the concentration ranges the spectrometer can handle, on both the lower and higher sides, are discussed. Problems of memory between samples are briefly listed. A multiple inlet system to overcome these problems is suggested; this will also enable faster analysis when samples of highly varying concentrations are to be analyzed. A few probable areas in which the spectrometer will shortly be put to use are given. (auth.)
High-precision gamma-ray spectroscopy for enhancing production and application of medical isotopes
McCutchan, E. A.; Sonzogni, A. A.; Smith, S. V.; Muench, L.; Nino, M.; Greene, J. P.; Carpenter, M. P.; Zhu, S.; Chillery, T.; Chowdhury, P.; Harding, R.; Lister, C. J.
2015-10-01
Nuclear medicine is a field which requires precise decay data for use in planning radionuclide production and in imaging and therapeutic applications. To address deficiencies in decay data, sources of medical isotopes were produced and purified at the Brookhaven Linear Isotope Producer (BLIP) then shipped to Argonne National Laboratory where high-precision, gamma-ray measurements were performed using Gammasphere. New decay schemes for a number of PET isotopes and the impact on dose calculations will be presented. To investigate the production of next-generation theranostic or radiotherapeutic isotopes, cross section measurements with high energy protons have also been explored at BLIP. The 100-200 MeV proton energy regime is relatively unexplored for isotope production, thus offering high discovery potential but at the same time a challenging analysis due to the large number of open channels at these energies. Results of cross sections deduced from Compton-suppressed, coincidence gamma-ray spectroscopy performed at Lowell will be presented, focusing on the production of platinum isotopes by irradiating natural platinum foils with 100 to 200 MeV protons. DOE Isotope Program is acknowledged for funding ST5001030. Work supported by the US DOE under Grant DE-FG02-94ER40848 and Contracts DE-AC02-98CH10946 and DE-AC02-06CH11357.
High-precision and low-cost vibration generator for low-frequency calibration system
Li, Rui-Jun; Lei, Ying-Jun; Zhang, Lian-Sheng; Chang, Zhen-Xin; Fan, Kuang-Chao; Cheng, Zhen-Ying; Hu, Peng-Hao
2018-03-01
Low-frequency vibration is one of the harmful factors affecting the accuracy of micro-/nano-measuring machines, because its amplitude is very small and it is difficult to avoid. In this paper, a low-cost and high-precision vibration generator was developed to calibrate an optical accelerometer, which is self-designed to detect low-frequency vibration. A piezoelectric actuator is used as the vibration exciter, a leaf spring made of beryllium copper is used as the elastic component, and a high-resolution, low-thermal-drift eddy current sensor is applied to investigate the vibrator's performance. Experimental results demonstrate that the vibration generator can achieve a steady output displacement over a frequency range from 0.6 Hz to 50 Hz, an analytical displacement resolution of 3.1 nm, and an acceleration range from 3.72 mm s⁻² to 1935.41 mm s⁻² with a relative standard deviation of less than 1.79%. The effectiveness of the high-precision and low-cost vibration generator was verified by calibrating our optical accelerometer.
Cosmology with coalescing massive black holes
Hughes, Scott A; Holz, Daniel E
2003-01-01
The gravitational waves generated in the coalescence of massive binary black holes will be measurable by LISA to enormous distances. Redshifts z ∼ 10 or larger (depending somewhat on the mass of the binary) can potentially be probed by such measurements, suggesting that binary coalescences can be made into cosmological tools. We discuss two particularly interesting types of probe. First, by combining gravitational-wave measurements with information about the cosmography of the universe, we can study the evolution of black-hole masses and merger rates as a function of redshift, providing information about the growth of structures at high redshift and possibly constraining hierarchical merger scenarios. Second, if it is possible to associate an 'electromagnetic' counterpart with a coalescence, it may be possible to measure both redshift and luminosity distance to an event with less than ∼1% error. Such a measurement would constitute an amazingly precise cosmological standard candle. Unfortunately, gravitational lensing uncertainties will reduce the quality of this candle significantly. Though not as amazing as might have been hoped, such a candle would nonetheless very usefully complement other distance-redshift probes, in particular providing a valuable check on systematic effects in such measurements
Spectrophotometric high-precision seawater pH determination for use in underway measuring systems
S. Aßmann
2011-10-01
Autonomous sensors are required for a comprehensive documentation of the changes in the marine carbon system and thus to differentiate between its natural variability and anthropogenic impacts. Spectrophotometric determination of pH – a key variable of the seawater carbon system – is particularly suited to achieve precise and drift-free measurements. However, available spectrophotometric instruments are not suitable for integration into automated measurement systems (e.g. FerryBox), since they do not meet the major requirements of reliability, stability, robustness and moderate cost. Here we report on the development and testing of a new indicator-based pH sensor that meets all of these requirements. This sensor can withstand the rough conditions of long-term deployments on ships of opportunity and is applicable to the open ocean as well as to coastal waters with a complex matrix and highly variable conditions. The sensor uses a high-resolution CCD spectrometer as detector, connected via optical fibers to a custom-made cuvette designed to reduce the impact of air bubbles. The sample temperature can be precisely adjusted (25 °C ± 0.006 °C) using computer-controlled power supplies and Peltier elements, thus avoiding the widely used water bath. The overall setup achieves a measurement frequency of 1 min⁻¹ with a precision of ±0.0007 pH units, an average offset of +0.0005 pH units from a reference system, and an offset of +0.0081 pH units from a certified standard buffer. Application of this sensor allows monitoring of seawater pH in autonomous underway systems, providing a key variable for the characterization and understanding of the marine carbon system.
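The spectrophotometric principle reduces to evaluating the standard two-wavelength indicator equation, pH = pK + log10((R − e1)/(e2 − R·e3)), where R is the ratio of indicator absorbances at two wavelengths. In the sketch below the pK and molar-absorptivity ratios are placeholder values, since real constants depend on the indicator, temperature and salinity and must come from a calibrated characterisation:

```python
import math

# Two-wavelength spectrophotometric pH; e1, e2, e3 are molar-absorptivity
# ratios and pK the indicator dissociation constant (all placeholders here).
def ph_from_ratio(R, pK=8.0, e1=0.005, e2=2.2, e3=0.13):
    """pH from the absorbance ratio R = A2/A1 of the indicator dye."""
    return pK + math.log10((R - e1) / (e2 - R * e3))

# A higher absorbance ratio maps monotonically to a higher pH
ph_low, ph_high = ph_from_ratio(0.8), ph_from_ratio(1.2)
```

The instrument's stability advantage comes from the fact that only the ratio R enters, so drifts common to both wavelengths cancel.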
Chamcham, Khalil; Silk, Joseph; Barrow, John D.; Saunders, Simon
2017-04-01
Part I. Issues in the Philosophy of Cosmology: 1. Cosmology, cosmologia and the testing of cosmological theories George F. R. Ellis; 2. Black holes, cosmology and the passage of time: three problems at the limits of science Bernard Carr; 3. Moving boundaries? - comments on the relationship between philosophy and cosmology Claus Beisbart; 4. On the question why there exists something rather than nothing Roderich Tumulka; Part II. Structures in the Universe and the Structure of Modern Cosmology: 5. Some generalities about generality John D. Barrow; 6. Emergent structures of effective field theories Jean-Philippe Uzan; 7. Cosmological structure formation Joel R. Primack; 8. Formation of galaxies Joseph Silk; Part III. Foundations of Cosmology: Gravity and the Quantum: 9. The observer strikes back James Hartle and Thomas Hertog; 10. Testing inflation Chris Smeenk; 11. Why Boltzmann brains do not fluctuate into existence from the de Sitter vacuum Kimberly K. Boddy, Sean M. Carroll and Jason Pollack; 12. Holographic inflation revised Tom Banks; 13. Progress and gravity: overcoming divisions between general relativity and particle physics and between physics and HPS J. Brian Pitts; Part IV. Quantum Foundations and Quantum Gravity: 14. Is time's arrow perspectival? Carlo Rovelli; 15. Relational quantum cosmology Francesca Vidotto; 16. Cosmological ontology and epistemology Don N. Page; 17. Quantum origin of cosmological structure and dynamical reduction theories Daniel Sudarsky; 18. Towards a novel approach to semi-classical gravity Ward Struyve; Part V. Methodological and Philosophical Issues: 19. Limits of time in cosmology Svend E. Rugh and Henrik Zinkernagel; 20. Self-locating priors and cosmological measures Cian Dorr and Frank Arntzenius; 21. On probability and cosmology: inference beyond data? Martin Sahlén; 22. Testing the multiverse: Bayes, fine-tuning and typicality Luke A. Barnes; 23. A new perspective on Einstein's philosophy of cosmology Cormac O
Sarkar, Abir; Sethi, Shiv K.; Mondal, Rajesh; Bharadwaj, Somnath; Das, Subinoy; Marsh, David J.E.
2016-01-01
The particle nature of dark matter remains a mystery. In this paper, we consider two dark matter models—Late Forming Dark Matter (LFDM) and Ultra-Light Axion (ULA) models—where the matter power spectra show novel effects on small scales. The high-redshift universe offers a powerful probe of their parameters. In particular, we study two cosmological observables: the neutral hydrogen (HI) redshifted 21-cm signal from the epoch of reionization, and the evolution of the collapsed fraction of HI in the redshift range 2 < z < 5. We model the theoretical predictions of the models using CDM-like N-body simulations with modified initial conditions, and generate reionization fields using an excursion set model. The N-body approximation is valid on the length and halo mass scales studied. We show that LFDM and ULA models predict an increase in the HI power spectrum from the epoch of reionization by a factor between 2 and 10 for a range of scales 0.1 < k < 4 Mpc⁻¹. Assuming a fiducial model where a neutral hydrogen fraction x̄_HI = 0.5 must be achieved by z = 8, the reionization process allows us to put approximate bounds on the redshift of dark matter formation, z_f > 4 × 10⁵ (for LFDM), and on the axion mass, m_a > 2.6 × 10⁻²³ eV (for ULA). The comparison of the collapsed mass fraction inferred from damped Lyman-α observations to the theoretical predictions of our models leads to the weaker bounds z_f > 2 × 10⁵ and m_a > 10⁻²³ eV. These bounds are consistent with other constraints in the literature using different observables; we briefly discuss how these bounds compare with possible constraints from the observation of the luminosity function of galaxies at high redshifts. In the case of ULAs, these constraints are also consistent with a solution to the cusp-core problem of CDM.
A novel approach for pulse width measurements with a high precision (8 ps RMS) TDC in an FPGA
Ugur, C.; Linev, S.; Schweitzer, T.; Traxler, M.; Michel, J.
2016-01-01
High-precision time measurements are a crucial element in particle identification experiments, which likewise require pulse width information for Time-over-Threshold (ToT) measurements and for charge measurements (correlated with pulse width). In almost all FPGA-based TDC applications, pulse width measurements are implemented using two of the TDC channels for the leading and trailing edge time measurements individually. This method, however, requires twice the number of resources. In this paper we present the latest precision improvements to the high-precision TDC (8 ps RMS) developed previously [1], as well as a novel way of measuring ToT using a single TDC channel while still achieving high precision (as low as 11.7 ps RMS). The effect on the precision of the supply voltage generated by a DC-DC converter is also discussed. Finally, the effect of temperature changes on the pulse width measurement is shown, and a correction method is suggested to limit the degradation.
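The ToT-based walk correction mentioned above can be sketched as follows (our illustration; the linear model and its calibration slope are invented values, not TRB3 firmware constants): the pulse width is the difference of the leading- and trailing-edge timestamps, and the leading-edge time is then shifted by a calibrated function of that width.

```python
# Toy ToT computation with a linear time-walk correction.
WALK_SLOPE_PS_PER_NS = 2.0   # assumed calibration constant

def tot_and_corrected_time(t_lead_ps, t_trail_ps):
    """Return (pulse width in ns, walk-corrected leading-edge time in ps)."""
    tot_ns = (t_trail_ps - t_lead_ps) / 1000.0
    corrected = t_lead_ps - WALK_SLOPE_PS_PER_NS * tot_ns
    return tot_ns, corrected

tot, t_corr = tot_and_corrected_time(1_000_000, 1_015_000)
# tot = 15.0 ns; leading edge shifted earlier by 30 ps
```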
Philosophical Roots of Cosmology
Ivanovic, M.
2008-10-01
We consider the philosophical roots of cosmology in earlier Greek philosophy. Our goal is to answer the question: are the earlier Greek theories of a purely philosophical-mythological character, as philosophers have often claimed, or do they have a scientific character? On the basis of methodological criteria, we contend that the latter is the case. In order to answer the question about the contemporary relation between philosophy and cosmology, we consider the following question: is contemporary cosmology completely independent of philosophical conjectures? The answer demands a methodological consideration of the scientific status of contemporary cosmology. We also consider some aspects of the relation between contemporary philosophy and cosmology.
High-precision predictions for the light CP-even Higgs boson mass of the MSSM
Hahn, T.; Hollik, W. [Max-Planck-Institut fuer Physik, Muenchen (Germany); Heinemeyer, S. [Instituto de Fisica de Cantabria (CSIC-UC), Santander (Spain); Rzehak, H. [Freiburg Univ. (Germany). Physikalisches Inst.; Weiglein, G. [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany)
2014-03-15
For the interpretation of the signal discovered in the Higgs searches at the LHC it will be crucial in particular to discriminate between the minimal Higgs sector realised in the Standard Model (SM) and its most commonly studied extension, the Minimal Supersymmetric SM (MSSM). The measured mass value, having already reached the level of a precision observable with an experimental accuracy of about 500 MeV, plays an important role in this context. In the MSSM the mass of the light CP-even Higgs boson, M{sub h}, can directly be predicted from the other parameters of the model. The accuracy of this prediction should at least match the one of the experimental result. The relatively high mass value of about 126 GeV has led to many investigations where the scalar top quarks are in the multi-TeV range. We improve the prediction for M{sub h} in the MSSM by combining the existing fixed-order result, comprising the full one-loop and leading and subleading two-loop corrections, with a resummation of the leading and subleading logarithmic contributions from the scalar top sector to all orders. In this way for the first time a high-precision prediction for the mass of the light CP-even Higgs boson in the MSSM is possible all the way up to the multi-TeV region of the relevant supersymmetric particles. The results are included in the code FeynHiggs.
A High Rigidity and Precision Scanning Tunneling Microscope with Decoupled XY and Z Scans.
Chen, Xu; Guo, Tengfei; Hou, Yubin; Zhang, Jing; Meng, Wenjie; Lu, Qingyou
2017-01-01
A new scan-head structure for the scanning tunneling microscope (STM) is proposed, featuring high scan precision and rigidity. The core structure consists of a piezoelectric tube scanner of quadrant type (for XY scans) coaxially housed in a piezoelectric tube with single inner and outer electrodes (for the Z scan). They are fixed at one end (called the common end). A hollow tantalum shaft is coaxially housed in the XY-scan tube, and the two are mutually fixed at both ends. When the XY scanner scans, its free end carries the shaft with it, and the tip, which is coaxially inserted in the shaft at the common end, scans a smaller area provided it protrudes only slightly from the common end. The decoupled XY and Z scans reduce image distortion, and the mechanically reduced scan range lessens the impact of background electronic noise on the scanner and enhances the tip positioning precision. High quality atomic resolution images are also shown.
An investigation of highly accurate and precise robotic hole measurements using non-contact devices
Usman Zahid
2016-01-01
Industrial robot arms are widely used in the manufacturing industry because of their support for automation. In metrology, however, robots have had limited application due to their insufficient accuracy. Even using error compensation and calibration methods, robots are not effective for micrometre (μm) level metrology. Non-contact measurement devices can potentially enable the use of robots for highly accurate metrology, but the use of such devices on robots has not been investigated. The research work reported in this paper explores the use of different non-contact measurement devices on an industrial robot. The aim is to experimentally investigate the effects of robot movements on the accuracy and precision of measurements. The focus has been on assessing the ability to accurately measure various geometric and surface parameters of holes despite the inherent inaccuracies of the industrial robot. This involves the measurement of diameter, roundness and surface roughness. The study also includes scanning of holes for measuring internal features such as the start and end points of a taper. Two different non-contact measurement devices based on different technologies are investigated. Furthermore, the effects of eccentricity, vibrations and thermal variations are also assessed. The research contributes towards the use of robots for highly accurate and precise robotic metrology.
A High Precision Position Sensor Design and Its Signal Processing Algorithm for a Maglev Train
Wensen Chang
2012-04-01
High precision positioning technology for a kind of high speed maglev train with an electromagnetic suspension (EMS) system is studied. At first, the basic structure and functions of the position sensor are introduced and some key techniques to enhance the positioning precision are designed. Then, in order to further improve the positioning signal quality and the fault-tolerant ability of the sensor, a new kind of discrete-time tracking differentiator (TD) is proposed based on nonlinear optimal control theory. This new TD has good filtering and differentiating performances and a small calculation load. It is suitable for real-time signal processing. The stability, convergence property and frequency characteristics of the TD are studied and analyzed thoroughly. The delay constant of the TD is determined and an effective time delay compensation algorithm is proposed. Based on the TD technology, a filtering process is introduced to improve the positioning signal waveform when the sensor is under bad working conditions, and a two-sensor switching algorithm is designed to eliminate the positioning errors caused by the joint gaps of the long stator. The effectiveness and stability of the sensor and its signal processing algorithms are proved by experiments on a test train during a long-term test run.
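The record above does not reproduce the paper's TD design, so as a generic illustration the sketch below implements a classical discrete-time tracking differentiator in the style of Han's time-optimal construction, which the abstract's "nonlinear optimal control theory" phrasing suggests; the parameters r (speed factor) and h (sampling step) are arbitrary values chosen for the example, not the paper's.

```python
import math

def fhan(x1, x2, r, h):
    """Time-optimal control synthesis for a discrete double integrator
    (Han's construction); r bounds the acceleration, h is the step size."""
    d = r * h
    d0 = d * h
    y = x1 + h * x2
    a0 = math.sqrt(d * d + 8.0 * r * abs(y))
    if abs(y) > d0:
        a = x2 + (a0 - d) / 2.0 * math.copysign(1.0, y)
    else:
        a = x2 + y / h
    if abs(a) > d:
        return -r * math.copysign(1.0, a)
    return -r * a / d

def track(signal, r=100.0, h=0.01):
    """Feed samples of a (possibly noisy) signal; return the filtered
    estimate x1 and the derivative estimate x2 after the last sample."""
    x1, x2 = 0.0, 0.0
    for v in signal:
        u = fhan(x1 - v, x2, r, h)
        x1 += h * x2  # position update uses the previous derivative
        x2 += h * u
    return x1, x2

# Tracking a constant positioning signal: x1 converges to the signal
# value and the derivative estimate x2 settles to zero.
x1, x2 = track([1.0] * 1000)
```

Near the target the update is deadbeat (the tracking error is driven to zero in two steps), which is why this family of differentiators filters noise without the chattering of a saturated sliding-mode scheme.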
Sagala, Ricardo Alfencius; Harjadi, P. J. Prih; Heryandoko, Nova; Sianipar, Dimas
2017-07-01
Sumatra is one of the most seismically active regions in Indonesia. The subduction of the Indo-Australian plate beneath the Eurasian plate in western Sumatra produces many significant earthquakes in this area. These earthquake events can be used to analyze the seismotectonics of the Sumatra subduction zone and its system. In this study we use the teleseismic double-difference method to obtain a higher-precision earthquake distribution in the Sumatra subduction zone, employing a 3D nested regional-global velocity model and a combination of data from the ISC (International Seismological Center) and BMKG (Agency for Meteorology, Climatology and Geophysics, Indonesia). We successfully relocate about 6886 earthquakes that occurred in the period 1981-2015, and we consider these new locations more precise than the regular bulletin: the relocation results show greatly reduced RMS travel-time residuals. Using these data, we construct a new seismotectonic map of Sumatra. A well-constrained geometry of the subducting slab, faults and volcanic arc can be obtained from the new bulletin. The results also show many moderate-to-deep events at depths of 140-170 km, and we consider the relation of these slab events to the volcanic arc and the inland fault system. A reliable slab model is also built by regression using the newly relocated data. We further analyze the spatial-temporal seismotectonics using b-value mapping, inspected in detail in horizontal and vertical cross-sections.
Yan, Peng; Zhang, Yangming
2018-06-01
High performance scanning of nano-manipulators is widely deployed in precision engineering applications such as scanning probe microscopy (SPM), where trajectory tracking of sophisticated reference signals is a challenging control problem. The situation is further complicated when the rate-dependent hysteresis of the piezoelectric actuators and the stress-stiffening-induced nonlinear stiffness of the flexure mechanism are considered. In this paper, a novel control framework is proposed to achieve high precision tracking of a piezoelectric nano-manipulator subject to hysteresis and stiffness nonlinearities. An adaptively parameterized rate-dependent Prandtl-Ishlinskii model is constructed and the corresponding adaptive inverse-model-based online compensation is derived. A robust adaptive control architecture is further introduced to improve the tracking accuracy and robustness of the compensated system, where the parametric uncertainties of the nonlinear dynamics are eliminated by on-line estimation. Comparative experimental studies of the proposed control algorithm are conducted on a PZT-actuated nano-manipulating stage, where hysteresis modeling accuracy and excellent tracking performance are demonstrated in real-time implementations, with significant improvement over existing results.
A Miniaturized Colorimeter with a Novel Design and High Precision for Photometric Detection.
Yan, Jun-Chao; Chen, Yan; Pang, Yu; Slavik, Jan; Zhao, Yun-Fei; Wu, Xiao-Ming; Yang, Yi; Yang, Si-Fan; Ren, Tian-Ling
2018-03-08
Water quality detection plays an increasingly important role in environmental protection. In this work, a novel colorimeter based on the Beer-Lambert law was designed for chemical element detection in water with high precision and a miniaturized structure. As an example, the detection of phosphorus was carried out to evaluate its performance, and a modified algorithm was applied to extend the linear measurable range. The colorimeter comprises a near infrared laser source, a microflow cell based on microfluidic technology and a light-sensitive detector; Micro-Electro-Mechanical System (MEMS) processing technology was used to form a stable integrated structure. Experiments were performed based on the ammonium molybdate spectrophotometric method, including the preparation of phosphorus standard solution, reducing agent and chromogenic agent, and the color reaction. The device achieves a wide linear response range (0.05 mg/L up to 7.60 mg/L), a wide reliable measuring range up to 10.16 mg/L after applying the novel algorithm, and a low limit of detection (0.02 mg/L). The flow cell in this design measures 18 mm × 2.0 mm × 800 μm, giving a low reagent consumption of 0.004 mg ascorbic acid and 0.011 mg ammonium molybdate per determination. With these advantages of miniaturized volume, high precision and low cost, the design can also be used in automated in situ detection.
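The photometric principle the colorimeter relies on is the Beer-Lambert law, A = ε·l·c: absorbance grows linearly with path length and analyte concentration. A minimal sketch of inverting measured intensities to a concentration follows; the molar absorptivity and intensity values are arbitrary illustrative numbers, not the paper's calibration data.

```python
import math

def absorbance(i_ref, i_sample):
    """Beer-Lambert absorbance A = log10(I0 / I) computed from the
    reference intensity I0 and the transmitted intensity I."""
    return math.log10(i_ref / i_sample)

def concentration(i_ref, i_sample, epsilon, path_cm):
    """Invert A = epsilon * l * c for the analyte concentration c.
    epsilon: molar absorptivity (L mol^-1 cm^-1); path_cm: cell length."""
    return absorbance(i_ref, i_sample) / (epsilon * path_cm)

# 90% of the light absorbed over a 1 cm path with epsilon = 2.0:
c = concentration(100.0, 10.0, epsilon=2.0, path_cm=1.0)  # A = 1, c = 0.5
```

The "modified algorithm" mentioned in the abstract extends the usable range beyond where this simple linear inversion holds; its details are not given in the record, so only the basic law is shown here.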
Schwientek, Marc; Guillet, Gaëlle; Rügner, Hermann; Kuch, Bertram; Grathwohl, Peter
2016-01-01
Increasing numbers of organic micropollutants are emitted into rivers via municipal wastewaters. Due to their persistence, many pollutants pass wastewater treatment plants without substantial removal. Transport and fate of pollutants in receiving waters and their export to downstream ecosystems are not well understood; in particular, a better knowledge of the processes governing their environmental behavior is needed. Although many data are available concerning the ubiquitous presence of micropollutants in rivers, accurate data on transport and removal rates are lacking. In this paper, a mass balance approach is presented, which is based on the Lagrangian sampling scheme but extended to account for precise transport velocities and mixing along river stretches. The calculated mass balances allow accurate quantification of the pollutants' reactivity along river segments. This is demonstrated for representative members of important groups of micropollutants, e.g. pharmaceuticals, musk fragrances, flame retardants, and pesticides. A model-aided analysis of the measured data series gives insight into the temporal dynamics of removal processes. The occurrence of different removal mechanisms such as photooxidation, microbial degradation, and volatilization is discussed. The results demonstrate that removal processes are highly variable in time and space, and this has to be considered in future studies. The high precision sampling scheme presented could be a powerful tool for quantifying removal processes under different boundary conditions and in river segments with contrasting properties.
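The Lagrangian mass-balance idea can be reduced to a first-order sketch: if the same water parcel carries pollutant load M_in at the upstream station and M_out downstream after travel time τ, an apparent first-order removal rate follows from k = ln(M_in/M_out)/τ. The snippet below is a generic illustration with made-up loads and travel time, not the paper's full model (which additionally accounts for mixing along the stretch).

```python
import math

def removal_rate(load_in, load_out, travel_time_h):
    """Apparent first-order removal rate constant (1/h) for a pollutant
    load measured on the same water parcel at two river stations."""
    return math.log(load_in / load_out) / travel_time_h

def half_life(k):
    """In-stream half-life (h) corresponding to rate constant k."""
    return math.log(2.0) / k

# A load halved over a 2 h travel time gives k = ln(2)/2 per hour
# and, consistently, a 2 h in-stream half-life.
k = removal_rate(10.0, 5.0, 2.0)
t_half = half_life(k)
```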
The NANOGrav 11-year Data Set: High-precision Timing of 45 Millisecond Pulsars
Arzoumanian, Zaven; Brazier, Adam; Burke-Spolaor, Sarah; Chamberlin, Sydney; Chatterjee, Shami; Christy, Brian; Cordes, James M.; Cornish, Neil J.; Crawford, Fronefield; Thankful Cromartie, H.; Crowter, Kathryn; DeCesar, Megan E.; Demorest, Paul B.; Dolch, Timothy; Ellis, Justin A.; Ferdman, Robert D.; Ferrara, Elizabeth C.; Fonseca, Emmanuel; Garver-Daniels, Nathan; Gentile, Peter A.; Halmrast, Daniel; Huerta, E. A.; Jenet, Fredrick A.; Jessup, Cody; Jones, Glenn; Jones, Megan L.; Kaplan, David L.; Lam, Michael T.; Lazio, T. Joseph W.; Levin, Lina; Lommen, Andrea; Lorimer, Duncan R.; Luo, Jing; Lynch, Ryan S.; Madison, Dustin; Matthews, Allison M.; McLaughlin, Maura A.; McWilliams, Sean T.; Mingarelli, Chiara; Ng, Cherry; Nice, David J.; Pennucci, Timothy T.; Ransom, Scott M.; Ray, Paul S.; Siemens, Xavier; Simon, Joseph; Spiewak, Renée; Stairs, Ingrid H.; Stinebring, Daniel R.; Stovall, Kevin; Swiggum, Joseph K.; Taylor, Stephen R.; Vallisneri, Michele; van Haasteren, Rutger; Vigeland, Sarah J.; Zhu, Weiwei; The NANOGrav Collaboration
2018-04-01
We present high-precision timing data over time spans of up to 11 years for 45 millisecond pulsars observed as part of the North American Nanohertz Observatory for Gravitational Waves (NANOGrav) project, aimed at detecting and characterizing low-frequency gravitational waves. The pulsars were observed with the Arecibo Observatory and/or the Green Bank Telescope at frequencies ranging from 327 MHz to 2.3 GHz. Most pulsars were observed with approximately monthly cadence, and six high-timing-precision pulsars were observed weekly. All were observed at widely separated frequencies at each observing epoch in order to fit for time-variable dispersion delays. We describe our methods for data processing, time-of-arrival (TOA) calculation, and the implementation of a new, automated method for removing outlier TOAs. We fit a timing model for each pulsar that includes spin, astrometric, and (for binary pulsars) orbital parameters; time-variable dispersion delays; and parameters that quantify pulse-profile evolution with frequency. The timing solutions provide three new parallax measurements, two new Shapiro delay measurements, and two new measurements of significant orbital-period variations. We fit models that characterize sources of noise for each pulsar. We find that 11 pulsars show significant red noise, with generally smaller spectral indices than typically measured for non-recycled pulsars, possibly suggesting a different origin. A companion paper uses these data to constrain the strength of the gravitational-wave background.
High precision locations of long-period events at La Fossa Crater (Vulcano Island, Italy)
Salvatore Rapisarda
2009-06-01
Since the last eruption in 1888-90, the volcanic activity on Vulcano Island (Aeolian Archipelago, Italy) has been limited to fumarolic degassing. Fumaroles are mainly concentrated at the active cone of La Fossa in the northern sector of the island and are periodically characterized by increases in temperature as well as in the amount of both CO2 and He. Seismic background activity at Vulcano is dominated by micro-seismicity originating at shallow depth (<1-1.5 km) under La Fossa cone. This seismicity is related to geothermal system processes and comprises long period (LP) events, generally considered the resonance of a fluid-filled volume in response to a trigger. We analyzed LP events recorded during an anomalous degassing period (August-October 2006), applying a high precision technique to define the shape of the trigger source. Absolute and high precision locations suggest that the LP events recorded at Vulcano during 2006 were produced by a shallow focal zone ca. 200 m long, 40 m wide and oriented N30-40E. Their occurrence is linked to inputs of magmatic fluid that, by modifying the hydrothermal system, cause excitation of a fluid-filled cavity.
Kapusta, Aurélie; Matsuda, Atsushi; Marmignon, Antoine; Ku, Michael; Silve, Aude; Meyer, Eric; Forney, James D; Malinsky, Sophie; Bétermier, Mireille
2011-04-01
During the sexual cycle of the ciliate Paramecium, assembly of the somatic genome includes the precise excision of tens of thousands of short, non-coding germline sequences (Internal Eliminated Sequences or IESs), each one flanked by two TA dinucleotides. It has been reported previously that these genome rearrangements are initiated by the introduction of developmentally programmed DNA double-strand breaks (DSBs), which depend on the domesticated transposase PiggyMac. These DSBs all exhibit a characteristic geometry, with 4-base 5' overhangs centered on the conserved TA, and may readily align and undergo ligation with minimal processing. However, the molecular steps and actors involved in the final and precise assembly of somatic genes have remained unknown. We demonstrate here that Ligase IV and Xrcc4p, core components of the non-homologous end-joining pathway (NHEJ), are required both for the repair of IES excision sites and for the circularization of excised IESs. The transcription of LIG4 and XRCC4 is induced early during the sexual cycle and a Lig4p-GFP fusion protein accumulates in the developing somatic nucleus by the time IES excision takes place. RNAi-mediated silencing of either gene results in the persistence of free broken DNA ends, apparently protected against extensive resection. At the nucleotide level, controlled removal of the 5'-terminal nucleotide occurs normally in LIG4-silenced cells, while nucleotide addition to the 3' ends of the breaks is blocked, together with the final joining step, indicative of a coupling between NHEJ polymerase and ligase activities. Taken together, our data indicate that IES excision is a "cut-and-close" mechanism, which involves the introduction of initiating double-strand cleavages at both ends of each IES, followed by DSB repair via highly precise end joining. This work broadens our current view on how the cellular NHEJ pathway has cooperated with domesticated transposases for the emergence of new mechanisms.
High-Precision Half-Life Measurements for the Superallowed Fermi β+ Emitters 14O and 18Ne
Laffoley, A. T.; Andreoiu, C.; Austin, R. A. E.; Ball, G. C.; Bender, P. C.; Bidaman, H.; Bildstein, V.; Blank, B.; Bouzomita, H.; Cross, D. S.; Deng, G.; Diaz Varela, A.; Dunlop, M. R.; Dunlop, R.; Finlay, P.; Garnsworthy, A. B.; Garrett, P.; Giovinazzo, J.; Grinyer, G. F.; Grinyer, J.; Hadinia, B.; Jamieson, D. S.; Jigmeddorj, B.; Ketelhut, S.; Kisliuk, D.; Leach, K. G.; Leslie, J. R.; MacLean, A.; Miller, D.; Mills, B.; Moukaddam, M.; Radich, A. J.; Rajabali, M. M.; Rand, E. T.; Svensson, C. E.; Tardiff, E.; Thomas, J. C.; Turko, J.; Voss, P.; Unsworth, C.
High-precision half-life measurements, at the level of ±0.04%, for the superallowed Fermi emitters 14O and 18Ne have been performed at TRIUMF's Isotope Separator and Accelerator facility. Using three independent detector systems (a gas proportional counter, a fast plastic scintillator, and a high-purity germanium array), a series of direct β and γ counting measurements were performed for each of the isotopes. In the case of 14O, these measurements were made to help resolve an existing discrepancy between detection methods, whereas for 18Ne the half-life precision has been improved in anticipation of forthcoming high-precision branching ratio measurements.
High Precision Stokes Polarimetry for Scattering Light using Wide Dynamic Range Intensity Detector
Shibata Shuhei
2015-01-01
This paper proposes a Stokes polarimeter for light scattered from a sample surface. To achieve high measurement accuracy, two elements are proposed: a wide dynamic range intensity detector and an analysis algorithm for the Stokes parameters. The dynamic range of the detector reaches up to 10^10 through a combination of interchangeable neutral-density (ND) filters of different densities and photon counting units. Stokes parameters are measured by dual rotation of a retarder and an analyzer, and the dual-rotating polarimeter algorithm can calibrate out small linear diattenuation and linear retardance errors of the retarder. The system can measure Stokes parameters at scattering angles from −20° to 70°, making it possible to measure, with high precision, the Stokes parameters of light scattered by dust and by scratches on optical devices. This paper demonstrates the accuracy of the system by checking the polarization change with scattering angle and the influence of beam size.
High-precision soft x-ray polarimeter at Diamond Light Source.
Wang, H; Dhesi, S S; Maccherozzi, F; Cavill, S; Shepherd, E; Yuan, F; Deshmukh, R; Scott, S; van der Laan, G; Sawhney, K J S
2011-12-01
The development and performance of a high-precision polarimeter for polarization analysis in the soft x-ray region is presented. This versatile, high-vacuum compatible instrument is supported on a hexapod to simplify alignment, with a resolution better than 5 μrad, and with its own independent control system can be moved easily between different beamlines and synchrotron facilities. The polarimeter can also be used to characterize the reflection and transmission properties of optical elements. A W/B4C multilayer phase retarder was used to characterize the polarization state up to 1200 eV. A fast and accurate alignment procedure was developed, and a complete polarization analysis of the APPLE II undulator at 712 eV has been performed.
Kim, Jong-Ahn; Kim, Jae Wan; Kang, Chu-Shik; Jin, Jonghan; Eom, Tae Bong
2011-11-01
We present an angle generator with high resolution and accuracy, which uses multiple ultrasonic motors and a self-calibratable encoder. A cylindrical air bearing guides the rotational motion, and the ultrasonic motors achieve high resolution over the full circle range with a simple configuration. The self-calibratable encoder can effectively compensate the scale error of the divided circle (signal period: 20") by applying the equal-division-averaged method. The angle generator configures a position feedback control loop using the readout of the encoder. By combining the ac and dc operation modes, the angle generator produced stepwise angular motion with 0.005" resolution. We also evaluated the performance of the angle generator using a precision angle encoder and an autocollimator. The expanded uncertainty (k = 2) in the angle generation was estimated to be less than 0.03", which includes the calibrated scale error and the nonlinearity error.
Flow-Based Systems for Rapid and High-Precision Enzyme Kinetics Studies
Supaporn Kradtap Hartwell
2012-01-01
Enzyme kinetics studies normally focus on the initial rate of the enzymatic reaction. However, the manual steps of the conventional enzyme kinetics method have some drawbacks: errors can result from imprecise time control and from the time needed to manually move reaction cuvettes into and out of the detector. By using automatic flow-based analytical systems, enzyme kinetics studies can be carried out at the real-time initial rate, avoiding the potential errors inherent in manual operation. Flow-based systems have been developed to provide rapid, low-volume, high-precision analyses that effectively replace many of the tedious and high-volume requirements of conventional wet chemistry analyses. This article presents various arrangements of flow-based techniques and their potential use in future enzyme kinetics applications.
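The initial rates that such flow systems automate are conventionally interpreted through the Michaelis-Menten equation, v0 = Vmax·[S]/(Km + [S]); the sketch below illustrates the relation with arbitrary example parameters, not values from the article.

```python
def initial_rate(s, v_max, k_m):
    """Michaelis-Menten initial reaction rate v0 for substrate
    concentration s, maximum rate v_max and Michaelis constant k_m."""
    return v_max * s / (k_m + s)

# At s = Km the rate is half-maximal, the defining property of Km.
v0 = initial_rate(s=2.0, v_max=10.0, k_m=2.0)  # -> 5.0
```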
Song, J.L.; Li, Y.T.; Liu, Z.Q.; Fu, J.H.; Ting, K.L.
2009-01-01
To address the disadvantages of conventional bar cutting technology, such as low cutting speed, inferior section quality and high processing cost, a novel precision bar cutting technology has been proposed and its cutting mechanism analyzed. Finite element numerical simulation of the bar cutting process under different working conditions has been carried out with DEFORM. The stress and strain fields at different cutting speeds, the variation curves of the cutting force and appropriate cutting parameters have been obtained. Scanning electron microscopy analysis of the cut surface showed that the finite element simulation results are correct and that better cutting quality can be obtained with the developed bar cutting technology and equipment, based on high speed and a restrained state.
Ding, Chunling; Li, Jiahua; Yu, Rong; Hao, Xiangying; Wu, Ying
2012-03-26
A scheme for realizing two-dimensional (2D) atom localization is proposed based on controllable spontaneous emission in a coherently driven cycle-configuration atomic system. Because the atom-field interaction depends on spatial position, the frequency of the spontaneously emitted photon carries information about the position of the atom. By detecting the emitted photon one can therefore obtain the position information, and we demonstrate high-precision, high-resolution 2D atom localization induced by quantum interference between the multiple spontaneous decay channels. Moreover, 100% probability of finding the atom at an expected position can be achieved by choosing appropriate system parameters under certain conditions.
High precision Cross-correlated imaging in Few-mode fibers
Muliar, Olena; Usuga Castaneda, Mario A.; Kristensen, Torben
2017-01-01
Space-division multiplexing, in which higher-order modes (HOMs) in a few-mode fiber (FMF) are used as multiple spatial communication channels, comes in this context as a viable approach to enable the optimization of high-capacity links. From this perspective, it becomes highly necessary to possess a diagnostic tool for the precise modal characterization of FMFs. Among existing approaches for modal content analysis, several methods such as S2 and C2, in the time and frequency domains, are available. In this contribution we present an improved time-domain cross-correlated (C2) imaging technique for the experimental evaluation of modal properties in HOM fibers over a broad range, allowing us to distinguish differential time delays between HOMs on the picosecond timescale. Broad wavelength scanning in combination with spectral shaping allows us to estimate the modal behavior of an FMF without prior knowledge of the fiber parameters. We performed our demonstration at wavelengths from…
Bojowald, Martin
2008-01-01
Quantum gravity is expected to be necessary in order to understand situations in which classical general relativity breaks down. In particular in cosmology one has to deal with initial singularities, i.e., the fact that the backward evolution of a classical spacetime inevitably comes to an end after a finite amount of proper time. This presents a breakdown of the classical picture and requires an extended theory for a meaningful description. Since small length scales and high curvatures are involved, quantum effects must play a role. Not only the singularity itself but also the surrounding spacetime is then modified. One particular theory is loop quantum cosmology, an application of loop quantum gravity to homogeneous systems, which removes classical singularities. Its implications can be studied at different levels. The main effects are introduced into effective classical equations, which allow one to avoid the interpretational problems of quantum theory. They give rise to new kinds of early-universe phenomenology with applications to inflation and cyclic models. To resolve classical singularities and to understand the structure of geometry around them, the quantum description is necessary. Classical evolution is then replaced by a difference equation for a wave function, which allows an extension of quantum spacetime beyond classical singularities. One main question is how these homogeneous scenarios are related to full loop quantum gravity, which can be dealt with at the level of distributional symmetric states. Finally, the new structure of spacetime arising in loop quantum gravity and its application to cosmology sheds light on more general issues, such as the nature of time. Supplementary material is available for this article at 10.12942/lrr-2008-4.
Bojowald Martin
2008-07-01
Quantum gravity is expected to be necessary in order to understand situations in which classical general relativity breaks down. In particular in cosmology one has to deal with initial singularities, i.e., the fact that the backward evolution of a classical spacetime inevitably comes to an end after a finite amount of proper time. This presents a breakdown of the classical picture and requires an extended theory for a meaningful description. Since small length scales and high curvatures are involved, quantum effects must play a role. Not only the singularity itself but also the surrounding spacetime is then modified. One particular theory is loop quantum cosmology, an application of loop quantum gravity to homogeneous systems, which removes classical singularities. Its implications can be studied at different levels. The main effects are introduced into effective classical equations, which allow one to avoid the interpretational problems of quantum theory. They give rise to new kinds of early-universe phenomenology with applications to inflation and cyclic models. To resolve classical singularities and to understand the structure of geometry around them, the quantum description is necessary. Classical evolution is then replaced by a difference equation for a wave function, which allows an extension of quantum spacetime beyond classical singularities. One main question is how these homogeneous scenarios are related to full loop quantum gravity, which can be dealt with at the level of distributional symmetric states. Finally, the new structure of spacetime arising in loop quantum gravity and its application to cosmology sheds light on more general issues, such as the nature of time.
Bojowald Martin
2005-12-01
Quantum gravity is expected to be necessary in order to understand situations where classical general relativity breaks down. In particular in cosmology one has to deal with initial singularities, i.e., the fact that the backward evolution of a classical space-time inevitably comes to an end after a finite amount of proper time. This presents a breakdown of the classical picture and requires an extended theory for a meaningful description. Since small length scales and high curvatures are involved, quantum effects must play a role. Not only the singularity itself but also the surrounding space-time is then modified. One particular realization is loop quantum cosmology, an application of loop quantum gravity to homogeneous systems, which removes classical singularities. Its implications can be studied at different levels. The main effects are introduced into effective classical equations which allow one to avoid the interpretational problems of quantum theory. They give rise to new kinds of early-universe phenomenology with applications to inflation and cyclic models. To resolve classical singularities and to understand the structure of geometry around them, the quantum description is necessary. Classical evolution is then replaced by a difference equation for a wave function which allows one to extend space-time beyond classical singularities. One main question is how these homogeneous scenarios are related to full loop quantum gravity, which can be dealt with at the level of distributional symmetric states. Finally, the new structure of space-time arising in loop quantum gravity and its application to cosmology sheds new light on more general issues such as time.
High-precision GPS autonomous platforms for sea ice dynamics and physical oceanography
Elosegui, P.; Wilkinson, J.; Olsson, M.; Rodwell, S.; James, A.; Hagan, B.; Hwang, B.; Forsberg, R.; Gerdes, R.; Johannessen, J.; Wadhams, P.; Nettles, M.; Padman, L.
2012-12-01
The project "Arctic Ocean sea ice and ocean circulation using satellite methods" (SATICE) is the first high-rate, high-precision, continuous GPS positioning experiment on sea ice in the Arctic Ocean. The SATICE systems collect continuous, dual-frequency carrier-phase GPS data while drifting on sea ice. Additional geophysical measurements collected include ocean water pressure, ocean surface salinity, atmospheric pressure, snow depth, air-ice-ocean temperature profiles, photographic imagery, and others, enabling determination of sea ice drift, freeboard, weather, ice mass balance, and sea-level height. Relatively large volumes of data from each buoy are streamed over a satellite link to a central computer on the Internet in near real time, where they are processed to estimate the time-varying buoy positions. The SATICE system obtains continuous GPS data at sub-minute intervals with a positioning precision of a few centimetres in all three dimensions. Although monitoring of sea ice motion goes back to the early days of satellite observations, these autonomous platforms deliver a level of spatio-temporal detail never seen before, especially in the vertical axis. These high-resolution data allow us to address new polar science questions and challenge our present understanding of both sea ice dynamics and Arctic oceanography. We describe the technology behind this new autonomous platform, which could also be adapted to other applications requiring high-resolution positioning with sustained operations and observations in the polar marine environment, and present results pertaining to sea ice dynamics and physical oceanography.
The STiC ASIC. High precision timing with silicon photomultipliers
Harion, Tobias
2015-01-01
In recent years, Silicon Photomultipliers are being increasingly used for Time of Flight measurements in particle detectors. To utilize the high intrinsic time resolution of these sensors in detector systems, the development of specialized, highly integrated readout electronics is required. In this thesis, a mixed-signal application specific integrated circuit, named STiC, has been developed, characterized and integrated in a detector system. STiC has been specifically designed for high precision timing measurements with SiPMs, and is in particular dedicated to the EndoTOFPET-US project, which aims to achieve a coincidence time resolution of 200 ps FWHM and an energy resolution of less than 20% in an endoscopic positron emission tomography system. The chip integrates 64 high precision readout channels for SiPMs together with a digital core logic to process, store and transfer the recorded events to a data acquisition system. The performance of the chip has been validated in coincidence measurements using detector modules consisting of 3.1 x 3.1 x 15 mm³ LYSO crystals coupled to Silicon Photomultipliers from Hamamatsu. The measurements show an energy resolution of 15% FWHM for the detection of 511 keV photons. A coincidence time resolution of 213 ps FWHM has been measured, which is among the best resolution values achieved to date with this detector topology. STiC has been integrated in the EndoTOFPET-US detector system and has been chosen as the baseline design for the readout of SiPM sensors in the Mu3e experiment.
Cosmology seeking friendship with sterile neutrinos
Hamann, Jan; Hannestad, Steen; Raffelt, G.G.
2011-01-01
Precision cosmology and big-bang nucleosynthesis mildly favour extra radiation in the universe beyond photons and ordinary neutrinos, lending support to the existence of low-mass sterile neutrinos. We present bounds on the common mass scale ms and effective number Ns of thermally excited sterile ...
Cosmology with cosmic microwave background anisotropy
Measurements of CMB anisotropy and, more recently, polarization have played a very important role in allowing precise determination of various parameters of the `standard' cosmological model. The expectation of the paradigm of inflation and the generic prediction of the simplest realization of inflationary scenario in the ...
High precision measurement of the {eta} meson mass at COSY-ANKE
Goslawski, Paul
2013-07-01
Previous measurements of the η meson mass performed at different experimental facilities resulted in very precise data but differ by up to more than eight standard deviations, i.e., 0.5 MeV/c². Interestingly, the difference seems to depend on the measuring method: two missing-mass experiments, which produce the η meson in the ³Heη final state, deviate from the recent invariant-mass ones. In order to clarify this ambiguous situation, a high precision mass measurement was realised at the COSY-ANKE facility. A set of deuteron laboratory beam momenta and their associated ³He centre-of-mass momenta was measured in the dp → ³HeX reaction near the η production threshold. The η meson was identified by the missing-mass peak, and its mass was extracted by fixing the production threshold. The individual beam momenta were determined with a relative precision of 3 × 10⁻⁵ for values just above 3 GeV/c by using a polarised deuteron beam and inducing an artificial depolarising spin resonance at a well-defined frequency. The final-state momenta in the two-body reaction dp → ³Heη were investigated in detail by studying the size of the ³He momentum sphere with the forward detection system of the ANKE spectrometer. Final alignment and momentum calibration of the spectrometer were achieved by a comprehensive study of the ³He final-state momenta as a function of the centre-of-mass angles, taking advantage of the full geometrical acceptance. The value obtained at COSY-ANKE, m_η = (547.873 ± 0.005(stat.) ± 0.027(syst.)) MeV/c², is therefore the most precise worldwide. It is contrary to the earlier missing-mass experiments but consistent and competitive with recent invariant-mass measurements, in which the meson was detected through its decay products.
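The quoted result carries separate statistical and systematic uncertainties; combining them in quadrature (a standard convention, assumed here rather than stated in the abstract) gives the total error budget:

```python
import math

def combine_in_quadrature(stat, syst):
    """Total uncertainty from independent statistical and systematic parts."""
    return math.sqrt(stat ** 2 + syst ** 2)

# COSY-ANKE value: m_eta = 547.873 +/- 0.005 (stat.) +/- 0.027 (syst.) MeV/c^2
total = combine_in_quadrature(0.005, 0.027)
print(f"m_eta = 547.873 +/- {total:.3f} MeV/c^2")
```

The systematic term clearly dominates, which is why the abstract emphasises the spectrometer calibration.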
CHEOPS: a space telescope for ultra-high precision photometry of exoplanet transits
Cessa, V.; Beck, T.; Benz, W.; Broeg, C.; Ehrenreich, D.; Fortier, A.; Peter, G.; Magrin, D.; Pagano, I.; Plesseria, J.-Y.; Steller, M.; Szoke, J.; Thomas, N.; Ragazzoni, R.; Wildi, F.
2017-11-01
The CHaracterising ExOPlanet Satellite (CHEOPS) is a joint ESA-Switzerland space mission dedicated to the search for exoplanet transits by means of ultra-high precision photometry, with launch readiness expected at the end of 2017. The CHEOPS instrument will be the first space telescope dedicated to searching for transits of bright stars already known to host planets. By being able to point at nearly any location on the sky, it will provide the unique capability of determining accurate radii for a subset of those planets for which the mass has already been estimated from ground-based spectroscopic surveys. CHEOPS will also provide precision radii for new planets discovered by the next generation of ground-based transit surveys (Neptune-size and smaller). The main science goal of the CHEOPS mission is to study the structure of exoplanets with radii typically ranging from 1 to 6 Earth radii orbiting bright stars. With an accurate knowledge of masses and radii for an unprecedented sample of planets, CHEOPS will set new constraints on the structure, and hence on the formation and evolution, of planets in this mass range. To reach its goals CHEOPS will measure photometric signals with a precision of 20 ppm in 6 hours of integration time for a 9th magnitude star. This corresponds to a signal-to-noise ratio of 5 for a transit of an Earth-sized planet orbiting a solar-type star (0.9 solar radii). This precision will be achieved by using a single frame-transfer back-illuminated CCD detector cooled to 233 K and stabilized to within ±10 mK. The CHEOPS optical design is based on a Ritchey-Chretien style telescope with a 300 mm effective aperture diameter, which provides a defocussed image of the target star while minimizing straylight using a dedicated field stop and baffle system. As CHEOPS will be in a LEO orbit, straylight suppression is a key point in allowing the observation of faint stars. The telescope will be the only payload on a spacecraft platform providing pointing stability of
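The quoted signal-to-noise figure can be sanity-checked from disk-area geometry: an Earth-sized transit across a 0.9 solar-radius star is roughly a 100 ppm dip, so a 20 ppm noise floor gives S/N near 5. A minimal sketch, with nominal radius values assumed (not taken from the abstract):

```python
R_EARTH_KM = 6371.0    # nominal Earth radius (assumed value)
R_SUN_KM = 695700.0    # nominal solar radius (assumed value)

def transit_depth_ppm(r_planet_km, r_star_km):
    """Transit depth as the ratio of projected disk areas, in parts per million."""
    return (r_planet_km / r_star_km) ** 2 * 1e6

depth = transit_depth_ppm(R_EARTH_KM, 0.9 * R_SUN_KM)
snr = depth / 20.0  # 20 ppm photometric noise in 6 h of integration, as quoted
print(f"transit depth = {depth:.0f} ppm, S/N = {snr:.1f}")
```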
Labeyrie, Antoine; Le Coroller, Herve; Dejonghe, Julien; Lardiere, Olivier; Aime, Claude; Dohlen, Kjetil; Mourard, Denis; Lyon, Richard; Carpenter, Kenneth G.
2008-01-01
Luciola is a large (one kilometer) "multi-aperture densified-pupil imaging interferometer", or "hypertelescope", employing many small apertures, rather than a few large ones, to obtain direct snapshot images with a high information content. A diluted collector mirror, deployed in space as a flotilla of small mirrors, focuses a sky image which is exploited by several beam-combiner spaceships. Each contains a pupil-densifier micro-lens array to avoid the diffractive spread and image attenuation caused by the small sub-apertures. The elucidation of hypertelescope imaging properties during the last decade has shown that many small apertures tend to be far more efficient, regarding the science yield, than a few large ones providing a comparable collecting area. For similar underlying physical reasons, radio astronomy has also evolved in the direction of many-antenna systems such as the proposed Low Frequency Array, with hundreds of thousands of individual receivers. With its high limiting magnitude, reaching the mv=30 limit of HST when 100 collectors of 25 cm match its collecting area, high-resolution direct imaging in multiple channels, broad spectral coverage from the 1200 Angstrom ultraviolet to the 20 micron infrared, and apodization, coronagraphic and spectroscopic capabilities, the proposed hypertelescope observatory addresses very broad and innovative science covering different areas of ESA's Cosmic Vision program. In the initial phase, a focal spacecraft covering the UV to near-IR spectral range of EMCCD photon-counting cameras (currently 200 to 1000 nm) will image details on the surface of many stars, as well as their environment, including multiple stars and clusters. Spectra will be obtained for each resel. It will also image neutron-star, black-hole and micro-quasar candidates, as well as active galactic nuclei, quasars, gravitational lenses, and other Cosmic Vision targets observable with the initial modest crowding limit. With subsequent upgrade
ZHOU Yuliang; LU Guihua; JIN Juliang; TONG Fang; ZHOU Ping
2006-01-01
Precise comprehensive evaluation of flood disaster loss is significant for the prevention and mitigation of flood disasters. One of the difficulties involved is establishing a model capable of describing the complex relation between the input and output data of the flood disaster loss system. Genetic programming (GP) solves problems by using ideas from genetic algorithms and generates computer programs automatically. In this study a new method, the evaluation of the grade of flood disaster loss (EGFD) on the basis of improved genetic programming (IGP), is presented (IGP-EGFD). The flood disaster area and the direct economic loss are taken as the evaluation indexes of flood disaster loss; obviously, the larger the evaluation index value, the larger the corresponding grade of flood disaster loss. Consequently the IGP code is designed to make the grade of flood disaster loss an increasing function of the index value. The application of the IGP-EGFD model to Henan Province shows that a good function expression can be obtained within a larger searched function space, and that the model is of high precision and considerable practical significance. Thus, IGP-EGFD can be widely used in automatic modeling and other evaluation systems.
High-precision tracking of brownian boomerang colloidal particles confined in quasi two dimensions.
Chakrabarty, Ayan; Wang, Feng; Fan, Chun-Zhen; Sun, Kai; Wei, Qi-Huo
2013-11-26
In this article, we present a high-precision image-processing algorithm for tracking the translational and rotational Brownian motion of boomerang-shaped colloidal particles confined in quasi-two-dimensional geometry. By measuring mean square displacements of an immobilized particle, we demonstrate that the positional and angular precision of our imaging and image-processing system can achieve 13 nm and 0.004 rad, respectively. By analyzing computer-simulated images, we demonstrate that the positional and angular accuracies of our image-processing algorithm can achieve 32 nm and 0.006 rad. Because of zero correlations between the displacements in neighboring time intervals, trajectories of different videos of the same particle can be merged into a very long time trajectory, allowing for long-time averaging of different physical variables. We apply this image-processing algorithm to measure the diffusion coefficients of boomerang particles of three different apex angles and discuss the angle dependence of these diffusion coefficients.
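The paper's precision estimate from an immobilized particle can be illustrated with a toy mean-square-displacement (MSD) calculation: for pure localization noise the 1-D MSD plateaus at twice the noise variance, so the precision is recoverable from it. A sketch under that assumption, using synthetic data rather than the paper's algorithm:

```python
import random

def msd_1d(xs, lag):
    """Mean square displacement of a 1-D trajectory at an integer lag (frames)."""
    disps = [(xs[i + lag] - xs[i]) ** 2 for i in range(len(xs) - lag)]
    return sum(disps) / len(disps)

# Immobilized particle: apparent motion is pure localization noise (std 13 nm),
# so MSD(lag) ~ 2 * sigma^2 for every lag and sigma can be read back out.
random.seed(0)
noise_nm = 13.0
traj = [random.gauss(0.0, noise_nm) for _ in range(20000)]
sigma_est = (msd_1d(traj, 10) / 2.0) ** 0.5
print(f"recovered localization precision ~ {sigma_est:.1f} nm")
```

For a genuinely diffusing particle the MSD instead grows linearly with lag, which is how the diffusion coefficients quoted in the abstract are obtained.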
Artificial incoherent speckles enable precision astrometry and photometry in high-contrast imaging
Jovanovic, N.; Guyon, O.; Pathak, P.; Kudo, T. [National Astronomical Observatory of Japan, Subaru Telescope, 650 North A’Ohoku Place, Hilo, HI, 96720 (United States); Martinache, F. [Observatoire de la Cote d’Azur, Boulevard de l’Observatoire, F-06304 Nice (France); Hagelberg, J., E-mail: jovanovic.nem@gmail.com [Institute for Astronomy, University of Hawaii, 2680 Woodlawn Drive, Honolulu, HI 96822 (United States)
2015-11-10
State-of-the-art coronagraphs employed on extreme adaptive optics enabled instruments are constantly improving the contrast detection limit for companions at ever-closer separations from the host star. In order to constrain their properties and, ultimately, compositions, it is important to precisely determine orbital parameters and contrasts with respect to the stars they orbit. This can be difficult in the post-coronagraphic image plane, as by definition the central star has been occulted by the coronagraph. We demonstrate the flexibility of utilizing the deformable mirror in the adaptive optics system of the Subaru Coronagraphic Extreme Adaptive Optics system to generate a field of speckles for the purposes of calibration. Speckles can be placed up to 22.5 λ/D from the star, with any position angle, brightness, and abundance required. Most importantly, we show that a fast modulation of the added speckle phase, between 0 and π, during a long science integration renders these speckles effectively incoherent with the underlying halo. We quantitatively show for the first time that this incoherence, in turn, increases the robustness and stability of the adaptive speckles, which will improve the precision of astrometric and photometric calibration procedures. This technique will be valuable for high-contrast imaging observations with imagers and integral field spectrographs alike.
High-precision GNSS ocean positioning with BeiDou short-message communication
Li, Bofeng; Zhang, Zhiteng; Zang, Nan; Wang, Siyao
2018-04-01
The current popular GNSS RTK technique is not applicable on the ocean due to the limited communication access for transmitting differential corrections. A new technique, referred to as ocean RTK (ORTK), is proposed for high-precision ocean positioning, in which the corrections are transmitted using BeiDou satellite short-message communication (SMC). To overcome the narrow bandwidth of BeiDou SMC, a new strategy of simplifying and encoding the corrections is proposed in place of standard differential corrections, reducing the single-epoch corrections from more than 1000 to fewer than 300 bytes. To handle correction delays, cycle slips, blunders and abnormal epochs over ultra-long baselines, a series of powerful algorithms was designed in the user-end software to achieve stable and precise kinematic solutions in far-ocean applications. The results from two long baselines of 240 and 420 km and from real ocean experiments reveal that kinematic solutions with a horizontal accuracy of 5 cm and a vertical accuracy better than 15 cm are achievable with a convergence time of 3-10 min. Compared to commercial ocean PPP with satellite telecommunication, ORTK is much cheaper, more accurate and faster to converge, and it is very promising for many location-based ocean services.
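The byte-budget argument can be illustrated with a compact binary layout: packing each satellite's correction as a 1-byte PRN plus a 16-bit millimetre-resolution integer keeps a 30-satellite epoch well under the 300-byte figure. The field layout below is purely hypothetical, not the paper's actual encoding:

```python
import struct

def encode_corrections(epoch, corrections_m):
    """Pack per-satellite corrections as 16-bit integers in millimetres
    (hypothetical layout: 4-byte epoch, 1-byte count, then PRN + value pairs)
    instead of verbose standard-format text, to fit narrow-bandwidth SMC."""
    payload = struct.pack("<Ib", epoch, len(corrections_m))
    for prn, corr in corrections_m:
        payload += struct.pack("<Bh", prn, round(corr * 1000))
    return payload

# 30 satellites' corrections fit comfortably under 300 bytes:
msg = encode_corrections(epoch=123456,
                         corrections_m=[(i, 0.001 * i) for i in range(1, 31)])
print(len(msg))  # 5 header bytes + 30 * 3 bytes = 95 bytes
```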
Zhiguo Huang
2017-11-01
Infrared (IR) radiometry is an important method for characterizing the IR signature of targets such as aircraft or rockets. However, the received target signal can be reduced by a combination of atmospheric molecular absorption and aerosol scattering; atmospheric correction is therefore a requisite step for obtaining the real radiance of targets. Conventionally, the atmospheric transmittance and the air-path radiance are calculated with atmospheric radiative-transfer software. In this paper, an improved IR radiometric method based on constant-reference correction of atmospheric attenuation is proposed. The basic principle and procedure of the method are introduced, and a linear model of high-speed calibration that accounts for the integration time is employed and confirmed, making the method applicable in various complex conditions. To eliminate stochastic errors, radiometric experiments were conducted at multiple integration times. Finally, several experiments were performed on a mid-wave IR system with a Φ600 mm aperture. The results indicate that the radiation inversion precision of the novel method is 4.78-4.89%, while that of the conventional method is 10.86-13.81%.
A High Precision $3.50 Open Source 3D Printed Rain Gauge Calibrator
Lopez Alcala, J. M.; Udell, C.; Selker, J. S.
2017-12-01
Currently available rain gauge calibrators tend to be designed for specific rain gauges, are expensive, employ low-precision water reservoirs, and do not offer the flexibility needed to test the ever more popular small-aperture rain gauges. The objective of this project was to develop and validate a freely downloadable, open-source, 3D printed rain gauge calibrator that can be adjusted for a wide range of gauges. The proposed calibrator provides for applying low, medium, and high intensity flow, and allows the user to modify the design to conform to unique system specifications based on parametric design, which may be modified and printed using CAD software. To overcome the fact that different 3D printers yield different print qualities, we devised a simple post-printing step that controlled critical dimensions to assure robust performance. Specifically, the three orifices of the calibrator are drilled to reach the three target flow rates. Laboratory tests showed that flow rates were consistent between prints, and between trials of each part, while the total applied water was precisely controlled by the use of a volumetric flask as the reservoir.
Microsurgery robots: addressing the needs of high-precision surgical interventions.
Mattos, Leonardo S; Caldwell, Darwin G; Peretti, Giorgio; Mora, Francesco; Guastini, Luca; Cingolani, Roberto
2016-01-01
Robotics has a significant potential to enhance the overall capacity and efficiency of healthcare systems. Robots can help surgeons perform better quality operations, leading to reductions in the hospitalisation time of patients and in the impact of surgery on their postoperative quality of life. In particular, robotics can have a significant impact on microsurgery, which presents stringent requirements for superhuman precision and control of the surgical tools. Microsurgery is, in fact, expected to gain importance in a growing range of surgical specialties as novel technologies progressively enable the detection, diagnosis and treatment of diseases at earlier stages. Within such scenarios, robotic microsurgery emerges as one of the key components of future surgical interventions, and will be a vital technology for addressing major surgical challenges. Nonetheless, several issues have yet to be overcome in terms of mechatronics, perception and surgeon-robot interfaces before microsurgical robots can achieve their full potential in operating rooms. Research in this direction is progressing quickly and microsurgery robot prototypes are gradually demonstrating significant clinical benefits in challenging applications such as reconstructive plastic surgery, ophthalmology, otology and laryngology. These are reassuring results offering confidence in a brighter future for high-precision surgical interventions.
Precise coulometric titration of uranium in a high-purity uranium metal and in uranium compounds
Tanaka, Tatsuhiko; Yoshimori, Takayoshi
1975-01-01
Uranium in uranyl nitrate, uranium trioxide and a high-purity uranium metal was assayed by coulometric titration with biamperometric end-point detection. Uranium(VI) was reduced to uranium(IV) by solid bismuth amalgam in 5 M sulfuric acid solution. The reduced uranium was reoxidized to uranium(VI) with a large excess of ferric ion at room temperature, and the ferrous ion produced was titrated with electrogenerated manganese(III) fluoride. In the analyses of uranyl nitrate and uranium trioxide, the results were sufficiently precise once the error from uncertainty in the water content of the samples was considered. The standard sample of pure uranium metal (JAERI-U4) was assayed by the proposed method. The sample was cut into small chips of about 0.2 g, and oxides on the metal surface were removed by the procedure specified by the National Bureau of Standards just before weighing. The mean assay value of eleven determinations, corrected for 3 ppm of iron, was (99.998 ± 0.012)% (the 95% confidence interval for the mean), with a standard deviation of 0.018%. The proposed coulometric method is simple and permits accurate and precise determination of uranium as a matrix constituent in a sample. (auth.)
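The quoted ±0.012% half-width follows from the stated standard deviation and sample size via the usual Student-t construction (t ≈ 2.228 for 10 degrees of freedom, a textbook value assumed here), as this sketch verifies:

```python
import math

def ci95_halfwidth(stddev, n, t_crit):
    """Half-width of the 95% confidence interval for the mean of n measurements."""
    return t_crit * stddev / math.sqrt(n)

# Eleven determinations, s = 0.018%; Student t for 10 degrees of freedom ~ 2.228.
half = ci95_halfwidth(0.018, 11, 2.228)
print(f"assay = (99.998 +/- {half:.3f}) %")
```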
Application of the spherical harmonic gravity model in high precision inertial navigation systems
Wang, Jing; Yang, Gongliu; Zhou, Xiao; Li, Xiangyun
2016-01-01
The spherical harmonic gravity model (SHM) may, in general, be considered a suitable alternative to the normal gravity model (NGM), because it represents the Earth's gravitational field more accurately. However, a high-resolution SHM has never been used in current inertial navigation systems (INSs) due to its extremely complex expression. In this paper, the feasibility and accuracy of a truncated SHM are discussed for application in a real-time free-INS with a precision demand better than 0.8 nm h⁻¹. In particular, the time and space complexity are analyzed mathematically to verify the feasibility of the SHM, and a test on a typical navigation computer shows the storable range of cut-off degrees. To further evaluate the appropriate degree and accuracy of the truncated SHM, analyses of covariance and truncation error are proposed. Finally, a SHM of degree 12 is shown to be the appropriate model for routine INSs in the precision range of 0.4-0.75 nm h⁻¹. Flight simulations and road tests show its outstanding performance over the traditional NGM. (paper)
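For the zonal-only case, a truncated spherical harmonic gravity evaluation reduces to a Legendre recurrence plus a short sum. The sketch below shows that skeleton with an illustrative J₂ coefficient; it is a simplification, since a full degree-12 SHM also carries tesseral and sectoral terms:

```python
def legendre(n_max, x):
    """Legendre polynomials P_0..P_{n_max} at x via the Bonnet recurrence
    (n+1) P_{n+1}(x) = (2n+1) x P_n(x) - n P_{n-1}(x)."""
    p = [1.0, x]
    for n in range(1, n_max):
        p.append(((2 * n + 1) * x * p[n] - n * p[n - 1]) / (n + 1))
    return p[: n_max + 1]

def zonal_potential(gm, a, r, cos_theta, j_coeffs):
    """Truncated zonal-harmonic potential, with j_coeffs = [J2, J3, ...]:
    V = (GM/r) * (1 - sum_n J_n (a/r)^n P_n(cos theta)), theta the colatitude."""
    p = legendre(len(j_coeffs) + 1, cos_theta)
    correction = sum(j * (a / r) ** (n + 2) * p[n + 2]
                     for n, j in enumerate(j_coeffs))
    return gm / r * (1.0 - correction)

# J2-only evaluation on the equator (cos theta = 0) at r = a, in units GM = a = 1:
v_ratio = zonal_potential(1.0, 1.0, 1.0, 0.0, [1.0826e-3])
print(v_ratio)
```

The cost argument in the abstract is visible here: the recurrence is linear in the cut-off degree, but the full tesseral expansion grows quadratically, which is what limits real-time use.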
High-precision positioning system of four-quadrant detector based on the database query
Zhang, Xin; Deng, Xiao-guo; Su, Xiu-qin; Zheng, Xiao-qiang
2015-02-01
The fine-pointing mechanism of the Acquisition, Pointing and Tracking (APT) system in free-space laser communication usually uses a four-quadrant detector (QD) to point and track the laser beam accurately. The positioning precision of the QD is one of the key factors in the pointing accuracy of the APT system. A positioning system based on an FPGA and a DSP is designed in this paper, which realizes the AD sampling, the positioning algorithm and the control of the fast swing mirror. Starting from the working principle of the QD, we analyze the positioning error of the spot centre calculated by the universal algorithm when the spot energy obeys a Gaussian distribution. A database is built by calculation and simulation with MatLab, in which the spot centre calculated by the universal algorithm is matched with the true centre of the Gaussian beam; the database is stored in two pieces of E2PROM serving as the external memory of the DSP. The true centre of the Gaussian beam is then looked up in this database, in the DSP, on the basis of the spot centre calculated by the universal algorithm. The experimental results show that the positioning accuracy of the high-precision positioning system is much better than that of the universal algorithm alone.
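The "universal algorithm" that the database corrects is, in the common convention, the normalized difference of the quadrant signals; it is only linear near the detector centre, and for a Gaussian spot the mapping to the true centre is nonlinear, which is what a lookup table can compensate. A minimal sketch, with the quadrant labelling assumed rather than taken from the paper:

```python
def qd_center(a, b, c, d):
    """Normalized-difference estimate of the spot centre on a four-quadrant
    detector (assumed labelling: A top-right, B top-left, C bottom-left,
    D bottom-right). Valid only near the centre; nonlinear for real spots."""
    total = a + b + c + d
    x = ((a + d) - (b + c)) / total
    y = ((a + b) - (c + d)) / total
    return x, y

print(qd_center(1.0, 1.0, 1.0, 1.0))  # perfectly centred spot -> (0.0, 0.0)
```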
Research on a high-precision calibration method for tunable lasers
Xiang, Na; Li, Zhengying; Gui, Xin; Wang, Fan; Hou, Yarong; Wang, Honghai
2018-03-01
Tunable lasers are widely used in the field of optical fiber sensing, but nonlinear tuning exists even for zero external disturbance and limits the accuracy of demodulation. In this paper, a high-precision calibration method for tunable lasers is proposed. A comb filter is introduced, and the real-time output wavelength and scanning rate of the laser are calibrated by linearly fitting several time-frequency reference points obtained from it. The beat signal generated by the auxiliary interferometer is interpolated and frequency-multiplied to find more accurate zero-crossing points, and these points are used as wavelength counters to resample the comb signal and correct the nonlinear effect, which ensures that the time-frequency reference points of the comb filter are linear. A stability experiment and a strain-sensing experiment verify the calibration precision of this method. The experimental results show that the stability and wavelength resolution of FBG demodulation can reach 0.088 pm and 0.030 pm, respectively, using a tunable laser calibrated by the proposed method. We have also compared the demodulation accuracy in the presence and absence of the comb filter; the introduction of the comb filter results in a 15-fold wavelength-resolution enhancement.
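The zero-crossing step can be sketched with linear interpolation between samples of opposite sign, which already gives sub-sample crossing locations; this is illustrative only, as the paper additionally interpolates and frequency-multiplies the beat signal before extracting crossings:

```python
import math

def zero_crossings(samples):
    """Zero-crossing locations of a sampled signal, refined to sub-sample
    precision by linear interpolation between the bracketing samples."""
    crossings = []
    for i in range(len(samples) - 1):
        s0, s1 = samples[i], samples[i + 1]
        if s0 == 0.0:
            crossings.append(float(i))
        elif s0 * s1 < 0.0:
            crossings.append(i + s0 / (s0 - s1))
    return crossings

beat = [math.sin(2 * math.pi * 0.05 * i) for i in range(100)]  # period: 20 samples
zc = zero_crossings(beat)
print(zc[1] - zc[0])  # successive crossings of a clean sine sit half a period apart
```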
High precision tools for slepton pair production processes at hadron colliders
Thier, Stephan Christoph
2015-01-01
In this thesis, we develop high precision tools for the simulation of slepton pair production processes at hadron colliders and apply them to phenomenological studies at the LHC. Our approach is based on the POWHEG method for the matching of next-to-leading order results in perturbation theory to parton showers. We calculate matrix elements for slepton pair production and for the production of a slepton pair in association with a jet perturbatively at next-to-leading order in supersymmetric quantum chromodynamics. Both processes are subsequently implemented in the POWHEG BOX, a publicly available software tool that contains general parts of the POWHEG matching scheme. We investigate phenomenological consequences of our calculations in several setups that respect experimental exclusion limits for supersymmetric particles and provide precise predictions for slepton signatures at the LHC. The inclusion of QCD emissions in the partonic matrix elements allows for an accurate description of hard jets. Interfacing our codes to the multi-purpose Monte-Carlo event generator PYTHIA, we simulate parton showers and slepton decays in fully exclusive events. Advanced kinematical variables and specific search strategies are examined as means for slepton discovery in experimentally challenging setups.
Effect of stellar activity on the high precision transit light curve
Oshagh, M.
2015-01-01
Stellar activity features such as spots and plages can create difficulties in determining planetary parameters through spectroscopic and photometric observations. The overlap of a transiting planet and a stellar spot, for instance, can produce anomalies in the transit light curve that may lead to inaccurate estimation of the transit duration, depth, and timing. Such inaccuracies can affect the precise derivation of the planet's radius. In this talk we present the results of a quantitative study of the effects of stellar spots on high-precision transit light curves. We show that spot anomalies can lead to an estimated planet radius 4% smaller than the real value. The effects on the transit duration can also be of the order of 4%, longer or shorter. Depending on the size and distribution of spots, anomalies can also produce transit timing variations with significant amplitudes; for instance, TTVs with signal amplitudes of 200 seconds can be produced by spots as large as the largest sunspot. Finally, we examine the impact of active regions on transit depth measurements at different wavelengths, in order to probe the impact of this effect on transmission spectroscopy. We show that significant (up to 10%) underestimation or overestimation of the planet-to-star radius ratio can occur, especially in the short-wavelength regime.
Shuffle motor: a high force, high precision linear electrostatic stepper motor
Tas, Niels Roelof; Wissink, Jeroen; Sander, A.F.M.; Sander, Louis; Lammerink, Theodorus S.J.; Elwenspoek, Michael Curt
1997-01-01
The shuffle motor is an electrostatic stepper motor that employs a mechanical transformation to obtain high forces and small steps. A model has been made to calculate the driving voltage, step size and maximum load to pull, as well as the optimal geometry. Test results are an effective step size of