Raja Roy Choudhury
2013-01-01
A fast and accurate semianalytical formulation, with a robust optimization solution, for estimating the splice loss of graded-index fibers has been proposed. The semianalytical optimization of modal parameters is carried out by the Nelder-Mead method of nonlinear unconstrained minimization, which is suitable for functions that are uncertain, noisy, or even discontinuous. Instead of the commonly used Gaussian function, a novel sinc function with an exponentially and R^(-3/2) decaying trailing edge (R is the normalized radius of the optical fiber) has been used as the trial field for the fundamental mode of the graded-index optical fiber. Because three parameters are included in the optimization of the fundamental modal solution, and an efficient optimization technique with simple analytical expressions for the various modal parameters is applied, the results are accurate and computationally easier to obtain than the standard numerical solution.
Roy Choudhury, Raja; Roy Choudhury, Arundhati; Kanti Ghose, Mrinal
2013-01-01
A semi-analytical model with three optimizing parameters and a novel non-Gaussian function as the fundamental modal field solution has been proposed to predict various propagation parameters of graded-index fibers accurately, with less computational burden than numerical methods. In our semi-analytical formulation the optimization of the core parameter U, which is usually uncertain, noisy or even discontinuous, is performed by the Nelder-Mead method of nonlinear unconstrained minimization, an efficient and compact direct search method that does not need any derivative information. Three optimizing parameters are included in the formulation of the fundamental modal field of an optical fiber to make it more flexible and accurate than other available approximations. Employing the variational technique, Petermann I and II spot sizes have been evaluated for triangular- and trapezoidal-index fibers with the proposed fundamental modal field. It has been demonstrated that the results of the proposed solution match the numerical results over a wide range of normalized frequencies. This approximation can also be used in the study of doped and nonlinear fiber amplifiers.
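The Nelder-Mead simplex method named in both abstracts is available in standard numerical libraries. A minimal sketch of a three-parameter, derivative-free minimization of this kind follows; the merit function is a hypothetical stand-in with a known minimum, not the paper's actual variational expression for U.

```python
from scipy.optimize import minimize

# Hypothetical merit function standing in for the variational expression
# that the paper minimizes over the modal parameter U and two shape
# parameters; it has a known minimum at (1.5, 0.3, -0.7).
def merit(params):
    u, a, b = params
    return (u - 1.5) ** 2 + (a - 0.3) ** 2 + (b + 0.7) ** 2

# Nelder-Mead is derivative-free, so it tolerates merit functions that are
# uncertain, noisy, or even discontinuous.
result = minimize(merit, x0=[1.0, 0.0, 0.0], method="Nelder-Mead",
                  options={"xatol": 1e-9, "fatol": 1e-9})
```

The simplex shrinks around the minimum without ever evaluating a gradient, which is why the method is a common choice when the objective is noisy or only piecewise smooth.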
Kovtyukh, V. V.; Gorlova, N.; Belik, S. I.
2012-01-01
The oxygen 7771-4 Å triplet is a good indicator of luminosity in A-G supergiants. However, its strength also depends on other atmospheric parameters. In this study, we present luminosity calibrations in which, for the first time, the effects of the effective temperature, microturbulent velocity, surface gravity, and oxygen abundance have been disentangled. The calibrations are derived on the basis of a dataset of high-dispersion spectra of 60 yellow supergiants with highly reliable luminosities...
Bruntt, H
2009-01-01
The CoRoT satellite has provided high-quality light curves of several solar-like stars. Analysis of the light curves provides oscillation frequencies that make it possible to probe the interior of the stars. However, additional constraints on the fundamental parameters of the stars are important for the theoretical modelling to be successful. We will estimate the fundamental parameters (mass, radius and luminosity) of the first four solar-like targets to be observed in the asteroseismic field. In addition, we will determine their effective temperature, metallicity and detailed abundance pattern. To constrain the stellar mass, radius and age we use the SHOTGUN software which compares the location of the stars in the Hertzsprung-Russell diagram with theoretical evolution models. This method takes into account the uncertainties of the observed parameters including the large separation determined from the solar-like oscillations. We determine the effective temperatures and abundance patterns in the stars from the...
The fundamental parameters of physics
The four parameters space, time, mass and charge are shown to possess an exact symmetry as a group of order 4. The explicit properties of the parameters as displayed in this group are then used to propose derivations of the fundamental principles of classical mechanics, electromagnetic theory and particle physics. The derivations suggest that the laws of physics and the fundamental particles have a single origin in the initial process of direct measurement. (Auth.)
Fundamental Parameters of Massive Stars
Crowther, Paul A.
2003-01-01
We discuss the determination of fundamental parameters of `normal' hot, massive OB-type stars, namely temperatures, luminosities, masses, gravities and surface abundances. We also present methods used to derive properties of stellar winds -- mass-loss rates and wind velocities from early-type stars.
Reconstruction of fundamental SUSY parameters
P. M. Zerwas et al.
2003-09-25
We summarize methods and expected accuracies in determining the basic low-energy SUSY parameters from experiments at future e⁺e⁻ linear colliders in the TeV energy range, combined with results from the LHC. In a second step we demonstrate how, based on this set of parameters, the fundamental supersymmetric theory can be reconstructed at high scales near the grand unification or Planck scale. These analyses have been carried out for minimal supergravity [confronted with GMSB for comparison], and for a string effective theory.
Fundamental Parameters of 4 Massive Eclipsing Binaries in Westerlund 1
Bonanos, Alceste Z.; Koumpia, E.
2011-05-01
We present fundamental parameters of 4 massive eclipsing binaries in the young massive cluster Westerlund 1. The goal is to measure accurate masses and radii of their component stars, which provide much needed constraints for evolutionary models of massive stars. Accurate parameters can further be used to determine a dynamical lower limit for the magnetar progenitor and to obtain an independent distance to the cluster. Our results confirm and extend the evidence for a high mass for the progenitor of the magnetar. The authors acknowledge research and travel support from the European Commission Framework Program Seven under the Marie Curie International Reintegration Grant PIRG04-GA-2008-239335.
Atomic spectroscopy and highly accurate measurement: determination of fundamental constants
This document reviews the theoretical and experimental achievements of the author concerning highly accurate atomic spectroscopy applied to the determination of fundamental constants. A purely optical frequency measurement of the 2S-12D two-photon transitions in atomic hydrogen and deuterium has been performed. The experimental setup is described, as well as the data analysis. Optimized values for the Rydberg constant and Lamb shifts have been deduced (R = 109 737.315 685 16(84) cm⁻¹). An experiment devoted to the determination of the fine structure constant, with a target relative uncertainty of 10⁻⁹, began in 1999. This experiment is based on the fact that Bloch oscillations in a frequency-chirped optical lattice are a powerful tool to coherently transfer many photon momenta to the atoms. We have used this method to measure accurately the ratio h/m(Rb). The measured value of the fine structure constant is α⁻¹ = 137.035 998 84(91), with a relative uncertainty of 6.7×10⁻⁹. The future and perspectives of this experiment are presented. This document, presented before an academic board, will allow its author to manage research work and in particular to supervise thesis students. (A.C.)
Fundamental Parameters and Chemical Composition of Arcturus
Ramirez, I
2011-01-01
We derive a self-consistent set of atmospheric parameters and abundances of 17 elements for the red giant star Arcturus: Teff = 4286 ± 30 K, log g = 1.66 ± 0.05, and [Fe/H] = -0.52 ± 0.04. The effective temperature was determined using model atmosphere fits to the observed spectral energy distribution from the blue to the mid-infrared (0.44 to 10 μm). The surface gravity was calculated using the trigonometric parallax of the star and stellar evolution models. A differential abundance analysis relative to the solar spectrum allowed us to derive iron abundances from equivalent width measurements of 37 Fe I and 9 Fe II lines, unblended in the spectra of both Arcturus and the Sun; the [Fe/H] value adopted is derived from Fe I lines. We also determine the mass, radius, and age of Arcturus: M = 1.08 ± 0.06 M⊙, R = 25.4 ± 0.2 R⊙, and t = 7.1 (+1.5/-1.2) Gyr. Finally, abundances of the following elements are measured from an equivalent width analysis of atomic features: C, O, Na, Mg, Al, Si, K, Ca, Sc, Ti, V, Cr, Mn, ...
Fundamental Parameters of Kepler Eclipsing Binaries. I. KIC 5738698
Matson, Rachel A.; Gies, Douglas R.; Guo, Zhao; Orosz, Jerome A.
2016-06-01
Eclipsing binaries serve as a valuable source of stellar masses and radii that inform stellar evolutionary models and provide insight into additional astrophysical processes. The exquisite light curves generated by space-based missions such as Kepler offer the most stringent tests to date. We use the Kepler light curve of the 4.8 day eclipsing binary KIC 5738698 together with ground-based optical spectra to derive fundamental parameters for the system. We reconstruct the component spectra to determine the individual atmospheric parameters, and model the Kepler photometry with the binary synthesis code Eclipsing Light Curve (ELC) to obtain accurate masses and radii. The two components of KIC 5738698 are F-type stars with M₁ = 1.39 ± 0.04 M⊙, M₂ = 1.34 ± 0.06 M⊙, and R₁ = 1.84 ± 0.03 R⊙, R₂ = 1.72 ± 0.03 R⊙. We also report a small eccentricity (e ≲ 0.0017) and unusual albedo values that are required to match the detailed shape of the Kepler light curve. Comparison with evolutionary models indicates an approximate age of 2.3 Gyr for the system.
Fundamental Parameters of Kepler Eclipsing Binaries. I. KIC 5738698
Matson, Rachel A; Guo, Zhao; Orosz, Jerome A
2016-01-01
Eclipsing binaries serve as a valuable source of stellar masses and radii that inform stellar evolutionary models and provide insight into additional astrophysical processes. The exquisite light curves generated by space-based missions such as Kepler offer the most stringent tests to date. We use the Kepler light curve of the 4.8-day eclipsing binary KIC 5738698 together with ground-based optical spectra to derive fundamental parameters for the system. We reconstruct the component spectra to determine the individual atmospheric parameters, and model the Kepler photometry with the binary synthesis code ELC to obtain accurate masses and radii. The two components of KIC 5738698 are F-type stars with M1 = 1.39 ± 0.04 M⊙, M2 = 1.34 ± 0.06 M⊙, and R1 = 1.84 ± 0.03 R⊙, R2 = 1.72 ± 0.03 R⊙. We also report a small eccentricity (e < 0.0017) and unusual albedo values that are required to match the detailed shape of the Kepler light curve. Comparisons with evolutionary models indicate an approximate age of 2.3 Gyr for the system.
Kallinger, Thomas; Mosser, Benoit; Hekker, Saskia;
2010-01-01
... as well as investigating the stellar population in our Galaxy. Aims. We aim to extract accurate seismic parameters from the Kepler time series and use them to infer asteroseismic fundamental parameters from scaling relations and a comparison with red-giant models. Methods. We fit a global model ... and place the stars in an H-R diagram by using an extensive grid of stellar models that covers a wide parameter range. Using Bayesian techniques throughout the entire analysis allows us to determine reliable uncertainties for all parameters. Results. We provide accurate seismic parameters ... and their uncertainties for a large sample of red giants and determine their asteroseismic fundamental parameters. We investigate the influence of the stars' metallicities on their positions in the H-R diagram. Finally, we study the red-giant populations in the red clump and bump and compare them to a synthetic ...
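The scaling relations referred to in this abstract connect the global seismic observables (frequency of maximum power νmax, large separation Δν) and Teff to stellar mass and radius. A minimal sketch, using commonly quoted solar reference values (exact choices differ slightly between papers), with illustrative inputs for a red-clump-like giant:

```python
# Commonly quoted solar reference values (they vary slightly in the
# literature; treat these as assumptions).
NU_MAX_SUN = 3090.0    # muHz
DELTA_NU_SUN = 135.1   # muHz
TEFF_SUN = 5777.0      # K

def seismic_mass_radius(nu_max, delta_nu, teff):
    """Mass and radius (solar units) from the asteroseismic scaling
    relations: R ~ nu_max * delta_nu**-2 * Teff**0.5,
               M ~ nu_max**3 * delta_nu**-4 * Teff**1.5."""
    r = (nu_max / NU_MAX_SUN) * (delta_nu / DELTA_NU_SUN) ** -2 \
        * (teff / TEFF_SUN) ** 0.5
    m = (nu_max / NU_MAX_SUN) ** 3 * (delta_nu / DELTA_NU_SUN) ** -4 \
        * (teff / TEFF_SUN) ** 1.5
    return m, r

# Illustrative red-clump-like giant (input values are made up):
m, r = seismic_mass_radius(nu_max=30.0, delta_nu=4.0, teff=4800.0)
```

For these illustrative inputs the relations return roughly a solar-mass star of about ten solar radii, the regime typical of red-clump giants.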
Accurate Parameter Estimation for Unbalanced Three-Phase System
Yuan Chen; Hing Cheung So
2014-01-01
The smart grid is an intelligent power generation and control framework for modern electricity networks, in which the unbalanced three-phase power system is a commonly used model. Here, parameter estimation for this system is addressed. After converting the three-phase waveforms into a pair of orthogonal signals via the αβ-transformation, a nonlinear least squares (NLS) estimator is developed for accurately finding the frequency, phase, and voltage parameters. The estimator is realized by the Newt...
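The αβ-transformation mentioned above is the Clarke transformation, which maps the three phase quantities onto a pair of orthogonal signals. A minimal sketch of the amplitude-invariant form (variable names are illustrative):

```python
import math

def clarke_alpha_beta(v_a, v_b, v_c):
    """Amplitude-invariant Clarke (alpha-beta) transformation of three
    phase quantities into a pair of orthogonal signals."""
    v_alpha = (2.0 / 3.0) * (v_a - 0.5 * v_b - 0.5 * v_c)
    v_beta = (v_b - v_c) / math.sqrt(3.0)
    return v_alpha, v_beta

# For a balanced system sampled at phase angle theta, the output is the
# rotating unit phasor (cos theta, sin theta).
theta = 0.7
v_alpha, v_beta = clarke_alpha_beta(
    math.cos(theta),
    math.cos(theta - 2.0 * math.pi / 3.0),
    math.cos(theta + 2.0 * math.pi / 3.0),
)
```

Because a balanced system collapses to a single rotating phasor, deviations of (v_alpha, v_beta) from a circle are what carry the unbalance information the estimator works with.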
Accurate Estimation of Low Fundamental Frequencies from Real-Valued Measurements
Christensen, Mads Græsbøll
2013-01-01
In this paper, the difficult problem of estimating low fundamental frequencies from real-valued measurements is addressed. The methods commonly employed do not take the phenomena encountered in this scenario into account and thus fail to deliver accurate estimates. The reason for this is that they employ asymptotic approximations that are violated when the harmonics are not well-separated in frequency, something that happens when the observed signal is real-valued and the fundamental frequency is low. To mitigate this, we analyze the problem and present some exact fundamental frequency estimators that are aimed at solving this problem. These estimators are based on the principles of nonlinear least-squares, harmonic fitting, optimal filtering, subspace orthogonality, and shift-invariance, and they all reduce to already published methods for a high number of observations. In experiments, the ...
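A minimal sketch of the nonlinear least-squares principle named above: for each candidate fundamental frequency, solve the exact harmonic least-squares problem (rather than relying on asymptotic orthogonality of the harmonics) and keep the candidate whose fit captures the most energy. The grid, signal length, and harmonic amplitudes below are illustrative, not taken from the paper.

```python
import numpy as np

def nls_f0(x, fs, f0_grid, n_harmonics):
    """Exact harmonic least-squares (approximate NLS) pitch estimator:
    pick the candidate f0 whose harmonic model captures the most energy."""
    n = np.arange(len(x))
    best_f0, best_cost = f0_grid[0], -np.inf
    for f0 in f0_grid:
        cols = []
        for k in range(1, n_harmonics + 1):
            w = 2.0 * np.pi * k * f0 / fs
            cols.append(np.cos(w * n))
            cols.append(np.sin(w * n))
        Z = np.column_stack(cols)
        coef, *_ = np.linalg.lstsq(Z, x, rcond=None)
        cost = x @ (Z @ coef)  # energy captured by the exact LS fit
        if cost > best_cost:
            best_f0, best_cost = f0, cost
    return best_f0

# Synthetic low-pitched, real-valued signal: 100 Hz fundamental with three
# harmonics and only 5 observed periods, the regime where asymptotic
# approximations break down.
fs, f0_true = 8000.0, 100.0
n = np.arange(400)
x = sum(a * np.sin(2 * np.pi * k * f0_true / fs * n)
        for k, a in [(1, 1.0), (2, 0.5), (3, 0.25)])
f0_hat = nls_f0(x, fs, np.arange(80.0, 121.0, 0.5), n_harmonics=3)
```

Solving the full least-squares system at every candidate is what distinguishes this from a periodogram-style harmonic-summation estimate, at the cost of one matrix solve per grid point.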
Machine learning of parameters for accurate semiempirical quantum chemical calculations
We investigate possible improvements in the accuracy of semiempirical quantum chemistry (SQC) methods through the use of machine learning (ML) models for the parameters. For a given class of compounds, ML techniques require sufficiently large training sets to develop ML models that can be used for adapting SQC parameters to reflect changes in molecular composition and geometry. The ML-SQC approach allows the automatic tuning of SQC parameters for individual molecules, thereby improving the accuracy without deteriorating transferability to molecules with molecular descriptors very different from those in the training set. The performance of this approach is demonstrated for the semiempirical OM2 method using a set of 6095 constitutional isomers C7H10O2, for which accurate ab initio atomization enthalpies are available. The ML-OM2 results show improved average accuracy and a much reduced error range compared with those of standard OM2 results, with mean absolute errors in atomization enthalpies dropping from 6.3 to 1.7 kcal/mol. They are also found to be superior to the results from specific OM2 reparameterizations (rOM2) for the same set of isomers. The ML-SQC approach thus holds promise for fast and reasonably accurate high-throughput screening of materials and molecules
Lattice QCD and fundamental parameters of the Standard Model
Our present theory for the elementary particles in nature, the Standard Model, consists of 6 leptons and 6 quarks, plus the 4 bosons which mediate the electromagnetic, weak, and strong forces. The theory has several free parameters which must be constrained by experiment before it is entirely predictive. In Nature quarks never appear alone; only bound states of strongly coupled valence quarks (and/or antiquarks) are detected. Consequently, the parameters governing quark flavor mixing are difficult to constrain by experiment, which measures properties of the bound states. Numerical simulations are needed to connect the theory of how quarks and gluons interact, quantum chromodynamics (formulated on a spacetime lattice), to the physically observed properties. Recent theory innovations and computer developments have finally allowed us to do lattice QCD simulations with realistic parameters. This paper describes the exciting progress in using lattice QCD simulations to determine fundamental parameters of the Standard Model.
Multi-Layered Neural Networks Infer Fundamental Stellar Parameters
Verma, Kuldeep; Bhattacharya, Jishnu; Antia, H M; Krishnamurthy, Ganapathy
2016-01-01
The advent of space-based observatories such as CoRoT and Kepler has enabled the testing of our understanding of stellar evolution on thousands of stars. Evolutionary models typically require five input parameters: the mass, initial helium abundance, initial metallicity, mixing length (assumed to be constant over time) and the age to which the star must be evolved. These parameters are also very useful in characterizing the associated planets and in studying galactic archaeology. How to obtain the parameters from observations rapidly and accurately, specifically in the context of surveys of thousands of stars, is an outstanding question, one that has eluded straightforward resolution. For a given star, we typically measure the effective temperature and surface metallicity spectroscopically and low-degree oscillation frequencies through space observatories. Here we demonstrate that statistical learning, using multi-layered neural networks, is successful in determining the evolutionary parameters based on spect...
Accurate 3D quantification of the bronchial parameters in MDCT
Saragaglia, A.; Fetita, C.; Preteux, F.; Brillet, P. Y.; Grenier, P. A.
2005-08-01
The assessment of bronchial reactivity and wall remodeling in asthma plays a crucial role in better understanding the disease and evaluating therapeutic responses. Today, multi-detector computed tomography (MDCT) makes it possible to perform an accurate estimation of bronchial parameters (lumen and wall areas) by allowing a quantitative analysis in a cross-section plane orthogonal to the bronchus axis. This paper provides the tools for such an analysis by developing a 3D investigation method which relies on 3D reconstruction of the bronchial lumen and computation of its central axis. Cross-section images at bronchial locations interactively selected along the central axis are generated at appropriate spatial resolution. An automated approach is then developed for accurately segmenting the inner and outer bronchial contours on the cross-section images. It combines mathematical morphology operators, such as "connection cost", and energy-controlled propagation in order to overcome the difficulties raised by vessel adjacencies and wall irregularities. The segmentation accuracy was validated with respect to a 3D mathematically modeled phantom of a bronchus-vessel pair which mimics the characteristics of real data in terms of gray-level distribution, caliber and orientation. When applying the developed quantification approach to such a model with calibers ranging from 3 to 10 mm in diameter, the lumen area relative errors varied from 3.7% to 0.15%, while the bronchus area was estimated with a relative error of less than 5.1%.
Clusters as benchmarks for measuring fundamental stellar parameters
Bell, Cameron P M
2016-01-01
In this contribution I will discuss fundamental stellar parameters as determined from young star clusters; specifically those with ages less than or approximately equal to that of the Pleiades. I will focus primarily on the use of stellar evolutionary models to determine the ages and masses of stars, as well as discuss the limitations of such models using a combination of both young clusters and eclipsing binary systems. In addition, I will also highlight a few interesting recent results from large on-going spectroscopic surveys (specifically Gaia-ESO and APOGEE/IN-SYNC) which are continuing to challenge our understanding of the formation and early evolutionary stages of young clusters.
Accurate estimation of electromagnetic parameters using FEA for Indus-2 RF cavity
The 2.5 GeV INDUS-2 SRS has four normal-conducting bell-shaped RF cavities operating at a fundamental frequency of 505.8122 MHz and a peak cavity voltage of 650 kV. The RF frequency and other parameters of the cavity need to be estimated accurately for beam dynamics and cavity electromagnetic studies at the fundamental as well as higher-order-mode (HOM) frequencies. The 2D axisymmetric result for the fundamental frequency with fine discretization using the SUPERFISH code has shown a difference of ∼2.5 MHz from the design frequency, which leads to inaccurate estimation of the electromagnetic parameters. Therefore, for accuracy, a complete 3D model comprising all ports needs to be considered in the RF domain with correct boundary conditions. All ports in the cavity were modeled using the FEA tool ANSYS. Eigenmode simulation of the complete cavity model, considering the various ports, is used for estimation of the parameters. A mesh convergence test for these models is performed. The methodologies adopted for calculating different electromagnetic parameters using small macros are described. Various parameters that affect the cavity frequency are discussed in this paper. FEA (ANSYS) results are compared with available experimental measurements. (author)
Fundamental Parameters of Exoplanets and Their Host Stars
Coughlin, Jeffrey L
2013-01-01
For much of human history we have wondered how our solar system formed, and whether there are any other planets like ours around other stars. Only in the last 20 years have we had direct evidence for the existence of exoplanets, with the number of known exoplanets dramatically increasing in recent years, especially with the success of the Kepler mission. Observations of these systems are becoming increasingly more precise and numerous, thus allowing for detailed studies of their masses, radii, densities, temperatures, and atmospheric compositions. However, one cannot accurately study exoplanets without examining their host stars in equal detail, and understanding what assumptions must be made to calculate planetary parameters from the directly derived observational parameters. In this thesis, I present observations and models of the primary transits and secondary eclipses of transiting exoplanets from both the ground and Kepler in order to better study their physical characteristics and search for additional ...
Precise determination of fundamental parameters of six exoplanet host stars and their planets
The aim of this paper is to determine the fundamental parameters of six exoplanet host (EH) stars and their planets. Because techniques for detecting exoplanets yield properties of the planet only as a function of the properties of the host star, we must accurately determine the parameters of the EH stars first. For this reason, we constructed a grid of stellar models including diffusion and rotation-induced extra mixing, with given ranges of input parameters (i.e. mass, metallicity and initial rotation rate). In addition to the commonly used observational constraints, such as the effective temperature Teff, luminosity L and metallicity [Fe/H], we added two observational constraints: the lithium abundance log N(Li) and the rotational period Prot. These two additional observed parameters can set further constraints on the model owing to their correlations with mass, age and other stellar properties. Hence, our estimates of the fundamental parameters for these EH stars and their planets have a higher precision than previous works. The combination of rotational period and lithium thus helps us to obtain more accurate parameters for the stars, leading to an improvement in knowledge of the physical state of EH stars and their planets.
Fundamental parameters of 8 Am stars: comparing observations with theory
Catanzaro, G
2014-01-01
In this paper we present a detailed analysis of a sample of eight Am stars, four of which are in the Kepler field of view. We derive fundamental parameters for all observed stars: effective temperature, gravity, rotational and radial velocities, and chemical abundances obtained by the spectral synthesis method. Further, to place these stars in the HR diagram, we computed their luminosities. Two objects in our sample, namely HD 114839 and HD 179458, do not present the typical characteristics of Am stars, while for the other six we confirm their nature. The behavior of the lithium abundance as a function of temperature, compared with that of normal A-type stars, has also been investigated; we do not find any difference between metallic-line and normal A stars. All the pulsating Am stars in our sample (five out of eight) lie in the δ Sct instability strip, close to the red edge.
Fundamental radiation effects parameters in metals and ceramics
Zinkle, S.J. [Oak Ridge National Lab., TN (United States)
1998-03-01
Useful information on defect production and migration can be obtained from examination of the fluence-dependent defect densities in irradiated materials, particularly when a transition from linear to sublinear accumulation is observed. Further work is needed on several intriguing reported radiation effects in metals. The supralinear defect cluster accumulation regime in thin-foil irradiated metals needs further experimental confirmation, and the physical mechanisms responsible for its presence need to be established. Radiation hardening and the associated reduction in strain hardening capacity in FCC metals is a serious concern for structural materials. In general, the loss of strain hardening capacity is associated with dislocation channeling, which occurs when a high density of small defect clusters is produced (stainless steel irradiated near room temperature is a notable exception). Detailed investigations of the effect of defect cluster density and other physical parameters, such as stacking fault energy, on dislocation channeling are needed. Although it is clearly established that radiation hardening depends on the grain size (radiation-modified Hall-Petch effect), further work is needed to identify the physical mechanisms. In addition, there is a need for improved hardening superposition models when a range of different obstacle strengths is present. Due to a lack of information on point defect diffusivities and the increased complexity of radiation effects in ceramics compared to metals, many fundamental radiation effects parameters in ceramics have yet to be determined. Optical spectroscopy data suggest that the oxygen monovacancy and freely migrating interstitial fractions in fission-neutron-irradiated MgO and Al₂O₃ are ~10% of the NRT displacement value. Ionization-induced diffusion can strongly influence microstructural evolution in ceramics. Therefore, fundamental data on ceramics obtained from highly ionizing radiation sources ...
Fundamental Parameters of Main-Sequence Stars in an Instant with Machine Learning
Bellinger, Earl P; Hekker, Saskia; Basu, Sarbani; Ball, Warrick; Guggenberger, Elisabeth
2016-01-01
Owing to the remarkable photometric precision of space observatories like Kepler, stellar and planetary systems beyond our own are now being characterized en masse for the first time. These characterizations are pivotal for endeavors such as searching for Earth-like planets and solar twins, understanding the mechanisms that govern stellar evolution, and tracing the dynamics of our Galaxy. The volume of data that is becoming available, however, brings with it the need to process this information accurately and rapidly. While existing methods can constrain fundamental stellar parameters such as ages, masses, and radii from these observations, they require substantial computational efforts to do so. We develop a method based on machine learning for rapidly estimating fundamental parameters of main-sequence solar-like stars from classical and asteroseismic observations. We first demonstrate this method on a hare-and-hound exercise and then apply it to the Sun, 16 Cyg A & B, and 34 planet-hosting candidates th...
Fundamental Parameters of Eclipsing Binaries in the Kepler Field of View
Matson, Rachel A.
2016-01-01
Accurate knowledge of stellar parameters such as mass, radius, composition, and age informs our understanding of stellar evolution and constrains theoretical models. Binaries and, in particular, eclipsing binaries make it possible to directly measure these parameters without reliance on models or scaling relations. In my dissertation I derive fundamental parameters of stars in close binary systems with and without (detected) tertiary companions and obtain accurate masses and radii of the components to compare with evolutionary models. Radial velocities and spectroscopic orbits are derived from optical spectra, while Doppler tomography is used to determine effective temperatures, projected rotational velocities, and metallicities for each component of the binary. These parameters are then combined with Kepler photometry to obtain accurate masses and radii through light curve and radial velocity fitting with the binary modeling software ELC. Here, I present spectroscopic orbits, atmospheric parameters, and estimated masses for 41 eclipsing binaries (including seven with tertiary companions) that were observed with Kepler and have periods of less than six days. Further analysis, including binary modeling and comparison with evolutionary models, is shown for a sub-sample of these stars.
Determination of fundamental asteroseismic parameters using the Hilbert transform
Kiefer, René; Herzberg, Wiebke; Roth, Markus
2015-01-01
Context. Solar-like oscillations exhibit a regular pattern of frequencies. This pattern is dominated by the small and large frequency separations between modes. The accurate determination of these parameters is of great interest, because they give information about e.g. the evolutionary state and the mass of a star. Aims. We want to develop a robust method to determine the large and small frequency separations for time series with low signal-to-noise ratio. For this purpose, we analyse a time series of the Sun from the GOLF instrument aboard SOHO and a time series of the star KIC 5184732 from the NASA Kepler satellite by employing a combination of Fourier and Hilbert transforms. Methods. We use the analytic signal of the filtered stellar oscillation time series to compute the signal envelope. Spectral analysis of the signal envelope then reveals frequency differences of dominant modes in the periodogram of the stellar time series. Results. With the described method the large frequency separation Δν ...
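The envelope method described here can be illustrated with synthetic data: the envelope of two closely spaced modes beats at their frequency separation, so a periodogram of the Hilbert-transform envelope reveals that separation directly. The frequencies and durations below are illustrative, not those of the GOLF or Kepler data.

```python
import numpy as np
from scipy.signal import hilbert

# Two closely spaced sinusoids ("modes") separated by df: their sum has an
# envelope that beats at df, which the envelope periodogram recovers.
fs = 100.0                        # sampling rate, Hz (illustrative)
t = np.arange(0.0, 200.0, 1.0 / fs)
df = 0.5                          # mode separation, Hz (illustrative)
x = np.sin(2 * np.pi * 10.0 * t) + np.sin(2 * np.pi * (10.0 + df) * t)

envelope = np.abs(hilbert(x))     # amplitude of the analytic signal
spec = np.abs(np.fft.rfft(envelope - envelope.mean()))
freqs = np.fft.rfftfreq(len(envelope), 1.0 / fs)
peak_freq = freqs[np.argmax(spec)]  # close to df
```

The attraction of the approach is that the mode spacing appears as a single low-frequency peak in the envelope spectrum, which is easier to locate at low signal-to-noise ratio than the individual mode peaks themselves.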
Predicting Fundamental Stellar Parameters From Photometric Light Curves
Miller, Adam; Richards, J.; Bloom, J. S.; and a larger team
2014-01-01
We present a new machine-learning-based framework for the prediction of the fundamental stellar parameters, Teff, log g, and [Fe/H], based on the photometric light curves of variable stellar sources. The method was developed following a systematic spectroscopic survey of stellar variability. Variable sources were selected from repeated Sloan Digital Sky Survey (SDSS) observations of Stripe 82, and spectroscopic observations were obtained with Hectospec on the 6.5-m Multi-Mirror Telescope. In sum, spectra were obtained for ~9000 stellar variables (including ~3000 from the SDSS archive), for which we measured Teff, log g, and [Fe/H] using the Segue Stellar Parameters Pipeline (SSPP). Examining the full sample of ~67k variables in Stripe 82, we show that the vast majority of photometric variables are consistent with main-sequence stars, even after restricting the search to high galactic latitudes. From the spectroscopic sample we confirm that most of these stellar variables are G and K dwarfs, though there is a bias in the output of the SSPP that prevents the identification of M type variables. We are unable to identify the dominant source of variability for these stars, but eclipsing systems and/or star spots are the most likely explanation. We develop a machine-learning model that can determine Teff, log g, and [Fe/H] without obtaining a spectrum. Instead, the random-forest-regression model uses SDSS color information and light-curve features to infer stellar properties. We detail how the feature set is pruned and the model is optimized to produce final predictions of Teff, log g, and [Fe/H] with a typical scatter of 165 K, 0.42 dex, and 0.33 dex, respectively. We further show that for the subset of variables with at least 50 observations in the g band the typical scatter reduces to 75 K, 0.19 dex, and 0.16 dex, respectively. We consider these results an important step on the path to the efficient and optimal extraction of information from future time
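A random-forest regression of the kind described can be sketched as follows; the feature names, the generating relation, and the noise level are invented for illustration and are not the Stripe 82 data.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Synthetic stand-in for the problem: map a colour and a light-curve
# feature to Teff. The linear colour-temperature relation and 100 K noise
# are assumptions made purely for this sketch.
rng = np.random.default_rng(42)
n = 500
g_minus_r = rng.uniform(0.2, 1.4, n)    # colour feature
amplitude = rng.uniform(0.01, 0.3, n)   # light-curve amplitude feature
teff = 7000.0 - 2500.0 * g_minus_r + rng.normal(0.0, 100.0, n)

X = np.column_stack([g_minus_r, amplitude])
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X[:400], teff[:400])

pred = model.predict(X[400:])
scatter = np.std(pred - teff[400:])  # typical prediction scatter in K
```

As in the paper's pipeline, the typical scatter of held-out predictions is the natural figure of merit; here it is bounded below by the injected noise.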
Accurate lattice parameter measurements of stoichiometric uranium dioxide
Leinders, Gregory, E-mail: gregory.leinders@sckcen.be [KU Leuven, Department of Chemistry, Celestijnenlaan 200F, P.O. Box 2404, B-3001 Heverlee (Belgium); Belgian Nuclear Research Centre (SCK-CEN), Institute for Nuclear Materials Science, Boeretang 200, B-2400 Mol (Belgium); Cardinaels, Thomas [Belgian Nuclear Research Centre (SCK-CEN), Institute for Nuclear Materials Science, Boeretang 200, B-2400 Mol (Belgium); Binnemans, Koen [KU Leuven, Department of Chemistry, Celestijnenlaan 200F, P.O. Box 2404, B-3001 Heverlee (Belgium); Verwerft, Marc [Belgian Nuclear Research Centre (SCK-CEN), Institute for Nuclear Materials Science, Boeretang 200, B-2400 Mol (Belgium)
2015-04-15
Highlights: • The lattice parameter of stoichiometric uranium dioxide has been re-evaluated. • The new value is substantially higher than the generally accepted value. • The new value has an improved precision. • Earlier published values on the lattice parameter of UO2 are carefully re-assessed. • High accuracy was obtained on both stoichiometry and lattice parameter measurements. - Abstract: The paper presents and discusses lattice parameter analyses of pure, stoichiometric UO2. Attention was paid to prepare stoichiometric samples and to maintain stoichiometry throughout the analyses. The lattice parameter of UO2.000±0.001 was evaluated as being 547.127 ± 0.008 pm at 20 °C, which is substantially higher than many published values for the UO2 lattice constant and has an improved precision by about one order of magnitude. The higher value of the lattice constant is mainly attributed to the avoidance of hyperstoichiometry in the present study and to a minor extent to the use of the currently accepted Cu Kα1 X-ray wavelength value. Many of the early studies used Cu Kα1 wavelength values that differ from the currently accepted value, which also contributed to an underestimation of the true lattice parameter.
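The link between a measured diffraction angle and a cubic lattice parameter, which underlies analyses like the one above, can be sketched in a few lines via Bragg's law. The wavelength constant and the round-trip values in the usage are illustrative assumptions, not data taken from this record.

```python
import math

# Approximate currently accepted Cu Kα1 wavelength, in pm (illustrative constant).
CU_KA1_PM = 154.0593

def cubic_lattice_parameter(two_theta_deg, h, k, l, wavelength_pm=CU_KA1_PM):
    """Bragg's law for a cubic crystal: a = λ·sqrt(h² + k² + l²) / (2·sin θ),
    with θ half the measured scattering angle 2θ."""
    theta = math.radians(two_theta_deg / 2.0)
    return wavelength_pm * math.sqrt(h * h + k * k + l * l) / (2.0 * math.sin(theta))
```

A round trip (computing the 2θ of the (111) reflection for an assumed a = 547.127 pm and inverting it) recovers the input lattice parameter, and shows directly how an erroneous wavelength value propagates into the derived a.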
Fundamental Parameters of Nearby Red Dwarfs: Stellar Radius as an Indicator of Age
Silverstein, Michele L.; Henry, Todd J.; Winters, Jennifer G.; Jao, Wei-Chun; Riedel, Adric R.; Dieterich, Sergio; RECONS Team
2016-01-01
Red dwarfs dominate the Galactic population, yet determining one of their most fundamental characteristics --- age --- has proven difficult. The characterization of red dwarfs in terms of their age is fundamental to mapping the history of star and, ultimately, planet formation in the Milky Way. Here we report on a compelling technique to evaluate the radii of red dwarfs, which can be used to provide leverage in estimating their ages. These radii are also particularly valuable in the cases of transiting exoplanet hosts because accurate stellar radii are required to determine accurate planetary radii. In this work, we use the BT-Settl models in combination with Johnson-Kron-Cousins VRI, 2MASS JHK, and WISE All-Sky Release photometry to produce spectral energy distributions (SEDs) to determine the temperatures and bolometric fluxes for 500 red dwarfs, most of which are in the southern sky. The full suites of our photometric and astrometric data (including hundreds of accurate new parallaxes from the RECONS team at the CTIO/SMARTS 0.9m) allow us to also determine the bolometric luminosities and radii. This method of radius determination is validated by a comparison of our measurements to those found using the CHARA Array (Boyajian et al. 2012), which match within a few percent. In addition to a compilation of red dwarf fundamental parameters, our findings provide a snapshot of relative stellar ages in the solar neighborhood. Of particular interest are the cohorts of very young and very old stars identified within 50 pc. These outliers exemplify the demographic extremes of the nearest stars. This effort has been supported by the NSF through grants AST-0908402, AST-1109445, and AST-1412026, and via observations made possible by the SMARTS Consortium.
Accurate spectroscopic parameters of WASP planet host stars
Doyle, Amanda P; Maxted, P F L; Anderson, D R; Cameron, A Collier; Gillon, M; Hellier, C; Pollacco, D; Queloz, D; Triaud, A H M J; West, R G
2012-01-01
We have made a detailed spectral analysis of eleven Wide Angle Search for Planets (WASP) planet host stars using high signal-to-noise (S/N) HARPS spectra. Our line list was carefully selected from the spectra of the Sun and Procyon, and we made a critical evaluation of the atomic data. The spectral lines were measured using equivalent widths. The procedures were tested on the Sun and Procyon prior to being used on the WASP stars. The effective temperature, surface gravity, microturbulent velocity and metallicity were determined for all the stars. We show that abundances derived from high S/N spectra are likely to be higher than those obtained from low S/N spectra, as noise can cause the equivalent width to be underestimated. We also show that there is a limit to the accuracy of stellar parameters that can be achieved, despite using high S/N spectra, and the average uncertainty in effective temperature, surface gravity, microturbulent velocity and metallicity is 83 K, 0.11 dex, 0.11 km/s and 0.10 dex respec...
Study of fundamental parameters in hybrid laser welding
Suder, Wojciech
2011-01-01
This thesis undertakes a study of laser welding in terms of basic laser material interaction parameters. This includes power density, interaction time and specific point energy. A detailed study of the correlation between the laser material interaction parameters and the observed weld bead profiles is carried out. The results show that the power density and the specific point energy control the depth of penetration, whilst the interaction time controls the weld width. These parameters uniquel...
Predicting accurate line shape parameters for CO2 transitions
The vibrational dependence of CO2 half-widths and line shifts are given by a modification of the model proposed by Gamache and Hartmann [Gamache R, Hartmann J-M. J Quant Spectrosc Radiat Transfer 2004;83:119]. This model allows the half-widths and line shifts for a ro-vibrational transition to be expressed in terms of the number of vibrational quanta exchanged in the transition raised to a power and a reference ro-vibrational transition. Calculations were made for 24 bands for lower rotational quantum numbers from 0 to 160 for N2-, O2-, air-, and self-collisions with CO2. These data were extrapolated to J″=200 to accommodate several databases. Comparison of the CRB calculations with measurement gives very high confidence in the data. In the model a Quantum Coordinate is defined by (c1|Δν1| + c2|Δν2| + c3|Δν3|)^p. The power p is adjusted and a linear least-squares fit to the data by the model expression is made. The procedure is iterated on the correlation coefficient, R, until [|R|−1] is less than a threshold. The results demonstrate the appropriateness of the model. The model allows the determination of the slope and intercept as a function of rotational transition, broadening gas, and temperature. From the data of the fits, the half-width, line shift, and the temperature dependence of the half-width can be estimated for any ro-vibrational transition, allowing spectroscopic CO2 databases to have complete information for the line shape parameters. -- Highlights: • Development of a quantum coordinate model for the half-width and line shift. • Calculations of γ and δ for N2-, O2-, air-, and CO2–CO2 systems for 24 bands. • J″=0–160, bands up to Δν1=3, Δν2=5, Δν3=9, 9 temperatures from 200–2000 K. • γ, n, δ, prediction routines for all ro-vibrational transitions up to J″=200.
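The fitting procedure described above, a linear least-squares fit in a quantum coordinate Q = (c1|Δν1| + c2|Δν2| + c3|Δν3|)^p whose exponent p is adjusted until |R| approaches 1, can be sketched as follows. The coefficients c1, c2, c3 and the grid of candidate powers are illustrative placeholders, not the paper's values.

```python
import numpy as np

def fit_quantum_coordinate(dv, gammas, c=(1.0, 0.5, 2.0),
                           powers=np.linspace(0.2, 3.0, 57)):
    """Scan the exponent p of Q = (c1|Δν1| + c2|Δν2| + c3|Δν3|)^p and keep
    the p whose linear least-squares fit gamma ≈ m·Q + b has |R| closest
    to 1 (a simple grid scan standing in for the paper's iteration)."""
    dv = np.asarray(dv, dtype=float)
    base = c[0] * np.abs(dv[:, 0]) + c[1] * np.abs(dv[:, 1]) + c[2] * np.abs(dv[:, 2])
    best = None
    for p in powers:
        Q = base ** p
        m, b = np.polyfit(Q, gammas, 1)        # slope, intercept
        r = np.corrcoef(Q, gammas)[0, 1]       # correlation coefficient R
        if best is None or abs(abs(r) - 1.0) < abs(abs(best[3]) - 1.0):
            best = (p, m, b, r)
    return best  # (p, slope, intercept, R)
```

On synthetic half-widths generated with a known exponent, the scan recovers the exponent, slope, and intercept.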
Nucleon structure as a background for determination of fundamental parameters
We consider deep inelastic, (quasi-) elastic lepton-nucleon scattering and investigate the possibilities of eliminating or suppressing theoretical uncertainties induced by nucleon structure in measuring the Standard Model parameters or in searching for new physics. On the basis of a rather general hypothesis about nucleon structure we have obtained new relations between cross sections and neutral current parameters which are independent of the nucleon structure. We also investigate a dependence of the QCD Λ-parameter extracted from the data on unknown large scale nucleon structure and propose a modification of the conventional QCD predictions which is weakly dependent on this uncertainty factor. (author). 9 refs, 1 tab
Duncan K Ralph
2016-01-01
Full Text Available VDJ rearrangement and somatic hypermutation work together to produce antibody-coding B cell receptor (BCR) sequences for a remarkable diversity of antigens. It is now possible to sequence these BCRs in high throughput; analysis of these sequences is bringing new insight into how antibodies develop, in particular for broadly-neutralizing antibodies against HIV and influenza. A fundamental step in such sequence analysis is to annotate each base as coming from a specific one of the V, D, or J genes, or from an N-addition (a.k.a. non-templated insertion). Previous work has used simple parametric distributions to model transitions from state to state in a hidden Markov model (HMM) of VDJ recombination, and assumed that mutations occur via the same process across sites. However, codon frame and other effects have been observed to violate these parametric assumptions for such coding sequences, suggesting that a non-parametric approach to modeling the recombination process could be useful. In our paper, we find that indeed large modern data sets suggest a model using parameter-rich per-allele categorical distributions for HMM transition probabilities and per-allele-per-position mutation probabilities, and that using such a model for inference leads to significantly improved results. We present an accurate and efficient BCR sequence annotation software package using a novel HMM "factorization" strategy. This package, called partis (https://github.com/psathyrella/partis/), is built on a new general-purpose HMM compiler that can perform efficient inference given a simple text description of an HMM.
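A minimal Viterbi decoder with table-driven (categorical) transition and emission probabilities illustrates the kind of per-base inference such an HMM annotator performs. The three-state V/N/J toy model and all probability values in the test are drastic simplifications invented for illustration, not the per-allele model used by partis.

```python
import math

def viterbi(obs, states, start_p, trans_p, emit_p):
    """Minimal Viterbi decoder over categorical probability tables.
    V is a list (one entry per observation) of dicts mapping each state
    to (best log-probability, backpointer to the previous state)."""
    V = [{s: (math.log(start_p[s]) + math.log(emit_p[s][obs[0]]), None)
          for s in states}]
    for t in range(1, len(obs)):
        col = {}
        for s in states:
            prev = max(states, key=lambda r: V[t - 1][r][0] + math.log(trans_p[r][s]))
            col[s] = (V[t - 1][prev][0] + math.log(trans_p[prev][s])
                      + math.log(emit_p[s][obs[t]]), prev)
        V.append(col)
    # Backtrack from the most probable final state.
    last = max(states, key=lambda s: V[-1][s][0])
    path = [last]
    for t in range(len(obs) - 1, 0, -1):
        path.append(V[t][path[-1]][1])
    return path[::-1]
```

Decoding a toy read whose first half looks V-like and second half J-like yields the expected segment annotation.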
Measuring Accurate Body Parameters of Dressed Humans with Large-Scale Motion Using a Kinect Sensor
Sidan Du; Yang Li; Yao Yu; Yu Zhou; Huanghao Xu
2013-08-01
Full Text Available Non-contact human body measurement plays an important role in surveillance, physical healthcare, on-line business and virtual fitting. Current methods for measuring the human body without physical contact usually cannot handle humans wearing clothes, which limits their applicability in public environments. In this paper, we propose an effective solution that can measure accurate parameters of the human body with large-scale motion from a Kinect sensor, assuming that the people are wearing clothes. Because motion can drive clothes attached to the human body loosely or tightly, we adopt a space-time analysis to mine the information across the posture variations. Using this information, we recover the human body, regardless of the effect of clothes, and measure the human body parameters accurately. Experimental results show that our system can perform more accurate parameter estimation on the human body than state-of-the-art methods.
Estimating stellar fundamental parameters using PCA: application to early type stars of GES data
Farah, W; Paletou, F; Blomme, R
2015-01-01
This work addresses a procedure to estimate fundamental stellar parameters such as Teff, log g, [Fe/H], and v sin i using a dimensionality reduction technique called Principal Component Analysis (PCA), applied to a large database of synthetic spectra. This technique shows promising results for inverting stellar parameters of observed targets from the Gaia ESO Survey.
An accurate few-parameter ground state wave function for the Lithium atom
Guevara, Nicolais L; Turbiner, Alexander V
2009-01-01
A simple, seven-parameter trial function is proposed for a description of the ground state of the Lithium atom. It includes both spin functions. Inter-electronic distances appear in exponential form as well as in a pre-exponential factor, and the necessary energy matrix elements are evaluated by numerical integration in the space of the relative coordinates. Encouragingly accurate values of the energy and the cusp parameters are obtained.
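The variational principle behind such trial functions can be shown in miniature with the one-parameter hydrogen analogue, for which ⟨H⟩(α) = α²/2 − α in atomic units when ψ ∝ exp(−αr). This is a pedagogical stand-in for the seven-parameter lithium calculation, which requires numerical integration of the matrix elements.

```python
import numpy as np

def variational_hydrogen(alphas=np.linspace(0.2, 2.0, 181)):
    """Variational principle in miniature: scan the single parameter α of
    the hydrogen trial function ψ ∝ exp(-α r), for which the energy
    expectation is ⟨H⟩(α) = α²/2 − α (atomic units), and keep the minimum."""
    E = alphas ** 2 / 2.0 - alphas
    i = np.argmin(E)
    return alphas[i], E[i]
```

The scan lands on α = 1 and E = −0.5 hartree, the exact hydrogen ground state; richer trial functions (such as the seven-parameter lithium form) only push the same upper-bound minimisation into a higher-dimensional parameter space.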
In-situ measurements of material thermal parameters for accurate LED lamp thermal modelling
Vellvehi, M.; Perpina, X.; Jorda, X.; Werkhoven, R.J.; Kunen, J.M.G.; Jakovenko, J.; Bancken, P.; Bolt, P.J.
2013-01-01
This work deals with the extraction of key thermal parameters for accurate thermal modelling of LED lamps: air exchange coefficient around the lamp, emissivity and thermal conductivity of all lamp parts. As a case study, an 8W retrofit lamp is presented. To assess simulation results, temperature is
Harb, Moussab
2016-01-05
An essential issue in developing new semiconductors for photovoltaic devices is to design materials with appropriate fundamental parameters related to light absorption, photogenerated exciton dissociation and charge carrier diffusion. These phenomena are governed by intrinsic properties of the semiconductor like the bandgap, the dielectric constant, the charge carrier effective masses, and the exciton binding energy. We present here the results of a systematic theoretical study on the fundamental properties of a series of selected semiconductors widely used in inorganic photovoltaic and dye-sensitized solar cells such as Si, Ge, CdS, CdSe, CdTe, and GaAs. These intrinsic properties were computed in the framework of density functional theory (DFT) along with the standard PBE and the range-separated hybrid (HSE06) exchange-correlation functionals. Our calculations clearly show that the computed values using HSE06 reproduce the experimental data with high accuracy. The evaluation and accurate prediction of these key properties using HSE06 open promising perspectives for the in silico design of new suitable candidate materials for solar energy conversion applications.
Accurate state and parameter estimation in nonlinear systems with sparse observations
Rey, Daniel; Eldridge, Michael; Kostuk, Mark [Department of Physics, University of California, San Diego, 9500 Gilman Drive, La Jolla, CA 92093-0374 (United States); Abarbanel, Henry D.I., E-mail: habarbanel@ucsd.edu [Department of Physics, University of California, San Diego, 9500 Gilman Drive, La Jolla, CA 92093-0374 (United States); Marine Physical Laboratory, Scripps Institution of Oceanography, University of California, San Diego, 9500 Gilman Drive, La Jolla, CA 92093-0374 (United States); Schumann-Bischoff, Jan [Max Planck Institute for Dynamics and Self-Organization, Am Faßberg 17, 37077 Göttingen (Germany); Institute for Nonlinear Dynamics, Georg-August-Universität Göttingen, Am Faßberg 17, 37077 Göttingen (Germany); Parlitz, Ulrich, E-mail: ulrich.parlitz@ds.mpg.de [Max Planck Institute for Dynamics and Self-Organization, Am Faßberg 17, 37077 Göttingen (Germany); Institute for Nonlinear Dynamics, Georg-August-Universität Göttingen, Am Faßberg 17, 37077 Göttingen (Germany)
2014-02-01
Transferring information from observations to models of complex systems may meet impediments when the number of observations at any observation time is not sufficient. This is especially so when chaotic behavior is expressed. We show how to use time-delay embedding, familiar from nonlinear dynamics, to provide the information required to obtain accurate state and parameter estimates. Good estimates of parameters and unobserved states are necessary for good predictions of the future state of a model system. This method may be critical in allowing the understanding of prediction in complex systems as varied as nervous systems and weather prediction where insufficient measurements are typical.
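The time-delay embedding step can be sketched directly: a scalar series x(t) is mapped to vectors [x(t), x(t−τ), …, x(t−(D−1)τ)], which supply surrogate coordinates for the unobserved state variables. The dimension and delay below are illustrative choices, not values from the paper.

```python
import numpy as np

def delay_embed(x, dim, tau):
    """Time-delay embedding: row i is [x(t), x(t-τ), ..., x(t-(dim-1)τ)]
    for t = (dim-1)·τ + i, giving an array of shape
    (len(x) - (dim-1)·τ, dim)."""
    x = np.asarray(x)
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[(dim - 1 - k) * tau: (dim - 1 - k) * tau + n]
                            for k in range(dim)])
```

In a state-estimation setting these embedded vectors provide the extra observables that constrain the otherwise unobserved model states during synchronisation or optimisation.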
3-Parameter Hough Ellipse Detection Algorithm for Accurate Location of Human Eyes
Qiufen Yang
2014-05-01
Full Text Available Accurately positioning the human eyes plays an important role in the detection of fatigue driving. In order to improve the performance of positioning the human face and eyes, an accurate positioning method for the human eyes is proposed based on 3-parameter Hough ellipse detection. Firstly, the human face area is divided by using the skin color clustering and segmentation algorithm. Then, the segmented image is filtered by using its geometric structure and the approximate positions of the human face area and eyes are calculated. Finally, on the basis of the spinning cone-shape eye model, the position of the human face and eyes is accurately determined by using the 3-parameter Hough transformation ellipse detection algorithm. Different images of human faces are used to test the performance of the proposed method. The experimental results show that the deviation between the detected extreme points of the upper and lower eyelids and their actual positions is 0.104, and that the proposed algorithm has high positioning accuracy.
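Hough voting with three parameters can be illustrated with the simpler circle case (cx, cy, r): each edge point votes for every centre/radius triple consistent with it, and the accumulator maximum wins. The paper's ellipse parameterisation and eye-specific preprocessing are beyond this sketch; the grid spacing and test geometry are arbitrary choices.

```python
import math
import numpy as np

def hough_circle(points, cx_range, cy_range, r_range):
    """Brute-force 3-parameter Hough voting for circles. Assumes r_range
    is unit-spaced, so the nearest radius bin is found by rounding."""
    acc = np.zeros((len(cx_range), len(cy_range), len(r_range)), dtype=int)
    r0 = r_range[0]
    for px, py in points:
        for i, cx in enumerate(cx_range):
            for j, cy in enumerate(cy_range):
                r = math.hypot(px - cx, py - cy)
                k = int(round(r - r0))          # nearest unit-spaced radius bin
                if 0 <= k < len(r_range):
                    acc[i, j, k] += 1
    i, j, k = np.unravel_index(np.argmax(acc), acc.shape)
    return cx_range[i], cy_range[j], r_range[k]
```

Points sampled on a circle vote unanimously for the true triple, so the accumulator peak recovers centre and radius exactly on this toy input.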
Schonrich, R.; Bergemann, M.
2013-01-01
We present a unified framework to derive fundamental stellar parameters by combining all available observational and theoretical information for a star. The algorithm relies on the method of Bayesian inference, which for the first time directly integrates the spectroscopic analysis pipeline based on the global spectrum synthesis and allows for comprehensive and objective error calculations given the priors. Arbitrary input datasets can be included into our analysis and other stellar quantitie...
Inversion of stellar fundamental parameters from Espadons and Narval high-resolution spectra
Paletou, F; Böhm, T.; Watson, V.; Trouilhet, J. -F.
2014-01-01
The general context of this study is the inversion of stellar fundamental parameters from high-resolution Echelle spectra. We aim at developing a fast and reliable tool for the post-processing of spectra produced by the Espadons and Narval spectropolarimeters. Our inversion tool relies on principal component analysis. It allows reduction of dimensionality and the definition of a specific metric for the search of nearest neighbours between an observed spectrum and a set of observed spectra ...
PCA-based inversion of stellar fundamental parameters from high-resolution Echelle spectra
Paletou, F; Trouilhet, J. -F.; Boehm, T.
2014-01-01
The general context of this study is the inversion of stellar fundamental parameters from high-resolution Echelle spectra. We aim at developing a fast and reliable tool for the post-processing of spectra produced, in particular, by the Espadons and Narval spectropolarimeters. Our inversion tool relies on principal component analysis. It allows reduction of dimensionality and the definition of a specific metric for the search of nearest neighbours between an observed spectrum and a set of synt...
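The PCA-plus-nearest-neighbour inversion can be sketched as follows: the training grid of spectra is centred, a low-dimensional principal-component basis is extracted by SVD, and the parameters of the training spectrum nearest to the observed one in coefficient space are returned. The synthetic "spectra" in the test are a toy one-parameter family, and the number of retained components is an arbitrary choice.

```python
import numpy as np

def pca_nearest_neighbour(train_spectra, train_params, observed, n_comp=12):
    """Project all spectra onto the leading principal components of the
    training grid (rows = spectra) and return the parameters of the
    nearest neighbour in the reduced coefficient space."""
    mean = train_spectra.mean(axis=0)
    X = train_spectra - mean
    _, _, Vt = np.linalg.svd(X, full_matrices=False)  # rows of Vt = components
    basis = Vt[:n_comp]
    coeffs = X @ basis.T                    # training coefficients
    obs_c = (observed - mean) @ basis.T     # observed coefficients
    idx = np.argmin(np.sum((coeffs - obs_c) ** 2, axis=1))
    return train_params[idx]
```

In practice the training set is a large grid of synthetic (or labelled observed) spectra and `train_params` holds tuples such as (Teff, log g, [Fe/H]); here a scalar label per spectrum keeps the sketch short.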
Xu Long; Fei Ge; Lei Wang; Youshi Hong
2009-01-01
This paper investigates the effects of structure parameters on dynamic responses of submerged floating tunnel (SFT) under hydrodynamic loads. The structure parameters include buoyancy-weight ratio (BWR), stiffness coefficients of the cable systems, tunnel net buoyancy and tunnel length. First, the importance of structural damping in relation to the dynamic responses of SFT is demonstrated and the mechanism of the structural damping effect is discussed. Thereafter, the fundamental structure parameters are investigated through the analysis of SFT dynamic responses under hydrodynamic loads. The results indicate that the BWR of SFT is a key structure parameter. When BWR is 1.2, there is a remarkable trend change in the vertical dynamic response of SFT under hydrodynamic loads. The results also indicate that the ratio of the tunnel net buoyancy to the cable stiffness coefficient is not a characteristic factor affecting the dynamic responses of SFT under hydrodynamic loads.
Shaltout, Abdallah A.; Moharram, Mohammed A.; Mostafa, Nasser Y.
2012-01-01
This work is the first attempt to quantify trace elements in the Catha edulis plant (Khat) with a fundamental parameter approach. C. edulis is a famous drug plant in East Africa and the Arabian Peninsula. We have previously confirmed that hydroxyapatite represents one of the main inorganic compounds in the leaves and stalks of C. edulis. Comparable plant leaves from basil, mint and green tea were included in the present investigation, and trifolium leaves were included as an unrelated control. The elemental analyses of the plants were done by Wavelength Dispersive X-Ray Fluorescence (WDXRF) spectroscopy. Standard-less quantitative WDXRF analysis was carried out based on the fundamental parameter approach. According to the standard-less analysis algorithms, there is an essential need for an accurate determination of the amount of organic material in the sample. A new approach, based on differential thermal analysis, was successfully used for the organic material determination. The results obtained with this approach were in good agreement with the commonly used methods. Using the developed method, quantitative analysis results for eighteen elements, including Al, Br, Ca, Cl, Cu, Fe, K, Na, Ni, Mg, Mn, P, Rb, S, Si, Sr, Ti and Zn, were obtained for each plant. The results for the certified reference material of green tea (NCSZC73014, China National Analysis Center for Iron and Steel, Beijing, China) confirmed the validity of the proposed method.
Accurate estimation of motion blur parameters in noisy remote sensing image
Shi, Xueyan; Wang, Lin; Shao, Xiaopeng; Wang, Huilin; Tao, Zhong
2015-05-01
The relative motion between a remote sensing satellite sensor and objects is one of the most common causes of remote sensing image degradation. It seriously weakens image data interpretation and information extraction. In practice, the point spread function (PSF) should be estimated first for image restoration. Identifying the motion blur direction and length accurately is crucial for the PSF and for restoring the image with precision. In general, the regular light-and-dark stripes in the spectrum can be employed to obtain the parameters by using the Radon transform. However, serious noise in actual remote sensing images often renders the stripes indistinct; the parameters then become difficult to calculate and the error of the result relatively large. In this paper, an improved motion blur parameter identification method for noisy remote sensing images is proposed to solve this problem. The spectrum characteristic of a noisy remote sensing image is analyzed first. An interactive image segmentation method based on graph theory, called GrabCut, is adopted to effectively extract the edge of the light center in the spectrum. The motion blur direction is estimated by applying the Radon transform to the segmentation result. In order to reduce random error, a method based on whole-column statistics is used when calculating the blur length. Finally, the Lucy-Richardson algorithm is applied to restore remote sensing images of the Moon after estimating the blur parameters. The experimental results verify the effectiveness and robustness of our algorithm.
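A poor-man's version of the direction estimate, without the paper's GrabCut segmentation, can be sketched by projecting the disc-masked log spectrum at each candidate angle (a crude stand-in for the Radon transform) and keeping the angle of maximum projection variance, since the sinc stripes of a linear blur align at that angle. All sizes and the blur kernel in the test are illustrative.

```python
import numpy as np
from scipy import ndimage

def estimate_blur_direction(image, angles=np.arange(0.0, 180.0, 3.0)):
    """Estimate linear motion-blur direction from the stripes in the log
    power spectrum. A circular mask makes the support rotation-invariant
    so projection variance differences come only from stripe alignment."""
    spec = np.log1p(np.abs(np.fft.fftshift(np.fft.fft2(image))))
    h, w = spec.shape
    yy, xx = np.mgrid[:h, :w]
    r = min(h, w) / 2.0 - 1.0
    spec = spec * (((yy - h / 2.0) ** 2 + (xx - w / 2.0) ** 2) <= r * r)
    best_angle, best_var = 0.0, -np.inf
    for a in angles:
        proj = ndimage.rotate(spec, a, reshape=False, order=1).sum(axis=0)
        v = proj.var()
        if v > best_var:
            best_angle, best_var = a, v
    return best_angle
```

For a horizontally blurred noise image the estimate comes out near 0° (equivalently 180°); on real imagery the segmentation and whole-column statistics from the paper would be needed for robustness.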
Blossier, Benoit
2014-01-01
The observation two years ago at the LHC of a new boson, with a mass of 126 GeV, is a great achievement. Its interpretation as a Brout-Englert-Higgs boson is very plausible and appealing to complete the zoology of fundamental particles in the Standard Model. The interplay between theorists and experimentalists that we have witnessed has come with a huge amount of work to determine with enough precision the parameters of the Standard Model: couplings, masses, mixing angles. Among the various tools developed by physicists, lattice QCD is particularly suitable for determining those parameters in the quark sector. In this report I discuss the lattice measurement of Standard Model fundamental parameters that are closely related to the Higgs boson: its main production mode is gluon-gluon fusion, whose magnitude is governed by the strong coupling constant, while its most favored decay channel, $H \to b \bar{b}$, has a coupling proportional to the $b$ quark mass. I outline the improvements brought by the lattice community: simulations w...
Jeong, Hyunjo; Zhang, Shuzeng; Barnard, Dan; Li, Xiongbing
2015-09-01
The accurate measurement of the acoustic nonlinearity parameter β for fluids or solids generally requires corrections for diffraction effects due to the finite-size geometry of the transmitter and receiver. These effects are well known in linear acoustics, while those for second harmonic waves have not been well addressed and were therefore not properly considered in previous studies. In this work, we explicitly define the attenuation and diffraction corrections using the multi-Gaussian beam (MGB) equations, which were developed from the quasilinear solutions of the KZK equation. The effects of making these corrections are examined through the simulation of β determination in water. Diffraction corrections are found to have more significant effects than attenuation corrections, and the β values of water can be estimated experimentally with less than 5% error when the exact second harmonic diffraction corrections are used; the attenuation correction effects are negligible given the linear frequency dependence between the attenuation coefficients, α2 ≃ 2α1.
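For context, the baseline that these corrections modify is the lossless plane-wave relation A2 = β·k²·x·A1²/8 with k = ω/c (displacement amplitudes), so β can be inverted from measured fundamental and second-harmonic amplitudes. This form is standard; the specific numbers in the test are illustrative, and the attenuation and diffraction corrections discussed in the paper are deliberately omitted.

```python
import math

def beta_plane_wave(a1, a2, freq_hz, distance_m, c_mps=1482.0):
    """Uncorrected lossless plane-wave estimate of the acoustic
    nonlinearity parameter: A2 = β·k²·x·A1²/8, hence
    β = 8·A2 / (k²·x·A1²), with wavenumber k = ω/c."""
    k = 2.0 * math.pi * freq_hz / c_mps
    return 8.0 * a2 / (k * k * distance_m * a1 * a1)
```

In an actual experiment the measured A1 and A2 would each be divided by their attenuation and diffraction correction factors before applying this inversion.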
Atmospheric and Fundamental Parameters of Stars in Hubble's Next Generation Spectral Library
Heap, Sally
2010-01-01
Hubble's Next Generation Spectral Library (NGSL) consists of R ≈ 1000 spectra of 374 stars of assorted temperature, gravity, and metallicity. We are presently working to determine the atmospheric and fundamental parameters of the stars from the NGSL spectra themselves via full-spectrum fitting of model spectra to the observed (extinction-corrected) spectrum over the full wavelength range, 0.2-1.0 micron. We use two grids of model spectra for this purpose: the very low-resolution spectral grid from Castelli-Kurucz (2004), and the grid from MARCS (2008). Both the observed spectrum and the MARCS spectra are first degraded in resolution to match the very low resolution of the Castelli-Kurucz models, so that our fitting technique is the same for both model grids. We will present our preliminary results along with a comparison to those from the Sloan/Segue Stellar Parameter Pipeline, ELODIE, MILES, etc.
The Pan-Pacific Planet Search V. Fundamental Parameters for 164 Evolved Stars
Wittenmyer, Robert A; Liu, Fan; Wang, Liang; Casagrande, Luca; Johnson, John Asher; Tinney, C G
2016-01-01
We present spectroscopic stellar parameters for the complete target list of 164 evolved stars from the Pan-Pacific Planet Search, a five-year radial velocity campaign using the 3.9m Anglo-Australian Telescope. For 87 of these bright giants, our work represents the first determination of their fundamental parameters. Our results carry typical uncertainties of 100 K, 0.15 dex, and 0.1 dex in $T_{\\rm eff}$, $\\log g$, and [Fe/H] and are consistent with literature values where available. The derived stellar masses have a mean of $1.31^{+0.28}_{-0.25}$ Msun, with a tail extending to $\\sim$2 Msun, consistent with the interpretation of these targets as "retired" A-F type stars.
A MACHINE-LEARNING METHOD TO INFER FUNDAMENTAL STELLAR PARAMETERS FROM PHOTOMETRIC LIGHT CURVES
A fundamental challenge for wide-field imaging surveys is obtaining follow-up spectroscopic observations: there are >10⁹ photometrically cataloged sources, yet modern spectroscopic surveys are limited to ∼few × 10⁶ targets. As we approach the Large Synoptic Survey Telescope era, new algorithmic solutions are required to cope with the data deluge. Here we report the development of a machine-learning framework capable of inferring fundamental stellar parameters (Teff, log g, and [Fe/H]) using photometric-brightness variations and color alone. A training set is constructed from a systematic spectroscopic survey of variables with Hectospec/Multi-Mirror Telescope. In sum, the training set includes ∼9000 spectra, for which stellar parameters are measured using the SEGUE Stellar Parameters Pipeline (SSPP). We employed the random forest algorithm to perform a non-parametric regression that predicts Teff, log g, and [Fe/H] from photometric time-domain observations. Our final optimized model produces a cross-validated rms error (RMSE) of 165 K, 0.39 dex, and 0.33 dex for Teff, log g, and [Fe/H], respectively. Examining the subset of sources for which the SSPP measurements are most reliable, the RMSE reduces to 125 K, 0.37 dex, and 0.27 dex, respectively, comparable to what is achievable via low-resolution spectroscopy. For variable stars this represents a ≈12%-20% improvement in RMSE relative to models trained with single-epoch photometric colors. As an application of our method, we estimate stellar parameters for ∼54,000 known variables. We argue that this method may convert photometric time-domain surveys into pseudo-spectrographic engines, enabling the construction of extremely detailed maps of the Milky Way, its structure, and history.
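The random-forest regression step can be sketched with scikit-learn (an assumed dependency): a multi-output regressor maps a feature matrix of colours and light-curve statistics to (Teff, log g, [Fe/H]). The feature construction, pruning, and tuning described in the record are not reproduced, and the synthetic data in the test are purely illustrative.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def train_stellar_rf(features, labels, n_trees=200, seed=0):
    """Fit a multi-output random-forest regression from photometric
    features (rows = stars) to stellar-parameter triples
    (Teff, log g, [Fe/H]). A sketch, not the paper's tuned pipeline."""
    model = RandomForestRegressor(n_estimators=n_trees, random_state=seed)
    model.fit(features, labels)
    return model
```

On a toy dataset where each target depends smoothly on one feature, the forest fits the training relation almost perfectly; in the real application, cross-validated RMSE on held-out spectroscopic labels is the meaningful metric.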
Passegger, V. M.; Wende-von Berg, S.; Reiners, A.
2016-03-01
M-dwarf stars are the most numerous stars in the Universe; they span a wide range in mass and are the focus of ongoing and planned exoplanet surveys. To investigate and understand their physical nature, detailed spectral information and accurate stellar models are needed. We use a new synthetic atmosphere model generation and compare model spectra to observations. To test the model accuracy, we compared the models to four benchmark stars with atmospheric parameters for which independent information from interferometric radius measurements is available. We used χ2-based methods to determine parameters from high-resolution spectroscopic observations. Our synthetic spectra are based on the new PHOENIX grid that uses the ACES description for the equation of state. This is a model generation expected to be especially suitable for the low-temperature atmospheres. We identified suitable spectral tracers of atmospheric parameters and determined the uncertainties in Teff, log g, and [Fe/H] resulting from degeneracies between parameters and from shortcomings of the model atmospheres. The inherent uncertainties we find are σTeff = 35 K, σlog g = 0.14, and σ[Fe/H] = 0.11. The new model spectra achieve a reliable match to our observed data; our results for Teff and log g are consistent with literature values to within 1σ. However, metallicities reported from earlier photometric and spectroscopic calibrations in some cases disagree with our results by more than 3σ. A possible explanation is systematic errors in earlier metallicity determinations that were based on insufficient descriptions of the cool atmospheres. At this point, however, we cannot definitely identify the reason for this discrepancy, but our analysis indicates that there is a large uncertainty in the accuracy of M-dwarf parameter estimates. Based on observations carried out with UVES at ESO VLT.
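The χ2-based determination described above can be sketched as a grid comparison: evaluate χ2 between an observed spectrum and each model in a grid, and take the minimizing grid point. The "synthetic spectra" below are a toy Gaussian-line model, not the PHOENIX-ACES grid, and all numbers are placeholders.

```python
# Sketch of chi^2-based parameter determination: compare an observed
# spectrum to a grid of (toy) synthetic spectra, pick the minimum.
import numpy as np

wav = np.linspace(8400.0, 8800.0, 400)   # wavelength axis (Angstrom), toy

def toy_model(teff, logg):
    """Hypothetical synthetic spectrum: continuum with one line whose
    depth and width depend on Teff and log g."""
    depth = 0.5 * (4000.0 / teff)
    width = 1.0 + 0.3 * logg
    return 1.0 - depth * np.exp(-0.5 * ((wav - 8600.0) / width) ** 2)

teffs = np.arange(2800, 4200, 100)       # toy model grid
loggs = np.arange(4.0, 5.6, 0.2)

truth = (3300.0, 4.8)
sigma = 0.01                             # per-pixel flux uncertainty
obs = toy_model(*truth) + np.random.default_rng(1).normal(0, sigma, wav.size)

chi2 = np.array([[np.sum(((obs - toy_model(t, g)) / sigma) ** 2)
                  for g in loggs] for t in teffs])
i, j = np.unravel_index(np.argmin(chi2), chi2.shape)
print("best fit: Teff =", teffs[i], "log g =", round(float(loggs[j]), 1))
```

In practice the χ2 surface around the minimum also yields the parameter uncertainties and exposes the Teff-log g degeneracies the abstract mentions.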
Fundamental parameters of QCD from non-perturbative methods for two and four flavors
The non-perturbative formulation of Quantum Chromodynamics (QCD) on a four-dimensional Euclidean space-time lattice, together with finite-size techniques, enables us to perform the renormalization of the QCD parameters non-perturbatively. In order to obtain precise predictions from lattice QCD, one needs to include the dynamical fermions into lattice QCD simulations. We consider QCD with two and four mass-degenerate flavors of O(a) improved Wilson quarks. In this thesis, we improve the existing determinations of the fundamental parameters of two- and four-flavor QCD. In the four-flavor theory, we compute the precise value of the Λ parameter in units of the scale Lmax defined in the hadronic regime. We also give a precise determination of the Schroedinger functional running coupling in the four-flavor theory and compare it to perturbative results. The Monte Carlo simulations of lattice QCD within the Schroedinger Functional framework were performed with a platform-independent program package, Schroedinger Funktional Mass Preconditioned Hybrid Monte Carlo (SF-MP-HMC), developed as a part of this project. Finally, we compute the strange quark mass and the Λ parameter in the two-flavor theory, performing a well-controlled continuum limit and chiral extrapolation. To achieve this, we developed a universal program package for simulating two flavors of Wilson fermions, Mass Preconditioned Hybrid Monte Carlo (MP-HMC), which we used to run large-scale simulations at small lattice spacings and at pion masses close to the physical value.
Shaltout, Abdallah A., E-mail: shaltout_a@hotmail.com [Spectroscopy Department, Physics Division, National Research Center, El Behooth Str., 12622 Dokki, Cairo (Egypt); Faculty of science, Taif University, 21974 Taif, P.O. Box 888 (Saudi Arabia); Moharram, Mohammed A. [Spectroscopy Department, Physics Division, National Research Center, El Behooth Str., 12622 Dokki, Cairo (Egypt); Mostafa, Nasser Y. [Faculty of science, Taif University, 21974 Taif, P.O. Box 888 (Saudi Arabia); Chemistry Department, Faculty of Science, Suez Canal University, Ismailia (Egypt)
2012-01-15
This work is the first attempt to quantify trace elements in the Catha edulis plant (Khat) with a fundamental parameter approach. C. edulis is a famous drug plant in East Africa and the Arabian Peninsula. We have previously confirmed that hydroxyapatite represents one of the main inorganic compounds in the leaves and stalks of C. edulis. Comparable plant leaves from basil, mint and green tea were included in the present investigation, and trifolium leaves were included as an unrelated plant. The elemental analyses of the plants were done by Wavelength Dispersive X-Ray Fluorescence (WDXRF) spectroscopy. Standard-less quantitative WDXRF analysis was carried out based on the fundamental parameter approach. According to the standard-less analysis algorithms, there is an essential need for an accurate determination of the amount of organic material in the sample. A new approach, based on differential thermal analysis, was successfully used for the organic material determination. The results obtained with this approach were in good agreement with the commonly used methods. Using the developed method, quantitative analysis results for eighteen elements (Al, Br, Ca, Cl, Cu, Fe, K, Na, Ni, Mg, Mn, P, Rb, S, Si, Sr, Ti and Zn) were obtained for each plant. The results for the certified reference material of green tea (NCSZC73014, China National Analysis Center for Iron and Steel, Beijing, China) confirmed the validity of the proposed method. - Highlights: ► Quantitative analysis of Catha edulis was carried out using standardless WDXRF. ► Differential thermal analysis was used for determination of the loss on ignition. ► The existence of hydroxyapatite in Catha edulis plant has been confirmed. ► The CRM results confirmed the validity of the developed method.
Mérand, A.; Kervella, P.; Pribulla, T.; Petr-Gotzens, M. G.; Benisty, M.; Natta, A.; Duvert, G.; Schertl, D.; Vannier, M.
2011-08-01
Context. The triple stellar system δ Vel (composed of two A-type and one F-type main-sequence stars) is particularly interesting because it contains one of the nearest and brightest eclipsing binaries. It therefore presents a unique opportunity to determine independently the physical properties of the three components of the system, as well as its distance. Aims: We aim at determining the fundamental parameters (masses, radii, luminosities, rotational velocities) of the three components of δ Vel, as well as the parallax of the system, independently from the existing Hipparcos measurement. Methods: We determined dynamical masses from high-precision astrometry of the orbits of Aab-B and Aa-Ab using adaptive optics (VLT/NACO) and optical interferometry (VLTI/AMBER). The main component is an eclipsing binary composed of two early A-type stars in rapid rotation. We modeled the photometric and radial velocity measurements of the eclipsing pair Aa-Ab using a self-consistent method based on physical parameters (mass, radius, luminosity, rotational velocity). Results: From our self-consistent modeling of the primary and secondary components of the δ Vel A eclipsing pair, we derive their fundamental parameters with a typical accuracy of 1%. We find that they have similar masses, 2.43 ± 0.02 M⊙ and 2.27 ± 0.02 M⊙. The physical parameters of the tertiary component (δ Vel B) are also estimated, although to a lower accuracy. We obtain a parallax π = 39.8 ± 0.4 mas for the system, in satisfactory agreement (-1.2 σ) with the Hipparcos value (πHip = 40.5 ± 0.4 mas). Conclusions: The physical parameters we derive represent a consistent set of constraints for the evolutionary modeling of this system. The agreement of the parallax we measure with the Hipparcos value to a 1% accuracy is also an interesting confirmation of the true accuracy of these two independent measurements. Based on observations made with ESO telescopes at Paranal Observatory, under ESO programs 076
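The dynamical masses quoted above follow from astrometric orbits via Kepler's third law once the parallax converts the angular semi-major axis to a physical one. A minimal sketch of that relation, with illustrative numbers (the orbit values below are hypothetical placeholders, not the δ Vel measurements; only the parallax is taken from the abstract):

```python
# Sketch: total dynamical mass from an astrometric orbit via Kepler's
# third law, M_tot = a^3 / P^2 (a in AU, P in years, M in Msun).
# The parallax converts the angular semi-major axis to AU.
parallax_mas = 39.8          # parallax (mas), from the abstract
a_angular_mas = 316.0        # hypothetical angular semi-major axis (mas)
P_years = 10.0               # hypothetical orbital period (years)

a_au = a_angular_mas / parallax_mas   # semi-major axis in AU
M_total = a_au ** 3 / P_years ** 2    # total system mass in solar masses
print(f"a = {a_au:.2f} AU, M_total = {M_total:.2f} Msun")
```

Because the parallax enters cubed through a, a 1% parallax error alone propagates to a ~3% mass error, which is why an independent parallax matters for the mass accuracy quoted above.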
Inversion of stellar fundamental parameters from Espadons and Narval high-resolution spectra
Paletou, F.; Watson, V.; Trouilhet, J.-F.
2014-01-01
The general context of this study is the inversion of stellar fundamental parameters from high-resolution Echelle spectra. We aim at developing a fast and reliable tool for the post-processing of spectra produced by the Espadons and Narval spectropolarimeters. Our inversion tool relies on principal component analysis. It allows reduction of dimensionality and the definition of a specific metric for the search of nearest neighbours between an observed spectrum and a set of observed spectra taken from the Elodie stellar library. Effective temperature, surface gravity, total metallicity and projected rotational velocity are derived. Various tests presented in this study, based solely on information from a spectral band centered around the Mg I b-triplet and on spectra from FGK stars, are very promising.
PCA-based inversion of stellar fundamental parameters from high-resolution Echelle spectra
Paletou, F.; Boehm, T.
2014-01-01
The general context of this study is the inversion of stellar fundamental parameters from high-resolution Echelle spectra. We aim at developing a fast and reliable tool for the post-processing of spectra produced, in particular, by the Espadons and Narval spectropolarimeters. Our inversion tool relies on principal component analysis. It allows reduction of dimensionality and the definition of a specific metric for the search of nearest neighbours between an observed spectrum and a set of synthetic spectra. Effective temperature, surface gravity, total metallicity and projected rotation velocity are derived. Our first tests, based essentially on the sole information from the spectral band that the RVS spectrometer will soon observe aboard the Gaia space observatory, and mainly on spectra from FGK dwarfs, are very promising. We also tested our method on a few targets beyond this domain of the H-R diagram.
Neilson, Hilding R.; Norris, Ryan; Kloppenborg, Brian; Lester, John B.
2016-01-01
One of the great challenges in understanding stars is measuring their masses. The best methods for measuring stellar masses include binary interaction, asteroseismology and stellar evolution models, but these methods are not ideal for red giant and supergiant stars. In this work, we propose a novel method for inferring stellar masses of evolved red giant and supergiant stars using interferometric and spectrophotometric observations combined with spherical model stellar atmospheres to measure what we call the stellar mass index, defined as the ratio between the stellar radius and mass. The method is based on the correlation between different measurements of angular diameter, used as a proxy for atmospheric extension, and fundamental stellar parameters. For a given star, spectrophotometry measures the Rosseland angular diameter while interferometric observations generally probe a larger limb-darkened angular diameter. The ratio of these two angular diameters is proportional to the relative extension of the stel...
Lachaume, Regis; Rabus, Markus; Jordan, Andres
2015-08-01
In stellar interferometry, the assumption that the observables can be treated as Gaussian, independent variables is the norm. In particular, neither the optical interferometry FITS (OIFITS) format nor the most popular fitting software in the field, LITpro, offers means to specify a covariance matrix or non-Gaussian uncertainties. Interferometric observables are correlated by construction, though. Also, the calibration by an instrumental transfer function ensures that the resulting observables are not Gaussian, even if the uncalibrated ones happened to be so. While analytic frameworks have been published in the past, they are cumbersome and there is no generic implementation available. We propose here a relatively simple way of dealing with correlated errors without the need to extend the OIFITS specification or to make Gaussian assumptions. By repeatedly picking at random which interferograms and which calibrator stars enter the reduction, and which errors are drawn on the calibrator diameters, and then performing the data processing on the bootstrapped data, we derive a sampling of p(O), the multivariate probability density function (PDF) of the observables O. The results can be stored in a normal OIFITS file. Then, given a model m with parameters P predicting observables O = m(P), we can estimate the PDF of the model parameters f(P) = p(m(P)) by using a density estimation of the observables' PDF p. With observations repeated over different baselines, on nights several days apart, and with a significant set of calibrators, systematic errors are de facto taken into account. We apply the technique to a precise and accurate assessment of stellar diameters obtained at the Very Large Telescope Interferometer with PIONIER.
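The bootstrap idea above can be sketched in a few lines: resample the raw measurements with replacement and redo the reduction each time, so the spread of the results samples the observables' PDF without Gaussian or independence assumptions. The "reduction" below is a toy average, not a real interferometric pipeline, and all data are synthetic placeholders.

```python
# Sketch of bootstrapping a calibrated observable: resample the raw
# interferograms and the calibrator estimates, redo the toy reduction,
# and read the PDF of the result off the bootstrap samples.
import numpy as np

rng = np.random.default_rng(42)
raw = rng.normal(1.0, 0.05, size=200)     # e.g. per-interferogram visibilities
calib = rng.normal(0.98, 0.02, size=20)   # e.g. transfer-function estimates

def reduce_data(v, c):
    # Toy calibration: observable = mean(raw) / mean(transfer function).
    return v.mean() / c.mean()

samples = np.array([
    reduce_data(rng.choice(raw, raw.size, replace=True),
                rng.choice(calib, calib.size, replace=True))
    for _ in range(2000)
])
print("observable = %.3f +/- %.3f" % (samples.mean(), samples.std()))
```

Because each bootstrap realization reuses the same calibrator draws across all observables of that realization, correlations between observables are preserved automatically in the joint sampling.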
Accurate Frequency Estimation Based On Three-Parameter Sine-Fitting With Three FFT Samples
Liu Xin
2015-09-01
This paper presents a simple DFT-based golden section searching algorithm (DGSSA) for single-tone frequency estimation. Because of truncation and discreteness in signal samples, the Fast Fourier Transform (FFT) and Discrete Fourier Transform (DFT) inevitably cause spectrum leakage and the fence effect, which lead to low estimation accuracy. This method can improve the estimation accuracy under conditions of a low signal-to-noise ratio (SNR) and a low resolution. The method first uses three FFT samples to determine the frequency searching scope; then, besides the frequency, the estimated values of amplitude, phase and dc component are obtained by minimizing the least square (LS) fitting error of three-parameter sine fitting. By setting reasonable stop conditions or the number of iterations, accurate frequency estimation can be realized. The accuracy of this method, when applied to observed single-tone sinusoid samples corrupted by white Gaussian noise, is investigated against different methods with respect to the unbiased Cramér-Rao Lower Bound (CRLB). The simulation results show that the root mean square error (RMSE) of the frequency estimation curve is consistent with the tendency of the CRLB as SNR increases, even in the case of a small number of samples. The average RMSE of the frequency estimation is less than 1.5 times the CRLB with SNR = 20 dB and N = 512.
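The core of the method above is that, at a fixed trial frequency, amplitude, phase, and dc offset follow from a linear least-squares fit (A·cos + B·sin + C), and the fitting residual is what a golden-section search minimizes over frequency. A sketch under simplifying assumptions (the search bracket below is placed by hand around the spectral peak, standing in for the three-FFT-sample bracketing of the paper; signal parameters are illustrative):

```python
# Sketch: three-parameter sine fit inside a golden-section search
# over frequency. The bracket stands in for the three-FFT-sample step.
import numpy as np

fs, N = 1000.0, 512                  # sample rate (Hz), record length
t = np.arange(N) / fs
f_true = 123.4
x = 1.5 * np.sin(2 * np.pi * f_true * t + 0.7) + 0.3
x += np.random.default_rng(7).normal(0, 0.05, N)   # white Gaussian noise

def sinefit_residual(f):
    """LS fit of A*cos + B*sin + C at trial frequency f; rms residual."""
    D = np.column_stack([np.cos(2 * np.pi * f * t),
                         np.sin(2 * np.pi * f * t),
                         np.ones(N)])
    p, *_ = np.linalg.lstsq(D, x, rcond=None)
    r = x - D @ p
    return np.sqrt(r @ r / N)

# Golden-section search within one DFT bin of the peak (bracket chosen
# narrower than fs/N around the true frequency, so the residual is unimodal).
lo, hi = 122.0, 125.0
g = (np.sqrt(5) - 1) / 2
for _ in range(60):
    a, b = hi - g * (hi - lo), lo + g * (hi - lo)
    if sinefit_residual(a) < sinefit_residual(b):
        hi = b
    else:
        lo = a
f_est = 0.5 * (lo + hi)
print("estimated frequency: %.3f Hz" % f_est)
```

Keeping the bracket inside the main spectral lobe matters: outside it, sidelobes make the residual multimodal and the golden-section search can lock onto the wrong minimum.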
Accurate parameters for HD 209458 and its planet from HST spectrophotometry
del Burgo, C.; Allende Prieto, C.
2016-08-01
We present updated parameters for the star HD 209458 and its transiting giant planet. The stellar angular diameter θ=0.2254±0.0017 mas is obtained from the average ratio between the absolute flux observed with the Hubble Space Telescope and that of the best-fitting Kurucz model atmosphere. This angular diameter represents a more than fourfold improvement in precision over available interferometric determinations. The stellar radius R⋆=1.20±0.05 R⊙ is ascertained by combining the angular diameter with the Hipparcos trigonometric parallax, which is the main contributor to its uncertainty, and therefore the radius accuracy should be significantly improved with Gaia's measurements. The radius of the exoplanet Rp=1.41±0.06 RJ is derived from the corresponding transit depth in the light curve and our stellar radius. From the model fitting, we accurately determine the effective temperature, Teff=6071±20 K, which is in perfect agreement with the value of 6070±24 K calculated from the angular diameter and the integrated spectral energy distribution. We also find precise values from recent Padova isochrones, such as R⋆=1.20±0.06 R⊙ and Teff=6099±41 K. We arrive at a consistent picture from these methods and compare the results with those from the literature.
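The geometric relations behind these numbers are compact enough to sketch: the linear radius follows from the angular diameter and the parallax, and the planet radius from the transit depth. In the sketch below the angular diameter and final radii match the abstract, but the parallax and transit depth are hypothetical placeholders back-computed for illustration, not measured inputs.

```python
# Sketch: stellar radius from angular diameter + parallax, and planet
# radius from transit depth. Parallax and depth are placeholders.
import numpy as np

MAS_TO_RAD = np.pi / (180 * 3600 * 1000)   # milliarcsec -> radians
PC_IN_RSUN = 3.0857e13 / 6.957e5           # one parsec in solar radii
RSUN_IN_RJUP = 6.957e5 / 7.1492e4          # one solar radius in Jupiter radii

theta_mas = 0.2254      # angular diameter (mas), from the abstract
parallax_mas = 20.2     # hypothetical parallax (mas), illustrative

d_pc = 1000.0 / parallax_mas
# R* = (theta/2) * d, with theta in radians and d expressed in Rsun
r_star = 0.5 * theta_mas * MAS_TO_RAD * d_pc * PC_IN_RSUN
depth = 0.0146          # hypothetical transit depth, (Rp/R*)^2
r_planet = np.sqrt(depth) * r_star * RSUN_IN_RJUP
print(f"R* = {r_star:.2f} Rsun, Rp = {r_planet:.2f} RJup")
```

Since R⋆ scales inversely with parallax, the parallax uncertainty dominates the radius error budget, exactly as the abstract notes.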
Open clusters. I. Fundamental parameters of B stars in NGC 3766 and NGC 4755
Aidelman, Y.; Cidale, L. S.; Zorec, J.; Arias, M. L.
2012-08-01
Context. Spectroscopic investigations of galactic open clusters are scarce and limited to a reduced sample of cluster members. Aims: We intend to perform a complete study of the physical parameters of two galactic clusters as well as of their individual members. Methods: To carry out this study, we used the BCD (Barbier-Chalonge-Divan) spectrophotometric system, which is based on the study of the Balmer discontinuity and is independent of interstellar and circumstellar extinction. Additional physical properties were derived from the line profiles (FWHM) and stellar evolution models. We analyzed low-resolution spectra around the Balmer discontinuity for normal B-type and Be stars in two open clusters: NGC 3766 and NGC 4755. We determined the stellar fundamental parameters, such as effective temperatures, surface gravities, spectral types, luminosity classes, absolute and bolometric magnitudes, and color gradient excesses. The stellar rotation velocity was also determined. Complementary information, mainly stellar mass, age, and radius of the star population, was calculated using stellar evolution models. In some cases, the stellar fundamental parameters were derived for the first time. The obtained results also allowed us to determine the reddening, age, and distance to the clusters. Results: The cluster parameters obtained through the BCD method agree very well with those derived from classical methods based on photometric data. The BCD system also provides physical properties of the star members. This study enables us to test the good behavior of the Mbol(λ1,D) calibrations and detect systematic discrepancies between log g estimates from model atmospheres and those derived from stellar evolution models. To improve our knowledge on the formation and evolution of the clusters, more statistical studies on the initial mass, luminosity, and angular momentum distributions should be addressed. Therefore, the BCD spectrophotometric system could be a powerful tool for studying
Aidelman, Y.; Cidale, L. S.; Zorec, J.; Panei, J. A.
2015-05-01
Context. The knowledge of accurate values of effective temperature, surface gravity, and luminosity of stars in open clusters is very important not only to derive cluster distances and ages but also to discuss stellar structure and evolution. Unfortunately, stellar parameters are still very scarce. Aims: Our goal is to study five open clusters to derive stellar parameters of the B and Be star population and discuss the cluster properties. In the near future, we intend to gather a statistically relevant sample of Be stars to discuss their origin and evolution. Methods: We use the Barbier-Chalonge-Divan spectrophotometric system, based on the study of low-resolution spectra around the Balmer discontinuity, since it is independent of the interstellar and circumstellar extinction and provides accurate Hertzsprung-Russell diagrams and stellar parameters. Results: We determine stellar fundamental parameters, such as effective temperatures, surface gravities, spectral types, luminosity classes, absolute and bolometric magnitudes and colour gradient excesses of the stars in the field of Collinder 223, Hogg 16, NGC 2645, NGC 3114, and NGC 6025. Additional information, mainly masses and ages of cluster stellar populations, is obtained using stellar evolution models. In most cases, stellar fundamental parameters have been derived for the first time. We also discuss the derived cluster properties of reddening, age and distance. Conclusions: The Collinder 223 cluster parameters are E(B-V) = 0.25 ± 0.03 mag and (mv - Mv)0 = 11.21 ± 0.25 mag. In Hogg 16, we clearly distinguish two groups of stars (Hogg 16a and Hogg 16b) with very different mean true distance moduli (8.91 ± 0.26 mag and 12.51 ± 0.38 mag), mean colour excesses (0.26 ± 0.03 mag and 0.63 ± 0.08 mag), and spectral types (B early-type and B late-/A-type stars, respectively). The farthest group could be merged with Collinder 272. NGC 2645 is a young cluster (114 are E(B-V) = 0.10 ± 0
An empirical calibration to estimate cool dwarf fundamental parameters from H-band spectra
Newton, Elisabeth R.; Irwin, Jonathan; Mann, Andrew W.
2014-01-01
Interferometric radius measurements provide a direct probe of the fundamental parameters of M dwarfs, but interferometry is within reach for only a limited sample of nearby, bright stars. We use interferometrically measured radii, bolometric luminosities, and effective temperatures to develop new empirical calibrations based on low-resolution, near-infrared spectra. We use H-band Mg and Al features to determine effective temperature, radius and log luminosity; the standard deviations in the residuals of our best fits are, respectively, 73 K, 0.027 Rsun, and 0.049 dex (11% error on luminosity). These relationships are valid for mid-K to mid-M dwarfs, roughly corresponding to temperatures between 3100 and 4800 K. We apply our calibrations to M dwarfs targeted by the MEarth transiting planet survey and to the cool Kepler Objects of Interest (KOIs). We independently validate our calibrations by demonstrating a clear relationship between our inferred parameters and the absolute K magnitudes of the MEarth stars. We identify objects ...
AN EMPIRICAL CALIBRATION TO ESTIMATE COOL DWARF FUNDAMENTAL PARAMETERS FROM H-BAND SPECTRA
Newton, Elisabeth R.; Charbonneau, David; Irwin, Jonathan [Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138 (United States); Mann, Andrew W., E-mail: enewton@cfa.harvard.edu [Astronomy Department, University of Texas at Austin, Austin, TX 78712 (United States)
2015-02-20
Interferometric radius measurements provide a direct probe of the fundamental parameters of M dwarfs. However, interferometry is within reach for only a limited sample of nearby, bright stars. We use interferometrically measured radii, bolometric luminosities, and effective temperatures to develop new empirical calibrations based on low-resolution, near-infrared spectra. We find that H-band Mg and Al spectral features are good tracers of stellar properties, and derive functions that relate effective temperature, radius, and log luminosity to these features. The standard deviations in the residuals of our best fits are, respectively, 73 K, 0.027 R☉, and 0.049 dex (an 11% error on luminosity). Our calibrations are valid from mid K to mid M dwarf stars, roughly corresponding to temperatures between 3100 and 4800 K. We apply our H-band relationships to M dwarfs targeted by the MEarth transiting planet survey and to the cool Kepler Objects of Interest (KOIs). We present spectral measurements and estimated stellar parameters for these stars. Parallaxes are also available for many of the MEarth targets, allowing us to independently validate our calibrations by demonstrating a clear relationship between our inferred parameters and the stars' absolute K magnitudes. We identify objects with magnitudes that are too bright for their inferred luminosities as candidate multiple systems. We also use our estimated luminosities to address the applicability of near-infrared metallicity calibrations to mid and late M dwarfs. The temperatures we infer for the KOIs agree remarkably well with those from the literature; however, our stellar radii are systematically larger than those presented in previous works that derive radii from model isochrones. This results in a mean planet radius that is 15% larger than one would infer using the stellar properties from recent catalogs. Our results confirm the derived parameters from previous in-depth studies of KOIs 961 (Kepler
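Building an empirical calibration of this kind amounts to regressing an interferometrically anchored quantity on spectral-feature strengths and checking the residual scatter. The sketch below uses synthetic placeholder feature values and temperatures, not the actual H-band Mg/Al measurements, and a plain linear form rather than the authors' fitted functions.

```python
# Sketch: fit Teff ~ c0 + c1*EW_Mg + c2*EW_Al by least squares and
# quote the residual scatter, as an empirical calibration would.
# All feature strengths and temperatures are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(3)
n = 40
ew_mg = rng.uniform(0.2, 1.0, n)    # hypothetical Mg feature strength
ew_al = rng.uniform(0.1, 0.8, n)    # hypothetical Al feature strength
teff = 3000 + 1500 * ew_mg - 400 * ew_al + rng.normal(0, 60, n)

A = np.column_stack([np.ones(n), ew_mg, ew_al])
coef, *_ = np.linalg.lstsq(A, teff, rcond=None)
resid = teff - A @ coef
print("coefficients:", np.round(coef, 1))
print("residual std: %.0f K" % resid.std())
```

The residual standard deviation plays the role of the 73 K scatter quoted above: it is the calibration's quoted precision when applied to new spectra.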
Inversion of stellar fundamental parameters from ESPaDOnS and Narval high-resolution spectra
Paletou, F.; Böhm, T.; Watson, V.; Trouilhet, J.-F.
2015-01-01
The general context of this study is the inversion of stellar fundamental parameters from high-resolution Echelle spectra. We aim at developing a fast and reliable tool for the post-processing of spectra produced by the ESPaDOnS and Narval spectropolarimeters. Our inversion tool relies on principal component analysis. It allows reducing dimensionality and defining a specific metric for the search of nearest neighbours between an observed spectrum and a set of observed spectra taken from the Elodie stellar library. Effective temperature, surface gravity, total metallicity, and projected rotational velocity are derived. Various tests presented in this study, based solely on information coming from a spectral band centred on the Mg i b-triplet and on spectra from FGK stars, are very promising. Based on observations obtained at the Télescope Bernard Lyot (TBL, Pic du Midi, France), which is operated by the Observatoire Midi-Pyrénées, Université de Toulouse, Centre National de la Recherche Scientifique (France) and the Canada-France-Hawaii Telescope (CFHT) which is operated by the National Research Council of Canada, CNRS/INSU and the University of Hawaii (USA).
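The PCA + nearest-neighbour scheme described above can be sketched compactly: project a spectral library onto a few principal components, project the observed spectrum into the same space, and adopt the parameters of its nearest neighbours. The spectra below are toy Gaussian lines, not Elodie-library data, and the Teff dependence is an invented placeholder.

```python
# Sketch: PCA-based nearest-neighbour inversion of Teff from spectra.
# Library and "observed" spectra are toy models, not Elodie data.
import numpy as np

rng = np.random.default_rng(5)
npix, nlib = 300, 200
x = np.linspace(-1, 1, npix)
teff_lib = rng.uniform(4500, 6500, nlib)

def toy_spectrum(t):
    # Hypothetical mapping: line depth and width vary smoothly with Teff.
    depth = 0.8 - 5e-5 * (t - 4500)
    width = 0.1 + 2e-5 * (t - 4500)
    return 1 - depth * np.exp(-0.5 * (x / width) ** 2)

lib = np.array([toy_spectrum(t) for t in teff_lib])

# PCA via SVD of the mean-subtracted library
mean_spec = lib.mean(axis=0)
_, _, Vt = np.linalg.svd(lib - mean_spec, full_matrices=False)
ncomp = 5
coeffs = (lib - mean_spec) @ Vt[:ncomp].T    # library in PCA space

obs = toy_spectrum(5600.0) + rng.normal(0, 0.005, npix)
c_obs = (obs - mean_spec) @ Vt[:ncomp].T

# Nearest neighbours in the reduced space; average their parameters
d = np.linalg.norm(coeffs - c_obs, axis=1)
nn = np.argsort(d)[:5]
print("estimated Teff: %.0f K" % teff_lib[nn].mean())
```

Working in the low-dimensional PCA space is what makes the nearest-neighbour search fast and also filters out much of the per-pixel noise, which is the "specific metric" the abstract refers to.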
Schönrich, Ralph; Bergemann, Maria
2014-09-01
We present a unified framework to derive fundamental stellar parameters by combining all available observational and theoretical information for a star. The algorithm relies on the method of Bayesian inference, which for the first time directly integrates the spectroscopic analysis pipeline based on the global spectrum synthesis and allows for comprehensive and objective error calculations given the priors. Arbitrary input data sets can be included into our analysis, and other stellar quantities, in addition to stellar age, effective temperature, surface gravity, and metallicity, can be computed on demand. We lay out the mathematical framework of the method and apply it to several observational data sets, including high- and low-resolution spectra (UVES, NARVAL, HARPS, SDSS/SEGUE). We find that simpler approximations for the spectroscopic probability distribution function, which are inherent to past Bayesian approaches, lead to deviations of several standard deviations and unreliable errors on the same data. Thanks to its flexibility and the simultaneous analysis of multiple independent measurements for a star, the method will be ideal for analysing and cross-calibrating the large ongoing and forthcoming surveys, like the Gaia-ESO survey, SDSS, Gaia, and LSST.
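The combining step at the heart of such a framework can be illustrated on a 1-D grid: multiply the likelihoods from independent measurements (times a prior) and normalize. This is a deliberately reduced sketch, with two hypothetical Gaussian constraints standing in for the full multi-dimensional spectroscopic posterior.

```python
# Sketch: Bayesian combination of two independent Teff constraints
# on a common parameter grid. Constraint values are hypothetical.
import numpy as np

teff = np.linspace(5000, 7000, 2001)   # 1 K grid

def gauss(v, mu, sigma):
    return np.exp(-0.5 * ((v - mu) / sigma) ** 2)

like_spec = gauss(teff, 6100, 80)      # e.g. spectroscopic constraint
like_phot = gauss(teff, 5950, 120)     # e.g. photometric constraint
prior = np.ones_like(teff)             # flat prior

post = prior * like_spec * like_phot   # unnormalized posterior
post /= post.sum()                     # normalize on the grid

mean = (teff * post).sum()
std = np.sqrt(((teff - mean) ** 2 * post).sum())
print("combined Teff = %.0f +/- %.0f K" % (mean, std))
```

For Gaussian inputs this reproduces the inverse-variance-weighted mean, but the same grid multiplication works unchanged for arbitrarily shaped likelihoods, which is the point of the fully Bayesian treatment.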
Fundamental approaches for analysis thermal hydraulic parameter for Puspati Research Reactor
Hashim, Zaredah; Lanyau, Tonny Anak; Farid, Mohamad Fairus Abdul; Kassim, Mohammad Suhaimi; Azhar, Noraishah Syahirah
2016-01-01
The 1-MW PUSPATI Research Reactor (RTP) is the only pool-type nuclear research reactor in Malaysia; it was developed by General Atomic (GA), installed at the Malaysian Nuclear Agency, and reached first criticality on 8 June 1982. Based on the initial core, which comprised 80 standard TRIGA fuel elements, a fundamental thermal hydraulic model was investigated during steady-state operation using the PARET code. The main objective of this paper is to determine the variation of temperature profiles and the Departure from Nucleate Boiling Ratio (DNBR) of RTP at full power operation. The second objective is to confirm that the values obtained from the PARET code are in agreement with the Safety Analysis Report (SAR) for RTP. The code was employed for the hot and average channels in the core to calculate the fuel centerline and surface, cladding, and coolant temperatures, as well as the DNBR values. It was found that the thermal hydraulic parameters related to safety for the initial core, which was cooled by natural convection, were in agreement with the designed values and safety limits in the SAR.
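The safety margin quantity discussed above has a simple definition worth making explicit: the DNBR is the ratio of the critical heat flux (at which departure from nucleate boiling occurs) to the actual local surface heat flux. The numbers below are illustrative placeholders, not RTP/PARET results.

```python
# Sketch: Departure from Nucleate Boiling Ratio as a simple ratio.
# Both heat-flux values are hypothetical, for illustration only.
q_critical = 1.5e6   # hypothetical critical heat flux (W/m^2)
q_local = 0.3e6      # hypothetical local surface heat flux (W/m^2)

dnbr = q_critical / q_local
print(f"DNBR = {dnbr:.1f}")
```

A larger DNBR means a larger margin to the boiling crisis; safety analyses verify that the minimum DNBR along the hot channel stays above the limit set in the SAR.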
The Baade-Becker-Wesselink technique and the fundamental astrophysical parameters of Cepheids
Rastorguev, Alexey S; Zabolotskikh, Marina V; Berdnikov, Leonid N; Gorynya, Natalia A
2012-01-01
The BBW method remains one of the most widely used tools for deriving the full set of Cepheid astrophysical parameters. The surface-brightness version of the BBW technique has been preferentially used in recent decades to calculate Cepheid radii and to improve PLC relations; its implementation requires a priori knowledge of the Cepheid reddening value. We propose a new version of the Baade-Becker-Wesselink technique, which allows one to independently determine the colour excess and the intrinsic colour of a radially pulsating star, in addition to its radius, luminosity, and distance. It can be considered a generalization of Balona's light-curve modelling approach. The method also allows the function F(CI_0) = BC + 10 log Teff to be calibrated for the class of pulsating stars considered. We apply this technique to a number of classical Cepheids with very accurate light and radial-velocity curves. The new technique can also be applied to other pulsating variables, e.g. RR Lyrae stars. We also discuss the possible dependence of the projecti...
Thygesen, A O; Andrievsky, S; Korotin, S; Yong, D; Zaggia, S; Ludwig, H -G; Collet, R; Asplund, M; D'Antona, F; Meléndez, J; D'Ercole, A
2014-01-01
Context: The study of chemical abundance patterns in globular clusters is of key importance for constraining the different candidates for intra-cluster pollution of light elements. Aims: We aim at deriving accurate abundances for a large range of elements in the globular cluster 47 Tucanae (NGC 104) to add new constraints to the pollution scenarios for this particular cluster, expanding the range of previously derived element abundances. Methods: Using tailored 1D LTE atmospheric models together with a combination of equivalent-width measurements and LTE and NLTE synthesis, we derive stellar parameters and element abundances from high-resolution, high signal-to-noise spectra of 13 red giant stars near the tip of the RGB. Results: We derive abundances of a total of 27 elements (O, Na, Mg, Al, Si, Ca, Sc, Ti, V, Cr, Mn, Fe, Co, Ni, Cu, Zn, Y, Zr, Mo, Ru, Ba, La, Ce, Pr, Nd, Eu, Dy). Departures from LTE were taken into account for Na, Al, and Ba. We find a mean [Fe/H] = $-0.78\pm0.07$ and $[\alpha/{\rm Fe}]=0.34\pm0.03$ in...
Accurate determination of crystal structures based on averaged local bond order parameters
Lechner, Wolfgang; Dellago, Christoph
2008-01-01
Local bond order parameters based on spherical harmonics, also known as Steinhardt order parameters, are often used to determine crystal structures in molecular simulations. Here we propose a modification of this method in which the complex bond order vectors are averaged over the first neighbor shell of a given particle and the particle itself. As demonstrated using soft particle systems, this averaging procedure considerably improves the accuracy with which different crystal structures can ...
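The modification can be sketched independently of how the bond-order vectors themselves are computed. Below is a minimal pure-Python illustration of the averaging step, assuming the per-particle complex vectors q_lm(i) have already been obtained from the spherical harmonics of the bond directions (that part is omitted here):

```python
import math

def averaged_ql(q_lm, neighbors, l):
    """Averaged Steinhardt order parameter (the Lechner-Dellago modification):
    the complex bond-order vectors q_lm(i) -- assumed precomputed from the
    spherical harmonics of the bond directions -- are averaged over the first
    neighbour shell of particle i plus i itself, before forming the usual
    rotationally invariant magnitude.

    q_lm      : list of per-particle vectors, each of length 2l+1 (complex)
    neighbors : list of neighbour-index lists, one per particle
    """
    qbar = []
    for i in range(len(q_lm)):
        shell = list(neighbors[i]) + [i]          # first shell plus the particle itself
        avg = [sum(q_lm[k][m] for k in shell) / len(shell)
               for m in range(2 * l + 1)]         # average the complex vectors
        # rotationally invariant magnitude of the averaged vector
        qbar.append(math.sqrt(4 * math.pi / (2 * l + 1)
                              * sum(abs(c) ** 2 for c in avg)))
    return qbar
```

Because the average runs over the first shell plus the particle itself, structural information from the second shell enters the invariant, which is what improves the discrimination between crystal structures.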
Jamadi, Mohammad; Merrikh-Bayat, Farshad
2014-01-01
This paper proposes an effective method for estimating the parameters of double-cage induction motors using the Artificial Bee Colony (ABC) algorithm. For this purpose, the unknown parameters in the electrical model of the asynchronous machine are calculated such that the sum of the squared differences between the full-load torques, starting torques, maximum torques, starting currents, full-load currents, and nominal power factors obtained from the model and those provided by the manufacturer is minimized. In orde...
An Accurate and Generic Testing Approach to Vehicle Stability Parameters Based on GPS and INS
Zhibin Miao; Hongtian Zhang; Jinzhu Zhang
2015-01-01
With the development of the vehicle industry, controlling stability has become more and more important, and techniques for evaluating vehicle stability are in high demand. As a common method, GPS and INS sensors are applied to measure vehicle stability parameters by fusing data from the two sensor systems. A Kalman filter is usually used to fuse data from multiple sensors, although its prior model parameters must be identified first. In this paper, a robust, intelligent and precise meth...
Studies of Galactic chemical and dynamical evolution in the solar neighborhood depend on the availability of precise atmospheric parameters (effective temperature Teff, metallicity [Fe/H], and surface gravity log g) for solar-type stars. Many large-scale spectroscopic surveys operate at low to moderate spectral resolution for efficiency in observing large samples, which makes the stellar characterization difficult due to the high degree of blending of spectral features. Therefore, most surveys employ spectral synthesis, which is a powerful technique, but relies heavily on the completeness and accuracy of atomic line databases and can yield possibly correlated atmospheric parameters. In this work, we use an alternative method based on spectral indices to determine the atmospheric parameters of a sample of nearby FGK dwarfs and subgiants observed by the MARVELS survey at moderate resolving power (R ∼ 12,000). To avoid a time-consuming manual analysis, we have developed three codes to automatically normalize the observed spectra, measure the equivalent widths of the indices, and, through a comparison of those with values calculated with predetermined calibrations, estimate the atmospheric parameters of the stars. The calibrations were derived using a sample of 309 stars with precise stellar parameters obtained from the analysis of high-resolution FEROS spectra, permitting the low-resolution equivalent widths to be directly related to the stellar parameters. A validation test of the method was conducted with a sample of 30 MARVELS targets that also have reliable atmospheric parameters derived from the high-resolution spectra and spectroscopic analysis based on the excitation and ionization equilibria method. Our approach was able to recover the parameters within 80 K for Teff, 0.05 dex for [Fe/H], and 0.15 dex for log g, values that are lower than or equal to the typical external uncertainties found between different high-resolution analyses. An additional test
Ghezzi, Luan; Da Costa, Luiz N.; Maia, Marcio A. G.; Ogando, Ricardo L. C. [Observatório Nacional, Rua Gal. José Cristino 77, Rio de Janeiro, RJ 20921-400 (Brazil); Dutra-Ferreira, Letícia; Lorenzo-Oliveira, Diego; Porto de Mello, Gustavo F.; Santiago, Basílio X. [Laboratório Interinstitucional de e-Astronomia - LIneA, Rua Gal. José Cristino 77, Rio de Janeiro, RJ 20921-400 (Brazil); De Lee, Nathan [Department of Physics and Geology, Northern Kentucky University, Highland Heights, KY 41099 (United States); Lee, Brian L.; Ge, Jian [Department of Astronomy, University of Florida, 211 Bryant Space Science Center, Gainesville, FL 32611-2055 (United States); Wisniewski, John P. [H. L. Dodge Department of Physics and Astronomy, University of Oklahoma, 440 West Brooks St Norman, OK 73019 (United States); González Hernández, Jonay I. [Instituto de Astrofísica de Canarias (IAC), E-38205 La Laguna, Tenerife (Spain); Stassun, Keivan G.; Cargile, Phillip; Pepper, Joshua [Department of Physics and Astronomy, Vanderbilt University, Nashville, TN 37235 (United States); Fleming, Scott W. [Space Telescope Science Institute - STScI, 3700 San Martin Drive, Baltimore, MD 21218 (United States); Schneider, Donald P.; Mahadevan, Suvrath [Department of Astronomy and Astrophysics, The Pennsylvania State University, University Park, PA 16802 (United States); Wang, Ji, E-mail: luan@linea.gov.br [Department of Astronomy, Yale University, New Haven, CT 06511 (United States); and others
2014-12-01
Ghezzi, Luan; Lorenzo-Oliveira, Diego; de Mello, Gustavo F Porto; Santiago, Basílio X; De Lee, Nathan; Lee, Brian L; da Costa, Luiz N; Maia, Marcio A G; Ogando, Ricardo L C; Wisniewski, John P; Hernández, Jonay I González; Stassun, Keivan G; Fleming, Scott W; Schneider, Donald P; Mahadevan, Suvrath; Cargile, Phillip; Ge, Jian; Pepper, Joshua; Wang, Ji
2014-01-01
Studies of Galactic chemical and dynamical evolution in the solar neighborhood depend on the availability of precise atmospheric parameters (Teff, [Fe/H] and log g) for solar-type stars. Many large-scale spectroscopic surveys operate at low to moderate spectral resolution for efficiency in observing large samples, which makes the stellar characterization difficult due to the high degree of blending of spectral features. While most surveys use spectral synthesis, in this work we employ an alternative method based on spectral indices to determine the atmospheric parameters of a sample of nearby FGK dwarfs and subgiants observed by the MARVELS survey at moderate resolving power (R~12,000). We have developed three codes to automatically normalize the observed spectra, measure the equivalent widths of the indices and, through the comparison of those with values calculated with pre-determined calibrations, derive the atmospheric parameters of the stars. The calibrations were built using a sample of 309 stars with p...
Thygesen, A. O.; Sbordone, L.; Andrievsky, S.; Korotin, S.; Yong, D.; Zaggia, S.; Ludwig, H.-G.; Collet, R.; Asplund, M.; Ventura, P.; D'Antona, F.; Meléndez, J.; D'Ercole, A.
2014-12-01
Context. The study of chemical abundance patterns in globular clusters is of key importance for constraining the different candidates for intracluster pollution of light elements. Aims: We aim at deriving accurate abundances for a wide range of elements in the globular cluster 47 Tucanae (NGC 104) to add new constraints to the pollution scenarios for this particular cluster, expanding the range of previously derived element abundances. Methods: Using tailored 1D local thermodynamic equilibrium (LTE) atmospheric models, together with a combination of equivalent-width measurements and LTE and NLTE synthesis, we derive stellar parameters and element abundances from high-resolution, high signal-to-noise spectra of 13 red giant stars near the tip of the RGB. Results: We derive abundances of a total of 27 elements (O, Na, Mg, Al, Si, Ca, Sc, Ti, V, Cr, Mn, Fe, Co, Ni, Cu, Zn, Y, Zr, Mo, Ru, Ba, La, Ce, Pr, Nd, Eu, Dy). Departures from LTE were taken into account for Na, Al, and Ba. We find a mean [Fe/H] = -0.78 ± 0.07 and [α/Fe] = 0.34 ± 0.03, in good agreement with previous studies. The remaining elements show good agreement with the literature, but including NLTE for Al has a significant impact on the behavior of this key element. Conclusions: We confirm the presence of an Na-O anti-correlation in 47 Tucanae found by several other works. Our NLTE analysis of Al shifts the [Al/Fe] to lower values, indicating that this may be overestimated in earlier works. No evidence of an intrinsic variation is found in any of the remaining elements. Based on observations made with the ESO Very Large Telescope at Paranal Observatory, Chile (Programmes 084.B-0810 and 086.B-0237). Full Tables 2, 5, and 9 are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/572/A108. Appendix A is available in electronic form at http://www.aanda.org
Accurate parameters of the oldest known rocky-exoplanet hosting system: Kepler-10 revisited
Since the discovery of Kepler-10, the system has received considerable interest because it contains a small, rocky planet which orbits the star in less than a day. The system's parameters, announced by the Kepler team and subsequently used in further research, were based on only five months of data. We have reanalyzed this system using the full span of 29 months of Kepler photometric data and obtained improved information about its star and the planets. A detailed asteroseismic analysis of the extended time series provides a significant improvement on the stellar parameters: not only can we state that Kepler-10 is the oldest known rocky-planet-harboring system at 10.41 ± 1.36 Gyr, but these parameters, combined with improved planetary parameters from new transit fits, give us the radius of Kepler-10b to within just 125 km. A new analysis of the full planetary phase curve leads to new estimates of the planetary temperature and albedo, which remain degenerate in the Kepler band. Our modeling suggests that the flux level during the occultation is slightly lower than at the transit wings, which would imply that the nightside of this planet has a non-negligible temperature.
Lower bound on reliability for Weibull distribution when shape parameter is not estimated accurately
Huang, Zhaofeng; Porter, Albert A.
1990-01-01
The mathematical relationships between the shape parameter Beta and estimates of reliability and a life limit lower bound for the two parameter Weibull distribution are investigated. It is shown that under rather general conditions, both the reliability lower bound and the allowable life limit lower bound (often called a tolerance limit) have unique global minimums over a range of Beta. Hence lower bound solutions can be obtained without assuming or estimating Beta. The existence and uniqueness of these lower bounds are proven. Some real data examples are given to show how these lower bounds can be easily established and to demonstrate their practicality. The method developed here has proven to be extremely useful when using the Weibull distribution in analysis of no-failure or few-failures data. The results are applicable not only in the aerospace industry but anywhere that system reliabilities are high.
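The idea can be illustrated with a zero-failure (Weibayes-type) reliability bound: for a fixed shape parameter β and unit test times t_i with no failures, a (1 − α)-confidence lower bound on reliability at mission time t is R_L(t) = α^(t^β / Σ t_i^β). Scanning β over a range and taking the minimum yields a bound that requires no β estimate. The sketch below uses this bound form and a simple grid as illustrative assumptions, not the paper's exact formulation:

```python
def weibull_rel_lower_bound(t, test_times, alpha=0.10, betas=None):
    """Lower confidence bound on Weibull reliability at mission time t,
    minimized over a range of the shape parameter beta (the key idea in
    Huang & Porter: the bound attains a global minimum over beta, so no
    beta estimate is needed). `test_times` are unit test durations with
    no observed failures; alpha = 0.10 gives a 90% lower bound.
    """
    if betas is None:
        betas = [0.5 + 0.1 * k for k in range(96)]   # beta grid over [0.5, 10]
    bounds = []
    for beta in betas:
        total = sum(ti ** beta for ti in test_times)
        # zero-failure Weibayes bound: R_L(t) = alpha ** (t**beta / total)
        bounds.append(alpha ** (t ** beta / total))
    return min(bounds)

# Ten units each tested failure-free for 200 h; 100 h mission:
r_lower = weibull_rel_lower_bound(100.0, [200.0] * 10)
```

Under the regularity conditions stated in the abstract the minimum over β is unique, so the grid minimum is a conservative, assumption-free bound.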
TANG Yi; FANG Yong-li; YANG Luo; SUN Yu-xin; YU Zheng-hua
2012-01-01
A new, accurate calculation method for electric power harmonic parameters is presented. Based on the delay-time theorem of the Fourier transform, the frequency of the electric power signal is calculated; then, using interpolation in the frequency domain of the windows, the parameters (amplitude and phase) of each harmonic-frequency signal are calculated accurately. The paper analyses the effect of the delay time and of the windows on the calculation accuracy of the electric power harmonics. Digital simulation and physical measurement tests show that the proposed method is effective and has advantages over other methods based on multipoint interpolation, especially in calculation time cost; it is therefore well suited for use in single-chip DSP microprocessors.
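The flavor of delay-based frequency estimation can be sketched as follows. This is an illustrative stand-in for the idea (correlating two delayed segments and reading the frequency off the phase advance), not the paper's exact windowed-interpolation algorithm:

```python
import cmath
import math

def estimate_harmonic(samples, fs, f_approx, delay):
    """Refine a harmonic's frequency, amplitude and phase from a real signal.
    Two segments of the signal, `delay` samples apart, are correlated against
    a complex exponential at an approximate frequency; the residual phase
    advance over the known delay refines the frequency (delay-time theorem),
    after which amplitude and phase follow from a final correlation.
    """
    n = len(samples) - delay
    ref = [cmath.exp(-2j * math.pi * f_approx * k / fs) for k in range(n)]
    c1 = sum(s * r for s, r in zip(samples[:n], ref))
    c2 = sum(s * r for s, r in zip(samples[delay:delay + n], ref))
    # expected phase advance at f_approx is 2*pi*f_approx*delay/fs;
    # the residual beyond that measures the frequency error
    residual = cmath.phase((c2 / c1)
                           * cmath.exp(-2j * math.pi * f_approx * delay / fs))
    f = f_approx + residual * fs / (2 * math.pi * delay)
    # amplitude and phase at the refined frequency
    corr = sum(s * cmath.exp(-2j * math.pi * f * k / fs)
               for k, s in enumerate(samples)) / len(samples)
    return f, 2 * abs(corr), cmath.phase(corr)
```

The refinement is valid as long as the true frequency lies within fs/(2·delay) of the initial guess, so the residual phase does not wrap.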
Schwob, C
2006-12-15
This document reviews the theoretical and experimental achievements of the author concerning highly accurate atomic spectroscopy applied to the determination of fundamental constants. A purely optical frequency measurement of the 2S-12D two-photon transitions in atomic hydrogen and deuterium has been performed. The experimental setup is described as well as the data analysis. Optimized values for the Rydberg constant and Lamb shifts have been deduced (R = 109737.31568516(84) cm^-1). An experiment devoted to the determination of the fine-structure constant, with a targeted relative uncertainty of 10^-9, began in 1999. This experiment is based on the fact that Bloch oscillations in a frequency-chirped optical lattice are a powerful tool to coherently transfer many photon momenta to the atoms. We have used this method to measure accurately the ratio h/m(Rb). The measured value of the fine-structure constant is α^-1 = 137.03599884(91), with a relative uncertainty of 6.7×10^-9. The future and perspectives of this experiment are presented. This document, presented before an academic board, will allow its author to manage research work and particularly to tutor thesis students. (A.C.)
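The reason a measurement of h/m(Rb) determines α is the standard relation obtained from the definition of the Rydberg constant, R∞ = α²·m_e·c/(2h), rewritten in terms of the measured ratio:

```latex
\alpha^{2} \;=\; \frac{2 R_\infty}{c}\,\frac{m_{\mathrm{Rb}}}{m_{e}}\,\frac{h}{m_{\mathrm{Rb}}}
```

Since R∞ and the mass ratio m_Rb/m_e are known with far smaller relative uncertainty, the uncertainty on α is dominated by that of h/m_Rb.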
An Accurate and Generic Testing Approach to Vehicle Stability Parameters Based on GPS and INS
Zhibin Miao
2015-12-01
Full Text Available With the development of the vehicle industry, controlling stability has become more and more important, and techniques for evaluating vehicle stability are in high demand. As a common method, GPS and INS sensors are applied to measure vehicle stability parameters by fusing data from the two sensor systems. A Kalman filter is usually used to fuse data from multiple sensors, although its prior model parameters must be identified first. In this paper, a robust, intelligent and precise method for the measurement of vehicle stability is proposed. First, a fuzzy interpolation method is proposed, along with a four-wheel vehicle dynamic model. Second, a two-stage Kalman filter, which fuses the data from GPS and INS, is established. Next, this approach is applied to a case-study vehicle to measure yaw rate and sideslip angle, and the results show the advantages of the approach. Finally, a simulation and a real experiment are performed to verify these advantages. The experimental results show the merits of this method for measuring vehicle stability, and the approach can meet the design requirements of a vehicle stability controller.
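At its core, the fusion step is the textbook Kalman update applied once per sensor. A deliberately minimal one-dimensional sketch (not the paper's two-stage filter or its four-wheel dynamic model; the state and noise values are assumptions):

```python
def kalman_fuse(z_gps, z_ins, var_gps, var_ins, q=1e-3):
    """Minimal 1D Kalman filter fusing two measurement streams, as an
    illustration of the GPS/INS fusion idea only. The state is a single
    slowly varying quantity (e.g. a yaw rate) modelled as a random walk
    with process-noise variance q; each step runs the predict step and
    then one measurement update per sensor.
    """
    x, p = 0.0, 1.0                      # initial state estimate and variance
    estimates = []
    for zg, zi in zip(z_gps, z_ins):
        p += q                           # predict: random-walk model
        for z, r in ((zg, var_gps), (zi, var_ins)):
            k = p / (p + r)              # Kalman gain
            x += k * (z - x)             # measurement update
            p *= 1.0 - k                 # posterior variance
        estimates.append(x)
    return estimates
```

With consistent measurements from both sensors the estimate converges to the measured value, and the gain automatically weights the lower-variance sensor more heavily.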
Accurate solutions, parameter studies and comparisons for the Euler and potential flow equations
Anderson, W. Kyle; Batina, John T.
1988-01-01
Parameter studies are conducted using the Euler and potential-flow equation models for steady and unsteady flows in both two and three dimensions. The Euler code is an implicit, upwind, finite-volume code using the Van Leer method of flux-vector splitting, which has recently been extended for use on dynamic meshes while maintaining all the properties of the original splitting. The potential-flow code is an implicit finite-difference method for solving the transonic small-disturbance equations and incorporates both entropy and vorticity corrections into the solution procedure, thereby extending its applicability into regimes where shock strength normally precludes its use. Parameter studies resulting in benchmark-type calculations include the effects of spatial and temporal refinement, spatial order of accuracy, far-field boundary conditions for steady flow, frequency of oscillation, and the use of subiterations at each time step to reduce linearization and factorization errors. Comparisons between Euler and potential-flow results are made, as well as with experimental data where available.
Legaye, J; Duval-Beaupère, G; Hecquet, J; Marty, C
1998-01-01
This paper proposes an anatomical parameter, the pelvic incidence, as the key factor for managing spinal balance. Pelvic and spinal sagittal parameters were investigated for normal and scoliotic adult subjects, and the relation between pelvic orientation and spinal sagittal balance was examined by statistical analysis. A close relationship was observed, for both normal and scoliotic subjects, between the anatomical parameter of pelvic incidence and the sacral slope, which strongly determines lumbar lordosis. Taking into account the Cobb angle and the apical vertebral rotation confers a three-dimensional aspect on this chain of relations between pelvis and spine. A predictive equation of lordosis is postulated. The pelvic incidence appears to be the main axis of the sagittal balance of the spine: it controls the spinal curves in accordance with the adaptability of the other parameters. PMID:9629932
Operational definition of (brane-induced) space-time and constraints on the fundamental parameters
First we contemplate the operational definition of space-time in four dimensions in light of basic principles of quantum mechanics and general relativity and consider some of its phenomenological consequences. The quantum gravitational fluctuations of the background metric that comes through the operational definition of space-time are controlled by the Planck scale and are therefore strongly suppressed. Then we extend our analysis to the braneworld setup with low fundamental scale of gravity. It is observed that in this case the quantum gravitational fluctuations on the brane may become unacceptably large. The magnification of fluctuations is not linked directly to the low quantum gravity scale but rather to the higher-dimensional modification of Newton's inverse square law at relatively large distances. For models with compact extra dimensions the shape modulus of extra space can be used as a most natural and safe stabilization mechanism against these fluctuations
Kinetic parameters are obtained for the dynamic study of two different configurations, 8 and 9, both with standard fuel (20% enrichment) and FLIP fuel (Fuel Life Improvement Program, 70% enrichment), for the TRIGA Mark-III reactor at the Mexico Nuclear Center. A calculation method using WIMS-D4, DTF-IV and DAC1 was established to decide which of the two configurations has the better safety and operational conditions. The methodology is validated by calculating these parameters for a reactor core with new standard fuel. Configuration 9 is recommended for use. (Author)
Schoenrich, Ralph
2013-01-01
We present a novel method based on Bayesian inference to derive the physical parameters of stars from any observed data; here we focus on the surface gravity, effective temperature, metallicity, age, mass, and distance of a star. As input we use spectra, photometric magnitudes, parallaxes, and stellar evolution models. The method simultaneously computes full probability distributions for all desired parameters and delivers the first comprehensive and objective error estimates. The classical spectroscopic analysis, in the form of global spectrum synthesis, is directly integrated into the Bayesian framework, which also allows chemical abundances to be included in the scheme. We lay out the mathematical framework and apply it to high-resolution spectra (UVES, HARPS, NARVAL instruments), as well as low-resolution spectra from the SDSS/SEGUE survey. The method is flexible and can be applied to the analysis of single stars, large stellar datasets, or unresolved stellar populations. By its flexibility and the simultaneous analy...
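The core of such a scheme is multiplying independent likelihoods and priors on a common parameter grid and then marginalizing. A toy sketch over a hypothetical (Teff, log g) grid (not the authors' pipeline; the grids, chi-square map and prior are all made-up inputs):

```python
import math

def grid_posterior(teff_grid, logg_grid, chi2_spec, prior):
    """Toy illustration of the Bayesian scheme: on a (Teff, log g) grid,
    combine a spectroscopic chi-square map with an independent prior
    (e.g. from parallax and photometry) into a normalized joint posterior,
    then marginalize over log g to get a 1D PDF for Teff.
    """
    joint, total = {}, 0.0
    for i, t in enumerate(teff_grid):
        for j, g in enumerate(logg_grid):
            w = math.exp(-0.5 * chi2_spec[i][j]) * prior[i][j]
            joint[(t, g)] = w
            total += w
    for key in joint:
        joint[key] /= total                   # normalize the joint posterior
    marg_teff = {t: sum(joint[(t, g)] for g in logg_grid) for t in teff_grid}
    return joint, marg_teff
```

Further marginals (log g, or any quantity defined on the grid) follow the same pattern, which is how such a method delivers full probability distributions for every parameter at once.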
Gargiulo, A; Merluzzi, P; Smith, R J; La Barbera, F; Busarello, G; Lucey, J R; Mercurio, A; Capaccioli, M
2009-01-01
We present a fundamental plane (FP) analysis of 141 early-type galaxies in the Shapley supercluster at z=0.049, based on spectroscopy from the AAOmega spectrograph at the AAT and photometry from the WFI on the ESO/MPI 2.2m telescope. The key feature of the survey is its coverage of low-mass galaxies down to sigma_0~50 km/s. We obtain a best-fitting FP relation log r_e = 1.06 log sigma_0 - 0.82 log <I>_e + const in the R band. The shallow exponent of sigma_0 is a result of the extension of our sample to low velocity dispersions. We investigate the origin of the intrinsic FP scatter using estimates of age, metallicity and alpha/Fe. We find that the FP residuals anti-correlate (>3 sigma) with the mean stellar age, in agreement with previous work. However, a stronger (>4 sigma) correlation with alpha/Fe is also found. These correlations indicate that galaxies with effective radii smaller than those predicted by the FP have stellar populations systematically older, and with alpha over-abundances larger than average, for their...
Shallow geothermal energy applications in buildings and civil engineering works (tunnels, diaphragm walls, bridge decks, roads, and train/metro stations) are spreading rapidly all around the world. The dual role of these energy geostructures makes their design more challenging and complex than conventional projects. Besides the geotechnical parameters, thermal behaviour parameters are needed in the design and dimensioning to guarantee the thermo-mechanical stability of the geothermal structural element. As for any soil thermal parameter, both in situ and laboratory methods can be used. The present study focuses on a laboratory test, the needle-probe method, to measure the thermal conductivity of soils (λ). In this work, different variables inherent to the test procedure, as well as external factors that may affect thermal conductivity measurements, were studied. Samples extracted from cores obtained from a geothermal borehole drilled on the campus of the Polytechnic University of Valencia, with different mineralogical compositions and natures (granular and clayey), were studied under different compaction conditions (moisture and density). In total, 550 thermal conductivity measurements were performed, from which the influence of factors such as the degree of saturation and moisture, dry density, and type of material was verified. A stratigraphic profile with thermal conductivity ranges for each geologic level was then drawn, considering the degrees of saturation evaluated in the lab tests, in order to be compared with and related to a thermal response test currently in progress. Finally, a test protocol is proposed for both remoulded and undisturbed samples under different saturation conditions, together with a set of recommendations regarding the configuration of the measuring equipment, the treatment of samples and other variables, aimed at reducing errors in the final results. (Author)
The Solar Twin Planet Search. I. Fundamental parameters of the stellar sample
Ramirez, I; Bean, J; Asplund, M; Bedell, M; Monroe, T; Casagrande, L; Schirbel, L; Dreizler, S; Teske, J; Maia, M Tucci; Alves-Brito, A; Baumann, P
2014-01-01
We are carrying out a search for planets around a sample of solar twin stars using the HARPS spectrograph. The goal of this project is to exploit the advantage offered by solar twins to obtain chemical abundances of unmatched precision. This survey will enable new studies of the stellar composition -- planet connection. Here we used the MIKE spectrograph on the Magellan Clay Telescope to acquire high resolution, high signal-to-noise ratio spectra of our sample stars. We measured the equivalent widths of iron lines and used strict differential excitation/ionization balance analysis to determine atmospheric parameters of unprecedented internal precision (DTeff=7K, Dlogg=0.019, D[Fe/H]=0.006dex, Dvt=0.016km/s). Reliable relative ages and highly precise masses were then estimated using theoretical isochrones. The spectroscopic parameters we derived are in good agreement with those measured using other independent techniques. The root-mean-square scatter of the differences seen is fully compatible with the observa...
VizieR Online Data Catalog: Fundamental stellar parameters from PolarBase (Paletou+, 2015)
Paletou, F.; Boehm, T.; Watson, V.; Trouilhet, J.-F.
2015-02-01
Our reference spectra are taken from the Elodie stellar library (Prugniel et al. 2007, astro-ph/0703658, Cat. III/251; Prugniel & Soubiran 2001A&A...369.1048P, Cat. III/218). Our main purpose is the inversion of stellar parameters from high-resolution spectra coming from the Narval and ESPaDOnS spectropolarimeters. These data are now available from the public database PolarBase (Petit et al., 2014PASP..126..469P, Cat. J/PASP/126/469). Narval is a modern spectropolarimeter operating in the 380-1000 nm spectral domain, with a spectral resolution of 65000 in its polarimetric mode. It is an improved copy, adapted to the 2m TBL telescope, of the ESPaDOnS spectropolarimeter, which has been in operation since 2004 at the 3.6m-aperture CFHT telescope. (1 data file).
von Braun, K; Brummelaar, T A ten; van Belle, G T; Kane, S R; Ciardi, D R; Lopez-Morales, M; McAlister, H A; Schaefer, G; Ridgway, S T; Sturmann, L; Sturmann, J; White, R; Turner, N H; Farrington, C; Goldfinger, P J
2011-01-01
The bright star 55 Cancri is known to host five planets, including a transiting super-Earth. We use the CHARA Array to directly determine the following stellar astrophysical parameters of 55 Cnc: $R=0.943 \pm 0.010 R_{\odot}$, $T_{\rm EFF} = 5196 \pm 24$ K. Planet 55 Cnc f ($M \sin i = 0.155 M_{Jupiter}$) spends the majority of its elliptical orbit in the circumstellar habitable zone (0.67-1.32 AU) where, with moderate greenhouse heating, it could harbor liquid water. Our determination of 55 Cancri's stellar radius allows a model-independent calculation of the physical diameter of the transiting super-Earth 55 Cnc e ($\simeq 2.1 R_{\earth}$), which, depending on the assumed literature value of the planetary mass, implies a bulk density of 0.76 $\rho_{\earth}$ or 1.07 $\rho_{\earth}$.
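The "model-independent" planetary radius follows directly from the interferometric stellar radius and the transit depth, and a bulk density then follows from an assumed mass. A sketch with hypothetical depth and mass values (only R* = 0.943 R_sun comes from the abstract; the depth and mass below are illustrative assumptions):

```python
import math

R_SUN = 6.957e8       # solar radius, m
R_EARTH = 6.371e6     # Earth radius, m

def planet_radius_earth(transit_depth, r_star_solar):
    """Transit depth = (Rp/R*)^2, so Rp = R* * sqrt(depth); this is
    model-independent once R* is measured interferometrically.
    Returns Rp in Earth radii."""
    return r_star_solar * R_SUN * math.sqrt(transit_depth) / R_EARTH

def bulk_density_earth(mass_earth, radius_earth):
    """Bulk density relative to Earth's: (M/M_e) / (R/R_e)^3."""
    return mass_earth / radius_earth ** 3

# Hypothetical illustrative inputs (NOT values quoted in the abstract):
rp = planet_radius_earth(transit_depth=4.1e-4, r_star_solar=0.943)
rho = bulk_density_earth(mass_earth=8.0, radius_earth=rp)
```

With these assumed inputs the planet comes out at roughly two Earth radii, consistent in spirit with the $\simeq 2.1 R_{\earth}$ quoted above; the density scales linearly with whichever literature mass is adopted, which is why the abstract quotes two values.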
Computational finite element models are commonly used for the simulation of radiofrequency ablation (RFA) treatments. However, the accuracy of these simulations is limited by the lack of precise knowledge of tissue parameters. In this technical note, an inverse solver based on the unscented Kalman filter (UKF) is proposed to optimize values for specific heat, thermal conductivity, and electrical conductivity resulting in accurately simulated temperature elevations. A total of 15 RFA treatments were performed on ex vivo bovine liver tissue. For each RFA treatment, 15 finite-element simulations were performed using a set of deterministically chosen tissue parameters to estimate the mean and variance of the resulting tissue ablation. The UKF was implemented as an inverse solver to recover the specific heat, thermal conductivity, and electrical conductivity corresponding to the measured area of the ablated tissue region, as determined from gross tissue histology. These tissue parameters were then employed in the finite element model to simulate the position- and time-dependent tissue temperature. Results show good agreement between simulated and measured temperature. (note)
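The record above uses the unscented Kalman filter as an inverse solver: parameters (specific heat, thermal conductivity, electrical conductivity) are pulled toward values whose predicted measurement matches the observed ablation. A minimal sketch of one UKF measurement update, with a toy linear surrogate standing in for the finite-element forward model (the coefficients and numbers are assumptions for illustration, not from the study):

```python
import numpy as np

def sigma_points(x, P, alpha=1.0, beta=2.0, kappa=0.0):
    """Standard unscented-transform sigma points and weights."""
    n = len(x)
    lam = alpha**2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * P)
    pts = np.vstack([x] + [x + S[:, i] for i in range(n)] + [x - S[:, i] for i in range(n)])
    Wm = np.full(2 * n + 1, 0.5 / (n + lam))
    Wc = Wm.copy()
    Wm[0] = lam / (n + lam)
    Wc[0] = Wm[0] + (1.0 - alpha**2 + beta)
    return pts, Wm, Wc

def ukf_update(x, P, z, h, R):
    """One UKF measurement update: move the parameter estimate x toward
    values whose predicted measurement h(x) matches the observation z."""
    pts, Wm, Wc = sigma_points(x, P)
    Z = np.array([h(p) for p in pts])           # propagate through forward model
    z_pred = Wm @ Z
    dz = Z - z_pred
    S = sum(Wc[i] * np.outer(dz[i], dz[i]) for i in range(len(pts))) + R
    Pxz = sum(Wc[i] * np.outer(pts[i] - x, dz[i]) for i in range(len(pts)))
    K = Pxz @ np.linalg.inv(S)                  # Kalman gain
    return x + K @ (z - z_pred), P - K @ S @ K.T

# Hypothetical linear surrogate: predicted ablated area from
# [specific heat, thermal conductivity, electrical conductivity]
h = lambda p: np.array([p @ np.array([2.0, 1.0, 3.0])])
x0, P0 = np.array([1.0, 1.0, 1.0]), 0.5 * np.eye(3)
x1, P1 = ukf_update(x0, P0, z=np.array([9.0]), h=h, R=np.array([[0.01]]))
```

After the update, the predicted measurement is far closer to the observed value than the prior prediction was, which is the mechanism the inverse solver iterates on.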
Fundamental stellar parameters of $\\zeta$ Pup and $\\gamma^2$ Vel from HIPPARCOS data
Schaerer, Daniel; Schmutz, Werner; Grenon, Michel
1997-01-01
We report parallax measurements by the HIPPARCOS satellite of zeta Puppis and gamma^2 Velorum. The distance of zeta Pup is d=429 (+120/-77) pc, in agreement with the distance commonly adopted for Vela OB2. However, a significantly smaller distance is found for the gamma^2 Vel system: d=258 (+41/-31) pc. The total mass of gamma^2 Vel derived from its parallax, the angular size of the semi-major axis as measured with intensity interferometry, and the period is M(WR+O)=29.5 (+/-15.9) Msun. This result favors the orbital solution of Pike et al. (1983) over that of Moffat et al. (1986). The stellar parameters for the O star companion derived from line blanketed non-LTE atmosphere models are: Teff=34000 (+/-1500) K, log L/Lsun=5.3 (+/-0.15), from which an evolutionary mass of M=29 (+/-4) Msun and an age of 4.0 (+0.8/-0.5) Myr are obtained from single star evolutionary models. With non-LTE model calculations including He and C we derive a luminosity log L/Lsun~4.7 (+/-0.2) for the WR star. The mass-luminosity relation of...
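The total-mass determination described above is Kepler's third law once the parallax converts the angular semi-major axis into AU. A sketch with assumed literature-like values for gamma^2 Vel's orbit (the 4.3 mas semi-major axis and 78.53 d period are illustrative assumptions, not taken from this record):

```python
def binary_total_mass(alpha_arcsec, distance_pc, period_yr):
    """Kepler's third law in solar units: M_tot [Msun] = a_AU**3 / P_yr**2.
    The angular semi-major axis (arcsec) times the distance (pc) gives a in AU."""
    a_au = alpha_arcsec * distance_pc
    return a_au**3 / period_yr**2

# Illustrative inputs for gamma^2 Vel (assumed values):
m_tot = binary_total_mass(4.3e-3, 258.0, 78.53 / 365.25)
```

With these inputs the formula returns a total mass near 29.5 Msun, matching the value quoted in the abstract; the large quoted uncertainty comes mainly from the cubed dependence on parallax and angular size.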
Lefever, K; Morel, T; Aerts, C; Decin, L; Briquet, M
2009-01-01
We aim to determine the fundamental parameters of a sample of B stars with apparent visual magnitudes below 8 in the field-of-view of the CoRoT space mission, from high-resolution spectroscopy. We developed an automatic procedure for the spectroscopic analysis of B-type stars with winds, based on an extensive grid of FASTWIND model atmospheres. We use the equivalent widths and/or the line profile shapes of continuum normalized hydrogen, helium and silicon line profiles to determine the fundamental properties of these stars in an automated way. After thorough tests, both on synthetic datasets and on very high-quality, high-resolution spectra of B stars for which we already had accurate values of their physical properties from alternative analyses, we applied our method to 66 B-type stars contained in the ground-based archive of the CoRoT space mission. We discuss the statistical properties of the sample and compare them with those predicted by evolutionary models of B stars. Our spectroscopic results provide a...
Iorio, Lorenzo
2016-01-01
By using the most recently published Doppler tomography measurements and accurate theoretical modeling of the oblateness-driven orbital precessions, we tightly constrain some of the physical and orbital parameters of the planetary system hosted by the fast rotating star WASP-33. In particular, the measurements of the orbital inclination $i_{\rm p}$ to the plane of the sky and of the sky-projected spin-orbit misalignment $\lambda$ at two epochs six years apart allowed for the determination of the longitude of the ascending node $\Omega$ and of the orbital inclination $I$ to the apparent equatorial plane at the same epochs. As a consequence, average rates of change $\dot\Omega_{\rm exp},~\dot I_{\rm exp}$ of these two orbital elements, accurate to a $\approx 10^{-2}~\textrm{deg}~\textrm{yr}^{-1}$ level, were calculated as well. By comparing them to general theoretical expressions $\dot\Omega_{J_2},~\dot I_{J_2}$ for their precessions induced by an arbitrarily oriented quadrupole mass moment, we were able to dete...
Coskun, Orhan
For ≥10-Gbit/s bit rates that are transmitted over ≥100 km, it is essential that chromatic The traditional method of sending a training signal to identify a channel, followed by data, may be viewed as a simple code for the unknown channel. Results in blind sequence detection suggest that performance similar to this traditional approach can be obtained without training. However, for short packets and/or time-recursive algorithms, significant error floors exist due to the existence of sequences that are indistinguishable without knowledge of the channel. In this work, we first reconsider training signal design in light of recent results in blind sequence detection. We design training codes which combine modulation and training. In order to design these codes, we find an expression for the pairwise error probability of the joint maximum likelihood (JML) channel and sequence estimator. This expression motivates a pairwise distance for the JML receiver based on principal angles between the range spaces of data matrices. The general code design problem (generalized sphere packing) is formulated as the clique problem associated with an unweighted, undirected graph. We provide optimal and heuristic algorithms for this clique problem. For short packets, we demonstrate that significant improvements are possible by jointly considering the design of the training, modulation, and receiver processing. As a practical blind data detection example, data reception in a fiber optical channel is investigated. To get the most out of the data detection methods, auxiliary algorithms such as sampling-phase adjustment and decision-threshold estimation are suggested. For the parallel implementation of detectors, a semiring structure is introduced for both the decision feedback equalizer (DFE) and maximum likelihood sequence detection (MLSD). Timing jitter is another parameter that affects the BER performance of the system. A data-aided clock recovery algorithm reduces the jitter of
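The principal-angle distance between range spaces mentioned above can be computed from the singular values of the product of orthonormal bases. A minimal numpy-only sketch (the example subspaces are arbitrary, chosen only to exercise the two extreme cases):

```python
import numpy as np

def principal_angles(A, B):
    """Principal angles between the column (range) spaces of A and B:
    cos(theta_i) are the singular values of Qa.T @ Qb for orthonormal bases Qa, Qb."""
    Qa, _ = np.linalg.qr(A)
    Qb, _ = np.linalg.qr(B)
    s = np.linalg.svd(Qa.T @ Qb, compute_uv=False)
    return np.arccos(np.clip(s, -1.0, 1.0))

def subspace_distance(A, B):
    # chordal distance ||sin(theta)||_2: zero iff the range spaces coincide
    return np.linalg.norm(np.sin(principal_angles(A, B)))

A = np.eye(4)[:, :2]      # span{e1, e2}
B = np.eye(4)[:, 2:]      # span{e3, e4}, orthogonal to A
```

Identical subspaces give distance 0, and fully orthogonal 2-dimensional subspaces give sqrt(2), the maximum for this pair of dimensions.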
COHN William S.; LU Guo Zhen
2002-01-01
We derive the explicit fundamental solutions for a class of degenerate (or singular) one-parameter subelliptic differential operators on groups of Heisenberg (H) type. This extends the result of Kaplan for the sub-Laplacian on H-type groups, which in turn generalizes Folland's result on the Heisenberg group. As an application, we obtain a one-parameter representation formula for Sobolev functions of compact support on H-type groups. By choosing the parameter equal to the homogeneous dimension Q and using the Moser-Trudinger inequality for the convolution-type operator on stratified groups obtained in [18], we get the following theorem, which gives the best constant for the Moser-Trudinger inequality for Sobolev functions on H-type groups. Let G be any group of Heisenberg type whose Lie algebra is generated by m left-invariant vector fields and with a q-dimensional center. Let Q = m + 2q, Q' = Q/(Q-1), and $A_Q = \big[ (1/4)\, q^{-1/2}\, \pi^{q+m/2}\, \Gamma(Q+m/4) \big/ \big( Q\, \Gamma(m/2)\, \Gamma(Q/2) \big) \big]^{1/(Q-1)}$. Then $\sup_{F \in C_0^\infty(\Omega)} \frac{1}{|\Omega|} \int_\Omega \exp\big( A_Q \big( |F(u)| / \|\nabla_G F\|_Q \big)^{Q'} \big)\, du < \infty$, with $A_Q$ as the sharp constant, where $\nabla_G$ denotes the subelliptic gradient on G. This continues the research initiated in our earlier study of the best constants in Moser-Trudinger inequalities and fundamental solutions for one-parameter subelliptic operators on the Heisenberg group [18].
Lalita Gupta; S Rath; S C Abbi; F C Jain
2003-10-01
Thin films of ternary Zn_xCd_{1-x}Se were deposited on GaAs (100) substrates using the metal-organic chemical vapour deposition (MOCVD) technique. The temperature dependence of the near-band-edge emission from these Cd-rich Zn_xCd_{1-x}Se (for x = 0.025, 0.045) films has been studied using photoluminescence spectroscopy. Relevant parameters that describe the temperature variation of the energy and broadening of the fundamental band gap have been evaluated using various models, including the two-oscillator model, the Bose–Einstein model and the Varshni model. While all these models suffice to explain spectra at higher temperatures, the two-oscillator model not only explains low-temperature spectra adequately but also provides additional information concerning phonon dispersion in these materials.
Fundamental parameters and time evolution of mass loss are investigated for post-main-sequence stars in the Galactic globular cluster 47 Tucanae (NGC 104). This is accomplished by fitting spectral energy distributions (SEDs) to existing optical and infrared photometry and spectroscopy, to produce a true Hertzsprung-Russell diagram. We confirm the cluster's distance as d = 4611 (+213/-200) pc and age as 12 ± 1 Gyr. Horizontal branch models appear to confirm that no more red giant branch mass loss occurs in 47 Tuc than in the more metal-poor ω Centauri, though difficulties arise due to inconsistencies between the models. Using our SEDs, we identify those stars that exhibit infrared excess, finding excess only among the brightest giants: dusty mass loss begins at a luminosity of ∼1000 Lsun, becoming ubiquitous above L = 2000 Lsun. Recent claims of dust production around lower-luminosity giants cannot be reproduced, despite using the same archival Spitzer imagery.
Arroyo-Torres, B.; Wittkowski, M.; Marcaide, J. M.; Abellan, F. J.; Chiavassa, A.; Fabregat, J.; Freytag, B.; Guirado, J. C.; Hauschildt, P. H.; Marti-Vidal, I.; Quirrenbach, A.; Scholz, M.; Wood, P. R.
2015-08-01
We present recent near-IR interferometric studies of red giant and supergiant stars, which are aimed at obtaining information on the structure of the atmospheric layers and constraining the fundamental parameters of these objects. The observed visibilities of six red supergiants (RSGs), and also of one of the five red giants observed, indicate large extensions of the molecular layers, as previously observed for Mira stars. These extensions are not predicted by hydrostatic PHOENIX model atmospheres, hydrodynamical (RHD) simulations of stellar convection, or self-excited pulsation models. All these models based on parameters of RSGs lead to atmospheric structures that are too compact compared to our observations. We discuss how alternative processes might explain the atmospheric extensions for these objects. As the continuum appears to be largely free of contamination by molecular layers, we can estimate reliable Rosseland angular radii for our stars. Together with distances and bolometric fluxes, we estimate the effective temperatures and luminosities of our targets, locate them in the HR diagram, and compare their positions to recent evolutionary tracks.
Monteiro, Hektor; Hickel, Gabriel R; Caetano, Thiago C
2016-01-01
In the second paper of the series we continue the investigation of open cluster fundamental parameters using a robust global optimization method to fit model isochrones to photometric data. We present optical UBVRI CCD photometry (Johnsons-Cousins system) observations for 24 neglected open clusters, of which 14 have high quality data in the visible obtained for the first time, as a part of our ongoing survey being carried out in the 0.6m telescope of the Pico dos Dias Observatory in Brazil. All objects were then analyzed with a global optimization tool developed by our group which estimates the membership likelihood of the observed stars and fits an isochrone from which a distance, age, reddening, total to selective extinction ratio $R_{V}$ (included in this work as a new free parameter) and metallicity are estimated. Based on those estimates and their associated errors we analyzed the status of each object as real clusters or not, finding that two are likely to be asterisms. We also identify important discre...
Based on fundamental parameter method theory, key parameters such as mass-absorption coefficients and excitation factors were obtained and a calculation procedure was written. In the experiment, a series of lead brass alloy standard samples were analyzed with an energy dispersive X-ray fluorescence spectrometer, and the contents of Cu and Zn in the samples were calculated via the fundamental parameter calculation procedure. The results show that the fundamental parameter method can overcome the absorption-enhancement effect between the Cu and Zn elements well. The measured contents of Cu and Zn are 54%-64% and 33%-45%, respectively; the precision of the method is 0.10% for Cu and 0.15% for Zn. (authors)
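The absorption-enhancement correction at the heart of the fundamental parameter method is typically solved by fixed-point iteration: assume a composition, compute the matrix attenuation, update the concentrations from the measured relative intensities, and repeat. A toy sketch of that loop (the attenuation model and coefficients below are invented for illustration, not the authors' calibration):

```python
import numpy as np

def fp_concentrations(rel_intensity, mu, n_iter=50):
    """Toy fundamental-parameter loop: model the measured relative intensity
    as I_i = c_i / A_i(c), with matrix attenuation A_i(c) = sum_j mu[i, j] * c_j.
    Invert by fixed-point iteration, renormalizing so concentrations sum to 1."""
    I = np.asarray(rel_intensity, float)
    c = I / I.sum()                  # start from the normalized intensities
    for _ in range(n_iter):
        A = mu @ c                   # attenuation factor seen by each analyte line
        c = I * A                    # invert the intensity model
        c /= c.sum()                 # renormalize to a closed composition
    return c

# Hypothetical two-element (Cu/Zn-like) matrix attenuation coefficients:
mu = np.array([[1.0, 1.8],           # first line attenuated more by element 2
               [0.9, 1.0]])
c = fp_concentrations([0.55, 0.45], mu)
```

The iteration converges quickly here because each pass is a positive linear map followed by normalization, so the composition settles to a self-consistent fixed point.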
Cruzalèbes, P; Rabbia, Y; Sacuto, S; Chiavassa, A; Pasquato, E; Plez, B; Eriksson, K; Spang, A; Chesneau, O
2013-01-01
Thanks to their large angular dimension and brightness, red giants and supergiants are privileged targets for optical long-baseline interferometers. Sixteen red giants and supergiants have been observed with the VLTI/AMBER facility over a two-year period, at medium spectral resolution (R=1500) in the K band. The limb-darkened angular diameters are derived from fits of stellar atmospheric models to the visibility and the triple product data. The angular diameters do not show any significant temporal variation, except for one target: TX Psc, which shows a variation of 4% using visibility data. For the eight targets previously measured by Long-Baseline Interferometry (LBI) in the same spectral range, the difference between our diameters and the literature values is less than 5%, except for TX Psc, which shows a difference of 11%. For the eight other targets, the present angular diameters are the first measured from LBI. Angular diameters are then used to determine several fundamental stellar parameters, and to loca...
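The step from an angular diameter to an effective temperature used above follows from F_bol = sigma * Teff^4 * (theta/2)^2. A minimal sketch, sanity-checked against solar values:

```python
SIGMA_SB = 5.670374e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def t_eff(f_bol, theta_ld):
    """Effective temperature from the bolometric flux f_bol (W/m^2) and the
    limb-darkened angular diameter theta_ld (radians):
    F_bol = sigma * Teff**4 * (theta/2)**2  =>  Teff = (4*F_bol/(sigma*theta**2))**0.25"""
    return (4.0 * f_bol / (SIGMA_SB * theta_ld**2)) ** 0.25

# Sanity check with the Sun: theta = 2 R_sun / 1 AU, f_bol = solar constant
theta_sun = 2 * 6.957e8 / 1.496e11
t_sun = t_eff(1361.0, theta_sun)
```

Plugging in the solar constant and the Sun's angular diameter recovers an effective temperature close to the accepted solar value of about 5772 K.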
Research in core physics and atomic and condensed matter science is increasingly relevant to diverse fields and is finding application in chemistry, engineering and the biological sciences, linking to experimental research at synchrotrons, reactors and specialised facilities. Through recent synchrotron experiments and publications we have developed methods for measuring the absorption coefficient far from the edge and in the XAFS (X-ray absorption fine structure) region in neutral atoms, simple compounds and organometallics, reaching accuracies below 0.02%. This is 50-500 times more accurate than earlier methods, and 50-250 times more accurate than claimed uncertainties in theoretical computations for these systems. The data and methodology are useful for a wide range of applications, including major synchrotron and laboratory techniques relating to fine structure, near-edge analysis and standard crystallography. Experiments are sensitive to theoretical and computational issues, including correlation between convergence of electronic and atomic orbitals and wavefunctions. Hence, particularly in relation to the popular techniques of XAFS and XANES (X-ray absorption near-edge structure), this development calls for strong theoretical involvement but has great applications in solid state structural determination, catalysis and enzyme environments, active centres of biomolecules and organometallics, phase changes and fluorescence investigations, among others. We discuss key features of the X-ray extended range technique (XERT) and illustrate applications.
A new normalization criterion has recently been proposed for constructing the localization function in Hardy's atomistic-to-continuum thermomechanical theory. The modification involves changing the normalization integral into a summation over the discrete volumes occupied by the atoms. The resulting thermomechanical quantities such as stress and heat flux have been shown to obey the conservation equations more accurately for some non-equilibrium thermomechanical processes in numerical tests. However, it is still uncertain if this modification is valid in ductile fractures, where more complex deformation mechanisms exist, such as dislocation emission, slip band formation and twinning. As such, molecular dynamics simulations are conducted to investigate a crack in a bcc iron (Fe) crystal under tensile and shear loading, and the validity of Hardy's formulas is tested in the defective region. Stress fields are constructed around the defective crack-tip region using the ensemble-averaged Hardy formulas and are used to explain the fracture mechanism in the cracked Fe system. The validity of the conservation equations using Hardy's quantities based on the proposed modification can be established at the crack tip where slip bands and other defects exist, but some discrepancy between the two sides of the balance of energy can be observed at the crack region. (paper)
Moore, Christopher; Hopkins, Matthew; Moore, Stan; Boerner, Jeremiah; Cartwright, Keith
2015-09-01
Simulation of breakdown is important for understanding and designing a variety of applications such as mitigating undesirable discharge events. Such simulations need to be accurate through early time arc initiation to late time stable arc behavior. Here we examine constraints on the timestep and mesh size required for arc simulations using the particle-in-cell (PIC) method with direct simulation Monte Carlo (DSMC) collisions. Accurate simulation of electron avalanche across a fixed voltage drop and constant neutral density (reduced field of 1000 Td) was found to require a timestep ~ 1/100 of the mean time between collisions and a mesh size ~ 1/25 the mean free path. These constraints are much smaller than the typical PIC-DSMC requirements for timestep and mesh size. Both constraints are related to the fact that charged particles are accelerated by the external field. Thus gradients in the electron energy distribution function can exist at scales smaller than the mean free path and these must be resolved by the mesh size for accurate collision rates. Additionally, the timestep must be small enough that the particle energy change due to the fields is small in order to capture gradients in the cross sections versus energy. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. DOE's National Nuclear Security Administration under Contract DE-AC04-94AL85000.
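The quoted constraints (timestep ~ 1/100 of the mean collision time, mesh ~ 1/25 of the mean free path) become concrete step sizes once a neutral density, cross-section, and characteristic electron speed are chosen. The numbers below are illustrative assumptions, not values from the study:

```python
def pic_dsmc_constraints(n_neutral, cross_section, v_electron,
                         dt_factor=100.0, dx_factor=25.0):
    """Timestep/mesh constraints per the abstract: dt ~ tau_coll/100, dx ~ lambda/25.
    lambda = 1/(n*sigma) is the mean free path; tau = lambda/v is the mean time
    between collisions at the characteristic electron speed v."""
    mfp = 1.0 / (n_neutral * cross_section)
    tau = mfp / v_electron
    return tau / dt_factor, mfp / dx_factor

# Illustrative values (assumptions): sub-atmospheric neutral density,
# ~1e-19 m^2 momentum-transfer cross-section, ~1e6 m/s electron speed
dt, dx = pic_dsmc_constraints(2.5e24, 1e-19, 1e6)
```

For these inputs the mean free path is 4 µm, giving a mesh size of 0.16 µm and a timestep of 40 fs, far below the usual PIC-DSMC rules of thumb, as the abstract emphasizes.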
LIU Yongqiang; HE Qing; ZHANG Hongsheng; Ali MAMTIMIN
2012-01-01
Improving and validating land surface models based on integrated observations in deserts is one of the challenges in land modeling. In particular, key parameters and parameterization schemes in desert regions need to be evaluated in situ to improve the models. In this study, we calibrated the land-surface key parameters and evaluated several formulations or schemes for the thermal roughness length (z0h) in the common land model (CoLM). Our parameter calibration and scheme evaluation were based on data observed during a torrid summer (29 July to 11 September 2009) over the Taklimakan Desert hinterland. First, the importance of the key parameters in the experiment was evaluated based on their physics principles, and the significance of these key parameters was further validated using sensitivity tests. Second, different schemes (physics-based formulas) for z0h were adopted to simulate the variations of energy-related variables (e.g., sensible heat flux and surface skin temperature), and the simulated variations were then compared with the observed data. Third, the z0h scheme that performed best (i.e., Y07) was selected to replace the default one (i.e., Z98); the superiority of Y07 over Z98 was further demonstrated by comparing the simulated results with the observed data. Admittedly, the revised model did a relatively poor job of simulating the diurnal variations of surface soil heat flux, and nighttime soil temperature was also underestimated, calling for further improvement of the model for desert regions.
Francisco-Javier Mesas-Carrascosa
2015-09-01
Full Text Available This article describes the technical specifications and configuration of a multirotor unmanned aerial vehicle (UAV) to acquire remote images using a six-band multispectral sensor. Several flight missions were programmed as follows: three flight altitudes (60, 80 and 100 m), two flight modes (stop and cruising modes) and two ground control point (GCP) settings were considered to analyze the influence of these parameters on the spatial resolution and spectral discrimination of multispectral orthomosaicked images obtained using Pix4Dmapper. Moreover, it is also necessary to consider the area to be covered or the flight duration according to the flight mission programmed. The effect of the combination of all these parameters on the spatial resolution and spectral discrimination of the orthomosaics is presented. Spectral discrimination has been evaluated for a specific agronomical purpose: to use the UAV remote images for the detection of bare soil and vegetation (crop and weeds) for in-season site-specific weed management. These results show that a balance between spatial resolution and spectral discrimination is needed to optimize the mission planning and image processing to achieve every agronomic objective. In this way, users do not have to sacrifice flying at low altitudes to cover the whole area of interest completely.
Techniques for constructing metamodels of device parameters at BSIM3v3-level accuracy are presented to improve knowledge-based circuit sizing optimization. Based on an analysis of the prediction error of analytical performance expressions, operating point driven (OPD) metamodels of MOSFETs are introduced to capture the circuit's characteristics precisely. In the metamodel construction algorithm, radial basis functions are adopted to interpolate the scattered multivariate data obtained from a well tailored data sampling scheme designed for MOSFETs. The OPD metamodels can be used to automatically bias the circuit at a specific DC operating point. Analytical performance expressions composed of the OPD metamodels show obvious improvement for most small-signal performances compared with simulation-based models. Both operating-point variables and transistor dimensions can be optimized in our nesting-loop optimization formulation to maximize design flexibility. The method is successfully applied to a low-voltage low-power amplifier. (semiconductor integrated circuits)
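The kernel-interpolation step above can be sketched compactly: Gaussian radial basis functions interpolate scattered samples by solving Phi w = y and predicting with the same kernel. The sampled response below is a hypothetical stand-in, not a BSIM3v3 quantity:

```python
import numpy as np

def fit_rbf(X, y, eps=2.0):
    """Gaussian RBF interpolation of scattered samples: solve Phi w = y with
    Phi_ij = exp(-(eps * ||x_i - x_j||)**2), then predict via the same kernel."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    w = np.linalg.solve(np.exp(-(eps * d) ** 2), y)
    return lambda q: np.exp(-(eps * np.linalg.norm(X - q, axis=-1)) ** 2) @ w

# Toy "device characteristic" sampled at scattered bias points (illustrative):
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(12, 2))
y = np.sin(3.0 * X[:, 0]) * np.cos(2.0 * X[:, 1])
metamodel = fit_rbf(X, y)
```

Because the Gaussian kernel matrix is positive definite for distinct sample points, the metamodel reproduces the training data exactly (up to round-off), which is the interpolation property the OPD metamodels rely on.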
Sun, Ying; Gu, Lianhong; Dickinson, Robert E; Pallardy, Stephen G; Baker, John; Cao, Yonghui; DaMatta, Fábio Murilo; Dong, Xuejun; Ellsworth, David; Van Goethem, Davina; Jensen, Anna M; Law, Beverly E; Loos, Rodolfo; Martins, Samuel C Vitor; Norby, Richard J; Warren, Jeffrey; Weston, David; Winter, Klaus
2014-04-01
Worldwide measurements of nearly 130 C3 species covering all major plant functional types are analysed in conjunction with model simulations to determine the effects of mesophyll conductance (g(m)) on photosynthetic parameters and their relationships estimated from A/Ci curves. We find that an assumption of infinite g(m) results in up to 75% underestimation for maximum carboxylation rate V(cmax), 60% for maximum electron transport rate J(max), and 40% for triose phosphate utilization rate T(u) . V(cmax) is most sensitive, J(max) is less sensitive, and T(u) has the least sensitivity to the variation of g(m). Because of this asymmetrical effect of g(m), the ratios of J(max) to V(cmax), T(u) to V(cmax) and T(u) to J(max) are all overestimated. An infinite g(m) assumption also limits the freedom of variation of estimated parameters and artificially constrains parameter relationships to stronger shapes. These findings suggest the importance of quantifying g(m) for understanding in situ photosynthetic machinery functioning. We show that a nonzero resistance to CO2 movement in chloroplasts has small effects on estimated parameters. A non-linear function with gm as input is developed to convert the parameters estimated under an assumption of infinite gm to proper values. This function will facilitate gm representation in global carbon cycle models. PMID:24117476
Sun, Ying [University of Texas at Austin; Gu, Lianhong [ORNL
2013-01-01
Worldwide measurements of nearly 130 C3 species covering all major plant functional types are analyzed in conjunction with model simulations to determine the effects of mesophyll conductance (gm) on photosynthetic parameters and their relationships estimated from A/Ci curves. We find that an assumption of infinite gm results in up to 75% underestimation for maximum carboxylation rate Vcmax, 60% for maximum electron transport rate Jmax, and 40% for triose phosphate utilization rate Tu. Vcmax is most sensitive, Jmax is less sensitive, and Tu has the least sensitivity to the variation of gm. Due to this asymmetrical effect of gm, the ratios of Jmax to Vcmax, Tu to Vcmax, and Tu to Jmax are all overestimated. An infinite gm assumption also limits the freedom of variation of estimated parameters and artificially constrains parameter relationships to stronger shapes. These findings suggest the importance of quantifying gm for understanding in-situ photosynthetic machinery functioning. We show that a nonzero resistance to CO2 movement in chloroplasts has small effects on estimated parameters. A nonlinear function with gm as input is developed to convert the parameters estimated under an assumption of infinite gm to proper values. This function will facilitate gm representation in global carbon cycle models.
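The reported underestimation of Vcmax under an infinite-gm assumption can be reproduced qualitatively with the Rubisco-limited Farquhar-type equation: generate an A/Ci curve with a finite gm, then refit Vcmax assuming Cc = Ci. All constants below (Gamma*, Km, Rd, the true Vcmax and gm) are illustrative assumptions, and the brute-force grid fit stands in for a proper nonlinear regression:

```python
import numpy as np

# Rubisco-limited FvCB photosynthesis (illustrative constants, umol/mol units)
GAMMA_STAR, KM, RD = 40.0, 700.0, 1.0

def a_rubisco(cc, vcmax):
    return vcmax * (cc - GAMMA_STAR) / (cc + KM) - RD

def a_with_gm(ci, vcmax, gm):
    """Solve A = a_rubisco(Ci - A/gm, vcmax) for A; rearranging
    (A + Rd)(Ci - A/gm + Km) = vcmax (Ci - A/gm - Gamma*) gives a quadratic in A."""
    a = -1.0 / gm
    b = ci + KM - (RD - vcmax) / gm
    c = RD * (ci + KM) - vcmax * (ci - GAMMA_STAR)
    return (-b + np.sqrt(b * b - 4 * a * c)) / (2 * a)   # physical (smaller) root

ci = np.linspace(100.0, 600.0, 20)
a_obs = a_with_gm(ci, vcmax=100.0, gm=0.2)     # "true" leaf with finite gm
# Refit Vcmax assuming infinite gm (Cc = Ci) by brute-force least squares:
grid = np.linspace(20.0, 150.0, 2601)
sse = [np.sum((a_rubisco(ci, v) - a_obs) ** 2) for v in grid]
vcmax_apparent = grid[int(np.argmin(sse))]
```

Because assuming Cc = Ci overstates the CO2 available to Rubisco, the fitted Vcmax comes out well below the true value of 100, in line with the underestimation the abstract quantifies.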
Kharchenko, N V; Schilbach, E; Röser, S; Scholz, R -D
2012-01-01
Aims: On the basis of the PPMXL star catalogue we performed a survey of star clusters in the second quadrant of the Milky Way. Methods: From the PPMXL catalogue of positions and proper motions we took the subset of stars with near-infrared photometry from 2MASS and added the remaining 2MASS stars without proper motions (called 2MAst, i.e. 2MASS with astrometry). We developed a data-processing pipeline including interactive human control of a standardised set of multi-dimensional diagrams to determine kinematic and photometric membership probabilities for stars in a cluster region. The pipeline simultaneously produced the astrophysical parameters of a cluster. From literature we compiled a target list of presently known open and globular clusters, cluster candidates, associations, and moving groups. From established member stars we derived spatial parameters (coordinates of centres and radii of the main morphological parts of clusters) and cluster kinematics (average proper motions and sometimes radial velocit...
AKARI OBSERVATIONS OF BROWN DWARFS. III. CO, CO2, AND CH4 FUNDAMENTAL BANDS AND PHYSICAL PARAMETERS
We investigate variations in the strengths of three molecular bands, CH4 at 3.3 μm, CO at 4.6 μm, and CO2 at 4.2 μm, in 16 brown dwarf spectra obtained by AKARI. Spectral features are examined along the sequence of source classes from L1 to T8. We find that the CH4 3.3 μm band is present in the spectra of brown dwarfs later than L5, and the CO 4.6 μm band appears in all spectral types. The CO2 absorption band at 4.2 μm is detected in late-L and T-type dwarfs. To better understand brown dwarf atmospheres, we analyze the observed spectra using the Unified Cloudy Model. The physical parameters of the AKARI sample, i.e., atmospheric effective temperature T eff, surface gravity log g, and critical temperature T cr, are derived. We also model IRTF/SpeX and UKIRT/CGS4 spectra in addition to the AKARI data in order to derive the most probable physical parameters. Correlations between the spectral type and the modeled parameters are examined. We confirm that the spectral-type sequence of late-L dwarfs is not related to T eff, but instead originates as a result of the effect of dust.
Gao, Hongying; Obach, R Scott
2014-03-01
We previously developed a high-performance LC-MS peak area ratio approach to demonstrate whether an animal species used in a toxicology study has greater exposure to drug metabolites relative to humans, meeting regulatory guidances regarding safety assessment of drug metabolites. Herein we explain the underlying bioanalytical principles, how to establish all fundamental bioanalytical parameters, and how to evaluate data quality in sample analysis in the absence of authentic standards of the analyte(s). A data-driven tiered approach was used in which data from the peak area ratio method can stand based on statistical analysis, while assuring that the fundamental elements of the bioanalytical method and bioanalysis are met. This strategy offers a considerable time- and resource-saving advantage while providing high confidence in the safety assessment of human metabolites in drug development. PMID:24620806
Takeda, Y; Sato, B; Liu, Y -J; Chen, Y -Q; Zhao, G
2016-01-01
Spectroscopic parameters (effective temperature, metallicity, etc) were determined for a large sample of ~100 red giants in the Kepler field, for which mass, radius, and evolutionary status had already been asteroseismologically established. These two kinds of spectroscopic and seismic information suffice to define the position on the "luminosity versus effective temperature" diagram and to assign an appropriate theoretical evolutionary track to each star. Making use of this advantage, we examined whether the stellar location on this diagram really matches the assigned track, which would make an interesting consistency check between theory and observation. It turned out that satisfactory agreement was confirmed in most cases (~90%, though appreciable discrepancies were seen for some stars such as higher-mass red-clump giants), suggesting that recent stellar evolution calculations are practically reliable. Since the relevant stellar age could also be obtained by this comparison, we derived the age-metallicity ...
Ogawa, Emiyu; Ito, Arisa; Arai, Tsunenori
2012-03-01
We studied the necrotic cell-death effect on myocardial cells, with the photosensitizer located outside the cells, by varying the photosensitization reaction parameters widely in vitro. We have developed a non-thermal ablator based on the photosensitization reaction for atrial fibrillation. Since laser irradiation is applied shortly after photosensitizer injection, the photosensitization reaction is induced outside the cells. The effect of the photosensitization reaction on myocardial cells under various reaction parameters is not yet well understood. Rat myocardial cells were cultured in 96-well plates for 7 days. The photosensitization reaction was applied with talaporfin sodium (NPe6) and a 663 nm semiconductor laser. The average drug-light interval was set to 8 min. The photosensitizer concentration and radiant exposure were varied from 5 to 40 μg/ml and 1.2 to 60 J/cm2, respectively. The well bottom was irradiated by the red laser with an irradiance of 293 mW/cm2. The photosensitizer fluorescence was monitored during the photosensitization reaction. The alive cell rate was measured by WST assay 2 hours after irradiation. At a photosensitizer concentration of 10 μg/ml, the myocardial cells remained almost entirely alive even though a radiant exposure of 60 J/cm2 was applied. In the 15 μg/ml case, the alive cell rate showed an almost linear relation to the photosensitizer concentration and radiant exposure. We found that the threshold for myocardial cell necrosis was around a photosensitizer concentration of 15 μg/ml with a radiant exposure of 20 J/cm2. This threshold photosensitizer concentration was similar to the reported threshold for cancer therapy.
Takeda, Y.; Tajitsu, A.; Sato, B.; Liu, Y.-J.; Chen, Y.-Q.; Zhao, G.
2016-04-01
Spectroscopic parameters (effective temperature, metallicity, etc) were determined for a large sample of ˜100 red giants in the Kepler field, for which mass, radius, and evolutionary status had already been asteroseismologically established. These two kinds of spectroscopic and seismic information suffice to define the position on the `luminosity versus effective temperature' diagram and to assign an appropriate theoretical evolutionary track to each star. Making use of this advantage, we examined whether the stellar location on this diagram really matches the assigned track, which would make an interesting consistency check between theory and observation. It turned out that satisfactory agreement was confirmed in most cases (˜90 per cent, though appreciable discrepancies were seen for some stars such as higher-mass red-clump giants), suggesting that recent stellar evolution calculations are practically reliable. Since the relevant stellar age could also be obtained by this comparison, we derived the age-metallicity relation for these Kepler giants and found the following characteristics: (1) the resulting distribution is quite similar to what was previously concluded for F-, G-, and K-type dwarfs; (2) the dispersion of metallicity progressively increases as the age becomes older; (3) nevertheless, the maximum metallicity at any stellar age remains almost flat, which means the existence of super/near-solar metallicity stars in a considerably wide age range from ˜(2-3) × 108 to ˜1010 yr.
We have developed a new method for fitting spectral energy distributions (SEDs) to identify and constrain the physical properties of high-redshift (4 < z < 6) galaxies. It allows us to compare observations to arbitrarily complex models and to compute 95% credible intervals that provide robust constraints for the model parameters. The work is presented in two sections. In the first, we test πMC2 using simulated SEDs to not only confirm the recovery of the known inputs but to assess the limitations of the method and identify potential hazards of SED fitting when applied specifically to high-redshift (z > 4) galaxies. In the second part of the paper we apply πMC2 to thirty-three 4 < z < 6 galaxies. Using πMC2, we are able to constrain the stellar mass of these objects and in some cases their stellar age, and find no evidence that any of these sources formed at a redshift larger than z = 8, a time when the universe was ≈0.6 Gyr old.
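Credible intervals of this kind are usually read directly off the posterior samples produced by an MCMC sampler. A minimal, generic sketch (the sample values here are simulated stand-ins, not actual πMC2 output):

```python
import numpy as np

def credible_interval(samples, level=0.95):
    """Equal-tailed credible interval from a set of posterior samples."""
    tail = 100 * (1.0 - level) / 2.0
    return np.percentile(samples, [tail, 100 - tail])

# Stand-in posterior for log10 stellar mass (hypothetical Gaussian samples)
rng = np.random.default_rng(0)
log_mass = rng.normal(9.5, 0.2, size=10_000)
lo, hi = credible_interval(log_mass)
```

The equal-tailed interval is robust to skewed posteriors, which is why it is preferred over a mean-plus-sigma summary when fitting degenerate SED models.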
Thoul, Anne; Catala, Claude; Aerts, Conny; Morel, Thierry; Briquet, Maryline; Hillen, Michel; Raskin, Gert; Van Winckel, Hans; Auvergne, Michel; Baglin, Annie; Baudin, Frédéric; Michel, Eric
2012-01-01
We present the CoRoT light curve of the bright B2.5V star HD 48977 observed during a short run of the mission in 2008, as well as a high-resolution spectrum gathered with the HERMES spectrograph at the Mercator telescope. We use several time series analysis tools to explore the nature of the variations present in the light curve. We perform a detailed analysis of the spectrum of the star to determine its fundamental parameters and its element abundances. We find a large number of high-order g-modes, and one rotationally induced frequency. We find stable low-amplitude frequencies in the p-mode regime as well. We conclude that HD 48977 is a new Slowly Pulsating B star with fundamental parameters found to be Teff = 20000 $\pm$ 1000 K and log(g) = 4.2 $\pm$ 0.1. The element abundances are similar to those found for other B stars in the solar neighbourhood. HD 48977 was observed during a short run of the CoRoT satellite implying that the frequency precision is insufficient to perform asteroseismic modelling of the s...
Manara, C F; Da Rio, N; De Marchi, G; Natta, A; Ricci, L; Robberto, M; Testi, L
2013-01-01
Current planet formation models are largely based on the observational constraint that protoplanetary disks have lifetimes of ~3 Myr. Recent studies, however, report the existence of PMS stars with signatures of accretion (strictly connected with the presence of circumstellar disks) and photometrically determined ages of 30 Myr, or more. Here we present a spectroscopic study of two major age outliers in the ONC. We use broad band, intermediate resolution VLT/X-Shooter spectra combined with an accurate method to determine the stellar parameters and the related age of the targets to confirm their peculiar age estimates and the presence of ongoing accretion. The analysis is based on a multi-component fitting technique, which derives simultaneously SpT, extinction, and accretion properties of the objects. With this method we confirm and quantify the ongoing accretion. From the photospheric parameters of the stars we derive their position on the HRD, and the age given by evolutionary models. Together with other age indica...
Harmanec, Petr; Prša, Andrej
2011-08-01
The increasing precision of astronomical observations of stars and stellar systems is gradually getting to a level where the use of slightly different values of the solar mass, radius, and luminosity, as well as different values of fundamental physical constants, can lead to measurable systematic differences in the determination of basic physical properties. An equivalent issue with an inconsistent value of the speed of light was resolved by adopting a nominal value that is constant and has no error associated with it. Analogously, we suggest that the systematic error in stellar parameters may be eliminated by (1) replacing the solar radius Rsolar and luminosity Lsolar by the nominal values that are by definition exact and expressed in SI units: 1 R⊙^N = 6.95508 × 10^8 m and 1 L⊙^N = 3.846 × 10^26 W; (2) computing stellar masses in terms of Msolar by noting that the measurement error of the product GMsolar is 5 orders of magnitude smaller than the error in G; (3) computing stellar masses and temperatures in SI units by using the derived values M⊙2010 = 1.988547 × 10^30 kg and T⊙2010 = 5779.57 K; and (4) clearly stating the reference for the values of the fundamental physical constants used. We discuss the need and demonstrate the advantages of such a paradigm shift.
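The proposed nominal values can be applied mechanically; a short sketch using the exact figures quoted above (the Stefan-Boltzmann cross-check at the end is our own addition, not part of the proposal):

```python
import math

# Nominal and derived solar values quoted in the text
R_SUN_N = 6.95508e8       # m, exact by definition
L_SUN_N = 3.846e26        # W, exact by definition
M_SUN_2010 = 1.988547e30  # kg, derived
T_SUN_2010 = 5779.57      # K, derived

def to_si(radius_rsun, lum_lsun, mass_msun):
    """Convert stellar parameters from solar to SI units with the nominal values."""
    return radius_rsun * R_SUN_N, lum_lsun * L_SUN_N, mass_msun * M_SUN_2010

r, l, m = to_si(2.0, 5.0, 1.5)

# Consistency check: L = 4*pi*R^2 * sigma * T^4 reproduces the nominal
# luminosity to about 0.1%, showing the derived T_SUN_2010 is self-consistent
SIGMA = 5.670374419e-8  # W m^-2 K^-4, Stefan-Boltzmann constant
l_check = 4 * math.pi * R_SUN_N**2 * SIGMA * T_SUN_2010**4
```

Because the nominal radius and luminosity carry no error by construction, any remaining uncertainty in the converted quantities comes from the stellar measurements themselves.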
Nickel-based alloys play an important role in the nuclear, mechanical and chemical industries. Two semi-absolute standardless methods, k0-instrumental neutron activation analysis (k0-INAA) and fundamental parameter X-ray fluorescence spectrometry (FP-XRF), were used for the characterization of certified nickel-based alloys. Under the optimized experimental conditions, NAA provided results for 18 elements and XRF for 15. Both techniques were unable to quantify some important alloying elements, but both reported results for other elements as information values. The techniques were analyzed for their sensitivity and accuracy. Sensitivity was evaluated by the number of elements determined by each technique. Accuracy was ascertained by using linear regression analysis and the average root mean squared error.
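The accuracy assessment described here (linear regression plus average root mean squared error) can be sketched in a few lines; all concentration values below are hypothetical, not data from the study:

```python
import numpy as np

# Hypothetical certified vs. measured concentrations (wt%), for illustration only
certified = np.array([1.2, 5.0, 10.5, 18.0, 61.0])
measured = np.array([1.3, 4.8, 10.9, 17.5, 60.2])

# Linear regression of measured on certified values: slope ~ 1 and
# intercept ~ 0 indicate an accurate technique
slope, intercept = np.polyfit(certified, measured, 1)

# Average root mean squared error, the other accuracy metric named in the text
rmse = np.sqrt(np.mean((measured - certified) ** 2))
```

A slope near unity with a small intercept shows the technique is free of proportional and constant bias, while the RMSE summarizes the typical magnitude of the residual error.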
Higton, Sam; Walsh, Rory
2015-04-01
X-Ray Fluorescence (XRF) is an important technique for measuring the concentrations of geochemical elements and inorganic contaminants adsorbed to sediments as an input to sediment tracing methods used to evaluate sediment transport dynamics in river catchments. In addition to traditional laboratory-based XRF instruments, the advent of increasingly advanced portable handheld XRF devices now means that samples of fluvial sediment can be analysed in the field or in the laboratory following appropriate sample preparation procedures. There are limitations and sources of error associated with XRF sample preparation and analysis, however. It is therefore important to understand how fundamental parameters involved in sample preparation and analysis, such as sample compression and measurement exposure duration, affect observed variability in measurement results. Such considerations become important if the resulting measurement variability is high relative to the natural variability in element concentrations at a sample site. This paper deployed a simple experimental design to assess the impacts of varying a number of sample preparation and XRF analysis parameters on recorded measurements of elemental concentrations of the fine sediment fraction, comparing a laboratory-based machine with a handheld Niton XL3t-900 XRF elemental analyser. Helium purging was used on both machines to enable measurement of lighter geochemical elements. Sediment sub-samples were taken from a larger homogenised sample from a sediment core taken from an in-channel lateral bench deposit of the Brantian river in Sabah, Borneo; the core site is being used for research into multi-proxy sediment fingerprinting as part of the Stability of Altered Forest Ecosystems (SAFE) project. Some fundamental sample preparation procedures consistent with US EPA Method 6200 were applied to all sediment samples in order to explore key variables of interest. All sediment samples were air-dried to constant weight and sample quantity was sufficient to
Fundamental stellar properties from asteroseismology
Silva Aguirre, V.; Casagrande, L.; Miglio, A.
2013-01-01
Accurate characterization of stellar populations is of prime importance to correctly understand the formation and evolution process of our Galaxy. The field of asteroseismology has been particularly successful in such an endeavor, providing fundamental parameters for large samples of stars in different evolutionary phases. We present our results on determinations of masses, radii, and distances of stars in the CoRoT and Kepler fields, showing that we can map and date different regions of the galactic disk and distinguish gradients in the distribution of stellar properties at different heights...
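The asteroseismic route to masses and radii rests on the standard scaling relations for the frequency of maximum power and the large frequency separation (an assumption on our part; the record does not spell them out). A sketch:

```python
# Commonly used solar reference values (exact choices vary between studies)
NU_MAX_SUN = 3090.0  # muHz, frequency of maximum oscillation power
DNU_SUN = 135.1      # muHz, large frequency separation
TEFF_SUN = 5777.0    # K

def scaling_mass_radius(nu_max, dnu, teff):
    """Mass and radius (solar units) from the standard scaling relations:
    R ~ nu_max * dnu^-2 * Teff^0.5,  M ~ nu_max^3 * dnu^-4 * Teff^1.5."""
    t = (teff / TEFF_SUN) ** 0.5
    radius = (nu_max / NU_MAX_SUN) * (dnu / DNU_SUN) ** -2 * t
    mass = (nu_max / NU_MAX_SUN) ** 3 * (dnu / DNU_SUN) ** -4 * t ** 3
    return mass, radius

# Typical red-giant values (hypothetical): nu_max ~ 30 muHz, dnu ~ 4 muHz
m, r = scaling_mass_radius(30.0, 4.0, 4800.0)
```

With a radius and temperature in hand, the luminosity and hence the distance follow, which is how seismology lets one map and date different regions of the disk.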
How fundamental are fundamental constants?
Duff, M. J.
2015-01-01
I argue that the laws of physics should be independent of one's choice of units or measuring apparatus. This is the case if they are framed in terms of dimensionless numbers such as the fine structure constant, α. For example, the standard model of particle physics has 19 such dimensionless parameters whose values all observers can agree on, irrespective of what clocks, rulers or scales they use to measure them. Dimensional constants, on the other hand, such as ħ, c, G, e and k, are merely human constructs whose number and values differ from one choice of units to the next. In this sense, only dimensionless constants are 'fundamental'. Similarly, the possible time variation of dimensionless fundamental 'constants' of nature is operationally well defined and a legitimate subject of physical enquiry. By contrast, the time variation of dimensional constants such as c or G, on which a good many (in my opinion, confusing) papers have been written, is a unit-dependent phenomenon on which different observers might disagree depending on their apparatus. All these confusions disappear if one asks only unit-independent questions. We provide a selection of opposing opinions in the literature and respond accordingly.
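The point about α being unit-independent can be made concrete: compute it from the SI values of e, ħ, c and ε0 and every unit cancels, leaving the familiar pure number ≈ 1/137. A quick sketch:

```python
import math

# CODATA-style SI values (approximate)
e = 1.602176634e-19      # C, elementary charge
hbar = 1.054571817e-34   # J s, reduced Planck constant
c = 2.99792458e8         # m/s, speed of light in vacuum
eps0 = 8.8541878128e-12  # F/m, vacuum permittivity

# alpha = e^2 / (4*pi*eps0*hbar*c): the units cancel, so alpha is a
# pure number, the same for every observer in every unit system
alpha = e**2 / (4 * math.pi * eps0 * hbar * c)
```

Repeating the calculation in Gaussian or natural units changes each dimensional constant but not α, which is exactly the sense in which only dimensionless constants are 'fundamental'.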
How fundamental are fundamental constants?
Duff, M J
2014-01-01
I argue that the laws of physics should be independent of one's choice of units or measuring apparatus. This is the case if they are framed in terms of dimensionless numbers such as the fine structure constant, alpha. For example, the Standard Model of particle physics has 19 such dimensionless parameters whose values all observers can agree on, irrespective of what clock, rulers, scales... they use to measure them. Dimensional constants, on the other hand, such as h, c, G, e, k..., are merely human constructs whose number and values differ from one choice of units to the next. In this sense only dimensionless constants are "fundamental". Similarly, the possible time variation of dimensionless fundamental "constants" of nature is operationally well-defined and a legitimate subject of physical enquiry. By contrast, the time variation of dimensional constants such as c or G on which a good many (in my opinion, confusing) papers have been written, is a unit-dependent phenomenon on which different observers might...
Whitlock, C. H., III
1977-01-01
Constituents whose radiance varies linearly with concentration may be quantified from signals that contain nonlinear atmospheric and surface reflection effects, for both homogeneous and non-homogeneous water bodies, provided accurate data can be obtained and the nonlinearities are constant with wavelength. Statistical parameters must be used which give an indication of bias as well as total squared error, to ensure that an equation with an optimum combination of bands is selected. It is concluded that the effect of error in upwelled radiance measurements is to reduce the accuracy of the least-squares fitting process and to increase the number of points required to obtain a satisfactory fit. The problem of obtaining a multiple regression equation that is extremely sensitive to error is discussed.
Caetano, T. C.; Dias, W. S.; Lépine, J. R. D.; Monteiro, H. S.; Moitinho, A.; Hickel, G. R.; Oliveira, A. F.
2015-07-01
Open clusters are considered valuable objects for the investigation of galactic structure and dynamics since their distances, ages and velocities can be determined with good precision. According to the New Catalog of Optically Visible Open Clusters and Candidates (Dias et al., 2002), about 10% of the optically revealed open clusters remain unstudied. However, previous analysis (Moitinho, 2010) has indicated that not considering this unstudied population introduces significant biases in the study of the structure and evolution of the Milky Way. In addition, a systematic revision of the data contained in the catalog, collected from the literature, is needed due to its inhomogeneity. In this first paper of a series, we present the observational strategy, data reduction and analysis procedures of a UBVRI photometric survey of southern open star clusters carried out at the Pico dos Dias Observatory (Brazil). The aim of the program is to contribute an unbiased, homogeneous collection of cluster fundamental parameters. We show that the implementation of a sequence of systematic procedures considerably improves the quality of the results. To illustrate the methods we present the first results based on one night of observations. The parameters (reddening, distance, age and metallicity) were obtained by fitting theoretical isochrones to cluster color-color and multidimensional color-magnitude diagrams, applying a cross-entropy optimization algorithm developed by our group, which takes into account UBVRI photometric data weighted using a membership-likelihood estimation.
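The cross-entropy optimization named above can be sketched generically: sample candidate parameter sets from a Gaussian, keep the best-scoring elite fraction, refit the Gaussian, and repeat. This toy version recovers two parameters of a quadratic mismatch and merely stands in for the group's actual isochrone-fitting objective (all names and numbers are illustrative):

```python
import numpy as np

def cross_entropy_min(objective, mean, std, n_samples=200, n_elite=20,
                      n_iter=40, seed=0):
    """Minimal cross-entropy minimizer over a Gaussian sampling distribution."""
    rng = np.random.default_rng(seed)
    mean, std = np.asarray(mean, float), np.asarray(std, float)
    for _ in range(n_iter):
        pop = rng.normal(mean, std, size=(n_samples, mean.size))
        scores = np.array([objective(p) for p in pop])
        elite = pop[np.argsort(scores)[:n_elite]]   # keep the best candidates
        mean, std = elite.mean(axis=0), elite.std(axis=0) + 1e-9
    return mean

# Toy "isochrone fit": recover (distance modulus, reddening) minimizing a
# quadratic mismatch; a real CMD fit replaces this objective function
true = np.array([10.0, 0.3])
best = cross_entropy_min(lambda p: np.sum((p - true) ** 2),
                         mean=[8.0, 1.0], std=[2.0, 1.0])
```

The method needs no derivatives and copes with noisy, multimodal objectives, which is why it suits isochrone fitting on color-magnitude diagrams.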
Based on high-resolution optical spectra obtained with ESPaDOnS at the Canada-France-Hawaii Telescope, we determine fundamental parameters (Teff, R, Lbol, log g, and metallicity) for 59 candidate members of nearby young kinematic groups. The candidates were identified through the BANYAN Bayesian inference method of Malo et al., which takes into account the position, proper motion, magnitude, color, radial velocity, and parallax (when available) to establish a membership probability. The derived parameters are compared to Dartmouth magnetic evolutionary models and field stars with the goal of constraining the age of our candidates. We find that, in general, low-mass stars in our sample are more luminous and have inflated radii compared to older stars, a trend expected for pre-main-sequence stars. The Dartmouth magnetic evolutionary models show a good fit to observations of field K and M stars, assuming a magnetic field strength of a few kG, as typically observed for cool stars. Using the low-mass members of the β Pictoris moving group, we have re-examined the age inconsistency problem between the lithium depletion age and the isochronal age (Hertzsprung-Russell diagram). We find that the inclusion of the magnetic field in evolutionary models increases the isochronal age estimates for K5V-M5V stars. Using these models and field strengths, we derive an average isochronal age between 15 and 28 Myr and we confirm a clear lithium depletion boundary from which an age of 26 ± 3 Myr is derived, consistent with previous age estimates based on this method.
Luz, Anthony L.; Rooney, John P.; Kubik, Laura L.; Gonzalez, Claudia P.; Song, Dong Hoon; Meyer, Joel N.
2015-01-01
Mitochondrial dysfunction has been linked to myriad human diseases and toxicant exposures, highlighting the need for assays capable of rapidly assessing mitochondrial health in vivo. Here, using the Seahorse XFe24 Analyzer and the pharmacological inhibitors dicyclohexylcarbodiimide and oligomycin (ATP-synthase inhibitors), carbonyl cyanide 4-(trifluoromethoxy) phenylhydrazone (mitochondrial uncoupler) and sodium azide (cytochrome c oxidase inhibitor), we measured the fundamental parameters of mitochondrial respiratory chain function: basal oxygen consumption, ATP-linked respiration, maximal respiratory capacity, spare respiratory capacity and proton leak in the model organism Caenorhabditis elegans. Since mutations in mitochondrial homeostasis genes cause mitochondrial dysfunction and have been linked to human disease, we measured mitochondrial respiratory function in mitochondrial fission (drp-1)-, fusion (fzo-1)-, mitophagy (pdr-1, pink-1)-, and electron transport chain complex III (isp-1)-deficient C. elegans. All showed altered function, but the nature of the alterations varied between the tested strains. We report increased basal oxygen consumption in drp-1; reduced maximal respiration in drp-1, fzo-1, and isp-1; reduced spare respiratory capacity in drp-1 and fzo-1; reduced proton leak in fzo-1 and isp-1; and increased proton leak in pink-1 nematodes. As mitochondrial morphology can play a role in mitochondrial energetics, we also quantified the mitochondrial aspect ratio for each mutant strain using a novel method, and for the first time report increased aspect ratios in pdr-1- and pink-1-deficient nematodes. PMID:26106885
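The listed parameters are derived arithmetically from the oxygen consumption rate (OCR) measured after each inhibitor injection; a sketch with hypothetical OCR values (illustrative, not data from the study):

```python
# Hypothetical OCR readings (pmol O2/min) after each injection
ocr = {
    "basal": 300.0,
    "after_oligomycin": 120.0,  # ATP synthase inhibited
    "after_uncoupler": 480.0,   # uncoupled, maximal electron transport
    "after_azide": 40.0,        # only non-mitochondrial respiration remains
}

non_mito = ocr["after_azide"]
basal = ocr["basal"] - non_mito                      # basal mitochondrial respiration
atp_linked = ocr["basal"] - ocr["after_oligomycin"]  # ATP-linked respiration
proton_leak = ocr["after_oligomycin"] - non_mito     # proton leak
maximal = ocr["after_uncoupler"] - non_mito          # maximal respiratory capacity
spare = maximal - basal                              # spare respiratory capacity
```

Note the built-in consistency check: basal mitochondrial respiration partitions exactly into ATP-linked respiration plus proton leak.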
2015-01-01
Diffusion Fundamentals is a peer-reviewed interdisciplinary open-access online journal published as a part of the website Diffusion-Fundamentals.org. It publishes original research articles in the field of diffusion and transport. Main research areas include theory, experiments, applications, methods and diffusion-like phenomena. The readers of Diffusion Fundamentals are academic or industrial scientists in all research disciplines. The journal aims at providing a broad forum for their c...
Markova, N; Simón-Díaz, S; Herrero, A; Markov, H; Langer, N
2013-01-01
Rotation is of key importance for the evolution of hot massive stars; however, the rotational velocities of these stars are difficult to determine. Based on our own data for 31 Galactic O stars and incorporating similar data for 86 OB supergiants from the literature, we aim at investigating the properties of rotational and extra line-broadening as a function of stellar parameters and at testing model predictions about the evolution of stellar rotation. Fundamental stellar parameters were determined by means of the code FASTWIND. Projected rotational and extra broadening velocities originate from a combined FT + GOF method. Model calculations published previously were used to estimate the initial evolutionary masses. The sample O stars with Minit > 50 Msun rotate with less than 26% of their break-up velocity, and the sample also lacks objects with low v sin i. For stars with Minit < 35 Msun on the hotter side of the bi-stability jump, the observed and predicted rotational rates agree quite well; for those on the cooler side of the jump, the me...
Knowledge of the X-ray tube spectral distribution is necessary in theoretical methods of matrix correction, i.e. in both fundamental parameter (FP) methods and theoretical influence coefficient algorithms. Thus, the influence of the X-ray tube distribution on the accuracy of the analysis of thin films and bulk samples is presented. The calculations are performed using experimental X-ray tube spectra taken from the literature and theoretical X-ray tube spectra evaluated by three different algorithms proposed by Pella et al. (X-Ray Spectrom. 14 (1985) 125-135), Ebel (X-Ray Spectrom. 28 (1999) 255-266), and Finkelshtein and Pavlova (X-Ray Spectrom. 28 (1999) 27-32). In this study, the Fe-Cr-Ni system is selected as an example and the calculations are performed for X-ray tubes commonly applied in X-ray fluorescence analysis (XRF), i.e., Cr, Mo, Rh and W. The influence of X-ray tube spectra on FP analysis is evaluated when quantification is performed using various types of calibration samples. FP analysis of bulk samples is performed using pure-element bulk standards and multielement bulk standards similar to the analyzed material, whereas for FP analysis of thin films, the bulk and thin pure-element standards are used. For the evaluation of the influence of X-ray tube spectra on XRF analysis performed by theoretical influence coefficient methods, two algorithms for bulk samples are selected, i.e. the Claisse-Quintin (Can. Spectrosc. 12 (1967) 129-134) and COLA algorithms (G.R. Lachance, Paper Presented at the International Conference on Industrial Inorganic Elemental Analysis, Metz, France, June 3, 1981), and two algorithms (constant and linear coefficients) for thin films recently proposed by Sitko (X-Ray Spectrom. 37 (2008) 265-272).
Yu Xie
2015-08-01
Full Text Available This paper is an attempt to elucidate the effects of the important spray characteristics on the surface morphology and light absorbance of spray-on P3HT:PCBM thin films, used as an active layer in polymer solar cells (PSCs). Spray coating or deposition is a viable scalable technique for the large-scale, fast, and low-cost fabrication of solution-processed solar cells, and has been widely used for device fabrication, although the fundamental understanding of the underlying and controlling parameters, such as spray characteristics, droplet dynamics, and surface wettability, is still limited, making the results on device fabrication irreproducible and unreliable. In this paper, following the conventional PSC architecture, a PEDOT:PSS layer is first spin-coated on glass substrates, followed by the deposition of P3HT:PCBM using an automatic ultrasonic spray coating system, with a movable nozzle tip, to mimic an industrial manufacturing process. To gain insight, the effects of the spray carrier air pressure, the number of spray passes, the precursor flow rate, and the precursor concentration on the surface topography and light absorbance spectra of the spray-on films are studied. Among the results, it is found that despite the high roughness of spray-on films, the light absorbance of the film is satisfactory. It is also found that the absorbance of spray-on films is a linear function of the number of spray passes or deposition layers, based on which an effective film thickness is defined for rough spray-on films. The effective thickness of a rough spray-on P3HT:PCBM film was found to be one-quarter of that of a flat film predicted by a simple mass balance.
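The reported linearity of absorbance with the number of passes, and the resulting effective-thickness definition, can be illustrated with a Beer-Lambert-style estimate (all values below are hypothetical, not measurements from the paper):

```python
import numpy as np

# Hypothetical absorbance at a fixed wavelength vs. number of spray passes
passes = np.array([1, 2, 3, 4, 5])
absorbance = np.array([0.11, 0.21, 0.32, 0.41, 0.52])

# The paper reports absorbance growing linearly with the number of passes
slope, intercept = np.polyfit(passes, absorbance, 1)

# Effective thickness: the thickness of an ideal flat film with the same
# absorbance, via Beer-Lambert A = alpha_abs * d (alpha_abs assumed here)
alpha_abs = 1.0e5                                # 1/cm, hypothetical coefficient
d_eff_cm = (slope * 3 + intercept) / alpha_abs   # effective thickness, 3 passes
```

Because a rough film scatters and shadows part of the light path, its effective thickness defined this way comes out smaller than its geometric thickness, consistent with the paper's one-quarter result.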
Recent events show the need for constant attention to operator fundamentals in the commercial nuclear industry. The first report about a decline in the application of operator fundamentals during plant operational activities and transient situations was issued in July 2005. Analyses of the events recorded during an 18-month period between 2010 and 2011 show causes and contributors similar to those seen before July 2005. Due to that fact, WANO issued SOER 2013-1, Operator Fundamentals Weaknesses, with suggestions on how to analyse the area of operator fundamentals and recommendations for effective and sustainable corrective actions. Operator fundamentals are the essential knowledge, skills, behaviours, and practices that operating crews need to apply to operate the plant effectively. These fundamentals are as follows: · Monitoring plant indications and conditions closely · Controlling plant evolutions precisely · Operating the plant with a conservative bias · Working effectively as a team · Having a solid understanding of plant design, engineering principles, and sciences. NEK analysed the area of operator fundamentals and verified how consistently the basic principles of plant control are followed in practice. Some opportunities for improvement were recognized in the training area, in the format of operational procedures, and in the process of preparation of planned activities during power operation or plant shutdown. Among other measures, stability in operation with a sufficient safety margin can be achieved only through continuous monitoring of operational practice and by constant highlighting of operational standards. (author)
Fundamental Constants and Conservation Laws
Roh, Heui-Seol
2001-01-01
This work describes underlying features of the universe such as fundamental constants and cosmological parameters, conservation laws, baryon and lepton asymmetries, etc. in the context of local gauge theories for fundamental forces under the constraint of the flat universe. Conservation laws for fundamental forces are related to gauge theories for fundamental forces, their resulting fundamental constants are quantitatively analyzed, and their possible violations at different energy scales are...
Singh, Harjit
2011-01-01
"Radiology Fundamentals" is a concise introduction to the dynamic field of radiology for medical students, non-radiology house staff, physician assistants, nurse practitioners, radiology assistants, and other allied health professionals. The goal of the book is to provide readers with general examples and brief discussions of basic radiographic principles and to serve as a curriculum guide, supplementing a radiology education and providing a solid foundation for further learning. Introductory chapters provide readers with the fundamental scientific concepts underlying the medical use of imag
Song, Young Kyu; Cho, Gyung Goo; Suh, Ji Yeon; Lee, Chang Kyung; Kim, Jeong Kon [Division of Magnetic Resonance, Korea Basic Science Institute, Cheongwon (Korea, Republic of); Kim, Young Ro [Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown (United States); Kim, Yoon Jae [Asan Medical Center, University of Ulsan College of Medicine, Seoul (Korea, Republic of)
2013-08-15
To determine reliable perfusion parameters in dynamic contrast-enhanced MRI (DCE-MRI) for monitoring antiangiogenic treatment in mice. Mice with U-118 MG tumors were treated with either saline (n = 3) or an antiangiogenic agent (sunitinib, n = 8). Before (day 0) and after (days 2, 8, 15, 25) treatment, DCE examinations were evaluated using correlations of perfusion parameters (K{sub ep}, K{sub el}, and A{sup H} from the two-compartment model; time to peak, initial slope and % enhancement from time-intensity curve analysis). Tumor growth rate was found to be 129% ± 28 in the control group, -33% ± 11 in four mice with sunitinib treatment (tumor regression) and 47% ± 15 in four with sunitinib treatment (growth retardation). K{sub ep} (r = 0.80) and initial slope (r = 0.84) showed strong positive correlation with the initial tumor volume (p < 0.05). In control mice, tumor regression group and growth retardation group animals, K{sub ep} (r : 0.75, 0.78, 0.81, 0.69) and initial slope (r : 0.79, 0.65, 0.67, 0.84) showed significant correlation with tumor volume (p < 0.01). In four mice with tumor re-growth, K{sub ep} and initial slope increased by 20% or more at earlier (n = 2) or the same (n = 2) time points as when the tumor started to re-grow at a growth rate of 20% or more. K{sub ep} and initial slope may be reliable parameters for monitoring the response to antiangiogenic treatment.
Accurate determination of pull-in voltage and pull-in position is crucial in the design of electrostatically actuated microbeam-based devices. In the past, there have been many works on analytical modeling of the static pull-in of microbeams. However, unlike the static pull-in of microbeams where the analytical models have been well established, there are few works on analytical modeling of the dynamic pull-in of microbeam actuated by a step voltage. This paper presents two analytical approximate models for calculating the dynamic pull-in voltage and pull-in position of a cantilever beam and a clamped–clamped beam, respectively. The effects of the fringing field are included in the two models. The two models are derived based on the energy balance method. An N-order algebraic equation for the dynamic pull-in position is derived. An approximate solution of the N-order algebraic equation yields the dynamic pull-in position and voltage. The accuracy of the present models is verified by comparing their results with the experimental results and the published models available in the literature. (paper)
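A lumped parallel-plate analogue illustrates the energy-balance idea: equate the stored elastic energy to the electrostatic work done by a step voltage, and the travel distance at which the kinetic energy returns to zero becomes a root of an algebraic equation. This is a deliberate simplification of the beam models in the paper (no fringing field, single degree of freedom), and all parameter values are hypothetical:

```python
import numpy as np

# Lumped parallel-plate parameters (hypothetical)
k = 5.0          # N/m, effective spring constant
g = 2.0e-6       # m, initial gap
A = 1.0e-8       # m^2, electrode area
eps0 = 8.854e-12 # F/m, vacuum permittivity

def travel_at_voltage(V):
    """Smallest positive root of the energy-balance polynomial
    k*g*x**2 - k*g**2*x + eps0*A*V**2 = 0 (kinetic energy back to zero)."""
    roots = np.roots([k * g, -k * g**2, eps0 * A * V**2])
    real = roots[np.isreal(roots)].real
    return real.min() if real.size else None

# Dynamic pull-in: the two roots of the quadratic merge at x = g/2,
# which gives a closed form for this lumped model
V_dpi = np.sqrt(k * g**3 / (4 * eps0 * A))
```

In this lumped model the dynamic pull-in travel is g/2 (versus g/3 for static pull-in) and the dynamic pull-in voltage is about 92% of the static value; the paper's N-order algebraic equation generalizes exactly this root-merging condition to distributed beams with fringing fields.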
Tamandl, Dietmar; Fueger, Barbara; Kinsperger, Patrick; Haug, Alexander; Ba-Ssalamah, Ahmed [Medical University of Vienna, Department of Biomedical Imaging and Image-Guided Therapy, Comprehensive Cancer Center GET-Unit, Vienna (Austria); Gore, Richard M. [University of Chicago Pritzker School of Medicine, Department of Radiology, Chicago, IL (United States); Hejna, Michael [Medical University of Vienna, Department of Internal Medicine, Division of Medical Oncology, Comprehensive Cancer Center GET-Unit, Vienna (Austria); Paireder, Matthias; Schoppmann, Sebastian F. [Medical University of Vienna, Department of Surgery, Upper-GI-Service, Comprehensive Cancer Center GET-Unit, Vienna (Austria)
2016-02-15
To assess the prognostic value of volumetric parameters measured with CT and PET/CT in patients with neoadjuvant chemotherapy (NACT) and resection for oesophageal cancer (EC). Patients with locally advanced EC, who were treated with NACT and resection, were retrospectively analysed. Data from CT volumetry and {sup 18} F-FDG PET/CT (maximum standardized uptake [SUVmax], metabolic tumour volume [MTV], and total lesion glycolysis [TLG]) were recorded before and after NACT. The impact of volumetric parameter changes induced by NACT (MTV{sub RATIO}, TLG{sub RATIO}, etc.) on overall survival (OS) was assessed using a Cox proportional hazards model. Eighty-four patients were assessed using CT volumetry; of those, 50 also had PET/CT before and after NACT. Low post-treatment CT volume and thickness, MTV, TLG, and SUVmax were all associated with longer OS (p < 0.05), as were CTthickness{sub RATIO}, MTV{sub RATIO}, TLG{sub RATIO}, and SUVmax{sub RATIO} (p < 0.05). In the multivariate analysis, only MTV{sub RATIO} (Hazard ratio, HR 2.52 [95 % Confidence interval, CI 1.33-4.78], p = 0.005), TLG{sub RATIO} (HR 3.89 [95%CI 1.46-10.34], p = 0.006), and surgical margin status (p < 0.05), were independent predictors of OS. MTV{sub RATIO} and TLG{sub RATIO} are independent prognostic factors for survival in patients after NACT and resection for EC. (orig.)
Suchomska, K; Smolec, R; Pietrzyński, G; Gieren, W; Stȩpień, K; Konorski, P; Pilecki, B; Villanova, S; Thompson, I B; Górski, M; Karczmarek, P; Wielgórski, P
2015-01-01
We have analyzed the double-lined eclipsing binary system ASAS J180057-2333.8 from the All Sky Automated Survey (ASAS) catalogue. We measure absolute physical and orbital parameters for this system based on archival $V$-band and $I$-band ASAS photometry, as well as on high-resolution spectroscopic data obtained with the ESO 3.6m/HARPS and CORALIE spectrographs. The physical and orbital parameters of the system were derived with an accuracy of about 0.5 - 3%. The system is a very rare configuration of two bright well-detached giants of spectral types K1 and K4 and luminosity class II. The radii of the stars are $R_1$ = 52.12 $\pm$ 1.38 and $R_2$ = 67.63 $\pm$ 1.40 R$_\odot$ and their masses are $M_1$ = 4.914 $\pm$ 0.021 and $M_2$ = 4.875 $\pm$ 0.021 M$_\odot$. The exquisite accuracy of 0.5% obtained for the masses of the components is one of the best mass determinations for giants. We derived a precise distance to the system of 2.14 $\pm$ 0.06 kpc (stat.) $\pm$ 0.05 (syst.) which places the star in the Sagittariu...
Redmond, W H
2001-01-01
This chapter outlines current marketing practice from a managerial perspective. The role of marketing within an organization is discussed in relation to efficiency and adaptation to changing environments. Fundamental terms and concepts are presented in an applied context. The implementation of marketing plans is organized around the four P's of marketing: product (or service), promotion (including advertising), place of delivery, and pricing. These are the tools with which marketers seek to better serve their clients and form the basis for competing with other organizations. Basic concepts of strategic relationship management are outlined. Lastly, alternate viewpoints on the role of advertising in healthcare markets are examined. PMID:11401791
Yu Xie; Siyi Gao; Morteza Eslamian
2015-01-01
This paper is an attempt to elucidate the effects of the important spray characteristics on the surface morphology and light absorbance of spray-on P3HT:PCBM thin-films, used as an active layer in polymer solar cells (PSCs). Spray coating or deposition is a viable scalable technique for the large-scale, fast, and low-cost fabrication of solution-processed solar cells, and has been widely used for device fabrication, although the fundamental understanding of the underlying and controlling par...
Shi, Deheng; Li, Peiling; Sun, Jinfeng; Zhu, Zunlue
2014-01-01
The potential energy curves (PECs) of 28 Ω states generated from 9 Λ-S states (X(2)Π, 1(4)Π, 1(6)Π, 1(2)Σ(+), 1(4)Σ(+), 1(6)Σ(+), 1(4)Σ(-), 2(4)Π and 1(4)Δ) are studied for the first time using an ab initio quantum chemical method. All the 9 Λ-S states correlate to the first two dissociation limits, N((4)Su)+Se((3)Pg) and N((4)Su)+Se((3)Dg), of NSe radical. Of these Λ-S states, the 1(6)Σ(+), 1(4)Σ(+), 1(6)Π, 2(4)Π and 1(4)Δ are found to be rather weakly bound states. The 1(2)Σ(+) is found to be unstable and has double wells. And the 1(6)Σ(+), 1(4)Σ(+), 1(4)Π and 1(6)Π are found to be the inverted ones with the SO coupling included. The PEC calculations are made by the complete active space self-consistent field method, which is followed by the internally contracted multireference configuration interaction approach with the Davidson modification. The spin-orbit coupling is accounted for by the state interaction approach with the Breit-Pauli Hamiltonian. The convergence of the present calculations is discussed with respect to the basis set and the level of theory. Core-valence correlation corrections are included with a cc-pCVTZ basis set. Scalar relativistic corrections are calculated by the third-order Douglas-Kroll Hamiltonian approximation at the level of a cc-pV5Z basis set. All the PECs are extrapolated to the complete basis set limit. The variation with internuclear separation of spin-orbit coupling constants is discussed in brief for some Λ-S states with one shallow well on each PEC. The spectroscopic parameters of 9 Λ-S and 28 Ω states are determined by fitting the first ten vibrational levels whenever available, which are calculated by solving the rovibrational Schrödinger equation with Numerov's method. The splitting energy in the X(2)Π Λ-S state is determined to be about 864.92 cm(-1), which agrees favorably with the measurements of 891.80 cm(-1). Moreover, other spectroscopic parameters of Λ-S and Ω states involved here are
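Numerov's method, used above to obtain the vibrational levels, can be sketched for a one-dimensional potential. This toy shooting solver recovers the harmonic-oscillator ground state (with hbar = m = omega = 1, so the exact eigenvalue is E = 0.5); the real calculation applies the same scheme to the rovibrational equation on the computed PECs:

```python
import numpy as np

def numerov_end(E, V, x):
    """Integrate psi'' = 2*(V - E)*psi across the grid with the Numerov
    scheme (hbar = m = 1) and return psi at the right boundary."""
    h = x[1] - x[0]
    g = 2.0 * (E - V(x))            # so that psi'' = -g * psi
    c = 1.0 + h * h * g / 12.0
    psi = np.zeros_like(x)
    psi[0], psi[1] = 0.0, 1e-8      # start deep in the forbidden region
    for i in range(1, len(x) - 1):
        psi[i + 1] = ((12.0 - 10.0 * c[i]) * psi[i]
                      - c[i - 1] * psi[i - 1]) / c[i + 1]
    return psi[-1]

def eigenvalue(V, x, E_lo, E_hi, tol=1e-10):
    """Bisect on the sign of psi at the right boundary to locate an eigenvalue."""
    f_lo = numerov_end(E_lo, V, x)
    while E_hi - E_lo > tol:
        E_mid = 0.5 * (E_lo + E_hi)
        if numerov_end(E_mid, V, x) * f_lo > 0:
            E_lo = E_mid
        else:
            E_hi = E_mid
    return 0.5 * (E_lo + E_hi)

# Harmonic oscillator V = x^2/2: the exact ground-state energy is 0.5
x = np.linspace(-6.0, 6.0, 1201)
E0 = eigenvalue(lambda x: 0.5 * x * x, x, 0.3, 0.7)
```

The fourth-order accuracy of the Numerov recurrence is what makes it the standard choice for extracting the first few vibrational levels needed to fit spectroscopic constants.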
Arbuzov, Boris A
2015-01-01
The problem of a mutual dependence of parameters of the Standard Model is considered in the framework of the compensation approach. Conditions for a spontaneous generation of four electro-weak boson effective interactions are shown to lead to a set of equations for the parameters of the interaction. If a non-trivial solution of this set of compensation equations is realized, the parameter $\sin^2\theta_W$ is fixed. The existence of non-trivial solutions is demonstrated, which provide a satisfactory value for the electromagnetic fine structure constant $\alpha$ at scale $M_Z$: $\alpha(M_Z) = 0.007756$. Within the range of experimental limitations we demonstrate the existence of two solutions for the problem. One solution has a high effective cut-off, close to the Planck mass in order of magnitude. The other corresponds to an effective cut-off in the $10^2$ TeV range and leads to predictions of observable effects in the production of a top pair accompanied by an electro-weak boson. The res...
Testing Our Fundamental Assumptions
Kohler, Susanna
2016-06-01
fundamental assumptions. A recent focus set in the Astrophysical Journal Letters, titled Focus on Exploring Fundamental Physics with Extragalactic Transients, consists of multiple published studies doing just that. Testing General Relativity: Several of the articles focus on the 4th point above. By assuming that the delay in photon arrival times is only due to the gravitational potential of the Milky Way, these studies set constraints on the deviation of our galaxy's gravitational potential from what GR would predict. The study by He Gao et al. uses the different photon arrival times from gamma-ray bursts to set constraints at eV-GeV energies, and the study by Jun-Jie Wei et al. complements this by setting constraints at keV-TeV energies using photons from high-energy blazar emission. Photons or neutrinos from different extragalactic transients each set different upper limits on delta gamma, the post-Newtonian parameter, vs. particle energy or frequency. This is a test of Einstein's equivalence principle: if the principle is correct, delta gamma would be exactly zero, meaning that photons of different energies move at the same velocity through a vacuum. [Tingay & Kaplan 2016] S.J. Tingay & D.L. Kaplan make the case that measuring the time delay of photons from fast radio bursts (FRBs; transient radio pulses that last only a few milliseconds) will provide even tighter constraints if we are able to accurately determine distances to these FRBs. And Adi Musser argues that the large-scale structure of the universe plays an even greater role than the Milky Way's gravitational potential, allowing for even stricter testing of Einstein's equivalence principle. The ever-narrower constraints from these studies all support GR as a correct set of rules through which to interpret our universe. Other Tests of Fundamental Physics: In addition to the above tests, Xue-Feng Wu et al. show that FRBs can be used to provide severe constraints on the rest mass of the photon, and S. Croft et al. 
even touches on what we
Vogt, Nicole P.; Haynes, Martha P.; Giovanelli, Riccardo; Herter, Terry
2004-06-01
We have conducted a study of optical and H I properties of spiral galaxies (size, luminosity, Hα flux distribution, circular velocity, H I gas mass) to investigate causes (e.g., nature vs. nurture) for variation within the cluster environment. We find H I-deficient cluster galaxies to be offset in fundamental plane space, with disk scale lengths decreased by 25%. This may be a relic of early galaxy formation, caused by the disk coalescing out of a smaller, denser halo (e.g., higher concentration index) or by truncation of the hot gas envelope due to the enhanced local density of neighbors, although we cannot completely rule out the effect of the gas stripping process. The spatial extent of Hα flux and the B-band radius also decreases, but only in early-type spirals, suggesting that gas removal is less efficient within steeper potential wells (or that stripped late-type spirals are quickly rendered unrecognizable). We find no significant trend in stellar mass-to-light ratios or circular velocities with H I gas content, morphological type, or clustercentric radius, for star-forming spiral galaxies throughout the clusters. These data support the findings of a companion paper that gas stripping promotes a rapid truncation of star formation across the disk and could be interpreted as weak support for dark matter domination over baryons in the inner regions of spiral galaxies.
Wang, Yong; Goh, Wang Ling; Chai, Kevin T.-C.; Mu, Xiaojing; Hong, Yan; Kropelnicki, Piotr; Je, Minkyu
2016-04-01
The parasitic effects from electromechanical resonance, coupling, and substrate losses were collected to derive a new two-port equivalent-circuit model for Lamb wave resonators, especially for those fabricated on silicon technology. The proposed model is a hybrid π-type Butterworth-Van Dyke (PiBVD) model that accounts for the above mentioned parasitic effects which are commonly observed in Lamb-wave resonators. It is a combination of interdigital capacitor of both plate capacitance and fringe capacitance, interdigital resistance, Ohmic losses in substrate, and the acoustic motional behavior of typical Modified Butterworth-Van Dyke (MBVD) model. In the case studies presented in this paper using two-port Y-parameters, the PiBVD model fitted significantly better than the typical MBVD model, strengthening the capability on characterizing both magnitude and phase of either Y11 or Y21. The accurate modelling on two-port Y-parameters makes the PiBVD model beneficial in the characterization of Lamb-wave resonators, providing accurate simulation to Lamb-wave resonators and oscillators.
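As a sketch of how such an equivalent circuit is evaluated (a plain one-port BVD network, not the paper's full two-port PiBVD model; the component values in the example are illustrative, not fitted to any device), the input admittance of a motional R-L-C branch in parallel with the static capacitance C0 is:

```python
import math

def bvd_admittance(freq_hz, r_m, l_m, c_m, c_0):
    """Input admittance of a one-port Butterworth-Van Dyke network:
    a series motional branch (R_m, L_m, C_m) in parallel with the
    static capacitance C_0."""
    w = 2.0 * math.pi * freq_hz
    z_motional = r_m + 1j * w * l_m + 1.0 / (1j * w * c_m)
    return 1j * w * c_0 + 1.0 / z_motional
```

At the series resonance fs = 1/(2π√(Lm·Cm)) the motional reactances cancel, so the real part of the admittance peaks at 1/Rm.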
Rackwitz, Vanessa
2012-05-30
For a decade X-ray sources have been commercially available for microfocus X-ray fluorescence analysis (μ-XRF) and offer the possibility of extending the analytics of a scanning electron microscope (SEM) with an attached energy dispersive X-ray spectrometer (EDS). Using μ-XRF it is possible to determine the content of chemical elements in a microscopic sample volume in a quantitative, reference-free and non-destructive way. Reference-free quantification with XRF relies on the Sherman equation, which relates the detected intensity of a fluorescence peak to the content of the element in the sample by means of fundamental parameters. The instrumental fundamental parameters of μ-XRF at a SEM/EDS system are the excitation spectrum, consisting of the X-ray tube spectrum and the transmission of the X-ray optics, the geometry, and the spectrometer efficiency. Based on a calibrated instrumentation, the objectives of this work are the development of procedures for the characterization of all instrumental fundamental parameters as well as the evaluation and reduction of their measurement uncertainties: The algorithms known from the literature for the calculation of the X-ray tube spectrum are evaluated with regard to their deviations in the spectral distribution. Within this work a novel semi-empirical model is improved with respect to its uncertainties, enhanced in the low-energy range, and extended to another three anodes. The emitted X-ray tube spectrum is calculated from the detected one, which is measured at a setup especially developed for the direct measurement of X-ray tube spectra, and is compared to the one calculated on the basis of the model of this work. A procedure for the determination of the most important parameters of an X-ray semi-lens in parallelizing mode is developed. The temporal stability of the transmission of X-ray full lenses, which have been in regular
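The fundamental-parameter idea can be sketched in its simplest, thin-sample limit (no matrix absorption or secondary excitation; all names and numbers below are illustrative assumptions, not values from this work): the detected count rate is a product of instrumental and atomic fundamental parameters, so the mass fraction follows by inversion.

```python
def fluorescence_rate(mass_frac, i0, solid_angle_frac, det_eff,
                      tau, fluor_yield, line_frac, areal_density):
    """Thin-sample fluorescence count rate for one element and line:
    excitation intensity i0 times detection solid-angle fraction,
    detector efficiency, photoionisation cross-section tau,
    fluorescence yield, line fraction, areal density, and the
    element's mass fraction."""
    return (i0 * solid_angle_frac * det_eff * mass_frac
            * tau * fluor_yield * line_frac * areal_density)

def mass_fraction(rate, i0, solid_angle_frac, det_eff,
                  tau, fluor_yield, line_frac, areal_density):
    """Invert the thin-sample relation: reference-free quantification."""
    return rate / (i0 * solid_angle_frac * det_eff
                   * tau * fluor_yield * line_frac * areal_density)
```

The round trip (rate from a known mass fraction, then inversion) recovers the input exactly, which is the basic consistency check of any reference-free scheme.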
Wasim, Mohammad [Pakistan Institute of Nuclear Science and Technology, Islamabad (Pakistan). Chemistry Div.; Ahmad, Sajjad [Quaid-i-Azam Univ., Islamabad (Pakistan). Dept. of Chemistry
2015-07-01
Nickel based alloys play an important role in the nuclear, mechanical and chemical industries. Two semi-absolute standardless methods, k₀-instrumental neutron activation analysis (k₀-INAA) and fundamental parameter X-ray fluorescence spectrometry (FP-XRF), were used for the characterization of certified nickel based alloys. The optimized experimental conditions provided results for 18 elements by NAA and 15 elements by XRF. Neither technique could quantify some important alloying elements; both, however, reported results for other elements as information values. The techniques were analyzed for their sensitivity and accuracy. Sensitivity was evaluated by the number of elements determined by each technique; accuracy was ascertained using linear regression analysis and the average root mean squared error.
DOE fundamentals handbook: Chemistry
This handbook was developed to assist nuclear facility operating contractors in providing operators, maintenance personnel, and the technical staff with the necessary fundamentals training to ensure a basic understanding of chemistry. This volume contains the following modules: reactor water chemistry (effects of radiation on water chemistry, chemistry parameters), principles of water treatment (purpose; treatment processes [ion exchange]; dissolved gases, suspended solids, and pH control; water purity), and hazards of chemicals and gases (corrosives [acids, alkalies], toxic compounds, compressed gases, flammable/combustible liquids)
Cardone, V F; Diaferio, A; Tortora, C; Molinaro, R
2010-01-01
Modified Newtonian Dynamics (MOND) has been shown to be able to fit spiral galaxy rotation curves as well as giving a theoretical foundation for empirically determined scaling relations, such as the Tully-Fisher law, without the need for a dark matter halo. As a complementary analysis, one should investigate whether MOND can also reproduce the dynamics of early-type galaxies (ETGs) without dark matter. As a first step, we here show that MOND can indeed fit the observed central velocity dispersion $\sigma_0$ of a large sample of ETGs assuming a simple MOND interpolating function and constant anisotropy. We also show that, under some assumptions on the luminosity dependence of the Sersic n parameter and the stellar M/L ratio, MOND predicts a fundamental plane for ETGs: a log-linear relation among the effective radius $R_{eff}$, $\sigma_0$ and the mean effective intensity $\langle I_e \rangle$. However, we predict a tilt between the observed and the MOND fundamental planes.
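The MOND kinematics behind such fits can be sketched briefly (the "simple" interpolating function μ(x) = x/(1+x) is the one named in the abstract; the numbers in the example are illustrative): the true acceleration g solves g·μ(g/a0) = g_N, which for the simple function reduces to a quadratic with a closed form.

```python
import math

def mond_acceleration(g_newton, a0=1.2e-10):
    """Solve g * mu(g/a0) = g_newton for the simple interpolating
    function mu(x) = x / (1 + x).  Substituting mu gives
    g**2 - g_newton*g - g_newton*a0 = 0, whose positive root is below.
    Accelerations in m/s^2; a0 is the MOND acceleration scale."""
    return 0.5 * (g_newton + math.sqrt(g_newton ** 2 + 4.0 * g_newton * a0))
```

In the Newtonian regime (g_N >> a0) this returns g ≈ g_N, while in the deep-MOND regime it returns g ≈ sqrt(g_N·a0), the scaling that underlies Tully-Fisher-like relations.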
Digital Fourier analysis fundamentals
Kido, Ken'iti
2015-01-01
This textbook is a thorough, accessible introduction to digital Fourier analysis for undergraduate students in the sciences. Beginning with the principles of sine/cosine decomposition, the reader walks through the principles of discrete Fourier analysis before reaching the cornerstone of signal processing: the Fast Fourier Transform. Saturated with clear, coherent illustrations, "Digital Fourier Analysis - Fundamentals" includes practice problems and thorough Appendices for the advanced reader. As a special feature, the book includes interactive applets (available online) that mirror the illustrations. These user-friendly applets animate concepts interactively, allowing the user to experiment with the underlying mathematics. For example, a real sine signal can be treated as a sum of clockwise and counter-clockwise rotating vectors. The applet illustration included with the book animates the rotating vectors and the resulting sine signal. By changing parameters such as amplitude and frequency, the reader ca...
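The rotating-vector picture described above can be written out directly (a minimal illustration in the spirit of the book's applets, not code taken from them): a real sine of amplitude A is the sum of a counter-clockwise and a clockwise rotating complex vector of length A/2.

```python
import cmath
import math

def sine_from_phasors(amplitude, freq_hz, t):
    """Real sine as the sum of two counter-rotating phasors:
    A*sin(wt) = (A/2j)*exp(+jwt) - (A/2j)*exp(-jwt)."""
    w = 2.0 * math.pi * freq_hz
    ccw = (amplitude / 2j) * cmath.exp(1j * w * t)   # counter-clockwise vector
    cw = -(amplitude / 2j) * cmath.exp(-1j * w * t)  # clockwise vector
    return ccw + cw
```

The imaginary parts of the two vectors cancel at every instant, leaving the purely real sine signal.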
Fundamentals of Structural Engineering
Connor, Jerome J
2013-01-01
Fundamentals of Structural Engineering provides a balanced, seamless treatment of both classic, analytic methods and contemporary, computer-based techniques for conceptualizing and designing a structure. The book’s principal goal is to foster an intuitive understanding of structural behavior based on problem solving experience for students of civil engineering and architecture who have been exposed to the basic concepts of engineering mechanics and mechanics of materials. Distinct from many other undergraduate textbooks, this text recognizes that engineers reason about behavior using simple models and intuition they acquire through problem solving. The approach adopted in this text develops this type of intuition by presenting extensive, realistic problems and case studies together with computer simulation, which allows rapid exploration of how a structure responds to changes in geometry and physical parameters. This book also: Emphasizes problem-based understanding of...
Hareter, M.; Weiss, W. [Department of Astronomy, University of Vienna, Tuerkenschanzstrasse 17, A-1180 Wien (Austria); Fossati, L. [Department of Physics and Astronomy, Open University, Walton Hall, Milton Keynes MK6 7AA (United Kingdom); Suarez, J. C. [Instituto de Astrofisica de Andalucia (CSIC), Glorieta de la Astronomia S/N, 18008, Granada (Spain); Uytterhoeven, K. [Kiepenheuer-Institut fuer Sonnenphysik, Schoeneckstr. 6, D-79104 Freiburg (Germany); Rainer, M.; Poretti, E., E-mail: hareter@astro.univie.ac.at [INAF-Osservatorio Astronomico di Brera, Via E. Bianchi 46, 23807, Merate (Saint Lucia) (Italy)
2011-12-20
δ Sct-γ Dor hybrids pulsate simultaneously in p- and g-modes, which carry information on the structure of the envelope as well as of the core. Hence, they are key objects for investigating A and F type stars with asteroseismic techniques. An important requirement for seismic modeling is small errors in temperature, gravity, and chemical composition. Furthermore, we want to investigate the existence of an abundance indicator typical for hybrids, something that is well established for the roAp stars. Prior to the present investigation, the abundance patterns of only one hybrid and another hybrid candidate had been published. We obtained high-resolution spectra of HD 114839 and BD +18 4914 using the SOPHIE spectrograph of the Observatoire de Haute-Provence and the HARPS spectrograph at ESO La Silla. For each star we determined fundamental parameters and photospheric abundances of 16 chemical elements by comparing synthetic spectra with the observations. We compare our results to those of seven δ Sct and nine γ Dor stars. For the evolved BD +18 4914 we found an abundance pattern typical of an Am star, but could not confirm this peculiarity for the less evolved star HD 114839, which is classified in the literature as an uncertain Am star. Our result supports the concept of evolved Am stars being unstable. With our investigation we nearly doubled the number of spectroscopically analyzed δ Sct-γ Dor hybrid stars, but did not yet succeed in identifying a spectroscopic signature for this group of pulsating stars. A statistically significant spectroscopic investigation of δ Sct-γ Dor hybrid stars is still missing, but would be rewarding considering the asteroseismological potential of this group.
Nuclear and fundamental physics instrumentation for the ANS project
Robinson, S.J. [Tennessee Technological Univ., Cookeville, TN (United States). Dept. of Physics; Raman, S.; Arterburn, J.; McManamy, T.; Peretz, F.J. [Oak Ridge National Lab., TN (United States); Faust, H. [Institut Laue-Langevin, 38 - Grenoble (France); Piotrowski, A.E. [Soltan Inst. for Nuclear Studies, Otwock-Swierk (Poland)
1996-05-01
This report summarizes work carried out during the period 1991-1995 in connection with the refinement of the concepts and detailed designs for nuclear and fundamental physics research instrumentation at the proposed Advanced Neutron Source at Oak Ridge National Laboratory. Initially, emphasis was placed on refining the existing System Design Document (SDD-43) to detail more accurately the needs and interfaces of the instruments that are identified in the document. The conceptual designs of these instruments were also refined to reflect current thinking in the field of nuclear and fundamental physics. In particular, the on-line isotope separator (ISOL) facility design was reconsidered in light of the developing interest in radioactive ion beams within the nuclear physics community. The second stage of this work was to define those instrument parameters that would interface directly with the reactor systems so that these parameters could be considered for the ISOL facility and particularly for its associated ion source. Since two of these options involved ion sources internal to the long slant beam tube, these were studied in detail. In addition, preliminary work was done to identify the needs for the target holder and changing facility to be located in the tangential through-tube. Because many of the planned nuclear and fundamental physics instruments have similar needs in terms of detection apparatus, some progress was also made in defining the parameters for these detectors. 21 refs., 32 figs., 2 tabs.
Exchange Rates and Fundamentals.
Engel, Charles; West, Kenneth D.
2005-01-01
We show analytically that in a rational expectations present-value model, an asset price manifests near-random walk behavior if fundamentals are I (1) and the factor for discounting future fundamentals is near one. We argue that this result helps explain the well-known puzzle that fundamental variables such as relative money supplies, outputs,…
Derivation of a fundamental object associated with internal symmetry is discussed. The form of the fundamental lagrangian is supposed to be known. The fundamental object is similar to an energy-momentum tensor having external space-time symmetry as a source
Deridder, Sander; Desmet, Gert
2012-02-01
Using computational fluid dynamics (CFD), the effective B-term diffusion constant γ(eff) has been calculated for four different random sphere packings with different particle size distributions and packing geometries. Both fully porous and porous-shell sphere packings are considered. The obtained γ(eff)-values have subsequently been used to determine the value of the three-point geometrical constant (ζ₂) appearing in the 2nd-order accurate effective medium theory expression for γ(eff). It was found that, whereas the 1st-order accurate effective medium theory expression is accurate to within 5% over most part of the retention factor range, the 2nd-order accurate expression is accurate to within 1% when calculated with the best-fit ζ₂-value. Depending on the exact microscopic geometry, the best-fit ζ₂-values typically lie in the range of 0.20-0.30, holding over the entire range of intra-particle diffusion coefficients typically encountered for small molecules (0.1 ≤ D(pz)/D(m) ≤ 0.5). These values are in agreement with the ζ₂-value proposed by Thovert et al. for the random packing they considered. PMID:22236565
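The paper's 2nd-order effective-medium expression and its three-point constant ζ₂ are not reproduced here, but the classic 1st-order (Maxwell-type) effective-medium estimate it refines can be sketched as follows, with α = D(pz)/D(m) the relative intra-particle diffusivity and φ the particle volume fraction (an illustrative sketch under those naming assumptions, not the paper's exact expression):

```python
def effective_diffusivity_1st_order(alpha, phi):
    """Maxwell-Garnett-type 1st-order effective-medium estimate of
    D_eff/D_m for spheres of relative diffusivity alpha dispersed at
    volume fraction phi in a continuous mobile phase."""
    beta = (alpha - 1.0) / (alpha + 2.0)  # polarizability-like contrast factor
    return (1.0 + 2.0 * beta * phi) / (1.0 - beta * phi)
```

Sanity checks: with α = 1 the packing is diffusively invisible (D_eff = D_m), and impermeable spheres (α = 0) hinder diffusion, giving D_eff/D_m < 1.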
Atmospheric parameters of 82 red giants in the Kepler field
Overaa Thygesen, Anders; Frandsen, Søren; Bruntt, Hans;
2012-01-01
Context. Accurate fundamental parameters of stars are essential for the asteroseismic analysis of data from the NASA Kepler mission. Aims. We aim at determining accurate atmospheric parameters and the abundance pattern for a sample of 82 red giants that are targets for the Kepler mission. Methods...... parameters and element abundances of 82 red giants. The large discrepancies between the spectroscopic log g and [Fe/H] and values in the Kepler Input Catalogue emphasize the need for further detailed spectroscopic follow-up of the Kepler targets in order to produce reliable results from the asteroseismic...
Clustering Assisted Fundamental Matrix Estimation
Hao Wu
2015-03-01
Full Text Available In computer vision, the estimation of the fundamental matrix is a basic problem that has been extensively studied. The accuracy of the estimation imposes a significant influence on subsequent tasks such as camera trajectory determination and 3D reconstruction. In this paper we propose a new method for fundamental matrix estimation that makes use of clustering a group of 4D vectors. The key insight is the observation that among the 4D vectors constructed from matching pairs of points obtained from the SIFT algorithm, well-defined cluster points tend to be reliable inliers suitable for fundamental matrix estimation. Based on this, we utilize a recently proposed efficient clustering method through density peaks seeking and propose a new clustering assisted method. Experimental results show that the proposed algorithm is faster and more accurate than currently commonly used methods.
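The linear system underlying the estimation step (the clustering stage is not reproduced here) follows from the epipolar constraint x2ᵀ F x1 = 0: each matching pair contributes one row of a design matrix A with A·vec(F) = 0, and F is then recovered as the least-squares null vector of A (e.g. the right singular vector with the smallest singular value). A minimal sketch of the row construction, with hypothetical names:

```python
def epipolar_row(x1, y1, x2, y2):
    """Row of the eight-point design matrix A for the correspondence
    (x1, y1) <-> (x2, y2).  The row dotted with the row-major
    flattening of F equals x2_h^T @ F @ x1_h for the homogeneous
    points x1_h = (x1, y1, 1) and x2_h = (x2, y2, 1)."""
    return [x2 * x1, x2 * y1, x2,
            y2 * x1, y2 * y1, y2,
            x1, y1, 1.0]
```

With eight or more such rows the homogeneous system determines F up to scale, which is why the construction is known as the eight-point algorithm.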
Islamic fundamentalism in Indonesia
Nagy, Sandra L.
1996-01-01
This is a study of Islamic fundamentalism in Indonesia. Islamic fundamentalism is defined as the return to the foundations and principles of Islam including all movements based on the desire to create a more Islamic society. After describing the practices and beliefs of Islam, this thesis examines the three aspects of universal Islamic fundamentalism: revivalism, resurgence, and radicalism. It analyzes the role of Islam in Indonesia under Dutch colonial rule, an alien Christian imperialist po...
Babu, V
2014-01-01
Fundamentals of Gas Dynamics, Second Edition is a comprehensively updated new edition and now includes a chapter on the gas dynamics of steam. It covers the fundamental concepts and governing equations of different flows, and includes end-of-chapter exercises based on practical applications. A number of useful tables on the thermodynamic properties of steam are also included. Fundamentals of Gas Dynamics, Second Edition begins with an introduction to compressible and incompressible flows before covering the fundamentals of one-dimensional flows and normal shock wav
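A representative one-dimensional relation of the kind such a text derives (a standard normal-shock example for a calorically perfect gas, not copied from the book) gives the downstream Mach number and static-pressure ratio across a normal shock:

```python
import math

def normal_shock(m1, gamma=1.4):
    """Downstream Mach number M2 and static pressure ratio p2/p1
    across a normal shock, for upstream Mach number m1 >= 1 in a
    calorically perfect gas with ratio of specific heats gamma."""
    m2_sq = (1.0 + 0.5 * (gamma - 1.0) * m1 ** 2) \
            / (gamma * m1 ** 2 - 0.5 * (gamma - 1.0))
    p_ratio = 1.0 + 2.0 * gamma / (gamma + 1.0) * (m1 ** 2 - 1.0)
    return math.sqrt(m2_sq), p_ratio
```

For example, M1 = 2 in air (γ = 1.4) gives the textbook values M2 ≈ 0.577 and p2/p1 = 4.5, while M1 = 1 degenerates to no jump at all.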
Fundamental physics experiments of merit can be conducted at the proposed intense neutron sources. Areas of interest include: neutron particle properties, neutron wave properties, and fundamental physics utilizing reactor produced γ-rays. Such experiments require intense, full-time utilization of a beam station for periods ranging from several months to a year or more
Otacílio Gomes da Silva Neto
2014-04-01
Full Text Available History involves diverse types of fundamentalism. This article highlights a variety of ethical fundamentalist thoughts that marked humanity and were challenged by thinkers and intellectuals. Fundamentalism originates in the interpretation of doctrines isolated from their historical context and without room for criticism. As understood in the entry in Voltaire's Dictionnaire philosophique (1752), fundamentalism is closely related to fanaticism. The practice of interpreting any one doctrine as containing a single fundamental truth can result in a type of blindness that impedes the ability to observe reality with a critical spirit. Certain modern thinkers generally associate fundamentalism with religion and hold it responsible for great human tragedy. However, fundamentalism unrelated to religion has also spread and likewise caused insurmountable damage to human life. Fundamentalism is defined in the following terms: philosophical, scientific, totalitarian and economic. As much as one tries to identify and denounce fundamentalism, it seems that it continues to appear in the course of human relations. Whenever critics stand against certain fanaticisms, others will arise to be denounced. This discussion might be considered trivial if the current state of affairs did not threaten human life, and if predictions were favorable for the life of our species on this planet.
How fundamental is the fundamental assumption?
Kurbis, Nils
2012-01-01
The fundamental assumption of Dummett’s and Prawitz’ proof-theoretic justification of deduction is that ‘if we have a valid argument for a complex statement, we can construct a valid argument for it which finishes with an application of one of the introduction rules governing its principal operator’. I argue that the assumption is flawed in this general version, but should be restricted, not to apply to arguments in general, but only to proofs. I also argue that Dummett’s and Prawitz’ project...
Schubert, Thomas F
2015-01-01
This book, Electronic Devices and Circuit Application, is the first of four books of a larger work, Fundamentals of Electronics. It is comprised of four chapters describing the basic operation of each of the four fundamental building blocks of modern electronics: operational amplifiers, semiconductor diodes, bipolar junction transistors, and field effect transistors. Attention is focused on the reader obtaining a clear understanding of each of the devices when it is operated in equilibrium. Ideas fundamental to the study of electronic circuits are also developed in the book at a basic level to
Fundamental Aspects of Biosensors
K.Sowjanya
2016-06-01
Full Text Available A biosensor is an analytical device which converts a biological response into an electrical signal. The term 'biosensor' is often used to cover sensor devices used in order to determine the concentration of substances and other parameters of biological interest even where they do not utilize a biological system directly. This very broad definition is used by some scientific journals (e.g. Biosensors, Elsevier Applied Science) but will not be applied to the coverage here. The emphasis of this chapter concerns enzymes as the biologically responsive material, but it should be recognized that other biological systems may be utilized by biosensors, for example, whole cell metabolism, ligand binding and the antibody-antigen reaction. Biosensors represent a rapidly expanding field, at the present time, with an estimated 60% annual growth rate; the major impetus comes from the health-care industry (e.g. 6% of the western world are diabetic and would benefit from the availability of a rapid, accurate and simple biosensor for glucose), but with some pressure from other areas, such as food quality appraisal and environmental monitoring. The estimated world analytical market is about 12,000,000,000 per year, of which 30% is in the health-care area. There is clearly a vast market expansion potential, as less than 0.1% of this market currently uses biosensors. Research and development in this field is wide and multidisciplinary, spanning biochemistry, bioreactor science, physical chemistry, electrochemistry, electronics and software engineering. Most of the current endeavour concerns potentiometric and amperometric biosensors and colorimetric paper enzyme strips. However, all the main transducer types are likely to be thoroughly examined for use in biosensors over the next few years.
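For the enzyme electrodes emphasized above, the steady-state amperometric response typically follows Michaelis-Menten kinetics (a generic sketch; the parameter values in the example are illustrative assumptions, not from any particular sensor):

```python
def sensor_current(substrate, i_max, k_m):
    """Steady-state current of an idealised amperometric enzyme
    electrode: a saturable Michaelis-Menten response that is
    near-linear in substrate concentration only while substrate << k_m."""
    return i_max * substrate / (k_m + substrate)
```

The current reaches half its maximum when the substrate concentration equals k_m, and saturates at i_max, which is why such sensors are calibrated within their low-concentration, near-linear range.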
Accurate Finite Difference Algorithms
Goodrich, John W.
1996-01-01
Two families of finite difference algorithms for computational aeroacoustics are presented and compared. All of the algorithms are single step explicit methods, they have the same order of accuracy in both space and time, with examples up to eleventh order, and they have multidimensional extensions. One of the algorithm families has spectral like high resolution. Propagation with high order and high resolution algorithms can produce accurate results after O(10^6) periods of propagation with eight grid points per wavelength.
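A minimal illustration of a high-order explicit stencil of the kind compared above (a standard fourth-order central difference, not one of the paper's actual algorithms):

```python
import math

def d1_central4(f, x, h):
    """Fourth-order-accurate central difference approximation of f'(x):
    the truncation error is O(h**4), versus O(h**2) for the usual
    three-point central stencil."""
    return (-f(x + 2.0 * h) + 8.0 * f(x + h)
            - 8.0 * f(x - h) + f(x - 2.0 * h)) / (12.0 * h)
```

Halving the step size should shrink the error by roughly a factor of 2⁴, which is the easiest way to confirm the order of accuracy numerically.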
Fundamentals of electrochemical science
Oldham, Keith
1993-01-01
Key Features* Deals comprehensively with the basic science of electrochemistry* Treats electrochemistry as a discipline in its own right and not as a branch of physical or analytical chemistry* Provides a thorough and quantitative description of electrochemical fundamentals
Fundamentals of crystallography
2011-01-01
Crystallography is a basic tool for scientists in many diverse disciplines. This text offers a clear description of fundamentals and of modern applications. It supports curricula in crystallography at undergraduate level.
Bonora, L.; Maccaferri, C.; Santos, R. J. Scherer; Tolla, D. D.
2005-01-01
In this letter we show that vacuum string field theory contains exact solutions that can be interpreted as macroscopic fundamental strings. They are formed by a condensate of infinitely many completely space-localized solutions (D0-branes).
Information security fundamentals
Peltier, Thomas R
2013-01-01
Developing an information security program that adheres to the principle of security as a business enabler must be the first step in an enterprise's effort to build an effective security program. Following in the footsteps of its bestselling predecessor, Information Security Fundamentals, Second Edition provides information security professionals with a clear understanding of the fundamentals of security required to address the range of issues they will experience in the field.The book examines the elements of computer security, employee roles and r
Fundamental Equation of Economics
Wayne, James J.
2013-01-01
Recent experience of the great recession of 2008 has renewed one of the oldest debates in economics: whether economics could ever become a scientific discipline like physics. This paper proves that economics is truly a branch of physics by establishing for the first time a fundamental equation of economics (FEOE), which is similar to many fundamental equations governing other subfields of physics, for example, Maxwell’s Equations for electromagnetism. From recently established physics laws of...
Cassandra Christina Rausch
2015-01-01
Citizens worldwide are becoming all too familiar with the accelerated frequency of terrorist attacks in the 21st century, particularly with those involving a religious underpinning. Why, though, have religiously-affiliated acts of terrorism become such a common occurrence? By examining how religious fundamentalism has accelerated and intensified terrorism within the modern world, scholars can focus on determining the “why”. By historically defining terrorism and fundamentalism and then placin...
Fundamentals of structural dynamics
Craig, Roy R
2006-01-01
From theory and fundamentals to the latest advances in computational and experimental modal analysis, this is the definitive, updated reference on structural dynamics.This edition updates Professor Craig's classic introduction to structural dynamics, which has been an invaluable resource for practicing engineers and a textbook for undergraduate and graduate courses in vibrations and/or structural dynamics. Along with comprehensive coverage of structural dynamics fundamentals, finite-element-based computational methods, and dynamic testing methods, this Second Edition includes new and e
Massimo Pigliucci
2006-06-01
Full Text Available The many facets of fundamentalism. There has been much talk about fundamentalism of late. While most people's thoughts on the topic go to the 9/11 attacks against the United States, or to the ongoing war in Iraq, fundamentalism is affecting science and its relationship to society in a way that may have dire long-term consequences. Of course, religious fundamentalism has always had a history of antagonism with science, and – before the birth of modern science – with philosophy, the age-old vehicle of the human attempt to exercise critical thinking and rationality to solve problems and pursue knowledge. “Fundamentalism” is defined by the Oxford Dictionary of the Social Sciences as “A movement that asserts the primacy of religious values in social and political life and calls for a return to a 'fundamental' or pure form of religion.” In its broadest sense, however, fundamentalism is a form of ideological intransigence which is not limited to religion, but includes political positions as well (for example, in the case of some extreme forms of “environmentalism”).
Moussa, D.; Damache, S.; Ouichaoui, S.
2015-01-01
The stopping powers of thin Al foils for H+ and 4He+ ions have been measured over the energy range E = (206.03–2680.05) keV/amu with an overall relative uncertainty better than 1% using the transmission method. The derived S(E) experimental data are compared to previous ones from the literature, to values derived by the SRIM-2008 code or compiled in the ICRU-49 report, and to the predictions of the Sigmund–Schinner binary collision stopping theory. In addition, the S(E) data for H+ ions together with those for He2+ ions reported by Andersen et al. (1977) have been analyzed over the energy interval E > 1.0 MeV using the modified Bethe–Bloch stopping theory. The following sets of values have been inferred for the mean excitation potential, I, and the Barkas–Andersen parameter, b, for H+ and He+ projectiles, respectively: {I = 164 ± 3 eV, b = 1.40} and {I = 163 ± 2.5 eV, b = 1.38}. As expected, the I parameter is found to be independent of the projectile electronic structure, presumably indicating that the contribution of charge exchange effects becomes negligible as the projectile velocity increases. Therefore, the I parameter must be determined from precise stopping power measurements performed at high projectile energies where the Bethe stopping theory is fully valid.
Moussa, D., E-mail: djamelmoussa@gmail.com [Université des Sciences et Technologie H. Boumediene (USTHB), Laboratoire SNIRM, Faculté de Physique, B.P. 32, 16111 Bab-Ezzouar, Algiers (Algeria); Damache, S. [Division de Physique, CRNA, 02 Bd. Frantz Fanon, B.P. 399 Alger-gare, Algiers (Algeria); Ouichaoui, S., E-mail: souichaoui@gmail.com [Université des Sciences et Technologie H. Boumediene (USTHB), Laboratoire SNIRM, Faculté de Physique, B.P. 32, 16111 Bab-Ezzouar, Algiers (Algeria)
2015-01-15
The stopping powers of thin Al foils for H+ and 4He+ ions have been measured over the energy range E = (206.03–2680.05) keV/amu with an overall relative uncertainty better than 1% using the transmission method. The derived S(E) experimental data are compared to previous ones from the literature, to values derived by the SRIM-2008 code or compiled in the ICRU-49 report, and to the predictions of the Sigmund–Schinner binary collision stopping theory. In addition, the S(E) data for H+ ions together with those for He2+ ions reported by Andersen et al. (1977) have been analyzed over the energy interval E > 1.0 MeV using the modified Bethe–Bloch stopping theory. The following sets of values have been inferred for the mean excitation potential, I, and the Barkas–Andersen parameter, b, for H+ and He+ projectiles, respectively: {I = 164 ± 3 eV, b = 1.40} and {I = 163 ± 2.5 eV, b = 1.38}. As expected, the I parameter is found to be independent of the projectile electronic structure, presumably indicating that the contribution of charge exchange effects becomes negligible as the projectile velocity increases. Therefore, the I parameter must be determined from precise stopping power measurements performed at high projectile energies where the Bethe stopping theory is fully valid.
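The role of the mean excitation potential I can be seen from the schematic, non-relativistic form of the Bethe stopping formula below; this is only a sketch of the leading term, as the modified Bethe–Bloch analysis used in the paper also includes relativistic, shell, and Barkas–Andersen correction terms not shown here:

```latex
% Leading (non-relativistic) term of the Bethe stopping formula
-\frac{\mathrm{d}E}{\mathrm{d}x}
\;\approx\;
\frac{4\pi n Z_1^{2} e^{4}}{m_e v^{2}}
\,\ln\!\frac{2 m_e v^{2}}{I},
```

where Z_1 and v are the projectile charge and velocity, n is the electron density of the target, and m_e is the electron mass. The logarithmic dependence on I explains why I is best constrained by precise measurements at high projectile energies, where this term dominates.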
Fast, accurate standardless XRF analysis with IQ+
Full text: Due to both chemical and physical effects, the most accurate XRF data are derived from calibrations set up using in-type standards, necessitating some prior knowledge of the samples being analysed. Whilst this is often the case for routine samples, particularly in production control, for completely unknown samples the identification and availability of in-type standards can be problematic. Under these circumstances standardless analysis can offer a viable solution. Successful analysis of completely unknown samples requires a complete chemical overview of the specimen together with the flexibility of a fundamental parameters (FP) algorithm to handle wide-ranging compositions. Although FP algorithms are improving all the time, most still require set-up samples to define the spectrometer response to a particular element. Whilst such materials may be referred to as standards, the emphasis in this kind of analysis is that only a single calibration point is required per element and that the standard chosen does not have to be in-type. The high sensitivities of modern XRF spectrometers, together with recent developments in detector counting electronics that possess a large dynamic range and high-speed data processing capacity, bring significant advances to fast, standardless analysis. Illustrated with a tantalite-columbite heavy-mineral concentrate grading use-case, this paper will present the philosophy behind the semi-quantitative IQ+ software and the required hardware. This combination can give a rapid scan-based overview and quantification of the sample in less than two minutes, together with the ability to define channels for specific elements of interest where higher accuracy and lower levels of quantification are required. The accuracy, precision and limitations of standardless analysis will be assessed using certified reference materials of widely differing chemical and physical composition. Copyright (2002) Australian X-ray Analytical Association Inc.
Fundamentals in Java Programming
2006-01-01
This interactive tutorial reviews the following in Java programming: building blocks, and the appropriate and accurate definition of components. The interactions in this tutorial include mouse-over information and a matching exercise.
OA2200 Computation Methods for Operational Research
Dick, Erik
2015-01-01
This book explores the working principles of all kinds of turbomachines. The same theoretical framework is used to analyse the different machine types. Fundamentals are first presented and theoretical concepts are then elaborated for particular machine types, starting with the simplest ones. For each machine type, the author strikes a balance between building basic understanding and exploring knowledge of practical aspects. Readers are invited through challenging exercises to consider how the theory applies to particular cases and how it can be generalised. The book is primarily meant as a course book. It teaches fundamentals and explores applications. It will appeal to senior undergraduate and graduate students in mechanical engineering and to professional engineers seeking to understand the operation of turbomachines. Readers will gain a fundamental understanding of turbomachines. They will also be able to make a reasoned choice of turbomachine for a particular application and to understand its operation...
Pragmatic electrical engineering fundamentals
Eccles, William
2011-01-01
Pragmatic Electrical Engineering: Fundamentals introduces the fundamentals of the energy-delivery part of electrical systems. It begins with a study of basic electrical circuits and then focuses on electrical power. Three-phase power systems, transformers, induction motors, and magnetics are the major topics.All of the material in the text is illustrated with completely-worked examples to guide the student to a better understanding of the topics. This short lecture book will be of use at any level of engineering, not just electrical. Its goal is to provide the practicing engineer with a practi
Homeschooling and religious fundamentalism
Robert KUNZMAN
2010-10-01
Full Text Available This article considers the relationship between homeschooling and religious fundamentalism by focusing on their intersection in the philosophies and practices of conservative Christian homeschoolers in the United States. Homeschooling provides an ideal educational setting to support several core fundamentalist principles: resistance to contemporary culture; suspicion of institutional authority and professional expertise; parental control and centrality of the family; and interweaving of faith and academics. It is important to recognize, however, that fundamentalism exists on a continuum; conservative religious homeschoolers resist liberal democratic values to varying degrees, and efforts to foster dialogue and accommodation with religious homeschoolers can ultimately help strengthen the broader civic fabric.
Smith, Peter
2013-01-01
Written for the piping engineer and designer in the field, this two-part series helps to fill a void in piping literature, since the Rip Weaver books of the '90s were taken out of print at the advent of the Computer Aided Design (CAD) era. Technology may have changed, however the fundamentals of piping rules still apply in the digital representation of process piping systems. The Fundamentals of Piping Design is an introduction to the design of piping systems, various processes and the layout of pipe work connecting the major items of equipment for the new hire, the engineering student and the veteran
Antennas fundamentals, design, measurement
Long, Maurice
2009-01-01
This comprehensive revision (3rd Edition) is a senior undergraduate or first-year graduate level textbook on antenna fundamentals, design, performance analysis, and measurements. In addition to its use as a formal course textbook, the book's pragmatic style and emphasis on the fundamentals make it especially useful to engineering professionals who need to grasp the essence of the subject quickly but without being mired in unnecessary detail. This new edition was prepared for a first year graduate course at Southern Polytechnic State University in Georgia. It provides broad coverage of antenna
Aggarwal, Vibha
2009-01-01
Fundamentals of Physics is a general introduction designed to present a comprehensive, logical and unified treatment of the fundamentals of physics based on different theories, with applications to a variety of important phenomena. Its clarity and completeness make the text suitable for self-learning and for self-paced courses. Throughout the text the emphasis is on clarity rather than formality; the various derivations are explained in detail and, wherever possible, the physical interpretations are emphasised. The mathematical treatment is set out in great detail, carrying out the steps whic
Infosec management fundamentals
Dalziel, Henry
2015-01-01
Infosec Management Fundamentals is a concise overview of the Information Security management concepts and techniques, providing a foundational template for both experienced professionals and those new to the industry. This brief volume will also appeal to business executives and managers outside of infosec who want to understand the fundamental concepts of Information Security and how it impacts their business decisions and daily activities. Teaches ISO/IEC 27000 best practices on information security management Discusses risks and controls within the context of an overall information securi
Fundamentals of continuum mechanics
Rudnicki, John W
2014-01-01
A concise introductory course text on continuum mechanics. Fundamentals of Continuum Mechanics focuses on the fundamentals of the subject and provides the background for formulation of numerical methods for large deformations and a wide range of material behaviours. It aims to provide the foundations for further study, not just of these subjects, but also of the formulations for much more complex material behaviour and their computational implementation. This book is divided into 5 parts, covering mathematical preliminaries, stress, motion and deformation, and balance of mass, momentum and energy
Fundamentals of reactor chemistry
In the Nuclear Engineering School of JAERI, many courses are offered for people working in and around nuclear reactors. The curricula of these courses also contain chemistry subject material. With reference to foreign curricula, a plan for the chemistry subject material in the Nuclear Engineering School of JAERI was considered, and the fundamental part of reactor chemistry is reviewed in this report. Since the students of the Nuclear Engineering School are not chemists, the knowledge needed in and around nuclear reactors is emphasized in order to familiarize the students with reactor chemistry. Teaching experience with the fundamentals of reactor chemistry is also described. (author)
Accurate Modeling of Advanced Reflectarrays
Zhou, Min
… of the incident field, the choice of basis functions, and the technique to calculate the far-field. Based on accurate reference measurements of two offset reflectarrays carried out at the DTU-ESA Spherical Near-Field Antenna Test Facility, it was concluded that the three latter factors are particularly important … to the conventional phase-only optimization technique (POT), the geometrical parameters of the array elements are directly optimized to fulfill the far-field requirements, thus maintaining a direct relation between optimization goals and optimization variables. As a result, better designs can be obtained compared … using the GDOT to demonstrate its capabilities. To verify the accuracy of the GDOT, two offset contoured-beam reflectarrays that radiate a high-gain beam on a European coverage have been designed and manufactured, and subsequently measured at the DTU-ESA Spherical Near-Field Antenna Test Facility …
Addressing the Crisis in Fundamental Physics
Stubbs, Christopher W
2007-01-01
I present the case for fundamental physics experiments in space playing an important role in addressing the current "dark energy" crisis. If cosmological observations continue to favor a value of the dark energy equation of state parameter w = -1, with no change over cosmic time, then we will have difficulty understanding this new fundamental physics. We will then face a very real risk of stagnation unless we detect some other experimental anomaly. The advantages of space-based experiments could prove invaluable in the search for a more complete understanding of dark energy. This talk was delivered at the start of the Fundamental Physics Research in Space Workshop in May 2006.
姜俊泽; 张伟明
2012-01-01
The paper analyzes the flow mechanism, shape characteristics and internal velocity distribution of gas-liquid two-phase slug flow in a horizontal pipe, and develops a physical model for the gas-liquid slug unit in the horizontal pipe. In a steady slug unit, the lift velocity is always equal to the shedding velocity, so the slug can keep an established shape. The model divides a complete slug unit into two parts: a liquid-slug zone and a film/bubble zone. For the liquid-slug zone, mass and momentum conservation equations are built up to compute the local liquid holdup and pressure drop. For the film zone, the model takes the film height variation into consideration, assuming the height varies only along the flow direction and not in the radial direction. The paper deduces an expression for the film height variation by building local control equations, and then expresses the liquid holdup and wetted perimeter as functions of the film height. Comparison with test data and the results of other models shows that this model gives more accurate predictions of liquid holdup and pressure drop.
A laboratory scale fundamental time?
Mendes, R.V. [Instituto para a Investigacao Interdisciplinar, CMAF, Lisboa (Portugal); Instituto Superior Tecnico, IPFN - EURATOM/IST Association, Lisboa (Portugal)
2012-11-15
The existence of a fundamental time (or fundamental length) has been conjectured in many contexts. However, the "stability of physical theories principle" seems to be the one that provides, through the tools of algebraic deformation theory, an unambiguous derivation of the stable structures that Nature might have chosen for its algebraic framework. It is well known that c and ℏ are the deformation parameters that stabilize the Galilean and the Poisson algebra. When the stability principle is applied to the Poincare-Heisenberg algebra, two deformation parameters emerge which define two time (or length) scales. In addition there are, for each of them, a plus or minus sign possibility in the relevant commutators. One of the deformation length scales, related to non-commutativity of momenta, is probably related to the Planck length scale but the other might be much larger and already detectable in laboratory experiments. In this paper, this is used as a working hypothesis to look for physical effects that might settle this question. Phase-space modifications, resonances, interference, electron spin resonance and non-commutative QED are considered. (orig.)
Homeschooling and Religious Fundamentalism
Kunzman, Robert
2010-01-01
This article considers the relationship between homeschooling and religious fundamentalism by focusing on their intersection in the philosophies and practices of conservative Christian homeschoolers in the United States. Homeschooling provides an ideal educational setting to support several core fundamentalist principles: resistance to…
Homeschooling and religious fundamentalism
KUNZMAN, Robert
2010-01-01
This article considers the relationship between homeschooling and religious fundamentalism by focusing on their intersection in the philosophies and practices of conservative Christian homeschoolers in the United States. Homeschooling provides an ideal educational setting to support several core fundamentalist principles: resistance to contemporary culture; suspicion of institutional authority and professional expertise; parental control and centrality of the family; and interweaving of faith...
Fundamentals of convolutional coding
Johannesson, Rolf
2015-01-01
Fundamentals of Convolutional Coding, Second Edition, regarded as a bible of convolutional coding, brings you a clear and comprehensive discussion of the basic principles of this field * Two new chapters on low-density parity-check (LDPC) convolutional codes and iterative coding * Viterbi, BCJR, BEAST, list, and sequential decoding of convolutional codes * Distance properties of convolutional codes * Includes a downloadable solutions manual
Grenoble Fundamental Research Department
A summary of the various activities of the Fundamental Research Institute, Grenoble, France is given. The following fields are covered: nuclear physics, solid state physics, physical chemistry, biology and advanced techniques. For more detailed descriptions, readers are referred to the scientific literature
Unification of Fundamental Forces
Salam, Abdus; Taylor, Foreword by John C.
2005-10-01
Foreword John C. Taylor; 1. Unification of fundamental forces Abdus Salam; 2. History unfolding: an introduction to the two 1968 lectures by W. Heisenberg and P. A. M. Dirac Abdus Salam; 3. Theory, criticism, and a philosophy Werner Heisenberg; 4. Methods in theoretical physics Paul Adrian Maurice Dirac.
The Fundamental Property Relation.
Martin, Joseph J.
1983-01-01
Discusses a basic equation in thermodynamics (the fundamental property relation), focusing on a logical approach to the development of the relation where effects other than thermal, compression, and exchange of matter with the surroundings are considered. Also demonstrates erroneous treatments of the relation in three well-known textbooks. (JN)
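For the restricted case the article starts from (only thermal, compression, and matter-exchange effects), the fundamental property relation takes the familiar form:

```latex
% Fundamental property relation for a simple open system
\mathrm{d}U \;=\; T\,\mathrm{d}S \;-\; P\,\mathrm{d}V \;+\; \sum_i \mu_i\,\mathrm{d}n_i ,
```

where U, S, V and n_i are internal energy, entropy, volume and mole numbers, and μ_i is the chemical potential of species i. The article's point is that additional work terms must be appended when effects beyond these three are present, and that some textbooks handle this incorrectly.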
Fundamental Metallurgy of Solidification
Tiedje, Niels
2004-01-01
The text takes the reader through some fundamental aspects of solidification, with focus on understanding the basic physics that govern solidification in casting and welding. It is described how the first solid is formed and which factors affect nucleation. It is described how crystals grow from ...
Division I: Fundamental astronomy
Vondrák, Jan; McCarthy, D.D.; Fukushima, T.; Brzezinski, A.; Burns, J.A.; Defraigne, P.; Evans, D.W.; Kaplan, G.H.; Klioner, S.; Kneževic, Z.; Kumkova, I.I.; Ma, Ch.; Manchester, R.N.; Petite, G.
Cambridge : Cambridge University Press, 2009 - (van der Hucht, K.), s. 1-4 ISBN 978-0-521-85605-8. - (Proceedings of the IAU. IAU Transactions. 27A) Institutional research plan: CEZ:AV0Z10030501 Keywords : fundamental astronomy Subject RIV: BN - Astronomy, Celestial Mechanics, Astrophysics
This study guide provides comments and references for professional soil scientists who are studying for the soil science fundamentals exam needed as the first step for certification. The performance objectives were determined by the Soil Science Society of America's Council of Soil Science Examiners...
Fundamentals of plasma physics
Bittencourt, J A
1986-01-01
A general introduction designed to present a comprehensive, logical and unified treatment of the fundamentals of plasma physics based on statistical kinetic theory. Its clarity and completeness make it suitable for self-learning and self-paced courses. Problems are included.
Frohlich, Cliff
Choosing an intermediate-level geophysics text is always problematic: What should we teach students after they have had introductory courses in geology, math, and physics, but little else? Fundamentals of Geophysics is aimed specifically at these intermediate-level students, and the author's stated approach is to construct a text “using abundant diagrams, a simplified mathematical treatment, and equations in which the student can follow each derivation step-by-step.” Moreover, for Lowrie, the Earth is round, not flat—the “fundamentals of geophysics” here are the essential properties of our Earth the planet, rather than useful techniques for finding oil and minerals. Thus this book is comparable in both level and approach to C. M. R. Fowler's The Solid Earth (Cambridge University Press, 1990).
Variation of fundamental constants
Flambaum, V V
2006-01-01
We present a review of recent works devoted to the variation of the fine structure constant alpha, the strong interaction and fundamental masses. There are some hints for the variation in quasar absorption spectra, Big Bang nucleosynthesis, and Oklo natural nuclear reactor data. A very promising method to search for the variation of the fundamental constants consists in the comparison of different atomic clocks. Huge enhancement of the variation effects happens in transitions between accidentally degenerate atomic and molecular energy levels. A new idea is to build a "nuclear" clock based on the ultraviolet transition between a very low excited state and the ground state in the Thorium nucleus. This may allow the sensitivity to the variation to be improved by up to 10 orders of magnitude! Huge enhancement of the variation effects is also possible in cold atomic and molecular collisions near a Feshbach resonance.
2004-01-01
Discussing what is fundamental in a variety of fields, biologist Richard Dawkins, physicist Gerardus 't Hooft, and mathematician Alain Connes spoke to a packed Main Auditorium at CERN on 15 October. Dawkins, Professor of the Public Understanding of Science at Oxford University, explained simply the logic behind Darwinian natural selection, and how it would seem to apply anywhere in the universe that had the right conditions. 't Hooft, winner of the 1999 Physics Nobel Prize, outlined some of the main problems in physics today, and said he thinks physics is so fundamental that even alien scientists from another planet would likely come up with the same basic principles, such as relativity and quantum mechanics. Connes, winner of the 1982 Fields Medal (often called the Nobel Prize of Mathematics), explained how physics is different from mathematics, which he described as a "factory for concepts," unfettered by connection to the physical world. On 16 October, anthropologist Sharon Traweek shared anecdotes from her ...
Fundamentals of differential beamforming
Benesty, Jacob; Pan, Chao
2016-01-01
This book provides a systematic study of the fundamental theory and methods of beamforming with differential microphone arrays (DMAs), or differential beamforming in short. It begins with a brief overview of differential beamforming and some popularly used DMA beampatterns such as the dipole, cardioid, hypercardioid, and supercardioid, before providing essential background knowledge on orthogonal functions and orthogonal polynomials, which form the basis of differential beamforming. From a physical perspective, a DMA of a given order is defined as an array that measures the differential acoustic pressure field of that order; such an array has a beampattern in the form of a polynomial whose degree is equal to the DMA order. Therefore, the fundamental and core problem of differential beamforming boils down to the design of beampatterns with orthogonal polynomials. But certain constraints also have to be considered so that the resulting beamformer does not seriously amplify the sensors’ self noise and the mism...
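The first-order beampatterns named above (dipole, cardioid, hypercardioid) are all instances of a single first-degree polynomial in cos θ. As a sketch, using the standard textbook parameter values rather than anything specific to this book:

```python
import math

def first_order_beampattern(a, theta):
    """First-order DMA beampattern B(theta) = a + (1 - a) * cos(theta).

    Standard textbook choices: a = 0 gives a dipole (null at 90 degrees),
    a = 0.5 a cardioid (null at 180 degrees), a = 0.25 a hypercardioid.
    """
    return a + (1 - a) * math.cos(theta)

# The cardioid has its null directly behind the array (theta = pi):
print(first_order_beampattern(0.5, math.pi))
```

Higher-order DMA beampatterns follow the same pattern: a polynomial in cos θ whose degree equals the DMA order, which is why the design problem reduces to choosing polynomial coefficients under the noise-amplification constraints the abstract mentions.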
Radar Fundamentals, Presentation
Jenn, David
2008-01-01
Topics include: introduction, radar functions, antennas basics, radar range equation, system parameters, electromagnetic waves, scattering mechanisms, radar cross section and stealth, and sample radar systems.
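One of the topics listed, the radar range equation, can be sketched in its minimal monostatic form (the numeric parameter values below are illustrative assumptions, not taken from the presentation):

```python
import math

def received_power(pt, gain, wavelength, rcs, r):
    """Monostatic radar range equation (free space, no losses):

    Pr = Pt * G^2 * lambda^2 * sigma / ((4*pi)^3 * R^4)
    """
    return (pt * gain**2 * wavelength**2 * rcs) / ((4 * math.pi)**3 * r**4)

# Assumed example values: 1 kW peak power, antenna gain 1000 (30 dB),
# 3 cm wavelength (X band), 1 m^2 target cross section at 10 km range.
pr = received_power(1e3, 1e3, 0.03, 1.0, 10e3)
print(pr)
```

The R^4 dependence is the key design constraint: doubling the range cuts the received power by a factor of 16, which drives the system-parameter trade-offs the presentation covers.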
Biomedical engineering fundamentals
Bronzino, Joseph D
2014-01-01
Known as the bible of biomedical engineering, The Biomedical Engineering Handbook, Fourth Edition, sets the standard against which all other references of this nature are measured. As such, it has served as a major resource for both skilled professionals and novices to biomedical engineering.Biomedical Engineering Fundamentals, the first volume of the handbook, presents material from respected scientists with diverse backgrounds in physiological systems, biomechanics, biomaterials, bioelectric phenomena, and neuroengineering. More than three dozen specific topics are examined, including cardia
Fundamentals of queueing theory
Gross, Donald; Thompson, James M; Harris, Carl M
2013-01-01
Praise for the Third Edition: "This is one of the best books available. Its excellent organizational structure allows quick reference to specific models and its clear presentation ... solidifies the understanding of the concepts being presented." -IIE Transactions on Operations Engineering. Thoroughly revised and expanded to reflect the latest developments in the field, Fundamentals of Queueing Theory, Fourth Edition continues to present the basic statistical principles that are necessary to analyze the probabilistic nature of queues. Rather than pre
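As an illustration of the kind of basic statistical principles such a text covers, the steady-state formulas for the simplest model, the M/M/1 queue, can be computed directly (the choice of model here is ours, as a sketch, not a summary of the book):

```python
def mm1_metrics(lam, mu):
    """Steady-state metrics for an M/M/1 queue with arrival rate lam
    and service rate mu (requires lam < mu for stability)."""
    if lam >= mu:
        raise ValueError("unstable queue: arrival rate must be below service rate")
    rho = lam / mu                 # server utilization
    return {
        "rho": rho,
        "L": rho / (1 - rho),      # mean number in system
        "Lq": rho**2 / (1 - rho),  # mean number waiting in queue
        "W": 1 / (mu - lam),       # mean time in system
        "Wq": rho / (mu - lam),    # mean waiting time in queue
    }

print(mm1_metrics(2.0, 3.0))
```

Note that the results satisfy Little's law, L = λW, which holds far beyond this particular model.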
Fundamental partial compositeness
Sannino, Francesco; Strumia, Alessandro; Tesi, Andrea; Vigiani, Elena
2016-01-01
We construct renormalizable Standard Model extensions, valid up to the Planck scale, that give a composite Higgs from a new fundamental strong force acting on fermions and scalars. Yukawa interactions of these particles with Standard Model fermions realize the partial compositeness scenario. Successful models exist because gauge quantum numbers of Standard Model fermions admit a minimal enough 'square root'. Furthermore, right-handed SM fermions have an SU(2)$_R$-like structure, yielding a cu...
Fundamentals of radiological protection
The basic processes of living cells which are relevant to an understanding of the interaction of ionizing radiation with man are described. Particular reference is made to cell death, cancer induction and genetic effects. This is the second of a series of reports which present the fundamentals necessary for an understanding of the bases of regulatory criteria such as those recommended by the International Commission on Radiological Protection (ICRP). Others consider basic radiation physics and the biological effects of ionizing radiation. (author)
Semantic Web Services Fundamentals
Heymans, Stijn; Hoffmann, Joerg; Marconi, Annapaola; Phlipps, Joshua; Weber, Ingo
2011-01-01
The research area of Semantic Web Services investigates the annotation of services, typically in a SOA, with a precise mathematical meaning in a formal ontology. These annotations allow a higher degree of automation. The last decade has seen a wide proliferation of such approaches, proposing different ontology languages, and paradigms for employing these in practice. The next chapter gives an overview of these approaches. In the present chapter, we provide an understanding of the fundamental ...
Wollaber, Allan Benton [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2016-06-16
This is a PowerPoint presentation which serves as lecture material for the Parallel Computing summer school. It goes over the fundamentals of the Monte Carlo calculation method. The material is presented according to the following outline: Introduction (background, a simple example: estimating π), Why does this even work? (The Law of Large Numbers, The Central Limit Theorem), How to sample (inverse transform sampling, rejection), and An example from particle transport.
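The "estimating π" example named in the outline is the classic dart-throwing calculation: sample points uniformly in the unit square and count the fraction landing inside the quarter circle. A minimal sketch:

```python
import random

def estimate_pi(n_samples, seed=0):
    """Monte Carlo estimate of pi: the fraction of uniform points in the
    unit square that satisfy x^2 + y^2 <= 1 approximates pi/4."""
    rng = random.Random(seed)  # seeded for reproducibility
    inside = 0
    for _ in range(n_samples):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            inside += 1
    return 4.0 * inside / n_samples

print(estimate_pi(100_000))
```

By the Law of Large Numbers the estimate converges to π, and by the Central Limit Theorem its error shrinks like 1/√n — the two results the outline invokes to explain why the method works.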
High voltage engineering fundamentals
Kuffel, E; Hammond, P
1984-01-01
Provides a comprehensive treatment of high voltage engineering fundamentals at the introductory and intermediate levels. It covers: techniques used for generation and measurement of high direct, alternating and surge voltages for general application in industrial testing and selected special examples found in basic research; analytical and numerical calculation of electrostatic fields in simple practical insulation system; basic ionisation and decay processes in gases and breakdown mechanisms of gaseous, liquid and solid dielectrics; partial discharges and modern discharge detectors; and over
Davis, A. -C.; Kibble, T. W. B.
2005-01-01
Cosmic strings are linear concentrations of energy that may be formed at phase transitions in the very early universe. At one time they were thought to provide a possible origin for the density inhomogeneities from which galaxies eventually develop, though this idea has been ruled out, primarily by observations of the cosmic microwave background (CMB). Fundamental strings are the supposed building blocks of all matter in superstring theory or its modern version, M-theory. These two concepts w...
Fundamental concepts on energy
The fundamental concepts of energy and the different forms in which it is manifested are presented. Since it is possible to transform energy from one form to another, the laws that govern these transformations are discussed. Energy transformation processes are an essential component of humanity's capacity to survive and develop. Energy use involves important economic, technical and political aspects. Because of this, any decision about managing an energy system will be key for our future life
Ashtekar, Abhay
2013-01-01
The first three sections of this article contain a broad brush summary of the profound changes in the notion of time in fundamental physics that were brought about by three revolutions: the foundations of mechanics distilled by Newton in his Principia, the discovery of special relativity by Einstein and its reformulation by Minkowski, and, finally, the fusion of geometry and gravity in Einstein's general relativity. The fourth section discusses two aspects of yet another deep revision that wa...
Fundamentals of linear algebra
Dash, Rajani Ballav
2008-01-01
FUNDAMENTALS OF LINEAR ALGEBRA is a comprehensive textbook which can be used by students and teachers of all Indian universities. The text is written in an easy, understandable form and covers all topics of the UGC curriculum. There are plenty of worked-out examples which help students solve the problems without anybody's help. The problem sets have been designed keeping in view the questions asked in different examinations.
Modelling and extraction of fundamental frequency in speech signals
Pawi, Alipah
2014-01-01
This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University. One of the most important parameters of speech is the fundamental frequency of vibration of voiced sounds. The audio sensation of the fundamental frequency is known as the pitch. Depending on the tonal/non-tonal category of the language, the fundamental frequency conveys intonation, pragmatics and meaning. In addition, the fundamental frequency and intonation carry speaker gender, age, identity, s...
Burov, Alexey
Fundamental science is a hard, long-term human adventure that has required high devotion and social support, especially significant in our epoch of Mega-science. The measure of this devotion and this support expresses the real value of the fundamental science in public opinion. Why does fundamental science have value? What determines its strength and what endangers it? The dominant answer is that the value of science arises out of curiosity and is supported by the technological progress. Is this really a good, astute answer? When trying to attract public support, we talk about the ``mystery of the universe''. Why do these words sound so attractive? What is implied by and what is incompatible with them? More than two centuries ago, Immanuel Kant asserted an inseparable entanglement between ethics and metaphysics. Thus, we may ask: which metaphysics supports the value of scientific cognition, and which does not? Should we continue to neglect the dependence of value of pure science on metaphysics? If not, how can this issue be addressed in the public outreach? Is the public alienated by one or another message coming from the face of science? What does it mean to be politically correct in this sort of discussion?
Nope Gomez, F. I.; Santiago, C. de
2014-07-01
Shallow geothermal energy applications in buildings and civil engineering works (tunnels, diaphragm walls, bridge decks, roads, and train/metro stations) are spreading rapidly all around the world. The dual role of these energy geostructures makes their design more challenging and complex than conventional projects. Besides the geotechnical parameters, thermal behaviour parameters are needed in the design and dimensioning to guarantee the thermo-mechanical stability of the geothermal structural element. As with any soil thermal parameter, both in situ and laboratory methods can be used to obtain them. The present study focuses on a laboratory test known as the needle-probe method for measuring the thermal conductivity of soils (λ). Through this research work, different variables inherent to the test procedure, as well as external factors that may have an impact on thermal conductivity measurements, were studied. Samples extracted from cores obtained from a geothermal drilling conducted on the campus of the Polytechnic University of Valencia, showing different mineralogical compositions and natures (granular and clayey), were studied under different compaction conditions (moisture and density). A total of 550 thermal conductivity measurements were performed, from which the influence of factors such as the degree of saturation/moisture, dry density and type of material was verified. A stratigraphic profile with thermal conductivity ranges for each geological level was then drawn, considering the degrees of saturation evaluated in the laboratory tests, in order to be compared and related to a thermal response test currently in progress. Finally, a test protocol is proposed for both remoulded and undisturbed samples under different saturation conditions. Together with this test protocol, a set of recommendations regarding the configuration of the measuring equipment, treatment of samples and other variables is put forward in order to reduce errors in the final results. (Author)
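The late-time data reduction behind a needle-probe (transient line-source) conductivity measurement of the kind studied here can be sketched in a few lines. This is a generic illustration of the method, not the authors' own processing pipeline; the heater power, temperature record and conductivity value below are invented for the example.

```python
import numpy as np

# Transient line-source theory: at late times the probe temperature rises as
#   T(t) ~ (q / (4*pi*lam)) * ln(t) + b
# so the soil conductivity follows from the slope of T against ln(t):
#   lam = q / (4*pi*slope)
q_per_len = 5.0      # heater power per unit probe length, W/m (assumed value)
true_lam = 1.8       # W/(m*K), synthetic "soil" used to fabricate the record
t = np.linspace(10.0, 120.0, 60)                       # late-time window, s
T = q_per_len / (4 * np.pi * true_lam) * np.log(t) + 20.0  # synthetic record
slope = np.polyfit(np.log(t), T, 1)[0]                 # slope of T vs ln(t)
lam = q_per_len / (4 * np.pi * slope)                  # recovered lambda, W/(m*K)
```

On real records one would restrict the fit to the linear late-time portion and average repeated runs, which is exactly where the procedural variables examined in the study (moisture, density, probe configuration) enter.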
Jin, Guoyong; Su, Zhu
2015-01-01
This book develops a uniform accurate method which is capable of dealing with vibrations of laminated beams, plates and shells with arbitrary boundary conditions including classical boundaries, elastic supports and their combinations. It also provides numerous solutions for various configurations including various boundary conditions, laminated schemes, geometry and material parameters, which fill certain gaps in this area of research and may serve as benchmark solutions for the readers. For each case, corresponding fundamental equations in the framework of classical and shear deformation theory are developed. Following the fundamental equations, numerous free vibration results are presented for various configurations including different boundary conditions, laminated sequences and geometry and material properties. The proposed method and corresponding formulations can be readily extended to static analysis.
Microwave engineering concepts and fundamentals
Khan, Ahmad Shahid
2014-01-01
Detailing the active and passive aspects of microwaves, Microwave Engineering: Concepts and Fundamentals covers everything from wave propagation to reflection and refraction, guided waves, and transmission lines, providing a comprehensive understanding of the underlying principles at the core of microwave engineering. This encyclopedic text not only encompasses nearly all facets of microwave engineering, but also gives all topics—including microwave generation, measurement, and processing—equal emphasis. Packed with illustrations to aid in comprehension, the book: •Describes the mathematical theory of waveguides and ferrite devices, devoting an entire chapter to the Smith chart and its applications •Discusses different types of microwave components, antennas, tubes, transistors, diodes, and parametric devices •Examines various attributes of cavity resonators, semiconductor and RF/microwave devices, and microwave integrated circuits •Addresses scattering parameters and their properties, as well a...
Fundamentals of radiological protection
A brief review is presented of the early and late effects of ionising radiation on man, with particular emphasis on those aspects of importance in radiological protection. The terminology and dose-response curves are explained. Early effects on cells, tissues and whole organs are discussed. Late somatic effects considered include cancer and life-span shortening. Genetic effects are examined. The review is the third of a series of reports which present the fundamentals necessary for an understanding of the basis of regulatory criteria, such as those of the ICRP. (U.K.)
Ashtekar, Abhay
2013-01-01
The first three sections of this article contain a broad brush summary of the profound changes in the notion of time in fundamental physics that were brought about by three revolutions: the foundations of mechanics distilled by Newton in his Principia, the discovery of special relativity by Einstein and its reformulation by Minkowski, and, finally, the fusion of geometry and gravity in Einstein's general relativity. The fourth section discusses two aspects of yet another deep revision that waits in the wings as we attempt to unify general relativity with quantum physics.
Morris, Carla C
2015-01-01
Fundamentals of Calculus encourages students to use power, quotient, and product rules for solutions as well as stresses the importance of modeling skills. In addition to core integral and differential calculus coverage, the book features finite calculus, which lends itself to modeling and spreadsheets. Specifically, finite calculus is applied to marginal economic analysis, finance, growth, and decay. Includes: Linear Equations and Functions; The Derivative; Using the Derivative; Exponential and Logarithmic Functions; Techniques of Differentiation; Integral Calculus; Integration Techniques; Functions
Information security fundamentals
Blackley, John A; Peltier, Justin
2004-01-01
Effective security rules and procedures do not exist for their own sake; they are put in place to protect critical assets, thereby supporting overall business objectives. Recognizing security as a business enabler is the first step in building a successful program. Information Security Fundamentals allows future security professionals to gain a solid understanding of the foundations of the field and the entire range of issues that practitioners must address. This book enables students to understand the key elements that comprise a successful information security program and eventually apply thes
Fundamentals of engineering electromagnetism
This book presents the fundamentals of engineering electromagnetism. It covers the electromagnetic field model, the International System of Units and universal constants; vector analysis, with a summary of orthogonal coordinate systems; electrostatic fields, based on Coulomb's law and Gauss's law; electrostatic energy and field strength; steady-state currents, with Ohm's law, Joule's law and the calculation of resistance; magnetostatic fields, with the vector magnetic potential, the Biot-Savart law and its applications, and the magnetic dipole; time-varying fields and Maxwell's equations, with potential functions and Faraday's law of electromagnetic induction; plane electromagnetic waves; transmission lines; waveguides and cavity resonators; and antenna arrays.
Fundamentals of microwave photonics
Urick, V J; McKinney, Jason D
2015-01-01
A comprehensive resource to designing andconstructing analog photonic links capable of high RFperformanceFundamentals of Microwave Photonics provides acomprehensive description of analog optical links from basicprinciples to applications. The book is organized into fourparts. The first begins with a historical perspective of microwavephotonics, listing the advantages of fiber optic links anddelineating analog vs. digital links. The second section coversbasic principles associated with microwave photonics in both the RFand optical domains. The third focuses on analog modulationformats-starti
Fundamentals of Project Management
Heagney, Joseph
2011-01-01
With sales of more than 160,000 copies, Fundamentals of Project Management has helped generations of project managers navigate the ins and outs of every aspect of this complex discipline. Using a simple step-by-step approach, the book is the perfect introduction to project management tools, techniques, and concepts. Readers will learn how to: • Develop a mission statement, vision, goals, and objectives • Plan the project • Create the work breakdown structure • Produce a workable schedule • Understand earned value analysis • Manage a project team • Control and evaluate progress at every stage.
Iqbal, Muzaffar
2004-01-01
In the contemporary world there is a fundamental nexus between science, religion and civilizations. Science as we know it today emerged in Europe as the result of diverse and complementary processes. The technology produced by the application of modern science, however, has placed us on the brink of a disaster that may well eliminate the entire human race from this planet. This is recognized by some of the most enlightened scientists, and it continues to be a great...
Nanomachines fundamentals and applications
Wang, Joseph
2013-01-01
This first-hand account by one of the pioneers of nanobiotechnology brings together a wealth of valuable material in a single source. It allows fascinating insights into motion at the nanoscale, showing how the proven principles of biological nanomotors are being transferred to artificial nanodevices.As such, the author provides engineers and scientists with the fundamental knowledge surrounding the design and operation of biological and synthetic nanomotors and the latest advances in nanomachines. He addresses such topics as nanoscale propulsions, natural biomotors, molecular-scale machin
DOE fundamentals handbook: Chemistry
The Chemistry Handbook was developed to assist nuclear facility operating contractors in providing operators, maintenance personnel, and the technical staff with the necessary fundamentals training to ensure a basic understanding of chemistry. The handbook includes information on the atomic structure of matter; chemical bonding; chemical equations; chemical interactions involved with corrosion processes; water chemistry control, including the principles of water treatment; the hazards of chemicals and gases, and basic gaseous diffusion processes. This information will provide personnel with a foundation for understanding the chemical properties of materials and the way these properties can impose limitations on the operation of equipment and systems
Fundamental concepts of mathematics
Goodstein, R L
Fundamental Concepts of Mathematics, 2nd Edition provides an account of some basic concepts in modern mathematics. The book is primarily intended for mathematics teachers and lay people who want to improve their skills in mathematics. The concepts and problems presented in the book include the determination of which integral polynomials have integral solutions; sentence logic and informal set theory; and why four colors suffice to color a map. Unlike the first edition, the second edition provides detailed solutions to exercises contained in the text. Mathematics teachers and people
Japanese Marketing. Fundamentally Different
Höskuldur Hrafn Guttormsson
2016-01-01
Japan has always had a unique image in the eyes of many westerners, especially when it comes to its unique and wacky commercials. This study is motivated by the question: “Do the Japanese have a fundamentally different way of marketing compared to the western world?” It aims to advance our understanding of how and why Japanese marketing differs from typical western marketing by focusing on the history of Japan, conventional marketing practices of Japanese companies and the differences bet...
Fundamentals of attosecond optics
Chang, Zenghu
2011-01-01
Attosecond optical pulse generation, along with the related process of high-order harmonic generation, is redefining ultrafast physics and chemistry. A practical understanding of attosecond optics requires significant background information and foundational theory to make full use of these cutting-edge lasers and advance the technology toward the next generation of ultrafast lasers. Fundamentals of Attosecond Optics provides the first focused introduction to the field. The author presents the underlying concepts and techniques required to enter the field, as well as recent research advances th
Fundamental of biomedical engineering
Sawhney, GS
2007-01-01
About the Book: A well set out textbook explains the fundamentals of biomedical engineering in the areas of biomechanics, biofluid flow, biomaterials, bioinstrumentation and use of computing in biomedical engineering. All these subjects form a basic part of an engineer's education. The text is admirably suited to meet the needs of the students of mechanical engineering, opting for the elective of Biomedical Engineering. Coverage of bioinstrumentation, biomaterials and computing for biomedical engineers can meet the needs of the students of Electronic & Communication, Electronic & Instrumenta
Mathematical analysis fundamentals
Bashirov, Agamirza
2014-01-01
The author's goal is a rigorous presentation of the fundamentals of analysis, starting from the elementary level and moving to advanced coursework. The curricula of all mathematics (pure or applied) and physics programs include a compulsory course in mathematical analysis. This book can serve as the main textbook for such (one-semester) courses. The book can also serve as additional reading for such courses as real analysis, functional analysis, harmonic analysis etc. For non-math major students requiring math beyond calculus, this is a more friendly approach than many math-centric o
Franc, Jean-Pierre
2005-01-01
The present book is aimed at providing a comprehensive presentation of cavitation phenomena in liquid flows. It is further backed up by the experience, both experimental and theoretical, of the authors whose expertise has been internationally recognized. A special effort is made to place the various methods of investigation in strong relation with the fundamental physics of cavitation, enabling the reader to treat specific problems independently. Furthermore, it is hoped that a better knowledge of the cavitation phenomenon will allow engineers to create systems using it positively. Examples in the literature show the feasibility of this approach.
Electronic circuits fundamentals & applications
Tooley, Mike
2015-01-01
Electronics explained in one volume, using both theoretical and practical applications. New chapter on Raspberry Pi. Companion website contains free electronic tools to aid learning for students and a question bank for lecturers. Practical investigations and questions within each chapter help reinforce learning. Mike Tooley provides all the information required to get to grips with the fundamentals of electronics, detailing the underpinning knowledge necessary to appreciate the operation of a wide range of electronic circuits, including amplifiers, logic circuits, power supplies and oscillators. The
Verma, Renu
2010-01-01
Fundamentals of Librarianship is written for professional librarians and is therefore not intended as a manual to instruct you on how to be a librarian. Instead it focuses on the federal angle of otherwise standard practices and procedures of good librarianship. A topic was omitted if it was determined not to have anything uniquely federal about it. An exception was made for the chapter on 'copyright' because it remains a challenging and continuously developing topic for all librarians. We opted to produce this handbook in electronic format as a Web document that can be updated as often as ne
A program of coordinated experimental and theoretical research on the fundamental chemistry and physics of CVD is described. The experimental work involves the development and use of laser diagnostic techniques for monitoring chemical species in the gas phase and measuring fluid-flow properties. The theoretical work applies state-of-the-art computational techniques to the coupled fluid mechanics and gas-phase chemical kinetics of CVD. The work has concentrated on the simple model system of silicon deposition from silane, although the concepts should be applicable to CVD in general. Some preliminary work on the chlorosilane and tungsten hexafluoride systems is also described
Fast and Accurate Construction of Confidence Intervals for Heritability.
Schweiger, Regev; Kaufman, Shachar; Laaksonen, Reijo; Kleber, Marcus E; März, Winfried; Eskin, Eleazar; Rosset, Saharon; Halperin, Eran
2016-06-01
Estimation of heritability is fundamental in genetic studies. Recently, heritability estimation using linear mixed models (LMMs) has gained popularity because these estimates can be obtained from unrelated individuals collected in genome-wide association studies. Typically, heritability estimation under LMMs uses the restricted maximum likelihood (REML) approach. Existing methods for the construction of confidence intervals and estimators of SEs for REML rely on asymptotic properties. However, these assumptions are often violated because of the bounded parameter space, statistical dependencies, and limited sample size, leading to biased estimates and inflated or deflated confidence intervals. Here, we show that the estimation of confidence intervals by state-of-the-art methods is inaccurate, especially when the true heritability is relatively low or relatively high. We further show that these inaccuracies occur in datasets including thousands of individuals. Such biases are present, for example, in estimates of heritability of gene expression in the Genotype-Tissue Expression project and of lipid profiles in the Ludwigshafen Risk and Cardiovascular Health study. We also show that often the probability that the genetic component is estimated as 0 is high even when the true heritability is bounded away from 0, emphasizing the need for accurate confidence intervals. We propose a computationally efficient method, ALBI (accurate LMM-based heritability bootstrap confidence intervals), for estimating the distribution of the heritability estimator and for constructing accurate confidence intervals. Our method can be used as an add-on to existing methods for estimating heritability and variance components, such as GCTA, FaST-LMM, GEMMA, or EMMAX. PMID:27259052
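The core idea, estimating the distribution of a heritability estimator constrained to [0, 1] by resampling rather than relying on asymptotics, can be sketched on a toy model. This is not the ALBI algorithm itself: it uses a plain parametric bootstrap with maximum likelihood on a diagonalized one-variance-component model (known total variance, no fixed effects), and the eigenvalue spectrum is invented rather than computed from a real genetic relationship matrix.

```python
import numpy as np

rng = np.random.default_rng(0)

def nll(h2, y, lam):
    # After rotating to the kinship eigenbasis, y_i ~ N(0, h2*lam_i + (1 - h2))
    # (total variance normalized to 1); this is the negative log-likelihood.
    v = h2 * lam + (1.0 - h2)
    return 0.5 * np.sum(np.log(v) + y**2 / v)

def estimate_h2(y, lam, grid=np.linspace(0.0, 1.0, 101)):
    # Grid search over the bounded parameter space [0, 1]; the boundary at 0
    # is exactly where asymptotic SEs break down.
    return grid[np.argmin([nll(h, y, lam) for h in grid])]

def bootstrap_ci(h2_hat, lam, n_boot=200, alpha=0.05):
    # Parametric bootstrap: resample phenotypes at the fitted h2, re-estimate,
    # and take percentiles of the re-estimates as the confidence interval.
    est = np.empty(n_boot)
    for b in range(n_boot):
        v = h2_hat * lam + (1.0 - h2_hat)
        est[b] = estimate_h2(rng.normal(0.0, np.sqrt(v)), lam)
    return np.quantile(est, [alpha / 2, 1.0 - alpha / 2])

lam = rng.gamma(2.0, 1.0, size=500)   # invented eigenvalue spectrum
lam *= lam.size / lam.sum()           # normalize mean eigenvalue to 1
true_h2 = 0.3
y = rng.normal(0.0, np.sqrt(true_h2 * lam + (1.0 - true_h2)))
h2_hat = estimate_h2(y, lam)
lo, hi = bootstrap_ci(h2_hat, lam)    # percentile confidence interval
```

A naive Wald interval h2_hat ± 1.96·SE can spill outside [0, 1] or collapse when the estimate sits at the boundary; the resampled interval stays in the parameter space by construction, which is the failure mode of asymptotic intervals that motivates the paper.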
Fundamentals of electrokinetics
Kozak, M. W.
The study of electrokinetics is a very mature field. Experimental studies date from the early 1800s, and acceptable theoretical analyses have existed since the early 1900s. The use of electrokinetics in practical field problems is more recent, but it is still quite mature. Most developments in the fundamental understanding of electrokinetics are in the colloid science literature. A significant and increasing divergence between the theoretical understanding of electrokinetics found in the colloid science literature and the theoretical analyses used in interpreting applied experimental studies in soil science and waste remediation has developed. The soil science literature has to date restricted itself to the use of very early theories, with their associated limitations. The purpose of this contribution is to review fundamental aspects of electrokinetic phenomena from a colloid science viewpoint. It is hoped that a bridge can be built between the two branches of the literature, from which both will benefit. Attention is paid to special topics such as the effects of overlapping double layers, applications in unsaturated soils, the influence of dispersivity, and the differences between electrokinetic theory and conductivity theory.
Fundamental Atomtronic Circuit Elements
Lee, Jeffrey; McIlvain, Brian; Lobb, Christopher; Hill, Wendell T., III
2012-06-01
Recent experiments with neutral superfluid gases have shown that it is possible to create atomtronic circuits analogous to existing superconducting circuits. The goals of these experiments are to create complex systems such as Josephson junctions. In addition, there are theoretical models for active atomtronic components analogous to diodes, transistors and oscillators. In order for any of these devices to function, an understanding of the more fundamental atomtronic elements is needed. Here we describe the first experimental realization of these more fundamental elements. We have created an atomtronic capacitor that is discharged through a resistance and inductance. We will discuss a theoretical description of the system that allows us to determine values for the capacitance, resistance and inductance. The resistance is shown to be analogous to the Sharvin resistance, and the inductance analogous to kinetic inductance in electronics. This atomtronic circuit is implemented with a thermal sample of laser cooled rubidium atoms. The atoms are confined using what we call free-space atom chips, a novel optical dipole trap produced using a generalized phase-contrast imaging technique. We will also discuss progress toward implementing this atomtronic system in a degenerate Bose gas.
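The lumped-element analogy invoked above, a capacitance discharging through a series resistance and inductance, can be made concrete with a short numerical sketch. The component values here are arbitrary illustrative numbers, not the measured atomtronic ones.

```python
import numpy as np

# Series RLC discharge: q'' + (R/L) q' + q/(L*C) = 0, integrated with a
# simple explicit Euler scheme. In the atomtronic analogue q plays the role
# of the atom-number imbalance and i the atom current.
R, L, C = 0.5, 1.0, 1.0   # illustrative values (underdamped: R/(2L) < 1/sqrt(L*C))
dt, steps = 1e-3, 20000
q, i = 1.0, 0.0           # start fully "charged", zero current
qs = []
for _ in range(steps):
    dq = i
    di = -(R / L) * i - q / (L * C)
    q += dt * dq
    i += dt * di
    qs.append(q)
qs = np.array(qs)         # ringing discharge: decaying oscillation of q(t)
```

With these values the circuit is underdamped, so the charge oscillates while its envelope decays as exp(-Rt/2L); an overdamped choice of R would instead decay monotonically, which is one observable signature separating the resistive and inductive contributions.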
Lasers Fundamentals and Applications
Thyagarajan, K
2010-01-01
Lasers: Fundamentals and Applications, serves as a vital textbook to accompany undergraduate and graduate courses on lasers and their applications. Ever since their invention in 1960, lasers have assumed tremendous importance in the fields of science, engineering and technology because of their diverse uses in basic research and countless technological applications. This book provides a coherent presentation of the basic physics behind the way lasers work, and presents some of their most important applications in vivid detail. After reading this book, students will understand how to apply the concepts found within to practical, tangible situations. This textbook includes worked-out examples and exercises to enhance understanding, and the preface shows lecturers how to most beneficially match the textbook with their course curricula. The book includes several recent Nobel Lectures, which will further expose students to the emerging applications and excitement of working with lasers. Students who study lasers, ...
Unification of fundamental forces
Abdus Salam, a Fellow of St. John's College, Cambridge, provides an accessible overview of modern particle physics and the quest for the unification of the fundamental forces: the electromagnetic, strong nuclear, weak nuclear and gravitational. A major theme of the lecture is the way in which theoretical physicists approach the task of imposing order on a seemingly chaotic universe. A secondary theme is that the electroweak force is most likely to be the force of life. The theme of the philosophy behind the work of theorists is continued in two additional lectures by Werner Heisenberg and Paul Dirac which give fascinating insights into the modus operandi and work of two of the founders of quantum mechanics. (author)
Theory of fundamental interactions
In the present article the theory of fundamental interactions is derived in a systematic way from first principles. In the developed theory there is no separation between space-time and internal gauge space. The main equations for the basic fields are derived. It is shown that the theory satisfies the correspondence principle and gives rise to new notions in the region considered. In particular, the conclusion is drawn that there exist particles which are characterized not only by mass, spin and charge but also by a moment of inertia. These are rotating particles, particles which realize the notion of a rigid body at the microscopic level and give a key to understanding strong interactions. The main concepts and dynamical laws for these particles are formulated. The basic principles of the theory may be examined experimentally in the not too distant future. 29 refs
Fundamentals of sustainable neighbourhoods
Friedman, Avi
2015-01-01
This book introduces architects, engineers, builders, and urban planners to a range of design principles of sustainable communities and illustrates them with outstanding case studies. Drawing on the author’s experience as well as local and international case studies, Fundamentals of Sustainable Neighbourhoods presents planning concepts that minimize developments' carbon footprint through compact communities, adaptable and expandable dwellings, adaptable landscapes, and smaller-sized yet quality-designed housing. This book also: Examines in-depth global strategies for minimizing the residential carbon footprint, including district heating, passive solar gain, net-zero residences, as well as preserving the communities' natural assets Reconsiders conceptual approaches in building design and urban planning to promote a better connection between communities and nature Demonstrates practical applications of green architecture Focuses on innovative living spaces in urban environments
Yen, William M; Yamamoto, Hajime
2006-01-01
Drawing from the second edition of the best-selling Handbook of Phosphors, Fundamentals of Phosphors covers the principles and mechanisms of luminescence in detail and surveys the primary phosphor materials as well as their optical properties. The book addresses cutting-edge developments in phosphor science and technology including oxynitride phosphors and the impact of lanthanide level location on phosphor performance.Beginning with an explanation of the physics underlying luminescence mechanisms in solids, the book goes on to interpret various luminescence phenomena in inorganic and organic materials. This includes the interpretation of the luminescence of recently developed low-dimensional systems, such as quantum wells and dots. The book also discusses the excitation mechanisms by cathode-ray and ionizing radiation and by electric fields to produce electroluminescence. The book classifies phosphor materials according to the type of luminescence centers employed or the class of host materials used and inte...
Fundamental partial compositeness
Sannino, Francesco; Tesi, Andrea; Vigiani, Elena
2016-01-01
We construct renormalizable Standard Model extensions, valid up to the Planck scale, that give a composite Higgs from a new fundamental strong force acting on fermions and scalars. Yukawa interactions of these particles with Standard Model fermions realize the partial compositeness scenario. Successful models exist because gauge quantum numbers of Standard Model fermions admit a minimal enough 'square root'. Furthermore, right-handed SM fermions have an SU(2)$_R$-like structure, yielding a custodially-protected composite Higgs. Baryon and lepton numbers arise accidentally. Standard Model fermions acquire mass at tree level, while the Higgs potential and flavor violations are generated by quantum corrections. We further discuss accidental symmetries and other dynamical features stemming from the new strongly interacting scalars. If the same phenomenology can be obtained from models without our elementary scalars, they would reappear as composite states.
Automotive electronics design fundamentals
Zaman, Najamuz
2015-01-01
This book explains the topology behind automotive electronics architectures and examines how they can be profoundly augmented with embedded controllers. These controllers serve as the core building blocks of today’s vehicle electronics. Rather than simply teaching electrical basics, this unique resource focuses on the fundamental concepts of vehicle electronics architecture, and details the wide variety of Electronic Control Modules (ECMs) that enable the increasingly sophisticated "bells & whistles" of modern designs. A must-have for automotive design engineers, technicians working in automotive electronics repair centers and students taking automotive electronics courses, this guide bridges the gap between academic instruction and industry practice with clear, concise advice on how to design and optimize automotive electronics with embedded controllers.
Fundamentals of Fire Phenomena
Quintiere, James
Understanding fire dynamics and combustion is essential in fire safety engineering and in fire science curricula. Engineers and students involved in fire protection, safety and investigation need to know and predict how fire behaves to be able to implement adequate safety measures and hazard analyses. Fire phenomena encompass everything about the scientific principles behind fire behaviour. Combining the principles of chemistry, physics, heat and mass transfer, and fluid dynamics necessary to understand the fundamentals of fire phenomena, this book integrates the subject into a clear discipline. It covers thermochemistry, including mixtures and chemical reactions; introduces combustion to the fire protection student; discusses premixed flames and spontaneous ignition; presents conservation laws for control volumes, including the effects of fire; and describes the theoretical bases for...
Fundamentals of Geometrothermodynamics
Quevedo, Hernando
2011-01-01
We present the basic mathematical elements of geometrothermodynamics which is a formalism developed to describe in an invariant way the thermodynamic properties of a given thermodynamic system in terms of geometric structures. First, in order to represent the first law of thermodynamics and the general Legendre transformations in an invariant way, we define the phase manifold as a Legendre invariant Riemannian manifold with a contact structure. The equilibrium manifold is defined by using a harmonic map which includes the specification of the fundamental equation of the thermodynamic system. Quasi-static thermodynamic processes are shown to correspond to geodesics of the equilibrium manifold which preserve the laws of thermodynamics. We study in detail the equilibrium manifold of the ideal gas and the van der Waals gas as concrete examples of the application of geometrothermodynamics.
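For orientation, the contact structure and first law referred to above can be written out for a simple system; the coordinate and sign choices below are a standard convention, not taken from this abstract.

```latex
% Contact 1-form on the thermodynamic phase space with
% coordinates (U, S, V, T, P) for a simple one-component system:
\Theta = dU - T\,dS + P\,dV .
% The equilibrium manifold is the embedding U = U(S,V) on which the
% pullback of \Theta vanishes, which is precisely the first law:
dU = T\,dS - P\,dV .
```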
Fundamental Physics with the Laser Astrometric Test Of Relativity
Turyshev, S G; Lämmerzahl, C; Theil, S; Ertmer, W; Rasel, E; Foerstner, R; Johann, U; Klioner, S A; Soffel, M H; Dachwald, B; Seboldt, W; Perlick, V; Sandford, M C W; Bingham, R; Kent, B; Sumner, T J; Bertolami, O; Páramos, J; Christophe, B; Foulon, B; Touboul, P; Bouyer, P; Damour, T; Salomon, C; Reynaud, S; Brillet, A; Bondu, F; Mangin, J F; Samain, E; Bertotti, B; Iess, L; Erd, C; Grenouilleau, J C; Izzo, D; Rathke, A; Asmar, S W; Colavita, M; Gursel, Y; Hemmati, H; Shao, M; Williams, J G; Nordtvedt, K L; Shapiro, I; Reasenberg, R; Drever, R W P; Degnan, J; Plowman, J E; Hellings, R; Murphy, T W; Rovisco Pais, A; Copernic, A N; Favata, F; Turyshev, Slava G.; Dittus, Hansjoerg
2005-01-01
The Laser Astrometric Test Of Relativity (LATOR) is a European-U.S. Michelson-Morley-type experiment designed to test the pure tensor metric nature of gravitation - the fundamental postulate of Einstein's theory of general relativity. By using a combination of independent time-series of highly accurate gravitational deflection of light in the immediate proximity to the Sun along with measurements of the Shapiro time delay on interplanetary scales (to a precision respectively better than 0.1 picoradians and 1 cm), LATOR will significantly improve our knowledge of relativistic gravity. The primary mission objective is to i) measure the key post-Newtonian Eddington parameter γ to an accuracy of one part in 10^9. (1 − γ) is a direct measure of the presence of a new interaction in gravitational theory, and, in this search, LATOR goes a factor of 30,000 beyond the present best result, Cassini's 2003 test. The mission will also provide: ii) the first measurement of gravity's non-linear effects on light to ~0.01% accur...
Strings and fundamental physics
The basic idea, simple and revolutionary at the same time, to replace the concept of a point particle with a one-dimensional string, has opened up a whole new field of research. Even today, four decades later, its multifaceted consequences are still not fully conceivable. Up to now string theory has offered a new way to view particles as different excitations of the same fundamental object. It has celebrated success in discovering the graviton in its spectrum, and it has naturally led scientists to posit space-times with more than four dimensions - which in turn has triggered numerous interesting developments in fields as varied as condensed matter physics and pure mathematics. This book collects pedagogical lectures by leading experts in string theory, introducing the non-specialist reader to some of the newest developments in the field. The carefully selected topics are at the cutting edge of research in string theory and include new developments in topological strings, AdS/CFT dualities, as well as newly emerging subfields such as doubled field theory and holography in the hydrodynamic regime. The contributions to this book have been selected and arranged in such a way as to form a self-contained, graduate level textbook. (orig.)
Fundamentals of klystron testing
Caldwell, J.W. Jr.
1978-08-01
Fundamentals of klystron testing is a text primarily intended for the indoctrination of new klystron group test stand operators. It should significantly reduce the familiarization time of a new operator, making him an asset to the group sooner than has been experienced in the past. The new employee must appreciate the mission of SLAC before he can rightfully be expected to make a meaningful contribution to the group's effort. Thus, the introductory section acquaints the reader with basic concepts of accelerators in general, then briefly describes major physical aspects of the Stanford Linear Accelerator. Only then is his attention directed to the klystron, with its auxiliary systems, and the rudiments of klystron tube performance checks. It is presumed that the reader is acquainted with basic principles of electronics and scientific notation. However, to preserve the integrity of an indoctrination guide, tedious technical discussions and mathematical analysis have been studiously avoided. It is hoped that the new operator will continue to use the text for reference long after his indoctrination period is completed. Even the more experienced operator should find that particular sections will refresh his understanding of basic principles of klystron testing.
Wollaber, Allan Benton [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2016-06-16
This is a powerpoint which serves as lecture material for the Parallel Computing summer school. It goes over the fundamentals of Monte Carlo. Welcome to Los Alamos, the birthplace of “Monte Carlo” for computational physics. Stanislaw Ulam, John von Neumann, and Nicholas Metropolis are credited as the founders of modern Monte Carlo methods. The name “Monte Carlo” was chosen in reference to the Monte Carlo Casino in Monaco (purportedly a place where Ulam’s uncle went to gamble). The central idea (for us) – to use computer-generated “random” numbers to determine expected values or estimate equation solutions – has since spread to many fields. "The first thoughts and attempts I made to practice [the Monte Carlo Method] were suggested by a question which occurred to me in 1946 as I was convalescing from an illness and playing solitaires. The question was what are the chances that a Canfield solitaire laid out with 52 cards will come out successfully? After spending a lot of time trying to estimate them by pure combinatorial calculations, I wondered whether a more practical method than “abstract thinking” might not be to lay it out say one hundred times and simply observe and count the number of successful plays... Later [in 1946], I described the idea to John von Neumann, and we began to plan actual calculations." - Stanislaw Ulam.
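Ulam's solitaire reasoning can be illustrated with a small sketch. Canfield solitaire itself has involved rules, so the example below substitutes a simpler, hypothetical card question with a known exact answer, which makes the Monte Carlo estimate easy to check:

```python
import random

def estimate_probability(trials=100_000, seed=1):
    """Monte Carlo estimate of P(ace of spades is among the top 5 cards
    of a shuffled deck). The exact answer is 5/52 ~= 0.0962, so the
    simulation can be validated against combinatorics, in the spirit of
    Ulam's 'lay it out one hundred times and count' idea."""
    rng = random.Random(seed)
    deck = list(range(52))          # card 0 stands for the ace of spades
    hits = 0
    for _ in range(trials):
        rng.shuffle(deck)
        if 0 in deck[:5]:
            hits += 1
    return hits / trials

print(estimate_probability())
```

The same skeleton (repeat a random experiment, count successes, divide) underlies the expected-value and equation-solving applications mentioned in the lecture.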
Revisiting energy efficiency fundamentals
Perez-Lombard, L.; Velazquez, D. [Grupo de Termotecnia, Escuela Superior de Ingenieros, Universidad de Sevilla, Camino de los Descubrimientos s/n, 41092 Seville (Spain); Ortiz, J. [Building Research Establishment (BRE), Garston, Watford, WD25 9XX (United Kingdom)
2013-05-15
Energy efficiency is a central target for energy policy and a keystone for mitigating climate change and achieving sustainable development. Although great efforts have been made over the last four decades to investigate the issue, focusing on measuring energy efficiency, understanding its trends and impacts on energy consumption, and designing effective energy efficiency policies, many energy efficiency-related concepts, some methodological problems in the construction of energy efficiency indicators (EEI) and even some of the potential energy efficiency gains are often ignored or misunderstood, causing considerable confusion and controversy not only for laymen but even for specialists. This paper aims to revisit, analyse and discuss some fundamental efficiency topics that could improve the understanding and critical judgement of efficiency stakeholders and help avoid unfounded judgements and misleading statements. Firstly, we address the problem of measuring energy efficiency in both qualitative and quantitative terms. Secondly, the main methodological problems standing in the way of the construction of EEI are discussed, and a sequence of actions is proposed to tackle them in an ordered fashion. Finally, two key topics are discussed in detail: the links between energy efficiency and energy savings, and the border between energy efficiency improvement and the promotion of renewable sources.
This work presents a summary of the IAEA Safety Standards Series publication No. SF-1, entitled Fundamental Safety Principles, published in 2006. This publication states the fundamental safety objective and ten associated safety principles, and briefly describes their intent and purposes. Safety measures and security measures have in common the aim of protecting human life and health and the environment. These safety principles are: 1) Responsibility for safety, 2) Role of the government, 3) Leadership and management for safety, 4) Justification of facilities and activities, 5) Optimization of protection, 6) Limitation of risks to individuals, 7) Protection of present and future generations, 8) Prevention of accidents, 9) Emergency preparedness and response and 10) Protective action to reduce existing or unregulated radiation risks. The safety principles concern the security of facilities and activities to the extent that they apply to measures that contribute to both safety and security. Safety measures and security measures must be designed and implemented in an integrated manner so that security measures do not compromise safety and safety measures do not compromise security.
Fundamentals of Quantum Mechanics
Tang, C. L.
2005-06-01
Quantum mechanics has evolved from a subject of study in pure physics to one with a wide range of applications in many diverse fields. The basic concepts of quantum mechanics are explained in this book in a concise and easy-to-read manner emphasising applications in solid state electronics and modern optics. Following a logical sequence, the book is focused on the key ideas and is conceptually and mathematically self-contained. The fundamental principles of quantum mechanics are illustrated by showing their application to systems such as the hydrogen atom, multi-electron ions and atoms, the formation of simple organic molecules and crystalline solids of practical importance. It leads on from these basic concepts to discuss some of the most important applications in modern semiconductor electronics and optics. Containing many homework problems and worked examples, the book is suitable for senior-level undergraduate and graduate level students in electrical engineering, materials science and applied physics. Clear exposition of quantum mechanics written in a concise and accessible style Precise physical interpretation of the mathematical foundations of quantum mechanics Illustrates the important concepts and results by reference to real-world examples in electronics and optoelectronics Contains homeworks and worked examples, with solutions available for instructors
Fundamental Limits of Cooperation
Lozano, Angel; Andrews, Jeffrey G
2012-01-01
Cooperation is viewed as a key ingredient for interference management in wireless systems. This paper shows that cooperation has fundamental limitations. The main result is that even full cooperation between transmitters cannot in general change an interference-limited network to a noise-limited network. The key idea is that there exists a spectral efficiency upper bound that is independent of the transmit power. First, a spectral efficiency upper bound is established for systems that rely on pilot-assisted channel estimation; in this framework, cooperation is shown to be possible only within clusters of limited size, which are subject to out-of-cluster interference whose power scales with that of the in-cluster signals. Second, an upper bound is also shown to exist when cooperation is through noncoherent communication; thus, the spectral efficiency limitation is not a by-product of the reliance on pilot-assisted channel estimation. Consequently, existing literature that routinely assumes the high-power spect...
High Frequency QRS ECG Accurately Detects Cardiomyopathy
Schlegel, Todd T.; Arenare, Brian; Poulin, Gregory; Moser, Daniel R.; Delgado, Reynolds
2005-01-01
High frequency (HF, 150-250 Hz) analysis over the entire QRS interval of the ECG is more sensitive than conventional ECG for detecting myocardial ischemia. However, the accuracy of HF QRS ECG for detecting cardiomyopathy is unknown. We obtained simultaneous resting conventional and HF QRS 12-lead ECGs in 66 patients with cardiomyopathy (EF = 23.2 ± 6.1%, mean ± SD) and in 66 age- and gender-matched healthy controls using PC-based ECG software recently developed at NASA. The single most accurate ECG parameter for detecting cardiomyopathy was an HF QRS morphological score that takes into consideration the total number and severity of reduced amplitude zones (RAZs) present, plus the clustering of RAZs together in contiguous leads. This RAZ score had an area under the receiver operating characteristic (ROC) curve of 0.91, and was 88% sensitive, 82% specific and 85% accurate for identifying cardiomyopathy at an optimum score cut-off of 140 points. Although conventional ECG parameters such as the QRS and QTc intervals were also significantly longer in patients than controls (P < 0.001, BBBs excluded), these conventional parameters were less accurate (area under the ROC curve = 0.77 and 0.77, respectively) than HF QRS morphological parameters for identifying underlying cardiomyopathy. The total amplitude of the HF QRS complexes, as measured by summed root mean square voltages (RMSVs), also differed between patients and controls (33.8 ± 11.5 vs. 41.5 ± 13.6 mV, respectively, P < 0.003), but this parameter was even less accurate in distinguishing the two groups (area under the ROC curve = 0.67) than the HF QRS morphologic and conventional ECG parameters. Diagnostic accuracy was optimal (86%) when the RAZ score from the HF QRS ECG and the QTc interval from the conventional ECG were used simultaneously with cut-offs of ≥ 40 points and ≥ 445 ms, respectively. In conclusion, 12-lead HF QRS ECG employing
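The sensitivity, specificity and accuracy figures quoted for a score cut-off follow from simple counts at a threshold. A minimal sketch with made-up scores (not the study's data):

```python
def diagnostic_metrics(scores_patients, scores_controls, cutoff):
    """Sensitivity, specificity and accuracy of a diagnostic score at a
    given cutoff, where score >= cutoff is read as 'disease present'."""
    tp = sum(s >= cutoff for s in scores_patients)   # true positives
    tn = sum(s < cutoff for s in scores_controls)    # true negatives
    sens = tp / len(scores_patients)
    spec = tn / len(scores_controls)
    acc = (tp + tn) / (len(scores_patients) + len(scores_controls))
    return sens, spec, acc

# Hypothetical RAZ-like scores, threshold at 140 points
print(diagnostic_metrics([150, 160, 130], [100, 120, 145], 140))
```

Sweeping the cutoff and plotting sensitivity against (1 − specificity) traces out the ROC curve whose area the abstract reports.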
FUNDPAR: A program for Deriving Fundamental Parameters from Equivalent Widths
C. Saffe
2011-01-01
Full Text Available We have implemented a Fortran program that determines fundamental parameters of solar-type stars from Fe equivalent widths. The solution must satisfy three conditions in the standard method: ionization equilibrium, excitation equilibrium, and independence between abundances and equivalent widths. We compute Kurucz atmosphere models with NEWODF opacities. Details such as the mixing-length parameter, convective overshooting, etc., are calculated with an independent program. FUNDPAR estimates the uncertainties by two methods: the criterion of Gonzalez & Vanture (1998) and the χ² function. The results derived with FUNDPAR agree with previous determinations in the literature. In particular, we obtained fundamental parameters for 58 stars with exoplanets. The program is available on the web.
The fundamental scales of structures from first principles
Funkhouser, Scott
2008-01-01
Five fundamental scales of mass follow from holographic limitations, a self-similar law for angular momentum and the basic scaling laws for a fractal universe with dimension 2. The five scales correspond to the observable universe, clusters, galaxies, stars and the nucleon. The fundamental scales naturally form a self-similar hierarchy, generating new relationships among the parameters of the nucleon, the cosmological constant and the Planck scale. There is implied a sixth fundamental scale th...
Fundamental solutions of linear partial differential operators theory and practice
Ortner, Norbert
2015-01-01
This monograph provides the theoretical foundations needed for the construction of fundamental solutions and fundamental matrices of (systems of) linear partial differential equations. Many illustrative examples also show techniques for finding such solutions in terms of integrals. Particular attention is given to developing the fundamentals of distribution theory, accompanied by calculations of fundamental solutions. The main part of the book deals with existence theorems and uniqueness criteria, the method of parameter integration, the investigation of quasihyperbolic systems by means of Fourier and Laplace transforms, and the representation of fundamental solutions of homogeneous elliptic operators with the help of Abelian integrals. In addition to rigorous distributional derivations and verifications of fundamental solutions, the book also shows how to construct fundamental solutions (matrices) of many physically relevant operators (systems), in elasticity, thermoelasticity, hexagonal/cubic elastodynamics...
Heat propagation in waters - physical fundamentals
The physical fundamentals necessary to understand mathematical models of the environment are given. It was found that considerable mathematical and physical efforts are necessary to achieve sufficient accuracy in the calculation of temperature, flow rate, etc. The so-called eco- and transport models are less accurate than purely physical models, due to the fact that they are essentially a quantitative formulation of biological processes. With regard to the given numerical methods of solution, it is interesting to note that a partial differential equation is reduced here to a coupled system of ordinary first-order differential equations. (orig.)
F. Topsøe
2001-09-01
Full Text Available Abstract: In its modern formulation, the Maximum Entropy Principle was promoted by E.T. Jaynes, starting in the mid-fifties. The principle dictates that one should look for a distribution, consistent with available information, which maximizes the entropy. However, this principle focuses only on distributions and it appears advantageous to bring information theoretical thinking more prominently into play by also focusing on the "observer" and on coding. This view was brought forward by the second named author in the late seventies and is the view we will follow up on here. It leads to the consideration of a certain game, the Code Length Game and, via standard game theoretical thinking, to a principle of Game Theoretical Equilibrium. This principle is more basic than the Maximum Entropy Principle in the sense that the search for one type of optimal strategies in the Code Length Game translates directly into the search for distributions with maximum entropy. In the present paper we offer a self-contained and comprehensive treatment of the fundamentals of both principles mentioned, based on a study of the Code Length Game. Though new concepts and results are presented, the reading should be instructional and accessible to a rather wide audience, at least if certain mathematical details are left aside at a first reading. The most frequently studied instance of entropy maximization pertains to the Mean Energy Model which involves a moment constraint related to a given function, here taken to represent "energy". This type of application is very well known from the literature with hundreds of applications pertaining to several different fields and will also here serve as an important illustration of the theory. But our approach reaches further, especially regarding the study of continuity properties of the entropy function, and this leads to new results which allow a discussion of models with so-called entropy loss. These results have tempted us to speculate over
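For the Mean Energy Model mentioned above, the maximum-entropy distribution over a finite alphabet has the Gibbs form p_i ∝ exp(−βE_i), with β fixed by the moment constraint. A sketch under the assumption that the target mean lies strictly between the smallest and largest energies (bisection on β; not the paper's game-theoretic machinery):

```python
import math

def maxent_distribution(energies, mean_energy, tol=1e-10):
    """Maximum-entropy distribution subject to sum(p_i * E_i) = mean_energy.
    The solution is p_i proportional to exp(-beta * E_i); since the mean is
    strictly decreasing in beta, beta can be found by bisection."""
    def mean_for(beta):
        w = [math.exp(-beta * e) for e in energies]
        z = sum(w)
        return sum(wi * e for wi, e in zip(w, energies)) / z

    lo, hi = -50.0, 50.0            # bracket for beta
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if mean_for(mid) > mean_energy:
            lo = mid                # mean too high: increase beta
        else:
            hi = mid
    beta = (lo + hi) / 2
    w = [math.exp(-beta * e) for e in energies]
    z = sum(w)
    return [wi / z for wi in w]

print(maxent_distribution([0.0, 1.0, 2.0], 0.5))
```

When the constraint equals the unweighted average energy, β = 0 and the uniform (maximum possible entropy) distribution is recovered, as expected.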
Using fundamental equations to describe basic phenomena
Jakobsen, Arne; Rasmussen, Bjarne D.
1999-01-01
When the fundamental thermodynamic balance equations (mass, energy, and momentum) are used to describe the processes in a simple refrigeration system, one finds that the resulting equation system will have a degree of freedom equal to one. Further investigations reveal that it is the equation constraining the total charge of refrigerant in the system which is missing. In traditional mathematical modelling of a refrigeration cycle/system, the influence of the total charge of refrigerant on the system behaviour is normally not modelled explicitly. Instead, parameters such as superheat and... the before-mentioned parameters. In doing so, a systematic use of control volumes for modelling a refrigeration system is outlined.
Communication technology update and fundamentals
Grant, August E
2010-01-01
New communication technologies are being introduced at an astonishing rate. Making sense of these technologies is increasingly difficult. Communication Technology Update and Fundamentals is the single best source for the latest developments, trends, and issues in communication technology. Featuring the fundamental framework along with the history and background of communication technologies, Communication Technology Update and Fundamentals, 12th edition helps you stay ahead of these ever-changing and emerging technologies.As always, every chapter ha
Holst, Gerald C.
2011-05-01
Point-and-shoot, TV studio broadcast, and thermal infrared imaging cameras have significantly different applications. A parameter that applies to all imaging systems is Fλ/d, where F is the focal ratio, λ is the wavelength, and d is the detector size. Fλ/d uniquely defines the shape of the camera modulation transfer function. When Fλ/d is small, the detector undersamples the scene and aliasing corrupts the imagery. Mathematically, the worst-case analysis assumes that the scene contains all spatial frequencies with equal amplitudes. This quantifies the potential for aliasing and is called the spurious response. Digital data cannot be seen; it resides in a computer. Cathode ray tubes, flat panel displays, and printers convert the data into an analog format and are called reconstruction filters. The human visual system is an additional reconstruction filter. Different displays and variable viewing distances affect the perceived image quality. Simulated imagery illustrates different Fλ/d ratios, displays, and sampling artifacts. Since the human visual system is primarily sensitive to intensity variations, aliasing (a spatial frequency phenomenon) is not considered bothersome in most situations.
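The Fλ/d parameter itself is elementary to compute. A sketch with hypothetical camera values (the specific numbers below are illustrative, not taken from the paper):

```python
def f_lambda_over_d(focal_ratio, wavelength_um, detector_um):
    """F*lambda/d, the dimensionless parameter that fixes the shape of the
    camera modulation transfer function. Wavelength and detector size must
    share the same length unit (micrometres here)."""
    return focal_ratio * wavelength_um / detector_um

# Hypothetical visible-band camera: F/2 optics, 0.55 um light, 5 um pixels
print(f_lambda_over_d(2.0, 0.55, 5.0))
# Hypothetical LWIR imager: F/1 optics, 10 um light, 17 um pixels
print(f_lambda_over_d(1.0, 10.0, 17.0))
```

Small values indicate a detector-limited (undersampled) system prone to aliasing; large values indicate an optics-limited system.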
Fundamentals and Techniques of Nonimaging
O' Gallagher, J. J.; Winston, R.
2003-07-10
other system parameter permits the construction of whole new classes of devices with greatly expanded capabilities compared to conventional approaches. These "tailored edge-ray" designs have dramatically broadened the range of geometries in which nonimaging optics can provide a significant performance improvement. Considerable progress continues to be made in furthering the incorporation of nonimaging secondaries into practical high concentration and ultra-high concentration solar collector systems. In parallel with the continuing development of nonimaging geometrical optics, our group has been working to develop an understanding of certain fundamental physical optics concepts in the same context. In particular, our study of the behavior of classical radiance in nonimaging systems has revealed some fundamentally important new understandings that we have pursued both theoretically and experimentally. The field is still relatively new and is rapidly gaining widespread recognition because it fuels many industrial applications. Because of this, during the final years of the project, our group at Chicago has been working more closely with a team of industrial scientists from Science Applications International Corporation (SAIC), at first informally, and later more formally, beginning in 1998, under a formal program initiated by the Department of Energy and incrementally funded through this existing grant. This collaboration has been very fruitful and has led to new conceptual breakthroughs which have provided the foundation for further exciting growth. Many of these concepts are described in some detail in the report.
Fundamental mechanisms of micromachine reliability
De Boer, Maarten P.; Sniegowski, Jeffry J.; Knapp, James A.; Redmond, James M.; Michalske, Terry A.; Mayer, Thomas K.
2000-01-01
Due to extreme surface to volume ratios, adhesion and friction are critical properties for reliability of Microelectromechanical Systems (MEMS), but are not well understood. In this LDRD the authors established test structures, metrology and numerical modeling to conduct studies on adhesion and friction in MEMS. They then concentrated on measuring the effect of environment on MEMS adhesion. Polycrystalline silicon (polysilicon) is the primary material of interest in MEMS because of its integrated circuit process compatibility, low stress, high strength and conformal deposition nature. A plethora of useful micromachined device concepts have been demonstrated using Sandia National Laboratories' sophisticated in-house capabilities. One drawback to polysilicon is that in air the surface oxidizes, is high energy and is hydrophilic (i.e., it wets easily). This can lead to catastrophic failure because surface forces can cause MEMS parts that are brought into contact to adhere rather than perform their intended function. A fundamental concern is how environmental constituents such as water will affect adhesion energies in MEMS. The authors first demonstrated an accurate method to measure adhesion as reported in Chapter 1. In Chapter 2 through 5, they then studied the effect of water on adhesion depending on the surface condition (hydrophilic or hydrophobic). As described in Chapter 2, they find that adhesion energy of hydrophilic MEMS surfaces is high and increases exponentially with relative humidity (RH). Surface roughness is the controlling mechanism for this relationship. Adhesion can be reduced by several orders of magnitude by silane coupling agents applied via solution processing. They decrease the surface energy and render the surface hydrophobic (i.e. does not wet easily). However, only a molecular monolayer coats the surface. In Chapters 3-5 the authors map out the extent to which the monolayer reduces adhesion versus RH. They find that adhesion is
The stability of fundamental constants
The tests of the constancy of fundamental constants are tests of the local position invariance and thus of the equivalence principle, at the heart of general relativity. After summarising the links between fundamental constants, gravity, cosmology and metrology, a brief overview of the observational and experimental constraints on their variation is proposed. (authors)
Fundamental properties of short-lived subatomic particles
Ceci, S; Osmanović, H; Percan, A; Zauner, B
2016-01-01
Two distinct sets of properties are used to describe short-lived particles: the pole and the Breit-Wigner parameters. There is an ongoing decades-old debate on which of them is fundamental. All resonances, from excited hydrogen nuclei hit by ultra-high-energy gamma rays in deep space to new particles produced in the Large Hadron Collider, should be described by the same fundamental physical quantities. In this study of nucleon resonances we discover an intricate interplay of the parameters from both sets, and realize that neither set is fundamental on its own.
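One common convention relating the two parameter sets assumes a simple relativistic Breit-Wigner amplitude, A(s) = 1/(M² − s − iMΓ), whose pole sits at s = M² − iMΓ. The sketch below uses that naive form (not the unitarized amplitudes a nucleon-resonance analysis like this one actually requires), with numbers loosely inspired by the Δ(1232):

```python
import cmath

def pole_from_breit_wigner(mass, width):
    """Pole position implied by a plain relativistic Breit-Wigner amplitude
    A(s) = 1 / (M^2 - s - i*M*Gamma). The pole is at s = M^2 - i*M*Gamma;
    sqrt(s_pole) gives the pole mass (real part) and the pole width
    (-2 times the imaginary part)."""
    s_pole = complex(mass ** 2, -mass * width)
    w = cmath.sqrt(s_pole)
    return w.real, -2 * w.imag

# Hypothetical Breit-Wigner parameters in GeV
print(pole_from_breit_wigner(1.232, 0.117))
```

Even in this toy mapping the two sets do not coincide, which is the root of the debate the abstract describes.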
The Fundamental Scale of Descriptions
Febres, Gerardo
2014-01-01
The complexity of a system description is a function of the entropy of its symbolic description. Prior to computing the entropy of the system description, an observation scale has to be assumed. In natural language texts, typical scales are binary, characters, and words. However, considering languages as structures built around a certain preconceived set of symbols, like words or characters, is only a presumption. This study depicts the notion of the Description Fundamental Scale as a set of symbols which serves to analyze the essence of a language structure. The concept of Fundamental Scale is tested using English and MIDI music texts by means of an algorithm developed to search for a set of symbols which minimizes the observed entropy of the system, and therefore best expresses the fundamental scale of the language employed. Test results show that it is possible to find the Fundamental Scale of some languages. The concept of Fundamental Scale, and the method for its determination, emerges as an interesting tool to fac...
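The role of the observation scale can be sketched by computing the per-symbol entropy of the same text under two preconceived segmentations. Note this only compares fixed scales (characters and words); the paper's algorithm instead searches for the symbol set that minimizes the observed entropy:

```python
import math
from collections import Counter

def entropy(symbols):
    """Shannon entropy in bits per symbol of a sequence, for whatever
    segmentation into symbols the caller has chosen."""
    counts = Counter(symbols)
    n = len(symbols)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

text = "the cat sat on the mat the cat sat"
print(entropy(list(text)))      # character scale
print(entropy(text.split()))    # word scale
```

The two numbers differ because the scale changes both the alphabet and the symbol statistics, which is precisely why a "fundamental" scale has to be searched for rather than assumed.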
Accurate pose estimation for forensic identification
Merckx, Gert; Hermans, Jeroen; Vandermeulen, Dirk
2010-04-01
In forensic authentication, one aims to identify the perpetrator among a series of suspects or distractors. A fundamental problem in any recognition system that aims for identification of subjects in a natural scene is the lack of constraints on viewing and imaging conditions. In forensic applications, identification proves even more challenging, since most surveillance footage is of abysmal quality. In this context, robust methods for pose estimation are paramount. In this paper we therefore present a new pose estimation strategy for very low quality footage. Our approach uses 3D-2D registration of a textured 3D face model with the surveillance image to obtain accurate far-field pose alignment. Starting from an inaccurate initial estimate, the technique uses novel similarity measures based on the monogenic signal to guide a pose optimization process. We illustrate the descriptive strength of the introduced similarity measures by using them directly as a recognition metric. Through validation, using both real and synthetic surveillance footage, our pose estimation method is shown to be accurate, and robust to lighting changes and image degradation.
Towards accurate emergency response behavior
Nuclear reactor operator emergency response behavior has persisted as a training problem through lack of information. The industry needs an accurate definition of operator behavior in adverse stress conditions, and training methods which will produce the desired behavior. Newly assembled information from fifty years of research into human behavior in both high and low stress provides a more accurate definition of appropriate operator response, and supports training methods which will produce the needed control room behavior. The research indicates that operator response in emergencies is divided into two modes, conditioned behavior and knowledge based behavior. Methods which assure accurate conditioned behavior, and provide for the recovery of knowledge based behavior, are described in detail
How accurately can we calculate thermal systems?
The objective was to determine how accurately simple reactor lattice integral parameters can be determined, considering user input, differences in the methods, source data and the data processing procedures and assumptions. Three simple square lattice test cases with different fuel to moderator ratios were defined. The effect of the thermal scattering models were shown to be important and much bigger than the spread in the results. Nevertheless, differences of up to 0.4% in the K-eff calculated by continuous energy Monte Carlo codes were observed even when the same source data were used. (author)
Accurate determination of antenna directivity
Dich, Mikael
1997-01-01
The derivation of a formula for accurate estimation of the total radiated power from a transmitting antenna for which the radiated power density is known in a finite number of points on the far-field sphere is presented. The main application of the formula is determination of directivity from power...
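The abstract above stops short of the formula itself, but the underlying computation is standard: integrate the sampled radiation intensity over the far-field sphere to obtain the total radiated power, then form D = 4π U_max / P_rad. A minimal numerical sketch follows; the grid layout, quadrature rule and test pattern are assumptions, not the paper's closed-form formula.

```python
import numpy as np

def directivity_from_samples(U, theta, phi):
    """Estimate directivity from radiation intensity U(theta, phi) [W/sr]
    sampled on a regular grid: theta in [0, pi], phi periodic in [0, 2*pi).

    U : 2-D array of shape (len(theta), len(phi))
    """
    dtheta = theta[1] - theta[0]
    dphi = phi[1] - phi[0]
    w = np.ones_like(theta)
    w[0] = w[-1] = 0.5                      # trapezoid weights along theta
    # Total radiated power: P = integral of U * sin(theta) dtheta dphi.
    # The phi grid has no duplicated endpoint, so a plain sum is accurate there.
    P_rad = np.sum(U * (w * np.sin(theta))[:, None]) * dtheta * dphi
    # Directivity relative to an isotropic radiator
    return 4.0 * np.pi * U.max() / P_rad

# Sanity check against a known pattern: U ~ sin^2(theta) has directivity 1.5
theta = np.linspace(0.0, np.pi, 181)
phi = np.linspace(0.0, 2.0 * np.pi, 360, endpoint=False)
U = np.sin(theta)[:, None] ** 2 * np.ones(len(phi))
print(round(directivity_from_samples(U, theta, phi), 3))  # -> 1.5
```

The finite number of sample points on the sphere is exactly the setting the abstract describes; accuracy then hinges on the quadrature rule, which is where the paper's derived formula improves on a naive sum.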
Fundamental number theory with applications
Mollin, Richard A
2008-01-01
An update of the most accessible introductory number theory text available, Fundamental Number Theory with Applications, Second Edition presents a mathematically rigorous yet easy-to-follow treatment of the fundamentals and applications of the subject. The substantial amount of reorganizing makes this edition clearer and more elementary in its coverage. New to the Second Edition Removal of all advanced material to be even more accessible in scope New fundamental material, including partition theory, generating functions, and combinatorial number theory Expa
Fundamentals of technology project management
Garton, Colleen
2012-01-01
Designed to provide software engineers, students, and IT professionals with an understanding of the fundamentals of project management in the technology/IT field, this book serves as a practical introduction to the subject. Updated with information on how Fundamentals of Project Management integrates with and complements Project Management Institute''s Project Management Body of Knowledge, this collection explains fundamental methodologies and techniques while also discussing new technology, tools, and virtual work environments. Examples and case studies are based on technology projects, and t
Uniform Zariski's Theorem On Fundamental Groups
Kaliman, S
1997-01-01
The Zariski theorem says that for every hypersurface in a complex projective (resp. affine) space of dimension at least 3 and for every generic plane in the projective (resp. affine) space the natural embedding generates an isomorphism of the fundamental groups of the complements to the hypersurface in the plane and in the space. If a family of hypersurfaces depends algebraically on parameters then it is not true in general that there exists a plane such that the natural embedding generates isomorphisms of the fundamental groups of the complements to each hypersurface from this family in the plane and in the space. But we show that in the affine case such a plane exists after a polynomial coordinate substitution.
Ablative Thermal Protection System Fundamentals
Beck, Robin A. S.
2013-01-01
This is the presentation for a short course on the fundamentals of ablative thermal protection systems. It covers the definition of ablation, description of ablative materials, how they work, how to analyze them and how to model them.
Fundamental principles of particle detectors
This paper goes through the fundamental physics of particle-matter interactions that is necessary for the detection of these particles with detectors. A listing of 41 concepts and detector principles is given. 14 refs., 11 figs.
Fundamental Strings as Noncommutative Solitons
Larsen, Finn
2000-01-01
The interpretation of closed fundamental strings as solitons in open string field theory is reviewed. Noncommutativity is introduced to facilitate an explicit construction. The tension is computed exactly and the correct spectrum is recovered at long wave length.
Fundamental approach to discrete mathematics
Acharjya, DP
2005-01-01
Salient Features Mathematical logic, fundamental concepts, proofs and mathematical induction (Chapter 1) Set theory, fundamental concepts, theorems, proofs, Venn diagrams, product of sets, application of set theory and fundamental products (Chapter 2) An introduction to binary relations and concepts, graphs, arrow diagrams, relation matrix, composition of relations, types of relation, partial order relations, total order relation, closure of relations, poset, equivalence classes and partitions. (Chapter 3) An introduction to functions and basic concepts, graphs, composition of functions, floor and ceiling function, characteristic function, remainder function, signum function and introduction to hash function. (Chapter 4) The algebraic structure includes group theory and ring theory. Group theory includes group, subgroups, cyclic group, cosets, homomorphism, introduction to codes and group codes and error correction for block code. The ring theory includes general definition, fundamental concepts, integra...
Fundamental particles and their interactions
Ananthanarayan, B.
2005-01-01
In this article the current understanding of fundamental particles and their interactions is presented for the interested non-specialist, by adopting a semi-historical path. A discussion on the unresolved problems is also presented.
Quantum mechanics I the fundamentals
Rajasekar, S
2015-01-01
Quantum Mechanics I: The Fundamentals provides a graduate-level account of the behavior of matter and energy at the molecular, atomic, nuclear, and sub-nuclear levels. It covers basic concepts, mathematical formalism, and applications to physically important systems.
Testing for Non-Fundamentalness
Hamidi Sahneh, Mehdi
2016-01-01
Non-fundamentalness arises when observed variables do not contain enough information to recover structural shocks. This paper proposes a new test to empirically detect non-fundamentalness, which is robust to conditional heteroskedasticity of unknown form, does not need information outside of the specified model, and can be accomplished with a standard F-test. A Monte Carlo study based on a DSGE model is conducted to examine the finite-sample performance of the test. I apply the prop...
Conjugated polyelectrolytes fundamentals and applications
Liu, Bin
2013-01-01
This is the first monograph to specifically focus on fundamentals and applications of polyelectrolytes, a class of molecules that gained substantial interest due to their unique combination of properties. Combining both features of organic semiconductors and polyelectrolytes, they offer a broad field for fundamental research as well as applications to analytical chemistry, optical imaging, and opto-electronic devices. The initial chapters introduce readers to the synthesis, optical and electrical properties of various conjugated polyelectrolytes. This is followed by chapters on the applica
Fundamentals of electronic image processing
Weeks, Arthur R
1996-01-01
This book is directed to practicing engineers and scientists who need to understand the fundamentals of image processing theory and algorithms to perform their technical tasks. It is intended to fill the gap between existing high-level texts dedicated to specialists in the field and the need for a more practical, fundamental text on image processing. A variety of example images are used to enhance reader understanding of how particular image processing algorithms work.
Fundamental units: physics and metrology
Okun, L. B.
2003-01-01
The problem of fundamental units is discussed in the context of achievements of both theoretical physics and modern metrology. On one hand, due to the fascinating accuracy of atomic clocks, the traditional macroscopic standards of metrology (second, metre, kilogram) are giving way to standards based on fundamental units of nature: the velocity of light $c$ and the quantum of action $h$. On the other hand, the poor precision of the gravitational constant $G$, which is widely believed to define the ``cube of t...
Puzzarini, Cristina [Dipartimento di Chimica "Giacomo Ciamician," Università di Bologna, Via Selmi 2, I-40126 Bologna (Italy); Ali, Ashraf [NASA Goddard Space Flight Center, Greenbelt, MD 20771 (United States)]; Biczysko, Malgorzata; Barone, Vincenzo, E-mail: cristina.puzzarini@unibo.it [Scuola Normale Superiore, Piazza dei Cavalieri 7, I-56126 Pisa (Italy)]
2014-09-10
An accurate spectroscopic characterization of protonated oxirane has been carried out by means of state-of-the-art computational methods and approaches. The calculated spectroscopic parameters from our recent computational investigation of oxirane together with the corresponding experimental data available were used to assess the accuracy of our predicted rotational and IR spectra of protonated oxirane. We found an accuracy of about 10 cm⁻¹ for vibrational transitions (fundamentals as well as overtones and combination bands) and, in relative terms, of 0.1% for rotational transitions. We are therefore confident that the spectroscopic data provided herein are a valuable support for the detection of protonated oxirane not only in Titan's atmosphere but also in the interstellar medium.
Accurate calibration of stereo cameras for machine vision
Li, Liangfu; Feng, Zuren; Feng, Yuanjing
2004-01-01
Camera calibration is an important task for machine vision, whose goal is to obtain the internal and external parameters of each camera. With these parameters, the 3D positions of a scene point, which is identified and matched in two stereo images, can be determined by the triangulation theory. This paper presents a new accurate estimation of CCD camera parameters for machine vision. We present a fast technique to estimate the camera center with special arrangement of calibration target and t...
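The "triangulation theory" the abstract invokes can be illustrated with the standard linear (DLT) triangulation of a matched point pair. This is a generic sketch, not the paper's estimation technique, and the camera matrices and point below are made up purely for the round-trip check.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one scene point seen in two views.

    P1, P2 : 3x4 camera projection matrices (intrinsics @ extrinsics)
    x1, x2 : matched pixel coordinates (u, v) in each image
    Returns the 3-D point in the world frame.
    """
    # Each view gives two linear constraints on the homogeneous point X:
    #   u * (p3 . X) - (p1 . X) = 0,   v * (p3 . X) - (p2 . X) = 0
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # Solve A X = 0: the right singular vector of the smallest singular value
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

# Toy check: two calibrated cameras with a 1 m baseline observe (1, 2, 10)
K = np.diag([800.0, 800.0, 1.0])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])               # at origin
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0], [0]])])   # shifted
X_true = np.array([1.0, 2.0, 10.0])
x1 = P1 @ np.append(X_true, 1); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1); x2 = x2[:2] / x2[2]
print(np.round(triangulate(P1, P2, x1, x2), 6))  # -> [ 1.  2. 10.]
```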
We have analyzed the double-lined eclipsing binary system OGLE-LMC-CEP-1812 in the LMC and demonstrate that it contains a classical fundamental mode Cepheid pulsating with a period of 1.31 days. The secondary star is a stable giant. We derive the dynamical masses for both stars with an accuracy of 1.5%, making the Cepheid in this system the second classical Cepheid with a very accurate dynamical mass determination, following the OGLE-LMC-CEP-0227 system studied by Pietrzyński et al. The measured dynamical mass agrees very well with that predicted by pulsation models. We also derive the radii of both components and accurate orbital parameters for the binary system. This new, very accurate dynamical mass for a classical Cepheid will greatly contribute to the solution of the Cepheid mass discrepancy problem, and to our understanding of the structure and evolution of classical Cepheids.
Fundamental physics in particle traps
Quint, Wolfgang; Vogel, Manuel (eds.) [GSI Helmholtz-Zentrum fuer Schwerionenforschung, Darmstadt (Germany)
2014-03-01
The individual topics are covered by leading experts in the respective fields of research. Provides readers with present theory and experiments in this field. A useful reference for researchers. This volume provides detailed insight into the field of precision spectroscopy and fundamental physics with particles confined in traps. It comprises experiments with electrons and positrons, protons and antiprotons, antimatter and highly charged ions, together with corresponding theoretical background. Such investigations represent stringent tests of quantum electrodynamics and the Standard model, antiparticle and antimatter research, test of fundamental symmetries, constants, and their possible variations with time and space. They are key to various aspects within metrology such as mass measurements and time standards, as well as promising to further developments in quantum information processing. The reader obtains a valuable source of information suited for beginners and experts with an interest in fundamental studies using particle traps.
Fundamental physics in particle traps
Vogel, Manuel
2014-01-01
This volume provides detailed insight into the field of precision spectroscopy and fundamental physics with particles confined in traps. It comprises experiments with electrons and positrons, protons and antiprotons, antimatter and highly charged ions, together with corresponding theoretical background. Such investigations represent stringent tests of quantum electrodynamics and the Standard model, antiparticle and antimatter research, test of fundamental symmetries, constants, and their possible variations with time and space. They are key to various aspects within metrology such as mass measurements and time standards, as well as promising to further developments in quantum information processing. The reader obtains a valuable source of information suited for beginners and experts with an interest in fundamental studies using particle traps.
RFID design fundamentals and applications
Lozano-Nieto, Albert
2010-01-01
RFID is an increasingly pervasive tool that is now used in a wide range of fields. It is employed to substantiate adherence to food preservation and safety standards, combat the circulation of counterfeit pharmaceuticals, and verify the authenticity and history of critical parts used in aircraft and other machinery, and these are just a few of its uses. Going beyond deployment and focusing on exactly how RFID actually works, RFID Design Fundamentals and Applications systematically explores the fundamental principles involved in the design and characterization of RFID technologies. The RFID market is expl
Fundamental Composite (Goldstone) Higgs Dynamics
Cacciapaglia, G.; Sannino, Francesco
2014-01-01
We provide a unified description, both at the effective and fundamental Lagrangian level, of models of composite Higgs dynamics where the Higgs itself can emerge, depending on the way the electroweak symmetry is embedded, either as a pseudo-Goldstone boson or as a massive excitation... transforming according to the fundamental representation of the gauge group. This minimal choice enables us to use recent first-principles lattice results to make the first predictions for the massive spectrum for models of composite (Goldstone) Higgs dynamics. These results are of the utmost relevance to guide...
THE FUNDAMENTS OF EXPLANATORY CAUSES
Lavinia Mihaela VLĂDILĂ
2015-01-01
The new Criminal Code introduces into legal life the division of the causes removing the criminal feature of the offence into explanatory causes and non-attributable causes. This dichotomy is not without legal and factual fundaments and has been subjected to doctrinaire debates even since the period when the Criminal Code of 1969 was still in force. From our perspective, one of the possible legal fundaments of the explanatory causes results from the fact that the offence committed is based on the prot...
Fundamental Research and Developing Countries
Narison, Stéphan
2002-01-01
In the first part of this report, I discuss the sociological role of fundamental research in Developing Countries (DC) and how to realize this program. In the second part, I give a brief and elementary introduction to the field of high-energy physics (HEP), accessible to a large audience, not necessarily physicists. The aim of this report is to make politicians and financial backers aware of the long-term usefulness of fundamental research in DC and of the possible globalisation of HEP and, in general, of science.
Fundamental approach to discrete mathematics
Acharjya, DP
2009-01-01
About the Book: The book `Fundamental Approach to Discrete Mathematics` is a required part of pursuing a computer science degree at most universities. It provides in-depth knowledge to the subject for beginners and stimulates further interest in the topic. The salient features of this book include: Strong coverage of key topics involving recurrence relation, combinatorics, Boolean algebra, graph theory and fuzzy set theory. Algorithms and examples integrated throughout the book to bring clarity to the fundamental concepts. Each concept and definition is followed by thoughtful examples.
Image restoration fundamentals and advances
Gunturk, Bahadir Kursat
2012-01-01
Image Restoration: Fundamentals and Advances responds to the need to update most existing references on the subject, many of which were published decades ago. Providing a broad overview of image restoration, this book explores breakthroughs in related algorithm development and their role in supporting real-world applications associated with various scientific and engineering fields. These include astronomical imaging, photo editing, and medical imaging, to name just a few. The book examines how such advances can also lead to novel insights into the fundamental properties of image sources. Addr
The fundamentals of mathematical analysis
Fikhtengol'ts, G M
1965-01-01
The Fundamentals of Mathematical Analysis, Volume 1 is a textbook that provides a systematic and rigorous treatment of the fundamentals of mathematical analysis. Emphasis is placed on the concept of limit which plays a principal role in mathematical analysis. Examples of the application of mathematical analysis to geometry, mechanics, physics, and engineering are given. This volume is comprised of 14 chapters and begins with a discussion on real numbers, their properties and applications, and arithmetical operations over real numbers. The reader is then introduced to the concept of function, i
Wang, Liqiang; Liu, Zhen; Zhang, Zhonghua
2014-11-01
Stereo vision is key to visual measurement, robot vision, and autonomous navigation. Before a stereo vision system can be used, the intrinsic parameters of each camera and the external parameters of the system must be calibrated. In engineering practice, the intrinsic parameters remain unchanged after the cameras are calibrated, but the positional relationship between the cameras can change because of vibration, knocks, and pressure in the vicinity of railways or motor workshops. Especially for large baselines, even minute changes in translation or rotation can affect the epipolar geometry and scene triangulation to such a degree that the visual system becomes disabled. A technology providing both real-time examination and on-line recalibration of the external parameters of a stereo system therefore becomes particularly important. This paper presents an on-line method for checking and recalibrating the positional relationship between stereo cameras. In epipolar geometry, the external parameters of the cameras can be obtained by factorization of the fundamental matrix; this offers a way to calculate the external camera parameters without any special targets. If the intrinsic camera parameters are known, the external parameters of the system can be calculated from a number of random matched points. The process is: (i) estimating the fundamental matrix from the feature point correspondences; (ii) computing the essential matrix from the fundamental matrix; (iii) obtaining the external parameters by decomposition of the essential matrix. In the step of computing the fundamental matrix, traditional methods are sensitive to noise and cannot ensure estimation accuracy. We consider the feature distribution in the actual scene images and introduce a regional weighted normalization algorithm to improve the accuracy of the fundamental matrix estimation. In contrast to traditional algorithms, experiments on simulated data prove that the method improves estimation accuracy
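Steps (ii) and (iii) of the pipeline described above are standard epipolar geometry and can be sketched as follows. This is the textbook SVD decomposition of the essential matrix, not the paper's regional weighted estimator; the intrinsics and pose in the round-trip check are made up.

```python
import numpy as np

def candidate_poses_from_fundamental(F, K1, K2):
    """Steps (ii)-(iii): essential matrix from the fundamental matrix,
    then the four candidate (R, t) decompositions of E = [t]_x R.
    In practice the correct pair is selected by triangulating one match
    and keeping the pose with positive depth in both views (cheirality).
    """
    E = K2.T @ F @ K1                       # step (ii)
    U, _, Vt = np.linalg.svd(E)             # step (iii)
    if np.linalg.det(U) < 0:                # force proper rotations
        U = -U
    if np.linalg.det(Vt) < 0:
        Vt = -Vt
    W = np.array([[0.0, -1, 0], [1, 0, 0], [0, 0, 1]])
    t = U[:, 2]                             # translation up to scale and sign
    return [(U @ W @ Vt, t), (U @ W @ Vt, -t),
            (U @ W.T @ Vt, t), (U @ W.T @ Vt, -t)]

def skew(v):
    return np.array([[0, -v[2], v[1]], [v[2], 0, -v[0]], [-v[1], v[0], 0.0]])

# Round-trip check: build F from a known pose and recover the rotation
K = np.diag([700.0, 700.0, 1.0])
a = 0.1
R_true = np.array([[np.cos(a), 0, np.sin(a)],
                   [0, 1, 0],
                   [-np.sin(a), 0, np.cos(a)]])
t_true = np.array([1.0, 0.0, 0.0])
F = np.linalg.inv(K).T @ (skew(t_true) @ R_true) @ np.linalg.inv(K)
poses = candidate_poses_from_fundamental(F, K, K)
print(any(np.allclose(R, R_true, atol=1e-6) for R, _ in poses))  # -> True
```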
Different Variants of Fundamental Portfolio
Tarczyński Waldemar
2014-06-01
The paper proposes the fundamental portfolio of securities. This portfolio is an alternative to the classic Markowitz model, combining fundamental analysis with portfolio analysis. The method's main idea is based on the use of the TMAI synthetic measure and, in the limiting conditions, the use of risk and the portfolio's rate of return in the objective function. Different variants of the fundamental portfolio have been considered in an empirical study. The effectiveness of the proposed solutions has been compared to that of the classic portfolio constructed with the help of the Markowitz model and to the rate of return of the WIG20 market index. All portfolios were constructed with data on rates of return for 2005. Their effectiveness in 2006-2013 was then evaluated. The studied period comprises the end of the bull market, the 2007-2009 crisis, the 2010 bull market and the 2011 crisis. This allows for the evaluation of the solutions' flexibility in various extreme situations. For the construction of the fundamental portfolio's objective function and the TMAI, the study made use of financial and economic data on selected indicators retrieved from Notoria Serwis for 2005.
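The idea of putting a fundamental score in the objective and the classic risk/return quantities in the constraints can be sketched as a small linear program. Everything below is illustrative: the TMAI scores, returns, and the exact limiting conditions are invented stand-ins, not the paper's specification.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical inputs: TMAI synthetic scores and expected returns per asset
tmai = np.array([0.62, 0.48, 0.55, 0.40])
mu = np.array([0.08, 0.12, 0.05, 0.10])
r_min, w_max = 0.07, 0.5   # assumed minimum return and per-asset cap

# Maximize sum(w * tmai) subject to: fully invested, mu @ w >= r_min,
# and 0 <= w_i <= w_max (a crude stand-in for the risk condition).
res = linprog(
    c=-tmai,                               # linprog minimizes, so negate
    A_ub=[-mu], b_ub=[-r_min],             # return constraint
    A_eq=[np.ones_like(tmai)], b_eq=[1.0], # weights sum to one
    bounds=[(0.0, w_max)] * len(tmai),
)
print(np.round(res.x, 3), round(float(mu @ res.x), 3))
```

With these numbers the optimizer loads up the high-TMAI assets until the return constraint binds; swapping in real TMAI values and the paper's actual limiting conditions is a direct substitution.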
Fundamental Cycles of Cognitive Growth.
Pegg, John
Over recent years, various theories have arisen to explain and predict cognitive development in mathematics education. We focus on an underlying theme that recurs throughout such theories: a fundamental cycle of growth in the learning of specific concepts, which we frame within broader global theories of individual cognitive growth. Our purpose is…
Political Management of Islamic Fundamentalism
Alam, Anwar
2007-01-01
This article attempts to explain why and how the Indian state has been successful in managing the militant form of Islamic fundamentalism in India, despite favourable internal and external conditions for such militancy. Internally, this includes such factors as the relative material and cultural deprivation of Indian Muslims, and the context of Hindutva and the communal riots; externally, the Islami...
Environmental Law: Fundamentals for Schools.
Day, David R.
This booklet outlines the environmental problems most likely to arise in schools. An overview provides a fundamental analysis of environmental issues rather than comprehensive analysis and advice. The text examines the concerns that surround superfund cleanups, focusing on the legal framework, and furnishes some practical pointers, such as what to…
Composing Europe's Fundamental Rights Area
Storgaard, Louise Halleskov
2015-01-01
The article offers a perspective on how the objective of a strong and coherent European protection standard pursued by the fundamental rights amendments of the Lisbon Treaty can be achieved, as it proposes a discursive pluralistic framework to understand and guide the relationship between the EU...
Fundamentals: IVC and computer science
Gozalvez, Javier; Haerri, Jerome; Hartenstein, Hannes; Heijenk, Geert; Kargl, Frank; Petit, Jonathan; Scheuermann, Björn; Tieler, Tessa; Altintas, O.; Dressler, F.; Hartenstein, H.; Tonguz, O.K.
2013-01-01
The working group on “Fundamentals: IVC and Computer Science” discussed the lasting value of achieved research results as well as potential future directions in the field of inter-vehicular communication. Two major themes ‘with variations’ were the dependence on a specific technology (particularly
Fundamental Concepts in Modern Analysis
Hansen, Vagn Lundsgaard
an opportunity to go into some depth with fundamental notions from mathematical analysis that are not only important from a mathematical point of view but also occur frequently in the more theoretical parts of the engineering sciences. The book should also appeal to university students in mathematics...
Fundamental composite (Goldstone) Higgs dynamics
We provide a unified description, both at the effective and fundamental Lagrangian level, of models of composite Higgs dynamics where the Higgs itself can emerge, depending on the way the electroweak symmetry is embedded, either as a pseudo-Goldstone boson or as a massive excitation of the condensate. We show that, in general, these states mix with repercussions on the electroweak physics and phenomenology. Our results will help clarify the main differences, similarities, benefits and shortcomings of the different ways one can naturally realize a composite nature of the electroweak sector of the Standard Model. We will analyze the minimal underlying realization in terms of fundamental strongly coupled gauge theories supporting the flavor symmetry breaking pattern SU(4)/Sp(4)∼SO(6)/SO(5). The most minimal fundamental description consists of an SU(2) gauge theory with two Dirac fermions transforming according to the fundamental representation of the gauge group. This minimal choice enables us to use recent first principle lattice results to make the first predictions for the massive spectrum for models of composite (Goldstone) Higgs dynamics. These results are of the utmost relevance to guide searches of new physics at the Large Hadron Collider
Biological Computing Fundamentals and Futures
Akula, Balaji
2009-01-01
The fields of computing and biology have begun to cross paths in new ways. In this paper a review of the current research in biological computing is presented. Fundamental concepts are introduced and these foundational elements are explored to discuss the possibilities of a new computing paradigm. We assume the reader to possess a basic knowledge of Biology and Computer Science.
Experimental tests of fundamental symmetries
Jungmann, K. P.
2014-01-01
Ongoing experiments and projects to test our understanding of fundamental interactions and symmetries in nature have progressed significantly in the past few years. At high energies the long searched for Higgs boson has been found; tests of gravity for antimatter have come closer to reality; Loren
Lighting Fundamentals. Monograph Number 13.
Locatis, Craig N.; Gerlach, Vernon S.
Using an accompanying, specified film that consists of 10-second pictures separated by blanks, the learner can, with the 203-step, self-correcting questions and answers provided in this program, come to understand the fundamentals of lighting in photography. The learner should, by the end of the program, be able to describe and identify the…
Experiments in Fundamental Neutron Physics
Nico, J. S.; Snow, W. M.
2006-01-01
Experiments using slow neutrons address a growing range of scientific issues spanning nuclear physics, particle physics, astrophysics, and cosmology. The field of fundamental physics using neutrons has experienced a significant increase in activity over the last two decades. This review summarizes some of the recent developments in the field and outlines some of the prospects for future research.
Fundamentals of Welding. Teacher Edition.
Fortney, Clarence; And Others
These instructional materials assist teachers in improving instruction on the fundamentals of welding. The following introductory information is included: use of this publication; competency profile; instructional/task analysis; related academic and workplace skills list; tools, materials, and equipment list; and 27 references. Seven units of…
Analytical Study on Fundamental Frequency Contours of Thai Expressive Speech Using Fujisaki's Model
Suphattharachai Chomphan
2010-01-01
Problem statement: In spontaneous speech communication, prosody is an important factor that must be taken into account, since prosody affects not only the naturalness but also the intelligibility of speech. Focusing on the synthesis of Thai expressive speech, a number of systems have been developed over the years. However, expressive speech with various speaking styles has not been accomplished. To achieve the generation of expressive speech, we need to model the fundamental frequency (F0) contours accurately to preserve the speech prosody. Approach: This study therefore proposes an analysis of model parameters for Thai speech prosody with three speaking styles and two genders, as a preliminary work for speech synthesis. Fujisaki's model, a powerful tool for modeling the F0 contour, has been adopted, while the speaking styles of happiness, sadness and reading have been considered. The seven parameters derived from Fujisaki's model are as follows. The first parameter is the baseline frequency, which is the lowest level of the F0 contour. The second and third parameters are the numbers of phrase commands and tone commands, which reflect the frequencies of surges of the utterance at the global and local levels, respectively. The fourth and fifth parameters are the phrase command and tone command durations, which reflect the speed of speaking and the length of a syllable, respectively. The sixth and seventh parameters are the amplitudes of the phrase command and tone command, which reflect the energy of the global speech and the energy of the local syllable. Results: In the experiments, each speaking style includes 200 samples of one sentence with male and female speech. Our speech database therefore contains 1200 utterances in total. The results show that most of the proposed parameters can distinguish the three kinds of speaking styles explicitly. Conclusion: From this finding, there is strong evidence to further apply the successful parameters in speech synthesis systems or
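The parameters listed above combine generatively in Fujisaki's model: ln F0(t) is the baseline plus phrase components (impulse responses of a second-order system) plus tone components (clipped step responses). A sketch with the conventional time constants follows; the command values are illustrative, not fitted Thai data.

```python
import numpy as np

ALPHA, BETA, GAMMA = 3.0, 20.0, 0.9   # conventional Fujisaki constants

def phrase_component(t):
    """Impulse response of the phrase control mechanism, Gp(t)."""
    return np.where(t >= 0, ALPHA**2 * t * np.exp(-ALPHA * t), 0.0)

def accent_component(t):
    """Step response of the tone/accent control mechanism, Gt(t)."""
    g = 1.0 - (1.0 + BETA * t) * np.exp(-BETA * t)
    return np.where(t >= 0, np.minimum(g, GAMMA), 0.0)

def fujisaki_f0(t, fb, phrases, tones):
    """ln F0(t) = ln Fb + phrase commands + tone commands.

    phrases : list of (T0, Ap)      onset time and amplitude
    tones   : list of (T1, T2, At)  onset, offset and amplitude
    """
    ln_f0 = np.full_like(t, np.log(fb))
    for T0, Ap in phrases:
        ln_f0 += Ap * phrase_component(t - T0)
    for T1, T2, At in tones:
        ln_f0 += At * (accent_component(t - T1) - accent_component(t - T2))
    return np.exp(ln_f0)

t = np.linspace(0.0, 2.0, 200)
f0 = fujisaki_f0(t, fb=120.0, phrases=[(0.0, 0.5)], tones=[(0.3, 0.8, 0.4)])
print(round(float(f0.min()), 1), round(float(f0.max()), 1))
```

Fitting the model to a measured contour then amounts to estimating exactly the seven quantities the abstract enumerates: baseline frequency, command counts, durations, and amplitudes.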
Equivalent method for accurate solution to linear interval equations
王冲; 邱志平
2013-01-01
Based on linear interval equations, an accurate interval finite element method for solving structural static problems with uncertain parameters in terms of optimization is discussed. On the premise of ensuring the consistency of solution sets, the original interval equations are equivalently transformed into deterministic inequalities. On this basis, calculating the structural displacement response with interval parameters is reduced to a number of deterministic linear optimization problems. The results are proved to be accurate with respect to the interval governing equations. Finally, a numerical example is given to demonstrate the feasibility and efficiency of the proposed method.
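The flavor of the problem can be shown on a toy system. This is an illustrative vertex-enumeration bound for a 2-DOF spring chain with interval stiffnesses, not the paper's equivalent-inequality transformation; vertex enumeration gives the exact displacement bounds only when the response is monotonic in each parameter, which holds here.

```python
import itertools
import numpy as np

# Interval stiffnesses [N/mm] and a load at the free end [N] (made-up data)
k1_iv, k2_iv = (90.0, 110.0), (45.0, 55.0)
f = np.array([0.0, 10.0])

lo = np.full(2, np.inf)
hi = np.full(2, -np.inf)
# Solve the deterministic system at every vertex of the parameter box
for k1, k2 in itertools.product(k1_iv, k2_iv):
    K = np.array([[k1 + k2, -k2],
                  [-k2,      k2]])   # stiffness matrix of the 2-DOF chain
    u = np.linalg.solve(K, f)
    lo, hi = np.minimum(lo, u), np.maximum(hi, u)

# Analytically u1 = 10/k1 and u2 = 10/k1 + 10/k2, so the bounds are
# lo ~ [0.0909, 0.2727] and hi ~ [0.1111, 0.3333]
print(np.round(lo, 4), np.round(hi, 4))
```

The paper's contribution is to get such guaranteed bounds without exponential vertex enumeration, by recasting the interval equations as deterministic linear optimization problems.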
Accurate estimation of indoor travel times
Prentow, Thor Siiger; Blunck, Henrik; Stisen, Allan;
2014-01-01
the InTraTime method for accurately estimating indoor travel times via mining of historical and real-time indoor position traces. The method learns during operation both travel routes, travel times and their respective likelihood, both for routes traveled as well as for sub-routes thereof. InTraTime... allows to specify temporal and other query parameters, such as time-of-day, day-of-week or the identity of the traveling individual. As input the method is designed to take generic position traces and is thus interoperable with a variety of indoor positioning systems. The method's advantages include... a minimal-effort setup and self-improving operations due to unsupervised learning, as it is able to adapt implicitly to factors influencing indoor travel times such as elevators, rotating doors or changes in building layout. We evaluate and compare the proposed InTraTime method to indoor adaptions...
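The core idea, learning per-route travel-time distributions from traces and answering queries conditioned on parameters such as time-of-day, can be sketched minimally. The actual InTraTime method also learns sub-routes and likelihoods; the class, route names and fallback rule below are simplifying assumptions.

```python
from collections import defaultdict
from statistics import median

class TravelTimeModel:
    """Bucket observed travel times by (route, hour) and answer with medians."""

    def __init__(self):
        self.obs = defaultdict(list)

    def record(self, route, hour, seconds):
        self.obs[(route, hour)].append(seconds)

    def estimate(self, route, hour):
        times = self.obs.get((route, hour))
        if not times:  # no data for this hour: fall back to all hours
            times = [s for (r, _), ts in self.obs.items() if r == route
                     for s in ts]
        return median(times) if times else None

m = TravelTimeModel()
for s in (95, 110, 100):
    m.record(("lobby", "lab-2"), hour=9, seconds=s)
m.record(("lobby", "lab-2"), hour=14, seconds=70)
print(m.estimate(("lobby", "lab-2"), hour=9))   # -> 100
print(m.estimate(("lobby", "lab-2"), hour=11))  # -> 97.5 (fallback median)
```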
Toward Accurate and Quantitative Comparative Metagenomics.
Nayfach, Stephen; Pollard, Katherine S
2016-08-25
Shotgun metagenomics and computational analysis are used to compare the taxonomic and functional profiles of microbial communities. Leveraging this approach to understand roles of microbes in human biology and other environments requires quantitative data summaries whose values are comparable across samples and studies. Comparability is currently hampered by the use of abundance statistics that do not estimate a meaningful parameter of the microbial community and biases introduced by experimental protocols and data-cleaning approaches. Addressing these challenges, along with improving study design, data access, metadata standardization, and analysis tools, will enable accurate comparative metagenomics. We envision a future in which microbiome studies are replicable and new metagenomes are easily and rapidly integrated with existing data. Only then can the potential of metagenomics for predictive ecological modeling, well-powered association studies, and effective microbiome medicine be fully realized. PMID:27565341
Reassessing The Fundamentals: New Constraints on the Evolution, Ages and Masses of Neutron Stars
Kiziltan, Bulent
2011-01-01
The ages and masses of neutron stars (NSs) are two fundamental threads that make pulsars accessible to other sub-disciplines of astronomy and physics. A realistic and accurate determination of these two derived parameters plays an important role in understanding advanced stages of stellar evolution and the physics that governs the relevant processes. Here I summarize new constraints on the ages and masses of NSs with an evolutionary perspective. I show that the observed P-Pdot demographics are more diverse than what is theoretically predicted for the standard evolutionary channel. In particular, standard recycling followed by dipole spin-down fails to reproduce the population of millisecond pulsars with higher magnetic fields (B > 4 x 10^{8} G) at rates deduced from observations. A proper inclusion of constraints arising from binary evolution and mass accretion offers a more realistic insight into the age distribution. By analytically implementing these constraints, I propose a "modified" spin-down age for millis...
Accurate ab initio spin densities
Boguslawski, Katharina; Legeza, Örs; Reiher, Markus
2012-01-01
We present an approach for the calculation of spin density distributions for molecules that require very large active spaces for a qualitatively correct description of their electronic structure. Our approach is based on the density-matrix renormalization group (DMRG) algorithm to calculate the spin density matrix elements as basic quantity for the spatially resolved spin density distribution. The spin density matrix elements are directly determined from the second-quantized elementary operators optimized by the DMRG algorithm. As an analytic convergence criterion for the spin density distribution, we employ our recently developed sampling-reconstruction scheme [J. Chem. Phys. 2011, 134, 224101] to build an accurate complete-active-space configuration-interaction (CASCI) wave function from the optimized matrix product states. The spin density matrix elements can then also be determined as an expectation value employing the reconstructed wave function expansion. Furthermore, the explicit reconstruction of a CA...
Accurate thickness measurement of graphene
Shearer, Cameron J.; Slattery, Ashley D.; Stapleton, Andrew J.; Shapter, Joseph G.; Gibson, Christopher T.
2016-03-01
Graphene has emerged as a material with a vast variety of applications. The electronic, optical and mechanical properties of graphene are strongly influenced by the number of layers present in a sample. As a result, the dimensional characterization of graphene films is crucial, especially with the continued development of new synthesis methods and applications. A number of techniques exist to determine the thickness of graphene films including optical contrast, Raman scattering and scanning probe microscopy techniques. Atomic force microscopy (AFM), in particular, is used extensively since it provides three-dimensional images that enable the measurement of the lateral dimensions of graphene films as well as the thickness, and by extension the number of layers present. However, in the literature AFM has proven to be inaccurate with a wide range of measured values for single layer graphene thickness reported (between 0.4 and 1.7 nm). This discrepancy has been attributed to tip-surface interactions, image feedback settings and surface chemistry. In this work, we use standard and carbon nanotube modified AFM probes and a relatively new AFM imaging mode known as PeakForce tapping mode to establish a protocol that will allow users to accurately determine the thickness of graphene films. In particular, the error in measuring the first layer is reduced from 0.1-1.3 nm to 0.1-0.3 nm. Furthermore, in the process we establish that the graphene-substrate adsorbate layer and imaging force, in particular the pressure the tip exerts on the surface, are crucial components in the accurate measurement of graphene using AFM. These findings can be applied to other 2D materials.
Fundamental Rotorcraft Acoustic Modeling From Experiments (FRAME)
Greenwood, Eric
2011-01-01
A new methodology is developed for the construction of helicopter source noise models, for use in mission planning tools, from experimental measurements of helicopter external noise radiation. The models are constructed by applying a parameter identification method to an assumed analytical model of the rotor harmonic noise sources. This new method allows for the identification of individual rotor harmonic noise sources and allows them to be characterized in terms of their individual non-dimensional governing parameters. The method is applied to both wind tunnel measurements and ground noise measurements of two-bladed rotors. The method is shown to match the parametric trends of main rotor harmonic noise, allowing accurate estimates of the dominant rotorcraft noise sources to be made for operating conditions based on a small number of measurements taken at different operating conditions. The ability of this method to estimate changes in noise radiation due to changes in ambient conditions is also demonstrated.
Elaine Gomes Fiore
2012-12-01
Full Text Available Food and Nutrition Security (SAN) must be ensured to all. The school is an environment conducive to the formation of healthy habits and the construction of citizenship. The National Curriculum Parameters (PCNs) guide the promotion of health concepts in a transversal way across the school curriculum. This study aimed to identify and analyze the approach to food and nutrition topics in the teaching materials of elementary education and their interface with the concept of SAN and with the PCNs. Documentary research was conducted on the teaching materials for the 5th to 8th grades of elementary education in the public school system of the State of São Paulo. The diffuse presence of food and nutrition topics in most subjects, across all bimesters, in the four grades, brings interdisciplinarity in health to the fore. It was found that the PCNs are related to the concept of SAN in its various aspects and that most subjects contain topics that address this relationship. At the interface between the topics, health promotion and food production stand out. The methodology used in the teaching materials presents the topic but not the related content, which made it impossible to analyze its adequacy. It is concluded that food and nutrition topics are addressed in the teaching materials, some inconsistently, and it falls to educators to select the appropriate content and strategy, in addition to constantly updating them; this is being proposed by the State but is not within reach of all professionals and therefore still depends on each teacher's initiative.
Fundamental neutron physics at LANSCE
Greene, G.
1995-10-01
Modern neutron sources and science share a common origin in mid-20th-century scientific investigations concerned with the study of the fundamental interactions between elementary particles. Since the time of that common origin, neutron science and the study of elementary particles have evolved into quite disparate disciplines. The neutron became recognized as a powerful tool for studying condensed matter with modern neutron sources being primarily used (and justified) as tools for neutron scattering and materials science research. The study of elementary particles has, of course, led to the development of rather different tools and is now dominated by activities performed at extremely high energies. Notwithstanding this trend, the study of fundamental interactions using neutrons has continued and remains a vigorous activity at many contemporary neutron sources. This research, like neutron scattering research, has benefited enormously by the development of modern high-flux neutron facilities. Future sources, particularly high-power spallation sources, offer exciting possibilities for continuing this research.
Modern measurements fundamentals and applications
Petri, D; Carbone, P; Catelani, M
2015-01-01
This book explores the modern role of measurement science both in the technically most advanced applications and in everyday life, and will help readers gain the necessary skills to specialize their knowledge for a specific field in measurement. Modern Measurements is divided into two parts. Part I (Fundamentals) presents a model of the modern measurement activity and the already recalled fundamental bricks. It starts with a general description that introduces these bricks and the uncertainty concept. The next chapters provide an overview of these bricks and finish (Chapter 7) with a more general and complex model that encompasses both traditional (hard) measurements and (soft) measurements, aimed at quantifying non-physical concepts, such as quality, satisfaction, comfort, etc. Part II (Applications) is aimed at showing how the concepts presented in Part I can be usefully applied to design and implement measurements in some very important and broad fields. The editors cover System Identification (Chapter 8...
THE FUNDAMENTS OF EXPLANATORY CAUSES
Lavinia Mihaela VLĂDILĂ
2015-07-01
Full Text Available The new Criminal Code has brought into the sphere of legal life the division of the causes removing the criminal feature of the offence into explanatory causes and non-attributable causes. This dichotomy is not without legal and factual fundaments and has been subject to doctrinaire debates ever since the period when the Criminal Code of 1969 was still in force. From our perspective, one of the possible legal fundaments of the explanatory causes results from the fact that the offence committed is based on the protection of a right at least equal to the one prejudiced by the action of aggression or salvation, by the legal obligation imposed, or by the victim's consent.
DOE Fundamentals Handbook: Classical Physics
The Classical Physics Fundamentals Handbook was developed to assist nuclear facility operating contractors provide operators, maintenance personnel, and the technical staff with the necessary fundamentals training to ensure a basic understanding of physical forces and their properties. The handbook includes information on the units used to measure physical properties; vectors, and how they are used to show the net effect of various forces; Newton's Laws of motion, and how to use these laws in force and motion applications; and the concepts of energy, work, and power, and how to measure and calculate the energy involved in various applications. This information will provide personnel with a foundation for understanding the basic operation of various types of DOE nuclear facility systems and equipment
Fundamentals of condensed matter physics
Cohen, Marvin L
2016-01-01
Based on an established course and covering the fundamentals, central areas, and contemporary topics of this diverse field, Fundamentals of Condensed Matter Physics is a much-needed textbook for graduate students. The book begins with an introduction to the modern conceptual models of a solid from the points of view of interacting atoms and elementary excitations. It then provides students with a thorough grounding in electronic structure as a starting point to understand many properties of condensed matter systems - electronic, structural, vibrational, thermal, optical, transport, magnetic and superconductivity - and methods to calculate them. Taking readers through the concepts and techniques, the text gives both theoretically and experimentally inclined students the knowledge needed for research and teaching careers in this field. It features 200 illustrations, 40 worked examples and 150 homework problems for students to test their understanding. Solutions to the problems for instructors are available at w...
Fundamental research in developing countries
Technical assistance is today a widespread activity. Large numbers of persons with special qualifications in the applied sciences go to the developing countries to work on specific research and development projects, as do educationists on Fulbright or other programmes - usually to teach elementary or intermediate courses. But I believe that until now it has been rare for a person primarily interested in fundamental research to go to one of these countries to help build up advanced education and pure research work. Having recently returned from such an assignment, and having found it a most stimulating and enlightening experience, I feel moved to urge strongly upon others who may be in a position to do so that they should seek similar experience themselves. The first step is to show that advanced education and fundamental research are badly needed in the under-developed countries.
Fundamental Complexity Measures of Life
Grandpierre, Attila
2012-01-01
At present, there is a great deal of confusion regarding complexity and its measures (reviews on complexity measures are found in, e.g. Lloyd, 2001 and Shalizi, 2006 and more references therein). Moreover, there is also confusion regarding the nature of life. In this situation, it seems the task of determining the fundamental complexity measures of life is especially difficult. Yet this task is just part of a greater task: obtaining substantial insights into the nature of biological evolution. We think that without a firm quantitative basis characterizing the most fundamental aspects of life, it is impossible to overcome the confusion so as to clarify the nature of biological evolution. The approach we present here offers such quantitative measures of complexity characterizing biological organization and, as we will see, evolution.
Fundamental indexation for bond markets
Marielle de Jong; Hongwen Wu
2014-01-01
Purpose – The purpose of this paper is to build alternative indices weighted by a measure of fundamental value rather than debt size. The official bond indices built to reflect general price trends are market-weighted, meaning that the bonds are weighted by their debt size. The more indebted the issuer, the more weight in the index, which mechanically increases the inherent investment risk. Those market indices have been shown in recent studies to be return-to-risk inefficient compared to indice...
Fundamentals: IVC and computer science
Gozalvez, Javier; Haerri, Jerome; Hartenstein, Hannes; Heijenk, Geert; Kargl, Frank; Petit, Jonathan; Scheuermann, Björn; Tieler, Tessa; Altintas, O.; Dressler, F; Hartenstein, H.; Tonguz, O.K.
2013-01-01
The working group on “Fundamentals: IVC and Computer Science” discussed the lasting value of achieved research results as well as potential future directions in the field of inter-vehicular communication. Two major themes ‘with variations’ were the dependence on a specific technology (particularly the focus on IEEE 802.11p in the last decade) and the struggle with bringing self-organizing networks to deployment/market. The team started with a retrospective view and identified the following...
Bangladesh: Drifting into Islamic Fundamentalism?
Wolf, Siegfried O.
2013-01-01
Since 9/11 the world has regarded Pakistan and Afghanistan as the epicentre of Islamic fundamentalism. Many of the early observations dealt with the tremendous challenge that terrorism and religious-militant extremism would pose for peace and stability from a geopolitical perspective. Realising the increasingly complex scenarios as well as the causalities and impacts, analyses on the phenomenon under discussion were slowly but persistently broadening. In order to be able to address not only t...
Fundamentals of plastic optical fibers
Koike, Yasuhiro
2014-01-01
Polymer photonics is an interdisciplinary field which demands excellence in both optics (photonics) and materials science (polymers). However, these disciplines have developed independently, and therefore the demand for a comprehensive work featuring the fundamentals of photonic polymers is greater than ever. This volume focuses on polymer optical fibers and their applications. The first part of the book introduces typical optical fibers according to their classification by material, propagating mode, and structure. Optical properties, high-bandwidth POF and transmission loss are discussed,
The fundamental problem of accounting
Robert D. Cairns
2013-01-01
The fundamental problem of economic accounting is to determine a forward-looking schedule of rentals, user costs or quasi-rents to provide for the recovery of irreversible investments. The method derived herein relaxes some restrictive assumptions that are common in capital theory. There can be multiple forms of comprehensive capital. Accounting for all forms of capital, including tangible and intangible capital, is symmetrical. The analytical focus becomes one of fixities and frictions and not...
Early Cosmology and Fundamental Physics
De Vega, Hector
2003-01-01
Based on lectures at the 9th Chalonge School in Astrofundamental Physics, Palermo, September 2002, NATO ASI. To appear in the Proceedings, N. Sánchez and Yu. Parijskij, editors, Kluwer. This is a pedagogical introduction to early cosmology and the host of fundamental physics involved in it (particle physics, grand unification and general relativity). Inflation and the inflaton field are the central theme of this review. The quantum field treatment of the inflaton is presented including its o...
Fundamental Properties of Quaternion Spinors
Yefremov, Alexander P.
2012-01-01
The interior structure of arbitrary sets of quaternion units is analyzed using general methods of the theory of matrices. It is shown that the units are composed of quadratic combinations of fundamental objects having a dual mathematical meaning as spinor couples and dyads locally describing 2D surfaces. A detailed study of algebraic relationships between the spinor sets belonging to different quaternion units is suggested as an initial step aimed at producing a self-consistent geometric imag...
DOE fundamentals handbook: Material science
This handbook was developed to assist nuclear facility operating contractors in providing operators, maintenance personnel, and the technical staff with the necessary fundamentals training to ensure a basic understanding of the structure and properties of metals. This volume contains two modules: structure of metals (bonding, common lattice types, grain structure/boundary, polymorphism, alloys, imperfections in metals) and properties of metals (stress, strain, Young's modulus, stress-strain relation, physical properties, working of metals, corrosion, hydrogen embrittlement, tritium/material compatibility)
Fundamental requirements for petrochemical development
The development of NOVA Chemicals over the past 20 years is described as an illustration of how the petrochemical industry provides markets for natural gas, natural gas liquids and the products of crude oil distillation, and functions as a conduit for upgrading products which would otherwise be sold into the fuel market. Some fundamental characteristics of the business which are foundations for competitiveness are reviewed in the process. These fundamentals help to understand why the industry locates in certain geographic regions of the world, which are often remote from end-use markets. Chief among these fundamentals is access to an adequate supply of appropriately priced feedstock; this is the single most important reason why chemical companies continue to emphasize developments in areas of the world where feedstock are advantageously priced. The cost of operations is equally significant. Cost depends not so much on location but on the scale of operations, hence the tendency towards large scale plants. Plant and product rationalization, technology and product development synergies and leverage with suppliers are all opportunities for cost reduction throughout the product supply chain. The combination of lower natural gas cost in Alberta, the lower fixed cost of extraction and the economies of scale achieved by large scale operation (five billion pounds per year of polyethylene production capacity) are the crucial factors that will enable NOVA Chemicals to maintain its competitive position and to weather the highs and lows in industry price fluctuations
A More Accurate Fourier Transform
Courtney, Elya
2015-01-01
Fourier transform methods are used to analyze functions and data sets to provide frequencies, amplitudes, and phases of underlying oscillatory components. Fast Fourier transform (FFT) methods offer speed advantages over evaluation of explicit integrals (EI) that define Fourier transforms. This paper compares frequency, amplitude, and phase accuracy of the two methods for well resolved peaks over a wide array of data sets including cosine series with and without random noise and a variety of physical data sets, including atmospheric $\\mathrm{CO_2}$ concentrations, tides, temperatures, sound waveforms, and atomic spectra. The FFT uses MIT's FFTW3 library. The EI method uses the rectangle method to compute the areas under the curve via complex math. Results support the hypothesis that EI methods are more accurate than FFT methods. Errors range from 5 to 10 times higher when determining peak frequency by FFT, 1.4 to 60 times higher for peak amplitude, and 6 to 10 times higher for phase under a peak. The ability t...
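The paper's comparison can be reproduced in miniature: for a cosine whose frequency falls between FFT bins, the FFT peak snaps to the nearest bin, while a rectangle-rule explicit integral evaluated on a finer frequency grid localizes the peak more closely. The sketch below uses assumed signal parameters, not the paper's data sets or its FFTW3 implementation.

```python
import numpy as np

# Signal: a cosine whose frequency falls deliberately between FFT bins
fs = 100.0                      # sample rate (Hz); hypothetical
n = 1000                        # samples -> FFT bin spacing fs/n = 0.1 Hz
f_true = 10.237                 # off-bin frequency
t = np.arange(n) / fs
x = np.cos(2 * np.pi * f_true * t)

# FFT estimate: frequency of the largest non-DC bin
spec = np.abs(np.fft.rfft(x))
f_fft = np.fft.rfftfreq(n, d=1 / fs)[np.argmax(spec[1:]) + 1]

# Explicit-integral estimate: rectangle-rule Fourier integral on a finer grid
f_grid = np.arange(9.5, 11.0, 0.001)
dt = 1 / fs
amp = np.abs(np.array([(x * np.exp(-2j * np.pi * f * t)).sum() * dt
                       for f in f_grid]))
f_ei = f_grid[np.argmax(amp)]
```

The FFT estimate is limited to the 0.1 Hz bin grid, while the explicit integral can be evaluated at arbitrary frequencies, which is the mechanism behind the accuracy differences the abstract reports.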
Improved dynamic compensation for accurate cutting force measurements in milling applications
Scippa, A.; Sallese, L.; Grossi, N.; Campatelli, G.
2015-03-01
Accurate cutting-force measurements appear to be the key information in most of the machining related studies as they are fundamental in understanding the cutting processes, optimizing the cutting operations and evaluating the presence of instabilities that could affect the effectiveness of cutting processes. A variety of specifically designed transducers are commercially available nowadays and many different approaches in measuring cutting forces are presented in literature. The available transducers, though, express some limitations since they are conditioned by the vibration of the surrounding system and by the transducer's natural frequency. These parameters can drastically affect the measurement accuracy in some cases; hence an effective and accurate tool is required to compensate those dynamically induced errors in cutting force measurements. This work is aimed at developing and testing a compensation technique based on Kalman filter estimator. Two different approaches named "band-fitting" and "parallel elaboration" methods, have been developed to extend applications of this compensation technique, especially for milling purpose. The compensation filter has been designed upon the experimentally identified system's dynamic and its accuracy and effectiveness has been evaluated by numerical and experimental tests. Finally its specific application in cutting force measurements compensation is described.
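The core idea of Kalman-filter-based estimation can be shown with a deliberately minimal example: a scalar random-walk Kalman filter recovering a steady force from noisy samples. This is not the paper's "band-fitting" or "parallel elaboration" compensation, which acts on the experimentally identified dynamics of the force transducer; all numbers below are invented.

```python
import numpy as np

def scalar_kalman(measurements, q=1e-4, r=0.25, x0=0.0, p0=1.0):
    """Minimal random-walk Kalman filter: state = force, observed with noise variance r."""
    x, p = x0, p0
    estimates = []
    for z in measurements:
        p = p + q                 # predict: random-walk process noise
        k = p / (p + r)           # Kalman gain
        x = x + k * (z - x)       # update with the measurement residual
        p = (1 - k) * p
        estimates.append(x)
    return np.array(estimates)

rng = np.random.default_rng(0)
true_force = 50.0                 # hypothetical steady cutting force (N)
z = true_force + rng.normal(0.0, 0.5, size=2000)
est = scalar_kalman(z)
```

In the paper's setting the state and measurement models would instead encode the dynamometer's identified transfer function, so the filter removes dynamically induced errors rather than plain measurement noise.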
Retrieving Bulge and Disk Parameters and Asymptotic Magnitudes from the Growth Curves of Galaxies
Okamura, S; Shimasaku, K; Yagi, M; Weinberg, D H; Okamura, Sadanori; Yasuda, Naoki; Shimasaku, Kazuhiro; Yagi, Masafumi; Weinberg, David H.
1998-01-01
We show that the growth curves of galaxies can be used to determine their bulge and disk parameters and bulge-to-total luminosity ratios, in addition to their conventional asymptotic magnitudes, provided that the point spread function is accurately known and signal-to-noise ratio is modest (S/N$\\gtrsim30$). The growth curve is a fundamental quantity that most future large galaxy imaging surveys will measure. Bulge and disk parameters retrieved from the growth curve will enable us to perform statistical studies of luminosity structure for a large number of galaxies.
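To illustrate how structural parameters can be read off a growth curve, the sketch below builds a synthetic curve from two components and recovers the bulge-to-total ratio by a grid search. For simplicity both components are taken as exponential profiles (real bulges are usually modeled with a de Vaucouleurs or Sérsic law), and the scale lengths are invented.

```python
import numpy as np

def disk_growth(r, h):
    """Encircled luminosity fraction of an exponential profile with scale length h."""
    x = r / h
    return 1.0 - np.exp(-x) * (1.0 + x)

def model_growth(r, bt, h_bulge, h_disk):
    """Growth curve of a bulge+disk model; bt is the bulge-to-total ratio B/T."""
    return bt * disk_growth(r, h_bulge) + (1 - bt) * disk_growth(r, h_disk)

# Synthetic "observed" growth curve with B/T = 0.3 and assumed scale lengths
r = np.linspace(0.1, 20.0, 200)
obs = model_growth(r, bt=0.3, h_bulge=0.5, h_disk=3.0)

# Grid search for the B/T that best matches the observed curve
grid = np.linspace(0.0, 1.0, 1001)
resid = [np.sum((model_growth(r, bt, 0.5, 3.0) - obs) ** 2) for bt in grid]
bt_fit = grid[np.argmin(resid)]
```

In practice the model curve would also be convolved with the point spread function and fit jointly for the scale parameters, which is why the abstract stresses accurate PSF knowledge and modest signal-to-noise.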
Fast and Accurate Residential Fire Detection Using Wireless Sensor Networks
Bahrepour, Majid; Meratnia, Nirvana; Havinga, Paul J.M.
2010-01-01
Prompt and accurate residential fire detection is important for on-time fire extinguishing and consequently reducing damage and loss of life. To detect fire, sensors are needed to measure the environmental parameters, and algorithms are required to decide about the occurrence of fire. Recently, wireless s
Big Bang nucleosynthesis as a probe of varying fundamental 'constants'
We analyze the effect of variation of fundamental couplings and mass scales on primordial nucleosynthesis in a systematic way. The first step establishes the response of primordial element abundances to the variation of a large number of nuclear physics parameters, including nuclear binding energies. We find a strong influence of the n-p mass difference, of the nucleon mass and of A = 3,4,7 binding energies. A second step relates the nuclear parameters to the parameters of the Standard Model of particle physics. The deuterium, and, above all, 7Li abundances depend strongly on the average light quark mass. We calculate the behaviour of abundances when variations of fundamental parameters obey relations arising from grand unification. We also discuss the possibility of a substantial shift in the lithium abundance while the deuterium and 4He abundances are only weakly affected
The deformation-stability fundamental length and deviations from c
A fundamental length (or time) is conjectured in many contexts. The “stability of physical theories principle” provides an unambiguous derivation of the stable structures that Nature might have chosen for its algebraic framework. 1/c and ℏ are the deformation parameters that stabilize the Galilean and Poisson algebras. The stability principle applied to the Poincaré–Heisenberg algebra, yields two deformation parameters defining two length (or time) scales. One of the scales is probably related to Planck's length but the other might be much larger. This is used as working hypothesis to compute deviations from c in speed measurements of massless particles. -- Highlights: ► Fundamental length as one of the deformation parameters of the Poincaré–Heisenberg algebra. ► Conjecture that one of the deformation parameters is much larger than Planck's length. ► Deviations from c in speed measurements of massless wave packets.
Fundamental symmetries and astrophysics with radioactive beams
A major new initiative at TRIUMF pertains to the use of radioactive beams for astrophysics and for fundamental symmetry experiments. Some recent work is described in which the β-decay, followed by alpha-particle emission, of 16N was used to find the resonance parameters dominating alpha-particle capture in 12C and thus to find the astrophysical S-factor of this reaction, which is of crucial importance for alpha-particle burning and the subsequent collapse of stars. In work underway, trapped neutral atoms of radioactive potassium will be used to study fundamental symmetries of the weak interactions. Trapping has been achieved, and soon 38mK decay will be used to search for evidence of scalar interactions and 37K decay to search for right-handed gauge-boson interactions. Future experiments are planned to look for parity non-conservation in trapped francium atoms. This program is part of a revitalization of the TRIUMF laboratory accompanied by the construction of the radioactive beam facility (ISAC). (author)
Numerical simulation of fundamental trapped sausage modes
Cécere, M; Reula, O
2011-01-01
Context: We integrate the 2D ideal MHD equations of a straight slab to simulate observational results associated with fundamental trapped sausage modes. Aims: Starting from a non-equilibrium state with a dense chromospheric layer, we analyse the evolution of the internal plasma dynamics of magnetic loops, subject to line-tying boundary conditions, and with the coronal parameters described in Asai et al. (2001) and Melnikov et al. (2002), to investigate the onset and damping of sausage modes. Methods: To integrate the equations we used a high-resolution shock-capturing (HRSC) method specially designed to deal appropriately with flow discontinuities. Results: Due to non-linearities and inhomogeneities, pure modes are difficult to sustain and always occur coupled with one another so as to satisfy, e.g., the line-tying constraint. We found that, in one case, the resonant coupling of the fundamental sausage mode with a slow one results in a non-dissipative damping of the former. Conclusions: In scenarios of thick and den...
Mammascintigraphy: fundamentals, protocols, clinical application
Radiopharmaceuticals, methods and results of clinical application of mammascintigraphy are reviewed. The main advantage of mammascintigraphy is its higher (at least 90%) specificity in comparison with X-ray mammography, possibility of using it in patients with X-ray dense gland, predicting cancer metastases and follow-up of tumor regression during chemotherapy. Mammascintigraphy is an effective reliable method of accurate diagnosis of breast cancer, which should be introduced in clinical practice of Russian breast cancer control centers as soon as possible
Accurate Parallel Algorithm for Adini Nonconforming Finite Element
罗平; 周爱辉
2003-01-01
Multi-parameter asymptotic expansions are interesting since they justify the use of multi-parameter extrapolation, which can be implemented in parallel and is well studied in many papers for conforming finite element methods. For nonconforming finite element methods, however, work on multi-parameter asymptotic expansions and extrapolation has seldom been found in the literature. This paper considers the solution of the biharmonic equation using Adini nonconforming finite elements and reports new results for multi-parameter asymptotic expansions and extrapolation. The Adini nonconforming finite element solution of the biharmonic equation is shown to have a multi-parameter asymptotic error expansion and extrapolation. This expansion and a multi-parameter extrapolation technique were used to develop an accurate parallel approximation algorithm for the biharmonic equation. Finally, numerical results have verified the extrapolation theory.
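The single-parameter case of such extrapolation is classical Richardson extrapolation: an approximation with error expansion c2*h^2 + c4*h^4 + ... evaluated at two step sizes can be combined to cancel the leading term. The sketch below uses central differences rather than finite elements, purely to illustrate the mechanism the multi-parameter theory generalizes.

```python
import numpy as np

def central_diff(f, x, h):
    """Second-order central difference; error expansion c2*h^2 + c4*h^4 + ..."""
    return (f(x + h) - f(x - h)) / (2 * h)

def richardson(f, x, h):
    """Combine step sizes h and h/2 to eliminate the h^2 error term."""
    d_h = central_diff(f, x, h)
    d_h2 = central_diff(f, x, h / 2)
    return (4 * d_h2 - d_h) / 3       # leaves an O(h^4) error

x0, h = 1.0, 0.1
exact = np.cos(x0)                    # derivative of sin at x0
err_plain = abs(central_diff(np.sin, x0, h) - exact)
err_extra = abs(richardson(np.sin, x0, h) - exact)
```

The multi-parameter version expands the error in several independent mesh parameters at once, so the cancellation can be done with solutions computed in parallel on differently refined meshes.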
Fundamentals of aircraft and rocket propulsion
El-Sayed, Ahmed F
2016-01-01
This book provides a comprehensive basics-to-advanced course in an aero-thermal science vital to the design of engines for either type of craft. The text classifies engines powering aircraft and single/multi-stage rockets, and derives performance parameters for both from basic aerodynamics and thermodynamics laws. Each type of engine is analyzed for optimum performance goals, and mission-appropriate engines selection is explained. Fundamentals of Aircraft and Rocket Propulsion provides information about and analyses of: thermodynamic cycles of shaft engines (piston, turboprop, turboshaft and propfan); jet engines (pulsejet, pulse detonation engine, ramjet, scramjet, turbojet and turbofan); chemical and non-chemical rocket engines; conceptual design of modular rocket engines (combustor, nozzle and turbopumps); and conceptual design of different modules of aero-engines in their design and off-design state. Aimed at graduate and final-year undergraduate students, this textbook provides a thorough grounding in th...
Fundamental Limits of Ultrathin Metasurfaces
Arbabi, Amir
2014-01-01
We present universal theoretical limits on the operation and performance of non-magnetic passive ultrathin metasurfaces. In particular, we prove that their local transmission, reflection, and polarization conversion coefficients are confined to limited regions of the complex plane. As a result, full control over the phase of the light transmitted through such metasurfaces cannot be achieved if the polarization of the light is not to be affected at the same time. We also establish fundamental limits on the maximum polarization conversion efficiency of these metasurfaces, and show that they cannot achieve more than 25% polarization conversion efficiency in transmission.
Fundamentals of liquid crystal devices
Yang, Deng-Ke
2014-01-01
Revised throughout to cover the latest developments in the fast moving area of display technology, this 2nd edition of Fundamentals of Liquid Crystal Devices, will continue to be a valuable resource for those wishing to understand the operation of liquid crystal displays. Significant updates include new material on display components, 3D LCDs and blue-phase displays which is one of the most promising new technologies within the field of displays and it is expected that this new LC-technology will reduce the response time and the number of optical components of LC-modules. Prof. Yang is a pion
Testing Fundamental Gravitation in Space
Turyshev, Slava G.
2013-10-15
General theory of relativity is a standard theory of gravitation; as such, it is used to describe gravity when the problems in astronomy, astrophysics, cosmology, and fundamental physics are concerned. The theory is also relied upon in many modern applications involving spacecraft navigation, geodesy, and time transfer. Here we review the foundations of general relativity and discuss its current empirical status. We describe both the theoretical motivation and the scientific progress that may result from the new generation of high-precision tests that are anticipated in the near future.
Fundamentals of soft matter science
Hirst, Linda S
2012-01-01
"The publication is written at a very fundamental level, which will make it easily readable for undergraduate students. It will certainly also be a valuable text for students and postgraduates in interdisciplinary programmes, as not only physical aspects, but also the chemistry and applications are presented and discussed. … The book is well illustrated, and I really do like the examples and pictures provided for simple demonstration experiments, which can be done during the lectures. Also, the experimental techniques chapter at the end of the book may be helpful. The question sections are he
Heterogeneous catalysis fundamentals and applications
Ross, Julian RH
2011-01-01
Heterogeneous catalysis plays a part in the production of more than 80% of all chemical products. It is therefore essential that all chemists and chemical engineers have an understanding of the fundamental principles as well as the applications of heterogeneous catalysts. This book introduces the subject, starting at a basic level, and includes sections on adsorption and surface science, catalytic kinetics, experimental methods for preparing and studying heterogeneous catalysts, as well as some aspects of the design of industrial catalytic reactors. It ends with a chapter that covers a range
Fundamentals of ultrasonic phased arrays
Schmerr, Lester W
2014-01-01
This book describes in detail the physical and mathematical foundations of ultrasonic phased array measurements.?The book uses linear systems theory to develop a comprehensive model of the signals and images that can be formed with phased arrays. Engineers working in the field of ultrasonic nondestructive evaluation (NDE) will find in this approach a wealth of information on how to design, optimize and interpret ultrasonic inspections with phased arrays. The fundamentals and models described in the book will also be of significant interest to other fields, including the medical ultrasound and
Foam engineering fundamentals and applications
2012-01-01
Containing contributions from leading academic and industrial researchers, this book provides a much needed update of foam science research. The first section of the book presents an accessible summary of the theory and fundamentals of foams. This includes chapters on morphology, drainage, Ostwald ripening, coalescence, rheology, and pneumatic foams. The second section demonstrates how this theory is used in a wide range of industrial applications, including foam fractionation, froth flotation and foam mitigation. It includes chapters on suprafroths, flotation of oil sands, foams in enhancing petroleum recovery, Gas-liquid Mass Transfer in foam, foams in glass manufacturing, fire-fighting foam technology and consumer product foams.
Pleitez, V
1999-01-01
One of the main activities in science teaching, and in Physics teaching in particular, is the discussion not only of modern problems but also of problems whose solution is an urgent matter. This means that the picture of an active and living science should be transmitted to students, especially college students. A central point here is the question of what characterizes the Fundamental Laws of Nature. In this work we emphasize that laws of this sort may exist in areas different from those usually considered. In this type of discussion it is neither possible nor desirable to avoid the historical perspective of scientific development.
Fundamental Laser Welding Process Investigations
Bagger, Claus; Olsen, Flemming Ove
1998-01-01
In a number of systematic laboratory investigations the fundamental behavior of the laser welding process was analyzed by the use of normal video (30 Hz), high speed video (100 and 400 Hz) and photo diodes. Sensors were positioned to monitor the welding process from both the top side and the rear side of the specimen. Special attention has been given to the dynamic nature of the laser welding process, especially during unstable welding conditions. In one series of experiments, the stability of the process has been varied by changing the gap distance in lap welding. In another series...
Computing fundamentals digital literacy edition
Wempen, Faithe
2014-01-01
Computing Fundamentals has been tailor made to help you get up to speed on your Computing Basics and help you get proficient in entry level computing skills. Covering all the key topics, it starts at the beginning and takes you through basic set-up so that you'll be competent on a computer in no time.You'll cover: Computer Basics & HardwareSoftwareIntroduction to Windows 7Microsoft OfficeWord processing with Microsoft Word 2010Creating Spreadsheets with Microsoft ExcelCreating Presentation Graphics with PowerPointConnectivity and CommunicationWeb BasicsNetwork and Internet Privacy and Securit
Computing fundamentals introduction to computers
Wempen, Faithe
2014-01-01
The absolute beginner's guide to learning basic computer skills Computing Fundamentals, Introduction to Computers gets you up to speed on basic computing skills, showing you everything you need to know to conquer entry-level computing courses. Written by a Microsoft Office Master Instructor, this useful guide walks you step-by-step through the most important concepts and skills you need to be proficient on the computer, using nontechnical, easy-to-understand language. You'll start at the very beginning, getting acquainted with the actual, physical machine, then progress through the most common
Fundamentals of magnetism and electricity
Arya, SN
2009-01-01
Fundamentals of Magnetism and Electricity is a textbook on the physics of electricity, magnetism, and electromagnetic fields and waves. It is written mainly with the physics student in mind, although it will also be of use to students of electrical and electronic engineering. The approach is concise but clear, and the author has assumed that the reader will be familiar with the basic phenomena. The theory, however, is set out in a completely self-contained and coherent way and developed to the point where the reader can appreciate the beauty and coherence of the Maxwell equations.
Communication technology update and fundamentals
Grant, August E
2014-01-01
A classic now in its 14th edition, Communication Technology Update and Fundamentals is the single best resource for students and professionals looking to brush up on how these technologies have developed, grown, and converged, as well as what's in store for the future. It begins by developing the communication technology framework-the history, ecosystem, and structure-then delves into each type of technology, including everything from mass media, to computers and consumer electronics, to networking technologies. Each chapter is written by faculty and industry experts who p
Photovoltaics fundamentals, technology and practice
Mertens, Konrad
2013-01-01
Concise introduction to the basic principles of solar energy, photovoltaic systems, photovoltaic cells, photovoltaic measurement techniques, and grid connected systems, overviewing the potential of photovoltaic electricity for students and engineers new to the topic After a brief introduction to the topic of photovoltaics' history and the most important facts, Chapter 1 presents the subject of radiation, covering properties of solar radiation, radiation offer, and world energy consumption. Chapter 2 looks at the fundamentals of semiconductor physics. It discusses the build-up of semiconducto
Fundamental triangulation networks in Denmark
Borre, Kai
2014-01-01
Academy of Sciences and Letters initiated a mapping project which was to be based on the principle of triangulation. Eventually 24 maps were printed in varying scales, predominantly 1:120 000. The last map was engraved in 1842. The Danish Grade Measurement initiated remeasurements and a redesign of the fundamental triangulation network. This network served scientific as well as cartographic purposes for more than a century. Only in the 1960s were all triangulation sides measured electronically. A combined least-squares adjustment followed in the 1970s...
Fundamentals of gas particle flow
Rudinger, G
1980-01-01
Fundamentals of Gas-Particle Flow is an edited, updated, and expanded version of a number of lectures presented on the "Gas-Solid Suspensions" course organized by the von Karman Institute for Fluid Dynamics. Materials presented in this book are mostly analytical in nature, but some experimental techniques are included. The book focuses on relaxation processes, including the viscous drag of single particles, drag in gas-particle flow, gas-particle heat transfer, equilibrium, and frozen flow. It also discusses the dynamics of single particles, such as particles in an arbitrary flow, in a r
Fundamental Properties of Quaternion Spinors
Yefremov, Alexander P
2012-01-01
The interior structure of arbitrary sets of quaternion units is analyzed using general methods of the theory of matrices. It is shown that the units are composed of quadratic combinations of fundamental objects having a dual mathematical meaning as spinor couples and dyads locally describing 2D surfaces. A detailed study of algebraic relationships between the spinor sets belonging to different quaternion units is suggested as an initial step aimed at producing a self-consistent geometric image of spinor-surface distribution on the physical 3D space background.
Fundamentals of spread spectrum modulation
Ziemer, Rodger E
2007-01-01
This lecture covers the fundamentals of spread spectrum modulation, which can be defined as any modulation technique that requires a transmission bandwidth much greater than the modulating signal bandwidth, independently of the bandwidth of the modulating signal. After reviewing basic digital modulation techniques, the principal forms of spread spectrum modulation are described. One of the most important components of a spread spectrum system is the spreading code, and several types and their characteristics are described. The most essential operation required at the receiver in a spread spect
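As a toy companion to the spreading-code discussion (a minimal sketch under invented parameters, not code from the lecture): direct-sequence spreading XORs each data bit against a faster pseudo-noise (PN) chip sequence, so occupied bandwidth grows by the number of chips per bit. The 7-chip code below is illustrative only, not a standardized m-sequence.

```python
PN = [1, 0, 1, 1, 0, 0, 1]  # illustrative 7-chip code (assumed, not from the lecture)

def spread(bits, pn=PN):
    """Replace every data bit with the PN code XORed by that bit."""
    return [b ^ c for b in bits for c in pn]

def despread(chips, pn=PN):
    """Recover bits by correlating each chip block against the PN code."""
    n, out = len(pn), []
    for i in range(0, len(chips), n):
        agree = sum(x == c for x, c in zip(chips[i:i + n], pn))
        out.append(0 if agree > n // 2 else 1)  # majority vote tolerates chip errors
    return out

data = [1, 0, 1, 1]
chips = spread(data)        # 4 bits -> 28 chips: a processing gain of 7
recovered = despread(chips)
```

The majority-vote correlation at the receiver is what makes the scheme robust to isolated chip errors, which hints at why the choice of spreading code matters so much in practice.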
Quantum Uncertainty and Fundamental Interactions
Tosto S.
2013-04-01
The paper proposes a simplified theoretical approach to infer some essential concepts on the fundamental interactions between charged particles and their relative strengths at comparable energies by exploiting the quantum uncertainty only. The worth of the present approach relies on the way of obtaining the results, rather than on the results themselves: concepts today acknowledged as fingerprints of the electroweak and strong interactions appear indeed rooted in the same theoretical frame, which also includes the basic principles of special and general relativity along with the gravity force.
DOE fundamentals handbook: Material science
This handbook was developed to assist nuclear facility operating contractors in providing operators, maintenance personnel, and the technical staff with the necessary fundamentals training to ensure a basic understanding of the structure and properties of metals. This volume contains the following modules: thermal shock (thermal stress, pressurized thermal shock), brittle fracture (mechanism, minimum pressurization-temperature curves, heatup/cooldown rate limits), and plant materials (properties considered when selecting materials, fuel materials, cladding and reflectors, control materials, nuclear reactor core problems, plant material problems, atomic displacement due to irradiation, thermal and displacement spikes due to irradiation, neutron capture effect, radiation effects in organic compounds, reactor use of aluminum)
Autodesk Combustion 4 fundamentals courseware
Autodesk,
2005-01-01
Whether this is your first experience with Combustion software or you're upgrading to take advantage of the many new features and tools, this guide will serve as your ultimate resource to this all-in-one professional compositing application. Much more than a point-and-click manual, this guide explains the principles behind the software, serving as an overview of the package and associated techniques. Written by certified Autodesk training specialists for motion graphic designers, animators, and visual effects artists, Combustion 4 Fundamentals Courseware provides expert advice for all skill le
Fundamental Frequency and Model Order Estimation Using Spatial Filtering
Karimian-Azari, Sam; Jensen, Jesper Rindom; Christensen, Mads Græsbøll
parameters. In this paper, we present an estimation procedure for harmonic-structured signals in situations with strong interference using spatial filtering, or beamforming. We jointly estimate the fundamental frequency and the constrained model order through the output of the beamformers. Besides that, we...
Developing a Competency-based Fundamentals of Management Communication Course.
Murranka, Patricia A.; Lynch, David
1999-01-01
Describes steps the authors went through to develop an innovative course in fundamentals of management communication that derives from competency-based instruction. Describes how they reviewed administrative and course parameters; identified instructional modules and generated competencies for each one; and created objectives, achievement levels,…
Towards an accurate bioimpedance identification
Sanchez, B.; Louarroudi, E.; Bragos, R.; Pintelon, R.
2013-04-01
This paper describes the local polynomial method (LPM) for estimating the time-invariant bioimpedance frequency response function (FRF), considering both the output-error (OE) and the errors-in-variables (EIV) identification frameworks, and compares it with the traditional cross- and autocorrelation spectral analysis techniques. The bioimpedance FRF is measured with the multisine electrical impedance spectroscopy (EIS) technique. To show the overwhelming accuracy of the LPM approach, both the LPM and the classical cross- and autocorrelation spectral analysis techniques are evaluated on the same experimental data coming from a nonsteady-state measurement of time-varying in vivo myocardial tissue. The estimated error sources at the measurement frequencies due to noise, σnZ, and the stochastic nonlinear distortions, σZNL, have been converted to Ω and plotted over the bioimpedance spectrum for each framework. Ultimately, the impedance spectra have been fitted to a Cole impedance model using both an unweighted and a weighted complex nonlinear least squares (CNLS) algorithm. A table is provided with the relative standard errors on the estimated parameters to reveal which system identification framework should be used.
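To make the final fitting step concrete (a minimal sketch, not the paper's LPM/CNLS code): the single-dispersion Cole model is Z(w) = Rinf + (R0 - Rinf) / (1 + (jw·tau)^alpha). The sketch below fits a synthetic, noise-free spectrum by a naive grid search over tau alone, with all other parameters held at their (invented) true values; a real CNLS fit would adjust all four parameters and weight residuals by the measurement variance.

```python
import math

def cole(w, R0, Rinf, tau, alpha):
    """Single-dispersion Cole impedance at angular frequency w."""
    return Rinf + (R0 - Rinf) / (1 + (1j * w * tau) ** alpha)

# Illustrative parameter values, not from the paper's tissue data
R0, Rinf, tau_true, alpha = 100.0, 20.0, 1e-3, 0.8
freqs = [10 ** (k / 4) for k in range(25)]               # ~1 Hz to 1 MHz
meas = [cole(2 * math.pi * f, R0, Rinf, tau_true, alpha) for f in freqs]

def sse(tau):
    """Unweighted complex least-squares cost at a candidate tau."""
    return sum(abs(cole(2 * math.pi * f, R0, Rinf, tau, alpha) - z) ** 2
               for f, z in zip(freqs, meas))

grid = [10 ** (-4 + k / 50) for k in range(101)]         # tau from 1e-4 to 1e-2
tau_hat = min(grid, key=sse)
```

Working with the complex residual directly, as above, is the defining trait of a complex nonlinear least squares fit: magnitude and phase errors are penalized jointly rather than separately.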
Accurate Development of Thermal Neutron Scattering Cross Section Libraries
Hawari, Ayman; Dunn, Michael
2014-06-10
The objective of this project is to develop a holistic (fundamental and accurate) approach for generating thermal neutron scattering cross section libraries for a collection of important neutron moderators and reflectors. The primary components of this approach are the physical accuracy and completeness of the generated data libraries. Consequently, for the first time, thermal neutron scattering cross section data libraries will be generated that are based on accurate theoretical models, that are carefully benchmarked against experimental and computational data, and that contain complete covariance information that can be used in propagating the data uncertainties through the various components of the nuclear design and execution process. To achieve this objective, computational and experimental investigations will be performed on a carefully selected subset of materials that play a key role in all stages of the nuclear fuel cycle.
38 CFR 4.46 - Accurate measurement.
2010-07-01
38 CFR, Pensions, Bonuses, and Veterans' Relief; Schedule for Rating Disabilities; Disability Ratings; The Musculoskeletal System; § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...
Encoding voice fundamental frequency into vibrotactile frequency.
Rothenberg, M; Molitor, R D
1979-10-01
Measured in this study was the ability of eight hearing and five deaf subjects to identify the stress pattern in a short sentence from the variation in voice fundamental frequency (F0), when presented aurally (for hearing subjects) and when transformed into vibrotactile pulse frequency. Various transformations from F0 to pulse frequency were tested in an attempt to determine an optimum transformation, the amount of F0 information that could be transmitted, and what the limitations in the tactile channel might be. The results indicated that a one- or two-octave reduction of F0 to vibrotactile frequency (transmitting every second or third glottal pulse) might result in a significant ability to discriminate the intonation patterns associated with moderate-to-strong patterns of sentence stress in English. However, accurate reception of the details of the intonation pattern may require a slower than normal pronunciation because of an apparent temporal indeterminacy of about 200 ms in the perception of variations in vibrotactile frequency. A performance deficit noted for the two prelingually, profoundly deaf subjects with marginally discriminable encodings offers some support for our previous hypothesis that there is a natural association between auditory pitch and perceived vibrotactile frequency. PMID:159917
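The octave-reduction transformation the abstract describes can be sketched in a few lines (a hedged illustration with invented F0 values, not the study's actual encoder): an n-octave reduction divides the voice fundamental frequency by 2**n, e.g. presenting roughly every second glottal pulse as a tactile pulse for a one-octave reduction.

```python
def reduce_octaves(f0_hz, octaves=1):
    """Map a voice F0 value (Hz) down by the given number of octaves."""
    return f0_hz / (2 ** octaves)

# Illustrative F0 contour in Hz (assumed values, not from the study)
contour = [220.0, 247.0, 262.0, 196.0]
tactile = [reduce_octaves(f, octaves=2) for f in contour]  # two-octave reduction
```

Dividing rather than offsetting preserves the *ratios* between F0 values, so the shape of the intonation contour survives the mapping even though the absolute rates fall into the vibrotactile range.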
Fundamental studies of polymer filtration
Smith, B.F.; Lu, M.T.; Robison, T.W.; Rogers, Y.C.; Wilson, K.V.
1998-12-31
This is the final report of a one-year, Laboratory Directed Research and Development (LDRD) project at Los Alamos National Laboratory (LANL). The objectives of this project were (1) to develop an enhanced fundamental understanding of the coordination chemistry of hazardous-metal-ion complexation with water-soluble metal-binding polymers, and (2) to exploit this knowledge to develop improved separations for analytical methods, metals processing, and waste treatment. We investigated features of water-soluble metal-binding polymers that affect their binding constants and selectivity for selected transition metal ions. We evaluated backbone polymers using light scattering and ultrafiltration techniques to determine the effect of pH and ionic strength on the molecular volume of the polymers. The backbone polymers were incrementally functionalized with a metal-binding ligand. A procedure and analytical method to determine the absolute level of functionalization was developed and the results correlated with the elemental analysis, viscosity, and molecular size.
Materials Fundamentals of Gate Dielectrics
Demkov, Alexander A
2006-01-01
This book presents materials fundamentals of novel gate dielectrics that are being introduced into semiconductor manufacturing to ensure the continuous scaling of CMOS devices. This is a very fast evolving field of research, so we choose to focus on the basic understanding of the structure, thermodynamics, and electronic properties of these materials that determine their performance in device applications. Most of these materials are transition metal oxides. Ironically, the d-orbitals responsible for the high dielectric constant cause severe integration difficulties, thus intrinsically limiting high-k dielectrics. Though new in the electronics industry, many of these materials are well known in the field of ceramics, and we describe this unique connection. The complexity of the structure-property relations in TM oxides makes the use of state-of-the-art first-principles calculations necessary. Several chapters give a detailed description of the modern theory of polarization, and heterojunction band discont...
Fluid mechanics fundamentals and applications
Cengel, Yunus
2013-01-01
Cengel and Cimbala's Fluid Mechanics Fundamentals and Applications communicates directly with tomorrow's engineers in a simple yet precise manner. The text covers the basic principles and equations of fluid mechanics in the context of numerous and diverse real-world engineering examples. The text helps students develop an intuitive understanding of fluid mechanics by emphasizing the physics, using figures, numerous photographs and visual aids to reinforce the physics. The highly visual approach enhances the learning of fluid mechanics by students. This text distinguishes itself from others by the way the material is presented - in a progressive order from simple to more difficult, building each chapter upon foundations laid down in previous chapters. In this way, even the traditionally challenging aspects of fluid mechanics can be learned effectively. McGraw-Hill is also proud to offer ConnectPlus powered by Maple with the third edition of Cengel/Cimbala, Fluid Mechanics. This innovative and powerful new sy...
Green Manufacturing Fundamentals and Applications
2013-01-01
Green Manufacturing: Fundamentals and Applications introduces the basic definitions and issues surrounding green manufacturing at the process, machine and system (including supply chain) levels. It also shows, by way of several examples from different industry sectors, the potential for substantial improvement and the paths to achieve the improvement. Additionally, this book discusses regulatory and government motivations for green manufacturing and outlines the path for making manufacturing more green as well as making production more sustainable. This book also: • Discusses new engineering approaches for manufacturing and provides a path from traditional manufacturing to green manufacturing • Addresses regulatory and economic issues surrounding green manufacturing • Details new supply chains that need to be in place before going green • Includes state-of-the-art case studies in the areas of automotive, semiconductor and medical areas as well as in the supply chain and packaging areas Green Manufactu...
Molecular imaging. Fundamentals and applications
Tian, Jie (ed.) [Chinese Academy of Sciences, Beijing (China). Intelligent Medical Research Center
2013-07-01
Covers a wide range of new theory, new techniques and new applications. Contributed by many experts in China. The editor has obtained the National Science and Technology Progress Award twice. "Molecular Imaging: Fundamentals and Applications" is a comprehensive monograph which describes not only the theory of the underlying algorithms and key technologies but also introduces a prototype system and its applications, bringing together theory, technology and applications. By explaining the basic concepts and principles of molecular imaging, imaging techniques, as well as research and applications in detail, the book provides both detailed theoretical background information and technical methods for researchers working in medical imaging and the life sciences. Clinical doctors and graduate students will also benefit from this book.
Fundamentals of Protein NMR Spectroscopy
Rule, Gordon S
2006-01-01
NMR spectroscopy has proven to be a powerful technique to study the structure and dynamics of biological macromolecules. Fundamentals of Protein NMR Spectroscopy is a comprehensive textbook that guides the reader from a basic understanding of the phenomenological properties of magnetic resonance to the application and interpretation of modern multi-dimensional NMR experiments on 15N/13C-labeled proteins. Beginning with elementary quantum mechanics, a set of practical rules is presented and used to describe many commonly employed multi-dimensional, multi-nuclear NMR pulse sequences. A modular analysis of NMR pulse sequence building blocks also provides a basis for understanding and developing novel pulse programs. This text not only covers topics from chemical shift assignment to protein structure refinement, as well as the analysis of protein dynamics and chemical kinetics, but also provides a practical guide to many aspects of modern spectrometer hardware, sample preparation, experimental set-up, and data pr...
Queueing networks a fundamental approach
Dijk, Nico
2011-01-01
This handbook aims to highlight fundamental, methodological and computational aspects of networks of queues to provide insights and to unify results that can be applied in a more general manner. The handbook is organized into five parts: Part 1 considers exact analytical results such as those of product form type. Topics include characterization of product forms by physical balance concepts and simple traffic flow equations, classes of service and queue disciplines that allow a product form, a unified description of product forms for discrete time queueing networks, insights for insensitivity, and aggregation and decomposition results that allow subnetworks to be aggregated into single nodes to reduce computational burden. Part 2 looks at monotonicity and comparison results, such as for computational simplification, by either of two approaches: stochastic monotonicity and ordering results based on the ordering of the process generators, and comparison results and explicit error bounds based on an underlying Markov r...
Overview: Main Fundamentals for Steganography
AL-Ani, Zaidoon Kh; Zaidan, B B; Alanazi, Hamdan O
2010-01-01
The rapid development of multimedia and the internet allows for wide distribution of digital media data. It becomes much easier to edit, modify and duplicate digital information. Besides that, digital documents are also easy to copy and distribute, and therefore face many threats. This is a major security and privacy issue, and it becomes necessary to find appropriate protection because of the significance, accuracy and sensitivity of the information. Steganography is one of the techniques used to protect important information. The main goal of this paper is to acquaint researchers with the main fundamentals of steganography. The paper provides a general overview of the following subject areas: steganography types, the general steganography system, characterization of steganography systems and classification of steganography techniques.
Fundamentals of modern unsteady aerodynamics
Gülçat, Ülgen
2016-01-01
In this book, the author introduces the concept of unsteady aerodynamics and its underlying principles. He provides the readers with a comprehensive review of the fundamental physics of free and forced unsteadiness, the terminology and basic equations of aerodynamics ranging from incompressible flow to hypersonics. The book also covers modern topics related to the developments made in recent years, especially in relation to wing flapping for propulsion. The book is written for graduate and senior year undergraduate students in aerodynamics and also serves as a reference for experienced researchers. Each chapter includes ample examples, questions, problems and relevant references. The treatment of these modern topics has been completely revised and expanded for the new edition. It now includes new numerical examples, a section on the ground effect, and state-space representation.
Fundamental Travel Demand Model Example
Hanssen, Joel
2010-01-01
Instances of transportation models are abundant and detailed "how to" instruction is available in the form of transportation software help documentation. The purpose of this paper is to look at the fundamental inputs required to build a transportation model by developing an example passenger travel demand model. The example model reduces the scale to a manageable size for the purpose of illustrating the data collection and analysis required before the first step of the model begins. This aspect of the model development would not reasonably be discussed in software help documentation (it is assumed the model developer comes prepared). Recommendations are derived from the example passenger travel demand model to suggest future work regarding the data collection and analysis required for a freight travel demand model.
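A minimal version of the trip-distribution step of such a model can be sketched as a gravity model balanced by iterative proportional fitting (Furness balancing). All zone data below are made up for illustration; the paper's example model is not reproduced here.

```python
# Gravity-model trip distribution with Furness (IPF) balancing.
# Zone productions, attractions and costs are illustrative assumptions.
import math

def gravity_distribution(productions, attractions, cost, beta=0.1, iters=50):
    """T_ij proportional to A_j * exp(-beta * c_ij), balanced so row sums
    match productions and column sums match attractions."""
    n = len(productions)
    T = [[attractions[j] * math.exp(-beta * cost[i][j]) for j in range(n)]
         for i in range(n)]
    for _ in range(iters):
        for i in range(n):                       # balance rows to productions
            s = sum(T[i])
            T[i] = [t * productions[i] / s for t in T[i]]
        for j in range(n):                       # balance columns to attractions
            s = sum(T[i][j] for i in range(n))
            for i in range(n):
                T[i][j] *= attractions[j] / s
    return T

P = [100.0, 200.0]            # trips produced per zone
A = [150.0, 150.0]            # trips attracted per zone
c = [[1.0, 5.0], [5.0, 1.0]]  # travel cost between zones
T = gravity_distribution(P, A, c)
print([round(sum(row), 1) for row in T])   # row sums converge to productions
```

Note that the total productions and attractions must already balance (300 trips each here); collecting and reconciling exactly this kind of data is the pre-model analysis the paper emphasizes.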
Holographic viscosity of fundamental matter.
Mateos, David; Myers, Robert C; Thomson, Rowan M
2007-03-01
A holographic dual of a finite-temperature SU(Nc) gauge theory with a small number of flavors Nf ≪ Nc consists of Nf D-brane probes in a black hole background. By considering the backreaction of the branes, we demonstrate that, to leading order in Nf/Nc, the viscosity to entropy ratio in these theories saturates the conjectured universal bound η/s ≥ 1/4π. Given the known results for the entropy density, the contribution of the fundamental matter η_fund is therefore enhanced at strong 't Hooft coupling λ; for example, η_fund ∼ λ Nc Nf T³ in four dimensions. Other transport coefficients are analogously enhanced. These results hold with or without a baryon number chemical potential. PMID:17358523
Optical Metamaterials Fundamentals and Applications
Cai, Wenshan
2010-01-01
Metamaterials—artificially structured materials with engineered electromagnetic properties—have enabled unprecedented flexibility in manipulating electromagnetic waves and producing new functionalities. In just a few years, the field of optical metamaterials has emerged as one of the most exciting topics in the science of light, with stunning and unexpected outcomes that have fascinated scientists and the general public alike. This volume details recent advances in the study of optical metamaterials, ranging from fundamental aspects to up-to-date implementations, in one unified treatment. Important recent developments and applications such as superlenses and cloaking devices are also treated in detail and made understandable. Optical Metamaterials will serve as a very timely book for both newcomers and advanced researchers in this rapidly evolving field. Early praise for Optical Metamaterials: "...this book is timely bringing to students and other new entrants to the field the most up to date concepts. Th...
ALPHA: antihydrogen and fundamental physics
Madsen, Niels
2014-02-01
Detailed comparisons of antihydrogen with hydrogen promise to be a fruitful test bed of fundamental symmetries such as the CPT theorem for quantum field theory or studies of gravitational influence on antimatter. With a string of recent successes, starting with the first trapped antihydrogen and recently resulting in the first measurement of a quantum transition in anti-hydrogen, the ALPHA collaboration is well on its way to perform such precision comparisons. We will discuss the key innovative steps that have made these results possible and in particular focus on the detailed work on positron and antiproton preparation to achieve antihydrogen cold enough to trap as well as the unique features of the ALPHA apparatus that has allowed the first quantum transitions in anti-hydrogen to be measured with only a single trapped antihydrogen atom per experiment. We will also look at how ALPHA plans to step from here towards more precise comparisons of matter and antimatter.
Phononic crystals fundamentals and applications
Adibi, Ali
2016-01-01
This book provides an in-depth analysis as well as an overview of phononic crystals. This book discusses numerous techniques for the analysis of phononic crystals and covers, among other material, sonic and ultrasonic structures, hypersonic planar structures and their characterization, and novel applications of phononic crystals. This is an ideal book for those working with micro and nanotechnology, MEMS (microelectromechanical systems), and acoustic devices. This book also: Presents an introduction to the fundamentals and properties of phononic crystals Covers simulation techniques for the analysis of phononic crystals Discusses sonic and ultrasonic, hypersonic and planar, and three-dimensional phononic crystal structures Illustrates how phononic crystal structures are being deployed in communication systems and sensing systems.
Fundamentals of reversible flowchart languages
Yokoyama, Tetsuo; Axelsen, Holger Bock; Glück, Robert
2016-01-01
This paper presents the fundamentals of reversible flowcharts. They are intended to naturally represent the structure and control flow of reversible (imperative) programming languages in a simple computation model, in the same way classical flowcharts do for conventional languages. … Structured reversible flowcharts are as expressive as unstructured ones, as shown by a reversible version of the classic Structured Program Theorem. We illustrate how reversible flowcharts can be concretized with two example programming languages, complete with syntax and semantics: a low-level unstructured language and a high-level structured language. We introduce concrete tools such as program inverters and translators for both languages, which follow the structure suggested by the flowchart model. To further illustrate the different concepts and tools brought together in this paper, we present two major...
Fundamental enabling issues in nanotechnology
Floro, Jerrold Anthony; Foiles, Stephen Martin; Hearne, Sean Joseph; Hoyt, Jeffrey John; Seel, Steven Craig; Webb, Edmund Blackburn; Morales, Alfredo Martin; Zimmerman, Jonathan A.
2007-10-01
To effectively integrate nanotechnology into functional devices, fundamental aspects of material behavior at the nanometer scale must be understood. Stresses generated during thin film growth strongly influence component lifetime and performance; stress has also been proposed as a mechanism for stabilizing supported nanoscale structures. Yet the intrinsic connections between the evolving morphology of supported nanostructures and stress generation are still a matter of debate. This report presents results from a combined experiment and modeling approach to study stress evolution during thin film growth. Fully atomistic simulations are presented predicting stress generation mechanisms and magnitudes during all growth stages, from island nucleation to coalescence and film thickening. Simulations are validated by electrodeposition growth experiments, which establish the dependence of microstructure and growth stresses on process conditions and deposition geometry. Sandia is one of the few facilities with the resources to combine experiments and modeling/theory in this close a fashion. Experiments predicted an ongoing coalescence process that generates significant tensile stress. Data from deposition experiments also support the existence of a kinetically limited compressive stress generation mechanism. Atomistic simulations explored island coalescence and deposition onto surfaces intersected by grain boundary structures to permit investigation of stress evolution during later growth stages, e.g. continual island coalescence and adatom incorporation into grain boundaries. The predictive capabilities of simulation permit direct determination of fundamental processes active in stress generation at the nanometer scale while connecting those processes, via new theory, to continuum models for much larger island and film structures. Our combined experiment and simulation results reveal the necessary materials science to tailor stress, and therefore performance, in
Sales-Cruz, Mauricio; Heitzig, Martina; Cameron, Ian;
2011-01-01
In this chapter the importance of parameter estimation in model development is illustrated through various applications related to reaction systems. In particular, rate constants in a reaction system are obtained through parameter estimation methods. These approaches often require the application of optimisation techniques coupled with dynamic solution of the underlying model. Linear and nonlinear approaches to parameter estimation are investigated. There is also the application of maximum likelihood principles in the estimation of parameters, as well as the use of orthogonal collocation to generate a set of algebraic equations as the basis for parameter estimation. These approaches are illustrated using estimations of kinetic constants from reaction system models.
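A stripped-down instance of such kinetic parameter estimation, assuming a first-order reaction C(t) = C0·exp(-k·t), can be sketched in Python by linear least squares on the logarithm of synthetic concentration data. The chapter's methods are more general (dynamic models, maximum likelihood, orthogonal collocation); this is only a minimal illustration with made-up data.

```python
# Least-squares estimate of a first-order rate constant from concentration
# data, via linear regression on ln C(t) = ln C0 - k t. Illustrative sketch.
import math
import random

def estimate_rate_constant(times, conc):
    """Ordinary least squares on the linearised model ln C = ln C0 - k t;
    the rate constant is minus the fitted slope."""
    y = [math.log(c) for c in conc]
    n = len(times)
    t_bar = sum(times) / n
    y_bar = sum(y) / n
    slope = (sum((t - t_bar) * (yi - y_bar) for t, yi in zip(times, y))
             / sum((t - t_bar) ** 2 for t in times))
    return -slope

# Synthetic data: true k = 0.5, with small multiplicative measurement noise.
random.seed(0)
k_true, C0 = 0.5, 2.0
times = [0.1 * i for i in range(1, 41)]
conc = [C0 * math.exp(-k_true * t) * (1 + 0.01 * random.gauss(0, 1))
        for t in times]
k_hat = estimate_rate_constant(times, conc)
print(round(k_hat, 3))   # close to the true value 0.5
```

For stiff or multi-reaction systems the log-linearisation is no longer available, which is exactly why the chapter couples optimisation with dynamic solution of the model instead.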
High precision fundamental constants at the TeV scale
This report summarizes the proceedings of the 2014 Mainz Institute for Theoretical Physics (MITP) scientific program on ''High precision fundamental constants at the TeV scale''. The two outstanding parameters in the Standard Model dealt with during the MITP scientific program are the strong coupling constant αs and the top-quark mass mt. Lack of knowledge of the values of these fundamental constants is often the limiting factor in the accuracy of theoretical predictions. The current status of αs and mt has been reviewed and directions for future research have been identified.
Understanding the Cash Flow-Fundamental Ratio
Chyi-Lun Chiou
2015-01-01
This article investigates the use of the cash flow-fundamental ratio in forecasting stock market returns and examines the implications behind this ratio. By presuming the dynamics of the cash flow-fundamental ratio, I identify the relationship between economic uncertainty and the risk premium. The evidence shows that the cash flow-fundamental ratio is procyclical and is a predictor of cash flow growth and excess returns. The cash flow-fundamental ratio is shown to be negatively associated with the risk premium. I als...
Fundamentals of statistical signal processing
Kay, Steven M
1993-01-01
A unified presentation of parameter estimation for those involved in the design and implementation of statistical signal processing algorithms. Covers important approaches to obtaining an optimal estimator and analyzing its performance; and includes numerous examples as well as applications to real-world problems. MARKETS: For practicing engineers and scientists who design and analyze signal processing systems, i.e., to extract information from noisy signals: radar engineer, sonar engineer, geophysicist, oceanographer, biomedical engineer, communications engineer, economist, statistician, physicist, etc.
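The book's recurring benchmark problem, estimating a DC level A in white Gaussian noise x[n] = A + w[n], can be sketched as follows: the sample mean is the minimum-variance unbiased estimator, and its variance attains the Cramer-Rao lower bound σ²/N. The numbers in this Python sketch are illustrative.

```python
# Monte Carlo check that the sample-mean estimator of a DC level in white
# Gaussian noise attains the Cramer-Rao lower bound sigma^2 / N.
import random
import statistics

random.seed(1)
A, sigma, N, trials = 3.0, 1.0, 100, 2000

estimates = []
for _ in range(trials):
    x = [A + random.gauss(0, sigma) for _ in range(N)]
    estimates.append(sum(x) / N)        # sample-mean estimator of A

crlb = sigma ** 2 / N                   # Cramer-Rao lower bound (= 0.01 here)
var_hat = statistics.pvariance(estimates)
print(round(var_hat, 4), crlb)          # empirical variance is close to CRLB
```

This is the simplest case where an efficient estimator exists in closed form; the book's later chapters treat problems where the bound is unattainable and only asymptotic efficiency (e.g. via maximum likelihood) is available.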
Fundamental problems in fault detection and identification
Saberi, A.; Stoorvogel, A. A.; Sannuti, P.;
2000-01-01
A number of different fundamental problems in fault detection and fault identification are formulated in this paper. The fundamental problems include exact, almost, generic and class-wise fault detection and identification. Necessary and sufficient conditions for the solvability of the fundamental...
Accurate and Simple Calibration of DLP Projector Systems
Wilm, Jakob; Olesen, Oline Vinter; Larsen, Rasmus
2014-01-01
Much work has been devoted to the calibration of optical cameras, and accurate and simple methods are now available which require only a small number of calibration targets. The problem of obtaining these parameters for light projectors has not been studied as extensively, and most current methods require a camera and involve feature extraction from a known projected pattern. In this work we present a novel calibration technique for DLP projector systems based on phase shifting profilometry projection onto a printed calibration target. In contrast to most current methods, the one presented here does not rely on an initial camera calibration, and so does not carry over the error into projector calibration. A radial interpolation scheme is used to convert feature coordinates into projector space, thereby allowing for a very accurate procedure. This allows for highly accurate determination of...
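The dense phase maps underlying phase shifting profilometry come from a standard N-step relation; the sketch below recovers the wrapped phase at one pixel from N shifted fringe intensities. This is the textbook relation only, not the paper's full calibration pipeline, and the simulated values are illustrative.

```python
# N-step phase-shifting sketch: recover the wrapped phase at one pixel from
# N fringe intensities I_n = a + b*cos(phi + 2*pi*n/N). Illustrative only.
import math

def wrapped_phase(intensities):
    """phi = atan2(-sum I_n sin(2 pi n/N), sum I_n cos(2 pi n/N))."""
    N = len(intensities)
    s = sum(I * math.sin(2 * math.pi * n / N)
            for n, I in enumerate(intensities))
    c = sum(I * math.cos(2 * math.pi * n / N)
            for n, I in enumerate(intensities))
    return math.atan2(-s, c)

# Simulate a pixel with known phase and check recovery (4-step shifting).
phi_true, a, b, N = 1.234, 0.5, 0.4, 4
I = [a + b * math.cos(phi_true + 2 * math.pi * n / N) for n in range(N)]
print(round(wrapped_phase(I), 3))   # ≈ 1.234
```

The background a and modulation b cancel in the sums, which is why the method is robust to ambient light; unwrapping and the mapping into projector space are the additional steps a full pipeline needs.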
Estimation of physical parameters in induction motors
Børsting, H.; Knudsen, Morten; Rasmussen, Henrik;
1994-01-01
Parameter estimation in induction motors is a field of great interest, because accurate models are needed for robust dynamic control of induction motors.
Fundamental research with neutron interferometry
The invention of neutron interferometry in 1974 stimulated many experiments related to the wave-particle dualism of quantum mechanics. Widely separated coherent beams can be produced within a perfect-crystal interferometer and can be influenced by nuclear, magnetic and gravitational interactions. High-order interferences have been observed, connected with the occurrence of an interferometric spectral modulation. This effect has been demonstrated by a proper post-selection procedure showing a persisting action of plane-wave components outside the wave packets. The verification of the 4π-symmetry of spinor wave functions and of the spin superposition law at a macroscopic scale, and the observation of gravitational effects including the Sagnac effect, have been widely debated in the literature. The coupling of the neutron magnetic moment to resonator coils permitted coherent energy exchange between the neutron quantum system and the macroscopic resonator. This phenomenon provided the basis for the observation of the magnetic Josephson effect with an energy sensitivity of 10^-19 eV. Partial beam-path detection experiments are in close connection with the development of quantum mechanical measurement theory. The very high sensitivity of neutron interferometry may be used in the future for new fundamental-, solid-state- and nuclear-physics applications. Further steps towards advanced neutron quantum optical methods are envisaged. (author)
Fundamentals of the DIGES code
Recently the authors have completed the development of the DIGES code (Direct GEneration of Spectra) for the US Nuclear Regulatory Commission. This paper presents the fundamental theoretical aspects of the code. The basic modeling involves a representation of typical building-foundation configurations as multi-degree-of-freedom dynamic systems which are subjected to dynamic inputs in the form of applied forces or pressures at the superstructure or in the form of ground motions. Both the deterministic as well as the probabilistic aspects of DIGES are described. Alternate ways of defining the seismic input for the estimation of in-structure spectra, and their consequences in terms of realistically appraising the variability of the structural response, are discussed in detail. These include definitions of the seismic input by ground acceleration time histories, ground response spectra, Fourier amplitude spectra or power spectral densities. Conversions of one of these forms to another, due to requirements imposed by certain analysis techniques, have been shown to lead in certain cases to controversial results. Further considerations include the definition of the seismic input as the excitation which is directly applied at the foundation of a structure or as the ground motion of the site of interest at a given point. In the latter case, issues related to transferring this motion to the foundation through convolution/deconvolution and generally through kinematic interaction approaches are considered.
Fundamental principles of robot vision
Hall, Ernest L.
1993-08-01
Robot vision is a specialty of intelligent machines which describes the interaction between robotic manipulators and machine vision. Early robot vision systems were built to demonstrate that a robot with vision could adapt to changes in its environment. More recently, attention is being directed toward machines with expanded adaptation and learning capabilities. The use of robot vision for automatic inspection and recognition of objects for manipulation by an industrial robot, or for guidance of a mobile robot, are two primary applications. Adaptation and learning characteristics are often lacking in industrial automation and, if they can be added successfully, result in a more robust system. Due to real-time requirements, the robot vision methods that have proven most successful have been those which could be reduced to a simple, fast computation. The purpose of this paper is to discuss some of the fundamental concepts in sufficient detail to provide a starting point for the interested engineer or scientist. A detailed example of a camera system viewing an object is presented for a simple, two-dimensional robot vision system. Finally, conclusions and recommendations for further study are presented.
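In the spirit of the paper's simple two-dimensional example, the Python sketch below thresholds an image, locates the object's centroid, and maps pixel coordinates to workspace coordinates with an assumed scale and offset. All names and numbers are illustrative, not the paper's own system.

```python
# Minimal 2D robot-vision sketch: threshold, centroid, pixel-to-world map.
# The calibration scale and origin are illustrative assumptions.

def centroid(image, threshold=128):
    """Centroid (row, col) of pixels brighter than the threshold."""
    pts = [(r, c) for r, row in enumerate(image)
                  for c, v in enumerate(row) if v > threshold]
    n = len(pts)
    return (sum(p[0] for p in pts) / n, sum(p[1] for p in pts) / n)

def pixel_to_world(rc, scale=0.5, origin=(10.0, 20.0)):
    """Simple calibrated mapping: millimetres per pixel plus an offset."""
    return (origin[0] + scale * rc[0], origin[1] + scale * rc[1])

# 5x5 test image with a bright 2x2 object in the lower right corner.
img = [[0] * 5 for _ in range(5)]
for r in (3, 4):
    for c in (3, 4):
        img[r][c] = 255

rc = centroid(img)
print(rc, pixel_to_world(rc))   # (3.5, 3.5) -> (11.75, 21.75)
```

The whole chain is a handful of additions and multiplications per pixel, which illustrates the paper's point that successful real-time robot vision reduces to simple, fast computation.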
The water, fundamental ecological base?
To speak of ecology and man's interaction with the environment implicitly involves many elements which, acting harmoniously, generate conditions conducive to well-being. It is nevertheless necessary to rank the importance of these elements, and water, which not only constitutes sixty-five percent of the total volume of the planet and sixty percent of the human body but is rightly called the molecule of life, is the main element to consider in the study of ecology. Water circulates continually through the endless hydrological cycle of condensation, precipitation, filtration, retention and evaporation. However, the rapid growth of cities, their expansion into green areas and borderlands as a result of demographic pressure and inadequate settlement patterns, and excessive industrial development produce irreparable alterations in the continuous processes of water production; for this reason it is fundamental to understand some of the problems inherent to water sources. Water, the most important of the renewable natural resources, essential for life and for the achievement of a good part of man's goals in his productive function, is directly or indirectly the natural resource most threatened by human action.
Nanostructured metals. Fundamentals to applications
Grivel, J.-C.; Hansen, N.; Huang, X.; Juul Jensen, D.; Mishin, O.V.; Nielsen, S.F.; Pantleon, W.; Toftegaard, H.; Winther, G.; Yu, T. (eds.)
2009-07-01
In the today's world, materials science and engineering must as other technical fields focus on sustainability. Raw materials and energy have to be conserved and metals with improved or new structural and functional properties must be invented, developed and brought to application. In this endeavour a very promising route is to reduce the structural scale of metallic materials, thereby bridging industrial metals of today with emerging nanometals of tomorrow, i.e. structural scales ranging from a few micrometres to the nanometre regime. While taking a focus on metals with structures in this scale regime the symposium spans from fundamental aspects towards applications, uniting materials scientists and technologists. A holistic approach characterizes the themes of the symposium encompassing synthesis, characterization, modelling and performance where in each area significant progress has been made in recent years. Synthesis now covers top-down processes, e.g. plastic deformation, and bottom-up processes, e.g. chemical and physical synthesis. In the area of structural and mechanical characterization advanced techniques are now widely applied and in-situ techniques for structural characterization under mechanical or thermal loading are under rapid development in both 2D and 3D. Progress in characterization techniques has led to a precise description of different boundaries (grain, dislocation, twin, phase), and of how they form and evolve, also including theoretical modelling and simulations of structures, properties and performance. (au)
IDDT: Fundamentals and Test Generation
KUANG JiShun(邝继顺); YOU ZhiQiang(尤志强); ZHU QiJian(朱启建); MIN YingHua(闵应骅)
2003-01-01
It is time to explore the fundamentals of IDDT testing, as extensive work has been done on IDDT testing since it was proposed. This paper precisely defines the concept of average transient current (IDDT) of CMOS digital ICs, and experimentally analyzes the feasibility of IDDT test generation at gate level. Based on SPICE simulation results, the paper suggests a formula to calculate IDDT by means of counting only logical up-transitions, which enables IDDT test generation at logic level. The Bayesian optimization algorithm is utilized for IDDT test generation. Experimental results show that about 25% of stuck-open faults have IDDT testability larger than 2.5 and are likely to be IDDT testable. It is also found that most IDDT testable faults are located near the primary inputs of a circuit under test. IDDT test generation does not require a fault sensitization procedure, unlike stuck-at fault test generation. Furthermore, some redundant stuck-at faults can be detected by using IDDT testing.
Fundamental studies of fusion plasmas
The major portion of this program is devoted to critical ICH phenomena. The topics include edge physics, fast wave propagation, ICH-induced high frequency instabilities, and a preliminary antenna design for Ignitor. This research was strongly coordinated with the world's experimental and design teams at JET, Culham, ORNL, and Ignitor. The results have been widely publicized at both general scientific meetings and topical workshops, including the speciality workshop on ICRF design and physics sponsored by Lodestar in April 1992. The combination of theory, empirical modeling, and engineering design in this program makes this research particularly important for the design of future devices and for the understanding and performance projections of present tokamak devices. Additionally, the development of a diagnostic of runaway electrons on TEXT has proven particularly useful for the fundamental understanding of energetic electron confinement. This work has led to a better quantitative basis for quasilinear theory and the role of magnetic vs. electrostatic field fluctuations in electron transport. An APS invited talk was given on this subject and collaboration with PPPL personnel was also initiated. Ongoing research on these topics will continue for the remainder of the contract period and the strong collaborations are expected to continue, enhancing both the relevance of the work and its immediate impact on areas needing critical understanding.
Fundamentals of positron emission tomography
Positron emission tomography is a modern radionuclide method of measuring physiological quantities or metabolic parameters in vivo. The method is based on: (1) radioactive labelling with positron emitters; (2) the coincidence technique for the measurement of the annihilation radiation following positron decay; (3) analysis of the data measured using biological models. The basic aspects and problems of the method are discussed. The main fields of future research are the synthesis of new labelled compounds and the development of mathematical models of the biological processes to be investigated. (orig.)
[Fundamentals of positron emission tomography].
Ostertag, H
1989-07-01
Positron emission tomography is a modern radionuclide method of measuring physiological quantities or metabolic parameters in vivo. The method is based on: (1) radioactive labelling with positron emitters; (2) the coincidence technique for the measurement of the annihilation radiation following positron decay; (3) analysis of the data measured using biological models. The basic aspects and problems of the method are discussed. The main fields of future research are the synthesis of new labelled compounds and the development of mathematical models of the biological processes to be investigated. PMID:2667029
A multiple more accurate Hardy-Littlewood-Polya inequality
Qiliang Huang
2012-11-01
By introducing multi-parameters and conjugate exponents and using Euler-Maclaurin's summation formula, we estimate the weight coefficient and prove a multiple more accurate Hardy-Littlewood-Polya (H-L-P) inequality, which is an extension of some earlier published results. We also prove that the constant factor in the new inequality is the best possible, and obtain its equivalent forms.
Accurate method of modeling cluster scaling relations in modified gravity
He, Jian-hua; Li, Baojiu
2016-06-01
We propose a new method to model cluster scaling relations in modified gravity. Using a suite of nonradiative hydrodynamical simulations, we show that the scaling relations of accumulated gas quantities, such as the Sunyaev-Zel'dovich effect (Compton-y parameter) and the X-ray Compton-y parameter, can be accurately predicted using the known results in the ΛCDM model with a precision of ~3%. This method provides a reliable way to analyze the gas physics in modified gravity using the less demanding and much more efficient pure cold dark matter simulations. Our results therefore have important theoretical and practical implications in constraining gravity using cluster surveys.
Towards a more accurate concept of fuels
The introduction of LEU in Atucha and the approval of CARA show an advance in the fuels of the Argentine power stations, which stimulates and indicates a direction to follow. In the first case, the use of enriched uranium fuel relaxes an important restriction related to neutron economy, which means that it is possible to design less penalized fuels using more Zry. The second case allows a decrease in the linear power of the rods, enabling a better performance of the fuel in normal and also in accident conditions. In this work we wish to emphasize this last point, trying to find a design in which the surface power of the rod is diminished. Hence, in accident conditions owing to lack of coolant, the cladding tube will not reach temperatures that produce oxidation, with the corresponding H2 formation, or sufficient plasticity to form blisters that would obstruct reflooding, or hydriding that would produce embrittlement and rupture of the cladding tube, with the corresponding dispersion of radioactive material. This work is oriented to finding rod designs with quasi-rectangular geometry to lower the surface power of the rods, in order to obtain a lower central temperature of the rod. Thus, critical temperatures will not be reached in case of loss of coolant. This design is becoming a reality after PPFAE's efforts in the search for cladding-tube fabrication with different circumferential values, rectangular in particular. This geometry, with an appropriate pellet design, can minimize pellet-cladding interaction and, through accurate width selection, non-rectified pellets could be used. This would mean an important economy in pellet production, as well as an advance in the future fabrication of fuels in glove boxes and hot cells. The sequence to determine critical geometrical parameters is described and some rod dispositions are explored
Fundamental Principles of Proper Space Kinematics
Wade, Sean
It is desirable to understand the movement of both matter and energy in the universe based upon fundamental principles of space and time. Time dilation and length contraction are features of Special Relativity derived from the observed constancy of the speed of light. Quantum Mechanics asserts that motion in the universe is probabilistic and not deterministic. While the practicality of these dissimilar theories is well established through widespread application, inconsistencies in their marriage persist, marring their utility and preventing their full expression. After identifying an error in perspective, the current theories are tested by modifying logical assumptions to eliminate paradoxical contradictions. Analysis of simultaneous frames of reference leads to a new formulation of space and time that predicts the motion of both kinds of particles. Proper Space is a real, three-dimensional space clocked by proper time that is undergoing a densification at the rate of c. Coordinate transformations to a familiar object space and a mathematical stationary space clarify the counterintuitive aspects of Special Relativity. These symmetries demonstrate that within the local universe stationary observers are a forbidden frame of reference; all is in motion. In lieu of Quantum Mechanics and Uncertainty, the use of the imaginary number i is restricted to application in the labeling of mass as either material or immaterial. This material phase difference accounts for both the perceived constant velocity of light and its apparent statistical nature. The application of Proper Space Kinematics will advance more accurate representations of microscopic, macroscopic, and cosmological processes and serve as a foundation for further study and reflection, thereafter leading to greater insight.
Fundamental structures of dynamic social networks.
Sekara, Vedran; Stopczynski, Arkadiusz; Lehmann, Sune
2016-09-01
Social systems are in a constant state of flux, with dynamics spanning from minute-by-minute changes to patterns present on the timescale of years. Accurate models of social dynamics are important for understanding the spreading of influence or diseases, formation of friendships, and the productivity of teams. Although there has been much progress on understanding complex networks over the past decade, little is known about the regularities governing the microdynamics of social networks. Here, we explore the dynamic social network of a densely-connected population of ∼1,000 individuals and their interactions in the network of real-world person-to-person proximity measured via Bluetooth, as well as their telecommunication networks, online social media contacts, geolocation, and demographic data. These high-resolution data allow us to observe social groups directly, rendering community detection unnecessary. Starting from 5-min time slices, we uncover dynamic social structures expressed on multiple timescales. On the hourly timescale, we find that gatherings are fluid, with members coming and going, but organized via a stable core of individuals. Each core represents a social context. Cores exhibit a pattern of recurring meetings across weeks and months, each with varying degrees of regularity. Taken together, these findings provide a powerful simplification of the social network, where cores represent fundamental structures expressed with strong temporal and spatial regularity. Using this framework, we explore the complex interplay between social and geospatial behavior, documenting how the formation of cores is preceded by coordination behavior in the communication networks and demonstrating that social behavior can be predicted with high precision. PMID:27555584
PREFACE: Fundamental Constants in Physics and Metrology
Klose, Volkmar; Kramer, Bernhard
1986-01-01
This volume contains the papers presented at the 70th PTB Seminar, the second on the subject "Fundamental Constants in Physics and Metrology", which was held at the Physikalisch-Technische Bundesanstalt in Braunschweig from October 21 to 22, 1985. About 100 participants from the universities and various research institutes of the Federal Republic of Germany took part in the meeting. Besides a number of review lectures on broader subjects, there was a poster session with a variety of topical contributed papers ranging from the theory of the quantum Hall effect to reports on the status of the metrological experiments at the PTB. In addition, the participants were offered the possibility to visit the PTB laboratories during the course of the seminar. During the preparation of the meeting we noticed that even most of the general subjects to be discussed in the lectures are of great importance in connection with metrological experiments and should be made accessible to the scientific community. This eventually resulted in the idea of publishing the papers in a regular journal. We are grateful to the editor of Metrologia for providing this opportunity. We have included quite a number of papers from basic physical research. For example, certain aspects of high-energy physics and quantum optics, as well as the many-faceted role of Sommerfeld's fine-structure constant, are covered. We think that questions such as "What are the intrinsic fundamental parameters of nature?" or "What are we doing when we perform an experiment?" can shed new light on the art of metrology and, potentially, lead to new ideas. This appears especially necessary when we notice the increasing importance of the fundamental constants and macroscopic quantum effects for the definition and the realization of the physical units. In some cases we have reached a point where the limitations of our knowledge of a fundamental constant and
An accurate determination of the flux within a slab
During the past decade, several articles have been written concerning accurate solutions to the monoenergetic neutron transport equation in infinite and semi-infinite geometries. The numerical formulations found in these articles were based primarily on the extensive theoretical investigations performed by the "transport greats" such as Chandrasekhar, Busbridge, Sobolev, and Ivanov, to name a few. The development of numerical solutions in infinite and semi-infinite geometries represents an example of how mathematical transport theory can be utilized to provide highly accurate and efficient numerical transport solutions. These solutions, or analytical benchmarks, are useful as "industry standards," which provide guidance to code developers and promote learning in the classroom. The high accuracy of these benchmarks is directly attributable to the rapid advancement of the state of computing and computational methods. Transport calculations that were beyond the capability of the "supercomputers" of just a few years ago are now possible at one's desk. In this paper, we again build upon the past to tackle the slab problem, which is of the next level of difficulty in comparison to infinite media problems. The formulation is based on the monoenergetic Green's function, which is the most fundamental transport solution. This method of solution requires a fast and accurate evaluation of the Green's function, which, with today's computational power, is now readily available
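In slab geometry the kernels of the monoenergetic integral transport equation are built from the exponential integral functions E_n(x), so one ingredient of benchmark-quality solutions is a fast, accurate E_n evaluation. A minimal sketch (simple quadrature for E1 plus the standard recurrence, not the authors' production method):

```python
import math


def E1(x, steps=20000):
    """E1(x) = integral_0^1 exp(-x/u)/u du, composite midpoint rule.
    The integrand vanishes smoothly as u -> 0, so the open rule is safe."""
    h = 1.0 / steps
    total = 0.0
    for k in range(steps):
        u = (k + 0.5) * h
        total += math.exp(-x / u) / u
    return total * h


def En(n, x):
    """E_n(x) from E1 via the recurrence E_{k+1}(x) = (exp(-x) - x*E_k(x)) / k."""
    e = E1(x)
    for k in range(1, n):
        e = (math.exp(-x) - x * e) / k
    return e
```

For example, E1(1) ≈ 0.21938 and E2(1) = e⁻¹ − E1(1) ≈ 0.14850; production codes would use series or rational approximations instead of quadrature.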
Fundamentals for remote condition monitoring of offshore wind turbine blades
McGugan, Malcolm; Sørensen, Bent F.
inspection, repair or replacement. The paper explores the requirements for the level of remote data output that will allow an initial improvement in the overall management of offshore wind farms, and ultimately accurate estimates of remaining life for individual blades. The practical and theoretical...... knowledge synergy required to introduce a working system is also considered. Although the initial objectives of the present study were simply to establish the fundamentals for such technology, with industrial collaboration to follow, it quickly became clear that the development of specific prototype......
Fundamentals of friction stir spot welding
Badarinarayan, Harsha
The recent spike in energy costs has been a major driver of the use of lightweight alloys in the transportation industry. In particular, the automotive industry sees benefit in using lightweight alloys to increase fuel efficiency and enhance performance. In this context, lightweight design by replacing steel with Al and/or Mg alloys has been considered a promising initiative. The joining of structures made of lightweight alloys is therefore very important and calls for more attention. Friction Stir Spot Welding (FSSW) is an evolving technique that offers several advantages over conventional joining processes. The fundamental aspects of FSSW are systematically studied in this dissertation. The effects of process inputs (weld parameters and tool geometry) on the process outputs (weld geometry and static strength) are studied. A Design of Experiments (DoE) is carried out to identify the effect of each process parameter on weld strength. It is found that the tool geometry, and in particular the pin profile, has a significant role in determining the weld geometry (hook, stir zone size, etc.), which in turn influences the failure mode and weld strength. A novel triangular pin tool geometry is proposed that suppresses the hook formation and produces welds with twice the static strength of those produced with conventional cylindrical pin tools. An experimental and numerical approach is undertaken to understand the effect of pin geometry on the material flow and failure mechanism of spot welds. In addition, key practical issues have been addressed, such as quantification of tool life and a methodology to control tool plunge depth during welding. Finally, by implementing the findings of this dissertation, FSSW is successfully performed on a closure panel assembly for an automotive application.
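The DoE logic above can be made concrete with a toy two-level factorial analysis: the main effect of a factor is the mean response at its high level minus the mean at its low level. The factor names and responses below are invented for illustration, not data from the dissertation.

```python
from itertools import product


def main_effects(runs):
    """runs: list of (settings, response) where settings maps factor -> -1/+1.
    Main effect = mean(response at +1) - mean(response at -1)."""
    factors = runs[0][0].keys()
    effects = {}
    for f in factors:
        hi = [y for x, y in runs if x[f] == +1]
        lo = [y for x, y in runs if x[f] == -1]
        effects[f] = sum(hi) / len(hi) - sum(lo) / len(lo)
    return effects


# Synthetic full factorial: strength = 10 + 5*rpm_level + 2*depth_level
runs = [({'rpm': r, 'depth': d}, 10 + 5 * r + 2 * d)
        for r, d in product((-1, 1), repeat=2)]
```

Ranking the absolute effects then identifies the dominant parameters, which is how a factorial screen points at pin profile or plunge depth in practice.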
Fundamental gap of molecular crystals via constrained density functional theory
Droghetti, Andrea; Rungger, Ivan; Das Pemmaraju, Chaitanya; Sanvito, Stefano
2016-05-01
The energy gap of a molecular crystal is one of its most important properties, since it determines the crystal's charge transport when the material is used in electronic devices. It is, however, a quantity that is difficult to calculate, and standard theoretical approaches based on density functional theory (DFT) have proven unable to provide accurate estimates. In fact, besides the well-known band-gap problem, DFT completely fails to capture the fundamental gap reduction occurring when molecules are packed in a crystal structure. This failure is associated with the inability to describe the electronic polarization and the real-space localization of the charged states. Here we describe a scheme based on constrained DFT, which can improve upon the shortcomings of standard DFT. The method is applied to the benzene crystal, where we show that accurate results can be achieved for both the band gap and the energy level alignment.
Analysis of fundamental processes in laser isotope separation
Atomic-beam laser isotope separation is achieved by selectively exciting neutral atoms with a laser whose linewidth is narrow enough to discriminate the isotope shift of the specified atoms, and by recovering the excited atoms with an appropriate method. This separation scheme thus involves the fundamental processes of atomic beam generation, selective excitation, photoionization and recovery. An enrichment model, in which atoms excited by the exciter are ionized by the ionizer and then recovered by applying a static electric field, is presented to evaluate each fundamental process and the separation mechanism. That is, an energy model comprising two 3-level systems, one for the specified atoms and one for the nonspecified atoms, including energy and charge transfer between the two systems, is calculated for parameter studies. Hearth temperature, exciter power, ionizer power, electrode length in the direction of both the laser and atomic beams, and electrode gap are selected as the parameters. (author)
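As a much-reduced illustration of such a rate-equation model, the sketch below integrates a single 3-level system (ground, excited, ionized) with forward Euler. The rate constants are invented, and the energy/charge transfer between the two species of the paper's model is omitted.

```python
def three_level(pump, ionize, decay, dt=1e-3, steps=5000):
    """Populations (n0, n1, n2): ground, excited, ionized.
    dn0/dt = -pump*n0 + decay*n1
    dn1/dt =  pump*n0 - (decay + ionize)*n1
    dn2/dt =  ionize*n1
    Forward-Euler integration from n0 = 1."""
    n0, n1, n2 = 1.0, 0.0, 0.0
    for _ in range(steps):
        d0 = -pump * n0 + decay * n1
        d1 = pump * n0 - (decay + ionize) * n1
        d2 = ionize * n1
        n0, n1, n2 = n0 + dt * d0, n1 + dt * d1, n2 + dt * d2
    return n0, n1, n2
```

Sweeping `pump` and `ionize` (standing in for exciter and ionizer power) against the final ionized fraction mimics, in caricature, the parameter studies described in the abstract.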
Coupled variations of fundamental couplings and primordial nucleosynthesis
Coc, Alain [Centre de Spectrometrie Nucleaire et de Spectrometrie de Masse, IN2P3/CNRS/UPS, Bat. 104, 91405 Orsay Campus (France); Nunes, Nelson J.; Olive, Keith A. [William I. Fine Theoretical Physics Institute, University of Minnesota, Minneapolis, MN 55455 (United States); Uzan, Jean-Philippe; Vangioni, Elisabeth [Institut d' Astrophysique de Paris, UMR-7095 du CNRS, Universite Pierre et Marie Curie, 98 bis bd Arago, 75014 Paris (France)
2006-10-15
The effects of variations of the fundamental nuclear parameters on big-bang nucleosynthesis are modeled and discussed in detail, taking into account the interrelations between the fundamental parameters arising in unified theories. Considering only {sup 4}He, strong constraints on the variation of the neutron lifetime and the neutron-proton mass difference are set. These constraints are then translated into constraints on the time variation of the Yukawa couplings and the fine structure constant. Furthermore, we show that a variation of the deuterium binding energy is able to reconcile the {sup 7}Li abundance deduced from the WMAP analysis with its spectroscopically determined value while maintaining concordance with D and {sup 4}He. (authors)
Fundamental Constants as Monitors of the Universe
Thompson, Rodger I
2016-01-01
Astronomical observations have a unique ability to determine the laws of physics at distant times in the universe. They therefore have particular relevance in answering the basic question as to whether the laws of physics are invariant with time. The dimensionless fundamental constants, such as the proton-to-electron mass ratio and the fine structure constant, are key elements in the investigation. If they vary with time, then the answer is clearly that the laws of physics are not invariant with time and significant new physics must be developed to describe the universe. Limits on their variance, on the other hand, constrain the parameter space available to new physics that requires a variation with time of basic physical law. There are now observational constraints on the time variation of the proton-to-electron mass ratio mu at the 1E-7 level. Constraints on the variation of the fine structure constant alpha are less rigorous, 1E-5, but are imposed at higher redshift. The implications of these limits on ne...
Fundamental Considerations for Biobank Legacy Planning.
Matzke, Lise Anne Marie; Fombonne, Benjamin; Watson, Peter Hamilton; Moore, Helen Marie
2016-04-01
Biobanking in its various forms is an activity involving the collection of biospecimens and associated data and their storage for differing lengths of time before use. In some cases, biospecimens are immediately used, but in others, they are stored typically for the term of a specified project or in perpetuity until the materials are used up or declared to be of little scientific value. Legacy planning involves preparing for the phase that follows either biobank closure or a significant change at an operational level. In the case of a classical finite collection, this may be brought about by the completion of the initial scientific goals of a project, a loss of funding, or loss of or change in leadership. Ultimately, this may require making a decision about when and where to transfer materials or whether to destroy them. Because biobanking in its entirety is a complex endeavour, legacy planning touches on biobank operations as well as ethical, legal, financial, and governance parameters. Given the expense and time that goes into setting up and maintaining biobanks, coupled with the ethical imperative to appropriately utilize precious resources donated to research, legacy planning is an activity that every biobanking entity should think about. This article describes some of the fundamental considerations for preparing and executing a legacy plan, and we envisage that this article will facilitate dialogue to help inform best practices and policy development in the future. PMID:26890981
Fundamentals and applications of QCD
The main principles of quantum chromodynamics (QCD) and the application of QCD calculation techniques to account for the observed properties of hadron collisions and hadron structure are discussed in the lecture. Theoretical aspects of QCD are briefly considered: application of the Feynman rules, calculation of the coupling constant, renormalizability of the coupling constant, the use of the Gell-Mann-Low theory, properties of QCD asymptotic freedom, and phenomenological models of confinement. In the framework of QCD and perturbation theory, the confinement of coloured quarks in hadrons, the infrared parameter of QCD, e+e- annihilation into hadrons, electromagnetic polarization of the vacuum in QCD, quark and gluon condensates, dynamics of heavy quarkonia, the instanton structure of the vacuum, the nonperturbative interaction potential and the vacuum energy are studied. Sum rules to calculate hadron resonances observed in e+e- annihilation are presented
Measuring the chargino parameters
J Kalinowski
2000-07-01
After the supersymmetric particles have been discovered, the priority will be to determine independently the fundamental parameters to reveal the structure of the underlying supersymmetric theory. In my talk I discuss how the chargino sector can be reconstructed completely by measuring the cross-sections for $\tilde{X}^{+}_{i}\tilde{X}^{-}_{j}$ $[i,j=1,2]$ production with polarized beams at e+e- collider experiments. The closure of the two-chargino system can be investigated by analysing sum rules for the production cross-sections.
Fundamental Frequency and Model Order Estimation Using Spatial Filtering
Karimian-Azari, Sam; Jensen, Jesper Rindom; Christensen, Mads Græsbøll
2014-01-01
In signal processing applications of harmonic-structured signals, estimates of the fundamental frequency and number of harmonics are often necessary. In real scenarios, a desired signal is contaminated by different levels of noise and interferers, which complicate the estimation of the signal parameters. In this paper, we present an estimation procedure for harmonic-structured signals in situations with strong interference using spatial filtering, or beamforming. We jointly estimate the funda...
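A single-channel caricature of harmonic-structured estimation (without the spatial filtering that is the paper's contribution) is harmonic summation: for each candidate fundamental, sum the signal energy at its first L harmonic frequencies and pick the maximizer. The signal and candidate grid below are invented for illustration.

```python
import math


def harmonic_power(x, fs, f0, L):
    """Sum of squared DFT magnitudes evaluated directly at harmonics l*f0."""
    N = len(x)
    total = 0.0
    for l in range(1, L + 1):
        w = 2 * math.pi * l * f0 / fs
        re = sum(x[n] * math.cos(w * n) for n in range(N))
        im = sum(x[n] * math.sin(w * n) for n in range(N))
        total += re * re + im * im
    return total


def estimate_f0(x, fs, candidates, L=3):
    """Pick the candidate fundamental whose harmonics carry the most energy."""
    return max(candidates, key=lambda f: harmonic_power(x, fs, f, L))
```

Jointly scoring several model orders L, as the paper does, guards against halving/doubling errors; interference suppression is where the beamforming front end would come in.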
Forward and inverse problems in fundamental and applied magnetohydrodynamics
Giesecke, Andre; Stefani, Frank; Wondrak, Thomas; Xu, Mingtian
2012-01-01
This Minireview summarizes the recent efforts to solve forward and inverse problems as they occur in different branches of fundamental and applied magnetohydrodynamics. As for the forward problem, the main focus is on the numerical treatment of induction processes, including self-excitation of magnetic fields in non-spherical domains and/or under the influence of non-homogeneous material parameters. As an important application of the developed numerical schemes, the functioning of the von-K\\'...
Astronomers Gain Clues About Fundamental Physics
2005-12-01
superstring theory and extra dimensions in spacetime calling for the "constants" to change over time, he said. The astronomers used the GBT to detect and study radio emissions at four specific frequencies between 1612 MHz and 1720 MHz coming from hydroxyl (OH) molecules in a galaxy more than 6 billion light-years from Earth, seen as it was at roughly half the Universe's current age. Each of the four frequencies represents a specific change in the energy level of the molecule. The exact frequency emitted or absorbed when the molecule undergoes a transition from one energy level to another depends on the values of the fundamental physical constants. However, each of the four frequencies studied in the OH molecule will react differently to a change in the constants. That difference is what the astronomers sought to detect using the GBT, which, Kanekar explained, is the ideal telescope for this work because of its technical capabilities and its location in the National Radio Quiet Zone, where radio interference is at a minimum. "We can place very tight limits on changes in the physical constants by studying the behavior of these OH molecules at a time when the Universe was only about half its current age, and comparing this result to how the molecules behave today in the laboratory," said Karl Menten of the Max-Planck Institute for Radioastronomy in Germany. Wetterich, a theorist, welcomes the new capability, saying the observational method "seems very promising to obtain perhaps the most accurate values for such possible time changes of the constants." He pointed out that, while some theoretical models call for the constants to change only in the early moments after the Big Bang, models of the recently-discovered, mysterious "dark energy" that seems to be accelerating the Universe's expansion call for changes "even in the last couple of billion years." "Only observations can tell," he said. 
This research ties together the theoretical and observational work of Wetterich and
Geometrical constraints of the synthetic method of estimating fundamental matrix and its analysing
沈沛意; 王伟; 吴成柯
1999-01-01
New geometrical constraints, based on a geometrical analysis of the synthetic method, are developed to estimate the fundamental matrix (F matrix). Applying the new constraints, four parameters of the fundamental matrix, the coordinates of the two epipoles, are estimated first. The other four parameters, which represent the homography between the two pencils of epipolar lines, are then obtained by solving linear equations derived from the remaining constraint. Synthetic data and real data are used to test the new method. The method has the advantages of clear geometrical meaning and high stability of the estimated epipoles of the fundamental matrix.
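The epipoles referred to above are the null vectors of the rank-2 fundamental matrix (F e = 0 and Fᵀ e′ = 0). Given a known F, one minimal way to recover the right epipole is to note that it is orthogonal to every row of F, hence proportional to a cross product of two independent rows. This illustrates the relation only; it is not the estimation method of the paper.

```python
def cross(a, b):
    """Cross product of two 3-vectors."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])


def right_epipole(F):
    """Null vector of a rank-2 3x3 matrix F (rows as tuples): orthogonal to
    every row, so a cross product of two independent rows. Try all row
    pairs and keep the numerically largest result."""
    pairs = [cross(F[0], F[1]), cross(F[1], F[2]), cross(F[0], F[2])]
    return max(pairs, key=lambda v: sum(t * t for t in v))
```

The left epipole follows the same way from the transposed matrix.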
Fundamental behavior of montmorillonite surfaces
Document available in extended abstract form only. Montmorillonite in contact with water undergoes a variety of surface chemical reactions that can be detected as changes in solution pH and conductivity. The nature of these reactions depends on pH and ionic strength and can roughly be divided into two categories, namely hydrolysis reactions and ion-specific surface complexation reactions (the only difference being that the former involve reactions with water (also H+/OH-) and the latter with any other (ionic) species). Hydrolysis reactions may induce changes in surface charge density or lead to dissolution of montmorillonite. Reactions between montmorillonite and water have previously been studied mainly at an empirical level, which yields very little information about specific reaction mechanisms. With this contribution, we aim to increase the level of understanding of reaction mechanisms at the montmorillonite-water interface. Because of the unconventional structure and composition of montmorillonite, surface phenomena tend to be complex and difficult to interpret qualitatively. Usually, a combination of different techniques is required to gain meaningful information. Through montmorillonite surface titrations coupled with solution elemental analysis, we have studied and identified a few fundamental processes occurring at the montmorillonite-water interface. It is possible to distinguish between at least three different reaction domains. Over a 24-hour period, an initial rapid drop in pH is observed, followed by a linear drop in pH (implying a zero-order reaction) that continues for approximately the first three hours. This period is followed by an exponentially decreasing pH which eventually levels out after approximately 8 hours to a constant value of 6.9. Taking a closer look at the first 20 minutes reveals that the initial sharp drop in pH actually contains two distinct regions, both of them linear. Interestingly, the conductivity increases
The monitoring of chemistry and radiochemistry parameters is a fundamental need in nuclear power plants in order to ensure: - the reactivity control in real time; - the barrier integrity surveillance, by means of fuel cladding failure detection and primary-pressure-boundary component control; - the water quality, to limit radiation build-up and material corrosion, permitting preparation of maintenance, radioprotection and waste operations; - the efficiency of treatment systems and hence the minimization of discharges of chemical and radiochemical substances. The relevant chemistry and radiochemistry parameters to be monitored are selected depending on the chemistry conditioning of systems, the source-term evaluations, the corrosion mechanisms and the radioactivity consequences. In spite of the difficulties of obtaining representative samples under all circumstances, the EPRM design provides the appropriate provisions and analytical procedures for ensuring reliable and accurate monitoring of parameters in compliance with the specification requirements. The design solutions adopted for Flamanville 3-EPRM and UK-EPRM, concerning the sampling conditions and locations, the on-line and analytical equipment, the procedures and the transmission of results to the control room and chemistry laboratory, are supported by ALARP considerations, international experience and research concerning nuclide behaviour (corrosion-product and actinide solubility, fission-product degassing, and reactions of impurities and additives). This paper details the means developed by EDF for making successful and meaningful sampling and measurements to achieve the essential objectives associated with the monitoring. (authors)
Flavour of fundamental particles and prime numbers
From the discreteness and continuity, as described by the ratio of the discreteness of the n's to the continuous spread of n/2n, which is directly connected with the widths and lifetimes of fundamental particles, the flavours of fundamental particles can be directly obtained. The behaviour of beta decay and the energy levels of light nuclei can also be predicted. The appearance of primes also seems to suggest that further application of reductionism to fundamental particles is not possible
How accurately can 21cm tomography constrain cosmology?
Mao, Yi; Tegmark, Max; McQuinn, Matthew; Zaldarriaga, Matias; Zahn, Oliver
2008-07-01
There is growing interest in using 3-dimensional neutral hydrogen mapping with the redshifted 21 cm line as a cosmological probe. However, its utility depends on many assumptions. To aid experimental planning and design, we quantify how the precision with which cosmological parameters can be measured depends on a broad range of assumptions, focusing on the 21 cm signal from 6 ≲ z ≲ 12: sensitivity to noise, to uncertainties in the reionization history, and to the level of contamination from astrophysical foregrounds. We derive simple analytic estimates for how various assumptions affect an experiment’s sensitivity, and we find that the modeling of reionization is the most important, followed by the array layout. We present an accurate yet robust method for measuring cosmological parameters that exploits the fact that the ionization power spectra are rather smooth functions that can be accurately fit by 7 phenomenological parameters. We find that for future experiments, marginalizing over these nuisance parameters may provide constraints almost as tight on the cosmology as if 21 cm tomography measured the matter power spectrum directly. A future square kilometer array optimized for 21 cm tomography could improve the sensitivity to spatial curvature and neutrino masses by up to 2 orders of magnitude, to ΔΩk≈0.0002 and Δmν≈0.007eV, and give a 4σ detection of the spectral index running predicted by the simplest inflation models.
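The effect of marginalizing over nuisance parameters can be made concrete with a toy Fisher-matrix forecast: the marginalized 1σ error on parameter i is sqrt((F⁻¹)ᵢᵢ), which is always at least the fixed-nuisance error 1/sqrt(Fᵢᵢ). The 2×2 matrix below is invented for illustration, not a forecast from the paper.

```python
def invert(m):
    """Gauss-Jordan inverse of a small square matrix (list of lists)."""
    n = len(m)
    a = [row[:] + [float(i == j) for j in range(n)] for i, row in enumerate(m)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(a[r][col]))
        a[col], a[piv] = a[piv], a[col]
        p = a[col][col]
        a[col] = [v / p for v in a[col]]
        for r in range(n):
            if r != col:
                f = a[r][col]
                a[r] = [v - f * w for v, w in zip(a[r], a[col])]
    return [row[n:] for row in a]


def marginalized_sigma(F, i):
    """1-sigma error on parameter i after marginalizing the rest."""
    return invert(F)[i][i] ** 0.5


def fixed_sigma(F, i):
    """1-sigma error on parameter i with all other parameters held fixed."""
    return F[i][i] ** -0.5
```

The gap between the two numbers measures the cost of the nuisance parameters, which the abstract reports as surprisingly small for a well-designed 21 cm experiment.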
Fundamental Laws and the Completeness of Physics
Spurrett, David Jon
1999-01-01
The status of fundamental laws is an important issue when deciding between the three broad ontological options of fundamentalism (of which the thesis that physics is complete is typically a sub-type), emergentism, and disorder or promiscuous realism. Cartwright's assault on fundamental laws, which argues that such laws do not, and cannot, typically state the facts, and hence cannot be used to support belief in a fundamental ontological order, is discussed in this context. A case is made in def...
Arithmetic fundamental groups and moduli of curves
This is a short note on the algebraic (or sometimes called arithmetic) fundamental groups of an algebraic variety, which connect classical fundamental groups with Galois groups of fields. A large part of this note describes the algebraic fundamental groups in a concrete manner. This note gives only a sketch of the fundamental groups of the algebraic stack of moduli of curves. Some application to a purely topological statement, i.e., an obstruction to the surjectivity of the Johnson homomorphisms in the mapping class groups, which comes from the Galois group of Q, is explained. (author)
Laboratory Building for Accurate Determination of Plutonium
2008-01-01
The accurate determination of plutonium is one of the most important assay techniques of nuclear fuel; it is also the key to chemical measurement transfer and the basis of the nuclear material balance. An
Grate, Leslie R
2005-04-01
Background: Molecular profiling generates abundance measurements for thousands of gene transcripts in biological samples such as normal and tumor tissues (data points). Given such two-class high-dimensional data, many methods have been proposed for classifying data points into one of the two classes. However, finding very small sets of features able to correctly classify the data is problematic, as the underlying mathematical problem is hard. Existing methods can find "small" feature sets, but give no hint how close this is to the true minimum size. Without fundamental mathematical advances, finding true minimum-size sets will remain elusive, and, more importantly for the microarray community, there will be no methods for finding them. Results: We use the brute-force approach of exhaustive search through all genes, gene pairs and, for some data sets, gene triples. Each unique gene combination is analyzed with a few-parameter linear-hyperplane classification method, looking for those combinations that form training-error-free classifiers. All 10 published data sets studied are found to contain predictive small feature sets. Four contain thousands of gene pairs and 6 have single genes that perfectly discriminate. Conclusion: This technique discovered small sets of genes (3 or fewer) in published data that form accurate classifiers, yet were not reported in the prior publications. This could be a common characteristic of microarray data, making the search worth the computational cost. Such small gene sets could indicate biomarkers and portend simple medical diagnostic tests. We recommend checking for small gene sets routinely. We find 4 gene pairs and many gene triples in the large hepatocellular carcinoma (HCC, liver cancer) data set of Chen et al. The key component of these is the "placental gene of unknown function", PLAC8. Our HMM modeling indicates PLAC8 might have a domain like part of lP59's crystal structure (a Non
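The simplest case reported above, single genes that perfectly discriminate, reduces to checking that the two classes' expression ranges do not overlap, so a threshold separates them with zero training error. An illustrative sketch on toy data (a real microarray matrix would have thousands of columns):

```python
def perfect_single_genes(X, y):
    """Indices of features whose class-0 and class-1 value ranges do not
    overlap, i.e. single features that classify the training data perfectly.
    X: list of rows (samples), y: list of 0/1 labels."""
    hits = []
    for j in range(len(X[0])):
        a = [row[j] for row, label in zip(X, y) if label == 0]
        b = [row[j] for row, label in zip(X, y) if label == 1]
        if max(a) < min(b) or max(b) < min(a):
            hits.append(j)
    return hits
```

The exhaustive pair and triple searches of the paper generalize this idea to linear hyperplanes over two or three features at a time.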
Individual differences in fundamental social motives.
Neel, Rebecca; Kenrick, Douglas T; White, Andrew Edward; Neuberg, Steven L
2016-06-01
Motivation has long been recognized as an important component of how people both differ from, and are similar to, each other. The current research applies the biologically grounded fundamental social motives framework, which assumes that human motivational systems are functionally shaped to manage the major costs and benefits of social life, to understand individual differences in social motives. Using the Fundamental Social Motives Inventory, we explore the relations among the different fundamental social motives of Self-Protection, Disease Avoidance, Affiliation, Status, Mate Seeking, Mate Retention, and Kin Care; the relationships of the fundamental social motives to other individual difference and personality measures including the Big Five personality traits; the extent to which fundamental social motives are linked to recent life experiences; and the extent to which life history variables (e.g., age, sex, childhood environment) predict individual differences in the fundamental social motives. Results suggest that the fundamental social motives are a powerful lens through which to examine individual differences: They are grounded in theory, have explanatory value beyond that of the Big Five personality traits, and vary meaningfully with a number of life history variables. A fundamental social motives approach provides a generative framework for considering the meaning and implications of individual differences in social motivation. PMID:26371400
An Approximate Bayesian Fundamental Frequency Estimator
Nielsen, Jesper Kjær; Christensen, Mads Græsbøll; Jensen, Søren Holdt
Joint fundamental frequency and model order estimation is an important problem in several applications such as speech and music processing. In this paper, we develop an approximate estimation algorithm for these quantities using Bayesian inference. The inference about the fundamental frequency and...
Simple and High-Accurate Schemes for Hyperbolic Conservation Laws
Renzhong Feng
2014-01-01
The paper constructs a class of simple high-accurate schemes (SHA schemes) with third-order approximation accuracy in both space and time to solve linear hyperbolic equations, using linear data reconstruction and the Lax-Wendroff scheme. The schemes can be made even fourth-order accurate with a special choice of parameter. In order to avoid spurious oscillations in the vicinity of strong gradients, we make the SHA schemes total variation diminishing (TVD schemes for short) by setting a flux limiter in their numerical fluxes, and then extend these schemes to solve the nonlinear Burgers' equation and the Euler equations. The numerical examples show that these schemes give high order of accuracy and high-resolution results. The advantages of these schemes are their simplicity and high order of accuracy.
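The limiter construction described above can be sketched for linear advection: start from the second-order Lax-Wendroff correction and scale it by a minmod-type limiter phi(r), so the scheme falls back to first-order upwind near steep gradients and total variation cannot grow. This is a generic textbook TVD scheme, not the SHA schemes of the paper.

```python
def minmod_phi(r):
    """Minmod limiter: phi(r) = max(0, min(1, r)), inside the TVD region."""
    return max(0.0, min(1.0, r))


def advect(u, nu, steps):
    """Flux-limited Lax-Wendroff for u_t + a*u_x = 0 with a > 0, periodic
    grid, nu = a*dt/dx (CFL number, 0 < nu <= 1). flux[i] is the
    (a-normalized) numerical flux at interface i+1/2."""
    n = len(u)
    for _ in range(steps):
        flux = [0.0] * n
        for i in range(n):
            du = u[(i + 1) % n] - u[i]      # downwind difference
            dl = u[i] - u[i - 1]            # upwind difference
            r = dl / du if du != 0 else 0.0
            flux[i] = u[i] + 0.5 * (1 - nu) * minmod_phi(r) * du
        u = [u[i] - nu * (flux[i] - flux[i - 1]) for i in range(n)]
    return u
```

Advecting a square pulse shows the point of the limiter: the profile moves without the oscillations plain Lax-Wendroff would produce, at the cost of some smearing.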
Constraining the Nordtvedt parameter with the BepiColombo Radioscience experiment
De Marchi, Fabrizio; Milani, Andrea; Schettino, Giulia
2016-01-01
BepiColombo is a joint ESA/JAXA mission to Mercury with challenging objectives regarding geophysics, geodesy and fundamental physics. The Mercury Orbiter Radioscience Experiment (MORE) is one of the on-board experiments, including three different but linked experiments: gravimetry, rotation and relativity. The aim of the relativity experiment is the measurement of the post-Newtonian parameters. Thanks to accurate tracking between Earth and spacecraft, the results are expected to be very precise. However, the outcome of the experiment strictly depends on our knowledge of the solar system: ephemerides, number of bodies (planets, satellites and asteroids) and their masses. In this paper we describe a semi-analytic model used to perform a covariance analysis to quantify the effects, on the relativity experiment, of the uncertainties in solar-system body parameters. In particular, our attention is focused on the Nordtvedt parameter $\eta$ used to parametrize the strong equivalence principle violation. Afte...
Innovation of Methods for Measurement and Modelling of Twisted Pair Parameters
Lukas Cepa
2011-01-01
The goal of this paper is to optimize a measurement methodology for the most accurate broadband modelling of the characteristic impedance and other parameters of twisted pairs. Measured values and their comparison are presented in this article. An automated measurement facility was implemented at the Department of Telecommunication of the Faculty of Electrical Engineering of the Czech Technical University in Prague. The measurement facility contains RF switches allowing measurements up to 300 MHz or 1 GHz. A twisted pair's parameters can be obtained by measurement, but for modelling its fundamental characteristics it is useful to define functions that model the properties of the twisted pair. Its primary and secondary parameters depend mostly on frequency. For twisted pair deployment, we are interested in the frequency band from 1 MHz to 100 MHz.
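The role of the primary parameters R, L, G, C can be sketched directly through the standard transmission-line relation Z0 = sqrt((R + jωL)/(G + jωC)). The RLGC values below are rough, assumed per-metre figures for a CAT5-class pair, not measured data from the paper.

```python
import cmath
import math


def char_impedance(f, R, L, G, C):
    """Characteristic impedance Z0 = sqrt((R + j*w*L) / (G + j*w*C)),
    w = 2*pi*f, with R, L, G, C given per metre."""
    w = 2 * math.pi * f
    return cmath.sqrt((R + 1j * w * L) / (G + 1j * w * C))


# Assumed order-of-magnitude primary parameters per metre (illustrative only):
R, L, G, C = 0.176, 525e-9, 1e-11, 52e-12
```

At high frequency Z0 approaches the familiar sqrt(L/C) ≈ 100 Ω, while toward audio frequencies |Z0| rises as the R term dominates, which is why broadband modelling of the primary parameters matters.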
An accurate parametrisation of the x-ray attenuation coefficient
A brief note is presented concerning the derivation of an expression for the parametrisation of the linear x-ray attenuation coefficient, starting from the fundamental theory of photon-electron interactions. Accuracy is discussed for elements common in tissue and contrast media. Calculated atomic cross-sections for coherent and incoherent scattering are compared with those of other workers (Hubbell, Schofield, et al.) and an improved parametrisation for the photoelectric effect is also discussed. It is concluded that approximations in terms of atomic number, the number of atoms per unit volume and the photon energy have no sound theoretical foundation and have limited accuracy and validity, and that many quantities defined as energy-independent parameters of the mixture (e.g. Compton and photoelectric parameters) are in fact dependent on energy over a wide range. (U.K.)
Invariant Image Watermarking Using Accurate Zernike Moments
Ismail A. Ismail
2010-01-01
Full Text Available Problem statement: Digital image watermarking is the most popular method for image authentication, copyright protection and content description. Zernike moments are the most widely used moments in image processing and pattern recognition. The magnitudes of Zernike moments are rotation invariant, so they can be used directly as a watermark signal or be further modified to carry embedded data. Zernike moments computed in Cartesian coordinates are not accurate due to geometrical and numerical error. Approach: In this study, we employed a robust image-watermarking algorithm using accurate Zernike moments. These moments are computed in polar coordinates, where both approximation and geometric errors are removed. Accurate Zernike moments are used in image watermarking and proved to be robust against different kinds of geometric attacks. The performance of the proposed algorithm is evaluated using standard images. Results: Experimental results show that accurate Zernike moments achieve a higher degree of robustness than approximated ones against rotation, scaling, flipping, shearing and affine transformation. Conclusion: By computing accurate Zernike moments, the embedded watermark bits can be extracted at a low error rate.
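The Zernike moments discussed here are built from the standard radial polynomials R_n^m; a minimal sketch of that textbook polynomial (this is the generic definition, not the authors' accurate polar-coordinate implementation):

```python
from math import factorial

def zernike_radial(n, m, rho):
    """Radial polynomial R_n^m(rho) of the Zernike basis.

    Requires |m| <= n and n - |m| even; rho is the normalized radius in [0, 1].
    """
    m = abs(m)
    return sum(
        (-1) ** s * factorial(n - s)
        / (factorial(s) * factorial((n + m) // 2 - s) * factorial((n - m) // 2 - s))
        * rho ** (n - 2 * s)
        for s in range((n - m) // 2 + 1)
    )

# R_2^0(rho) = 2*rho**2 - 1, so R_2^0(0.5) = -0.5; every R_n^m(1) = 1.
```

A moment A_nm integrates the image against R_n^m(rho) * exp(-j*m*theta); rotating the image only multiplies A_nm by a phase, which is why |A_nm| is the rotation-invariant quantity used for watermarking.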
Fundamental mode in advanced technology optical fibres by two-point quasi-rational approximations
An analytic approximant is found for the fundamental guided mode of advanced technology optical fibres using a two-point quasi-rational approximation method. Very accurate results are obtained, particularly for the linearly varying refractive index core, the maximum error being less than 1×10⁻⁶. The fractional power transport across the fibre is also given.
Jungmann, K
2002-01-01
At the Kernfysisch Versneller Instituut (KVI) in Groningen, NL, a new facility (TRIµP) is under development. It aims to produce, slow down and trap radioactive isotopes in order to perform accurate measurements on fundamental symmetries and interactions. A spectrum of radioactive nuclei
The reconstruction of neutron and gamma-ray doses at Hiroshima and Nagasaki begins with a determination of the parameters describing the explosion. The calculations of the air transported radiation fields and survivor doses from the Hiroshima and Nagasaki bombs require knowledge of a variety of parameters related to the explosions. These various parameters include the heading of the bomber when the bomb was released, the epicenters of the explosions, the bomb yields, and the tilt of the bombs at time of explosion. The epicenter of a bomb is the explosion point in air that is specified in terms of a burst height and a hypocenter (or the point on the ground directly below the epicenter of the explosion). The current reassessment refines the energy yield and burst height for the Hiroshima bomb, as well as the locations of the Hiroshima and Nagasaki hypocenters on the modern city maps used in the analysis of the activation data for neutrons and TLD data for gamma rays. (J.P.N.)
Fast and accurate estimation for astrophysical problems in large databases
Richards, Joseph W.
2010-10-01
A recent flood of astronomical data has created much demand for sophisticated statistical and machine learning tools that can rapidly draw accurate inferences from large databases of high-dimensional data. In this Ph.D. thesis, methods for statistical inference in such databases will be proposed, studied, and applied to real data. I use methods for low-dimensional parametrization of complex, high-dimensional data that are based on the notion of preserving the connectivity of data points in the context of a Markov random walk over the data set. I show how this simple parameterization of data can be exploited to: define appropriate prototypes for use in complex mixture models, determine data-driven eigenfunctions for accurate nonparametric regression, and find a set of suitable features to use in a statistical classifier. In this thesis, methods for each of these tasks are built up from simple principles, compared to existing methods in the literature, and applied to data from astronomical all-sky surveys. I examine several important problems in astrophysics, such as estimation of star formation history parameters for galaxies, prediction of redshifts of galaxies using photometric data, and classification of different types of supernovae based on their photometric light curves. Fast methods for high-dimensional data analysis are crucial in each of these problems because they all involve the analysis of complicated high-dimensional data in large, all-sky surveys. Specifically, I estimate the star formation history parameters for the nearly 800,000 galaxies in the Sloan Digital Sky Survey (SDSS) Data Release 7 spectroscopic catalog, determine redshifts for over 300,000 galaxies in the SDSS photometric catalog, and estimate the types of 20,000 supernovae as part of the Supernova Photometric Classification Challenge. Accurate predictions and classifications are imperative in each of these examples because these estimates are utilized in broader inference problems
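The connectivity-preserving parametrization described in this abstract can be sketched as a diffusion-map-style construction: build a Markov transition matrix over the data and use its leading non-trivial eigenvectors as coordinates. The kernel bandwidth and the data below are placeholders, not values from the thesis:

```python
import numpy as np

def diffusion_map(X, eps, n_coords=2):
    """Low-dimensional coordinates from a Markov random walk over the data set.

    X is an (n_points, n_features) array; eps is a user-chosen Gaussian kernel
    bandwidth. Returns (n_points, n_coords) embedding coordinates.
    """
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # pairwise squared distances
    K = np.exp(-d2 / eps)                                # Gaussian affinities
    P = K / K.sum(axis=1, keepdims=True)                 # row-stochastic transition matrix
    vals, vecs = np.linalg.eig(P)
    order = np.argsort(-vals.real)                       # sort eigenvalues descending
    # Skip the trivial constant eigenvector (eigenvalue 1); keep the next n_coords.
    return vecs.real[:, order[1:n_coords + 1]]

rng = np.random.default_rng(0)
coords = diffusion_map(rng.normal(size=(30, 3)), eps=2.0)
```

Points that are well connected under the random walk land close together in the embedding, which is what makes such coordinates usable as prototypes, regression eigenfunctions, or classifier features.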
Leveraging Two Kinect Sensors for Accurate Full-Body Motion Capture.
Gao, Zhiquan; Yu, Yao; Zhou, Yu; Du, Sidan
2015-01-01
Accurate motion capture plays an important role in sports analysis, the medical field and virtual reality. Current methods for motion capture often suffer from occlusions, which limits the accuracy of their pose estimation. In this paper, we propose a complete system to measure the pose parameters of the human body accurately. Different from previous monocular depth camera systems, we leverage two Kinect sensors to acquire more information about human movements, which ensures that we can still get an accurate estimation even when significant occlusion occurs. Because human motion is temporally constant, we adopt a learning analysis to mine the temporal information across the posture variations. Using this information, we estimate human pose parameters accurately, regardless of rapid movement. Our experimental results show that our system can perform an accurate pose estimation of the human body with the constraint of information from the temporal domain. PMID:26402681
Fundamentals of preparative and nonlinear chromatography
Guiochon, Georges A [ORNL]; Felinger, Attila [ORNL]; Katti, Anita [University of Tennessee, Knoxville (UTK) & Oak Ridge National Laboratory (ORNL)]; Shirazi, Dean G [unknown]
2006-02-01
The second edition of Fundamentals of Preparative and Nonlinear Chromatography is devoted to the fundamentals of preparative chromatography, a process for the purification or extraction of chemicals or proteins that is widely used in the pharmaceutical industry. This process permits the preparation of extremely pure compounds satisfying the requirements of the US Food and Drug Administration. The book describes the fundamentals of thermodynamics, mass transfer kinetics, and flow through porous media that are relevant to chromatography. It presents the models used in chromatography and their solutions, describes the different processes used and their numerous applications, and discusses methods for optimizing the experimental conditions of the process.
Ion beam analysis fundamentals and applications
Nastasi, Michael; Wang, Yongqiang
2015-01-01
Ion Beam Analysis: Fundamentals and Applications explains the basic characteristics of ion beams as applied to the analysis of materials, as well as ion beam analysis (IBA) of art/archaeological objects. It focuses on the fundamentals and applications of ion beam methods of materials characterization. The book explains how ions interact with solids and describes what information can be gained. It starts by covering the fundamentals of ion beam analysis, including kinematics, ion stopping, Rutherford backscattering, channeling, elastic recoil detection, particle induced x-ray emission, and nucle
Modeling of fundamental phenomena in welds
Zacharia, T.; Vitek, J.M. [Oak Ridge National Lab., TN (United States); Goldak, J.A. [Carleton Univ., Ottawa, Ontario (Canada); DebRoy, T.A. [Pennsylvania State Univ., University Park, PA (United States); Rappaz, M. [Ecole Polytechnique Federale de Lausanne (Switzerland); Bhadeshia, H.K.D.H. [Cambridge Univ. (United Kingdom)
1993-12-31
Recent advances in the mathematical modeling of fundamental phenomena in welds are summarized. State-of-the-art mathematical models, advances in computational techniques, emerging high-performance computers, and experimental validation techniques have provided significant insight into the fundamental factors that control the development of the weldment. The current status and scientific issues in the areas of heat and fluid flow in welds, heat source metal interaction, solidification microstructure, and phase transformations are assessed. Future research areas of major importance for understanding the fundamental phenomena in weld behavior are identified.
The Geometric Nature of the Fundamental Lemma
Nadler, David
2010-01-01
The Fundamental Lemma is a somewhat obscure combinatorial identity introduced by Robert P. Langlands as an ingredient in the theory of automorphic representations. After many years of deep contributions by mathematicians working in representation theory, number theory, algebraic geometry, and algebraic topology, a proof of the Fundamental Lemma was recently completed by Ngo Bao Chau, for which he was awarded a Fields Medal. Our aim here is to touch on some of the beautiful ideas contributing to the Fundamental Lemma and its proof. We highlight the geometric nature of the problem, which allows one to attack a question in p-adic analysis with the tools of algebraic geometry.
An accurate RLGC circuit model for dual tapered TSV structure
A fast RLGC circuit model with analytical expressions is proposed for the dual tapered through-silicon via (TSV) structure in three-dimensional integrated circuits, valid for different slope angles over a wide frequency range. By describing the electrical characteristics of the dual tapered TSV structure, the RLGC parameters are extracted based on the numerical integration method. The RLGC model includes metal resistance, metal inductance, substrate resistance and outer inductance, with the skin effect and eddy effect taken into account. The proposed analytical model is verified to be nearly as accurate as the Q3D extractor but more efficient. (semiconductor integrated circuits)
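The numerical-integration style of parameter extraction mentioned above can be illustrated on its simplest instance, the DC metal resistance of a linearly tapered via, by summing the resistance of thin slices along the axis. The geometry and copper resistivity below are assumptions for illustration, not the paper's values:

```python
import math

def tapered_via_resistance(rho, h, r_top, r_bottom, n=1000):
    """DC resistance of a linearly tapered cylindrical via by midpoint-rule integration.

    rho: metal resistivity [ohm*m]; h: via height [m]; r_top, r_bottom: radii [m].
    Integrates dR = rho * dz / (pi * r(z)^2) over the via height.
    """
    dz = h / n
    total = 0.0
    for i in range(n):
        z = (i + 0.5) * dz
        r = r_top + (r_bottom - r_top) * z / h  # linearly tapered radius at depth z
        total += rho * dz / (math.pi * r * r)
    return total

# Assumed copper via, 50 um tall, tapering from 5 um to 2.5 um radius:
R_num = tapered_via_resistance(rho=1.68e-8, h=50e-6, r_top=5e-6, r_bottom=2.5e-6)
# For a linear taper the closed form is rho*h / (pi * r_top * r_bottom):
R_closed = 1.68e-8 * 50e-6 / (math.pi * 5e-6 * 2.5e-6)
```

The same slicing idea extends to the inductance and substrate terms of the model; the closed-form check above is what makes the analytical expressions attractive over a full field solver.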
Accurate analysis of EBSD data for phase identification
Palizdar, Y; Cochrane, R C; Brydson, R; Leary, R; Scott, A J, E-mail: preyp@leeds.ac.u [Institute for Materials Research, University of Leeds, Leeds LS2 9JT UK (United Kingdom)
2010-07-01
This paper aims to investigate the reliability of software default settings in the analysis of EBSD results. To study the effect of software settings on the EBSD results, the presence of different phases in high-Al steel has been investigated by EBSD. The results show the importance of appropriate automated analysis parameters for valid and reliable phase discrimination. Specifically, the importance of the minimum number of indexed bands and the maximum solution error has been investigated, with values of 7-9 and 1.0-1.5°, respectively, found to be necessary for accurate analysis.
Accurate studies on dissociation energies of diatomic molecules
Sun, WeiGuo; Fan, QunChao
2007-01-01
The molecular dissociation energies of some electronic states of hydride and N₂ molecules were studied using a parameter-free analytical formula suggested in this study and the algebraic method (AM) proposed recently. The results show that the accurate AM dissociation energies D_e(AM) agree excellently with the experimental dissociation energies D_e(expt), and that the dissociation energy of an electronic state such as the 2³Δg state of ⁷Li₂, whose experimental value is not available, can be predicted using the new formula.
Optimized pulse sequences for the accurate measurement of aortic compliance
Aortic compliance is potentially an important cardiovascular diagnostic parameter by virtue of a proposed correlation with cardiovascular fitness. Measurement requires cross-sectional images of the ascending and descending aorta in systole and diastole for measurement of aortic lumen areas. Diastolic images have poor vessel-wall delineation due to signal from slow-flowing blood. A comparison has been carried out using presaturation (SAT) RF pulses, transparent RF pulses, and flow-compensated gradients in standard pulse sequences to improve vessel-wall delineation in diastole. Properly timed SAT pulses provide the most consistent vessel-wall delineation and the most accurate measurement of aortic compliance.
Biological fitness and the fundamental theorem of natural selection.
Grafen, Alan
2015-07-01
Fisher's fundamental theorem of natural selection is proved satisfactorily for the first time, resolving confusions in the literature about the nature of reproductive value and fitness. Reproductive value is defined following Fisher, without reference to genetic variation, and fitness is the proportional rate of increase in an individual's contribution to the demographic population size. The mean value of fitness is the same in each age class, and it also equals the population's Malthusian parameter. The statement and derivation are regarded as settled here, and so the general biological significance of the fundamental theorem can be debated. The main purpose of the theorem is to find a quantitative measure of the effect of natural selection in a Mendelian system, thus founding Darwinism on Mendelism and identifying the design criterion for biological adaptation, embodied in Fisher's ingenious definition of fitness. The relevance of the newly understood theorem to five current research areas is discussed. PMID:26098334
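The abstract's identity between mean fitness and the population's Malthusian parameter can be illustrated numerically with a toy age-structured (Leslie-matrix) model; the vital rates below are invented for illustration, not taken from Grafen's paper:

```python
import numpy as np

# Leslie matrix for a toy 3-age-class population: row 0 holds per-class
# fecundities, the sub-diagonal holds survival probabilities (assumed values).
L = np.array([[0.0, 1.2, 0.8],
              [0.9, 0.0, 0.0],
              [0.0, 0.7, 0.0]])

eigvals = np.linalg.eigvals(L)
lam = max(eigvals.real)      # dominant eigenvalue (real, by Perron-Frobenius)
malthusian = np.log(lam)     # the population's Malthusian parameter m = log(lambda)
```

Once the age distribution settles to the dominant eigenvector, every age class grows at the same proportional rate lambda per time step, which is the sense in which mean fitness is the same in each age class and equals the Malthusian parameter.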
Integrative physiology of fundamental frequency control in birds.
Goller, Franz; Riede, Tobias
2013-06-01
One major feature of the remarkable vocal repertoires of birds is the range of fundamental frequencies across species, but also within individual species. This review discusses four variables that determine the oscillation frequency of the vibrating structures within a bird's syrinx. These are (1) viscoelastic properties of the oscillating tissue, (2) air sac pressure, (3) neuromuscular control of movements and (4) source-filter interactions. Our current understanding of morphology, biomechanics and neural control suggests that a complex interplay of these parameters can lead to multiple combinations for generating a particular fundamental frequency. An increase in the complexity of syringeal morphology from non-passeriform birds to oscines also led to a different interplay for regulating oscillation frequency by enabling control of tension that is partially independent of regulation of airflow. In addition to reviewing the available data for all different contributing variables, we point out open questions and possible approaches. PMID:23238240
Fundamentals of nuclear science and engineering
Shultis, J Kenneth
2002-01-01
An ideal introduction to the fundamentals of nuclear science and engineering. Presents the basic nuclear science needed to understand and quantify nuclear phenomena such as nuclear reactions, nuclear energy, radioactivity, and radiation interactions with matter.
A fundamental identity for Parseval frames
Balan, R.; Casazza, P. G.; Edidin, D; Kutyniok, G.
2005-01-01
In this paper we establish a surprising fundamental identity for Parseval frames in a Hilbert space. Several variations of this result are given, including an extension to general frames. Finally, we discuss the derived results.
Fundamentals of nanoscaled field effect transistors
Chaudhry, Amit
2013-01-01
Fundamentals of Nanoscaled Field Effect Transistors gives comprehensive coverage of the fundamental physical principles and theory behind nanoscale transistors. The specific issues that arise for nanoscale MOSFETs, such as quantum mechanical tunneling and inversion layer quantization, are fully explored. The solutions to these issues, such as high-κ technology, strained-Si technology, alternate devices structures and graphene technology are also given. Some case studies regarding the above issues and solution are also given in the book. In summary, this book: Covers the fundamental principles behind nanoelectronics/microelectronics Includes chapters devoted to solutions tackling the quantum mechanical effects occurring at nanoscale Provides some case studies to understand the issue mathematically Fundamentals of Nanoscaled Field Effect Transistors is an ideal book for researchers and undergraduate and graduate students in the field of microelectronics, nanoelectronics, and electronics.
Fundamental Group of Desargues Configuration Spaces
Berceanu, Barbu
2011-01-01
We compute the fundamental group of various spaces of Desargues configurations in complex projective spaces: planar and non-planar configurations, with a fixed center and also with an arbitrary center.
FUNDAMENTAL PRINCIPLES OF AYURVEDA PART - III
Pandya, Vaidya Navnitlal B.
1983-01-01
In this part the author explains the origination of Rasas (tastes) and their effect on Dosas (bodily humors) in various seasons. He also sheds some light on the fundamentals of the pharmacology of Ayurveda.
FUNDAMENTAL PRINCIPLES OF AYURVEDA - PART V
Pandya, Vaidya Navnitlal B.
1983-01-01
In this concluding part of the study, the author explains the pathological investigation of urine and faeces. He also sheds some light on the fundamentals of pharmacy relevant to the pharmacology of Ayurveda.
Symmetric differentials and the fundamental group
Brunebarbe, Yohan; Totaro, Burt
2012-01-01
Esnault asked whether every smooth complex projective variety with infinite fundamental group has a nonzero symmetric differential (a section of a symmetric power of the cotangent bundle). In a sense, this would mean that every variety with infinite fundamental group has some nonpositive curvature. We show that the answer to Esnault's question is positive when the fundamental group has a finite-dimensional complex representation with infinite image. This applies to all known varieties with infinite fundamental group. Along the way, we produce many symmetric differentials on the base of a variation of Hodge structures. One interest of these results is that symmetric differentials give information in the direction of Kobayashi hyperbolicity. For example, they limit how many rational curves the variety can contain.
Instructor Special Report: RIF (Reading Is FUNdamental)
Instructor, 1976
1976-01-01
At a time when innovative programs of the sixties are quickly falling out of the picture, Reading Is FUNdamental, after ten years and five million free paperbacks, continues to expand and show results. (Editor)
Accurate atomic data for industrial plasma applications
Griesmann, U.; Bridges, J.M.; Roberts, J.R.; Wiese, W.L.; Fuhr, J.R. [National Inst. of Standards and Technology, Gaithersburg, MD (United States)
1997-12-31
Reliable branching fraction, transition probability and transition wavelength data for radiative dipole transitions of atoms and ions in plasma are important in many industrial applications. Optical plasma diagnostics and modeling of the radiation transport in electrical discharge plasmas (e.g. in electrical lighting) depend on accurate basic atomic data. NIST has an ongoing experimental research program to provide accurate atomic data for radiative transitions. The new NIST UV-vis-IR high resolution Fourier transform spectrometer has become an excellent tool for accurate and efficient measurements of numerous transition wavelengths and branching fractions in a wide wavelength range. Recently, the authors have also begun to employ photon counting techniques for very accurate measurements of branching fractions of weaker spectral lines with the intent to improve the overall accuracy for experimental branching fractions to better than 5%. They have now completed their studies of transition probabilities of Ne I and Ne II. The results agree well with recent calculations and for the first time provide reliable transition probabilities for many weak intercombination lines.
More accurate picture of human body organs
Computerized tomography and nuclear magnetic resonance tomography (NMRT) are revolutionary contributions to radiodiagnosis because they make it possible to obtain a more accurate image of human body organs. The principles of both methods are described. Attention is mainly devoted to NMRT, which has been in clinical use for only three years. It does not burden the organism with ionizing radiation. (Ha)
How Unstable Are Fundamental Quantum Supermembranes?
Kaku, Michio
1996-01-01
String duality requires the presence of solitonic $p$-branes. By contrast, the existence of fundamental supermembranes is problematic, since they are probably unstable. In this paper, we re-examine the quantum stability of fundamental supermembranes in 11 dimensions. Previously, supermembranes were shown to be unstable by approximating them with SU(n) super Yang-Mills fields as $n \rightarrow \infty$. We show that this instability persists even if we quantize the continuum theory from the ver...