Choi, Ki Yong; No, Hee Cheon [Korea Advanced Institute of Science and Technology, Taejon (Korea, Republic of)]
1998-12-31
The calibration method for an electrochemical probe that neglects the effect of the normal velocity on mass transport can cause large errors when applied to the measurement of wall shear rates in thin wavy flow with large-amplitude waves. An extended calibration method is developed to account for the contributions of the normal velocity. The inclusion of the turbulence-induced normal velocity term is found to have a negligible effect on the mass transfer coefficient. The contribution of the wave-induced normal velocity can be classified by the dimensionless parameter V. If V is above a critical value, V{sub crit}, the effects of the wave-induced normal velocity grow with increasing V, while below V{sub crit} they are negligible. The present inverse method predicts the unknown shear rate in thin wavy flow with large-amplitude waves more accurately than the previous method. 18 refs., 8 figs. (Author)
Model Calibration of Exciter and PSS Using Extended Kalman Filter
Kalsi, Karanjit; Du, Pengwei; Huang, Zhenyu
2012-07-26
Power system modeling and controls continue to become more complex with the advent of smart grid technologies and the large-scale deployment of renewable energy resources. As demonstrated in recent studies, inaccurate system models could lead to large-scale blackouts, motivating the need for model calibration. Current methods of model calibration rely on manual tuning based on engineering experience; they are time consuming and can yield inaccurate parameter estimates. In this paper, the Extended Kalman Filter (EKF) is used as a tool to calibrate the exciter and Power System Stabilizer (PSS) models of a particular type of machine in the Western Electricity Coordinating Council (WECC). The EKF-based parameter estimation is a recursive prediction-correction process which uses the mismatch between simulation and measurement to adjust the model parameters at every time step. Numerical simulations using actual field test data demonstrate the effectiveness of the proposed approach in calibrating the parameters.
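The recursive prediction-correction loop can be sketched in a few lines. The sketch below is a minimal illustration, not the WECC exciter/PSS model: it treats a single unknown gain `theta` of a toy measurement model `y = theta * u` as a random-walk state and uses the simulation/measurement mismatch to correct it at every step. All parameter values and the model form are hypothetical.

```python
import numpy as np

def ekf_parameter_calibration(us, ys, theta0, P0, Q, R):
    """EKF-style recursive estimate of a scalar model parameter theta.

    Toy model: y_k = theta * u_k + noise. The parameter follows a random
    walk, so prediction leaves theta unchanged; the correction step adjusts
    theta using the mismatch between measurement and simulated output.
    """
    theta, P = theta0, P0
    for u, y in zip(us, ys):
        # Prediction: random-walk parameter model, covariance inflates by Q
        P = P + Q
        # Correction: Jacobian of y = theta*u with respect to theta is H = u
        H = u
        S = H * P * H + R                     # innovation covariance
        K = P * H / S                         # Kalman gain
        theta = theta + K * (y - theta * u)   # adjust with the mismatch
        P = (1.0 - K * H) * P
    return theta

rng = np.random.default_rng(0)
us = rng.uniform(0.5, 1.5, 200)                 # input signal
ys = 2.0 * us + 0.01 * rng.standard_normal(200) # "measurements", true gain 2.0
est = ekf_parameter_calibration(us, ys, theta0=0.5, P0=1.0, Q=1e-6, R=1e-4)
```

With a poor initial guess of 0.5, the estimate converges to the true gain of 2.0 within a handful of samples, which is the behaviour the recursive prediction-correction process relies on.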
Automated Calibration For Numerical Models Of Riverflow
Fernandez, Betsaida; Kopmann, Rebekka; Oladyshkin, Sergey
2017-04-01
Calibration of numerical models has been fundamental since the beginning of all types of hydro-system modeling, to approximate the parameters that can mimic the overall system behavior. Thus, an assessment of different deterministic and stochastic optimization methods is undertaken to compare their robustness, computational feasibility, and global search capacity. Also, the uncertainty of the most suitable methods is analyzed. These optimization methods minimize an objective function that compares synthetic measurements with simulated data. Synthetic measurement data replace the observed data set to guarantee that a parameter solution exists. The input data for the objective function derive from a hydro-morphological dynamics numerical model representing a 180-degree bend channel. The hydro-morphological numerical model exhibits a high level of ill-posedness in the mathematical problem. The minimization of the objective function by the different candidate methods indicates a failure of some of the gradient-based methods, such as Newton Conjugate Gradient and BFGS. Others, such as Nelder-Mead, Polak-Ribière, L-BFGS-B, Truncated Newton Conjugate Gradient, and Trust-Region Newton Conjugate Gradient, show partial convergence. Further ones, such as Levenberg-Marquardt and LeastSquareRoot, yield parameter solutions that lie outside the physical limits. Moreover, there is a significant computational demand for genetic optimization methods, such as Differential Evolution and Basin-Hopping, as well as for brute-force methods. The deterministic Sequential Least Squares Programming method and the stochastic Bayesian inference approach give the best optimization results. Keywords: automated calibration of a hydro-morphological dynamic numerical model, Bayesian inference theory, deterministic optimization methods.
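The synthetic-measurement setup described above can be sketched with a toy forward model in place of the hydro-morphological simulator. The sketch below is illustrative only: a simple exponential model replaces the bend-channel simulation, and a plain finite-difference gradient descent and a uniform random search stand in for the deterministic and stochastic candidate methods; parameter values and bounds are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "measurements" generated from known parameters, so that an exact
# parameter solution is guaranteed to exist (as in the study's setup).
true_params = np.array([1.5, 0.3])
xs = np.linspace(0.0, 1.0, 50)
observed = true_params[0] * np.exp(-true_params[1] * xs)

def objective(p):
    """Sum-of-squares mismatch between synthetic measurements and simulation."""
    simulated = p[0] * np.exp(-p[1] * xs)
    return np.sum((observed - simulated) ** 2)

def gradient_descent(p0, lr=0.01, steps=3000, h=1e-6):
    """Deterministic candidate: finite-difference gradient descent."""
    p = np.array(p0, dtype=float)
    for _ in range(steps):
        grad = np.array([(objective(p + h * e) - objective(p - h * e)) / (2 * h)
                         for e in np.eye(2)])
        p -= lr * grad
    return p

def random_search(bounds, trials=5000):
    """Stochastic candidate: uniform random search within physical limits."""
    best, best_f = None, np.inf
    for _ in range(trials):
        p = np.array([rng.uniform(lo, hi) for lo, hi in bounds])
        f = objective(p)
        if f < best_f:
            best, best_f = p, f
    return best

p_grad = gradient_descent([1.0, 1.0])
p_rand = random_search([(0.1, 5.0), (0.01, 2.0)])
```

On this well-posed toy problem both candidates recover the known parameters; the ill-posedness reported for the real hydro-morphological model is precisely what breaks this comfortable picture for several of the gradient-based methods.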
Objective calibration of numerical weather prediction models
Voudouri, A.; Khain, P.; Carmona, I.; Bellprat, O.; Grazzini, F.; Avgoustoglou, E.; Bettems, J. M.; Kaufmann, P.
2017-07-01
Numerical weather prediction (NWP) and climate models use parameterization schemes for physical processes, which often include free or poorly confined parameters. Model developers normally calibrate the values of these parameters subjectively to improve the agreement of forecasts with available observations, a procedure referred to as expert tuning. A practicable objective multivariate calibration method built on a quadratic meta-model (MM), previously applied to a regional climate model (RCM), has been shown to be at least as good as expert tuning. Based on these results, an approach to implementing the methodology for an NWP model is presented in this study. Challenges in transferring the methodology from RCM to NWP are not restricted to the use of higher resolution and different time scales: the sensitivity of NWP model quality with respect to the model parameter space has to be clarified, and the overall procedure has to be optimized in terms of the computing resources required to calibrate an NWP model. Three free model parameters, mainly affecting the turbulence parameterization schemes, were initially selected with respect to their influence on variables associated with daily forecasts, such as daily minimum and maximum 2 m temperature as well as 24 h accumulated precipitation. Preliminary results indicate that the calibration is both affordable in terms of computer resources and meaningful in terms of improved forecast quality. In addition, the proposed methodology has the advantage of being a replicable procedure that can be applied when an updated model version is launched and/or to customize the same model implementation over different climatological areas.
Mathematical Properties of Numerical Inversion for Jet Calibrations
Cukierman, Aviv
2016-01-01
Numerical inversion is a general detector calibration technique that is independent of the underlying spectrum. This procedure is formalized and important statistical properties are presented, using high-energy jets at the Large Hadron Collider as an example setting. In particular, numerical inversion is inherently biased, and common approximations to the calibrated jet energy tend to overestimate the resolution. Analytic approximations to the closure and calibrated resolutions are demonstrated to effectively predict the full forms under realistic conditions. Finally, extensions of numerical inversion are presented which can reduce the inherent biases. These methods will become increasingly important as resolution degrades at low jet energies due to the much higher instantaneous luminosities expected in the near future.
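The core of the numerical-inversion procedure can be sketched in a few lines. The sketch below uses an entirely hypothetical response curve (not an ATLAS or CMS calibration): it tabulates the mean reconstructed energy as a function of true energy and inverts that monotonic mapping so a calibration can be applied as a function of the reconstructed energy alone. The closure check holds exactly for the mean; the bias discussed above arises when the same correction is applied event by event to a smeared spectrum.

```python
import numpy as np

def mean_reco(e_true):
    """Hypothetical average detector response: <E_reco> = r(E) * E,
    with jets reconstructed increasingly low at low energy."""
    r = 0.7 + 0.05 * np.log(e_true)
    return r * e_true

# Tabulate the mapping E_true -> <E_reco> on a grid and invert it numerically.
e_grid = np.linspace(20.0, 2000.0, 4000)     # GeV, monotonic response assumed
reco_grid = mean_reco(e_grid)

def calibrate(e_reco):
    """Numerical inversion: map a reconstructed energy back to the true scale
    by interpolating the inverse of the tabulated response mapping."""
    return np.interp(e_reco, reco_grid, e_grid)

# Closure check: calibrating the *mean* reco energy recovers the true energy.
e_true = 100.0
e_cal = calibrate(mean_reco(e_true))
```

Because `np.interp` requires a monotonically increasing abscissa, the method presupposes that the mean response mapping is invertible over the energy range of interest.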
Numerical Analysis of a Radiant Heat Flux Calibration System
Jiang, Shanjuan; Horn, Thomas J.; Dhir, V. K.
1998-01-01
A radiant heat flux gage calibration system exists in the Flight Loads Laboratory at NASA's Dryden Flight Research Center. This calibration system must be well understood if the heat flux gages calibrated in it are to provide useful data during radiant heating ground tests or flight tests of high speed aerospace vehicles. A part of the calibration system characterization process is to develop a numerical model of the flat plate heater element and heat flux gage, which will help identify errors due to convection, heater element erosion, and other factors. A 2-dimensional mathematical model of the gage-plate system has been developed to simulate the combined problem involving convection, radiation and mass loss by chemical reaction. A fourth order finite difference scheme is used to solve the steady state governing equations and determine the temperature distribution in the gage and plate, incident heat flux on the gage face, and flat plate erosion. Initial gage heat flux predictions from the model are found to be within 17% of experimental results.
Characterization of the Humidity Calibration Chamber by Numerical Simulations
Salminen, J.; Sairanen, H.; Grahn, P.; Högström, R.; Lakka, A.; Heinonen, M.
2017-07-01
At the Centre for Metrology MIKES of VTT Technical Research Centre of Finland (VTT MIKES), we have been developing a humidity calibration apparatus for radiosondes within the EMRP project Metrology for Essential Climate Variables. The minimum air temperature and absolute humidity are -80 °C and 2.576 × 10^{-4} g·m^{-3} (corresponding to a dew-point temperature of -90 °C), respectively. Recent developments extend the apparatus's pressure operating range down to 7 hPa (abs). When operating in such dry conditions, calibration efficiency is strongly limited by the humidity stabilization time in the measurement chamber: because the water vapor pressure is very low, the adsorption and desorption of water molecules at the chamber walls have a significant effect on the spatial and temporal humidity differences in the chamber. Inhomogeneity of the humidity field inside the calibration chamber increases the calibration uncertainty. In order to understand how parameters such as pressure, temperature, inflow speed, and chamber geometry affect the stabilization time of the humidity field, computational fluid dynamics simulations were developed using COMSOL software. Fluid velocity and pressure, water vapor diffusion, temperature, and the adsorption/desorption of water molecules on the chamber walls were included in the simulations. Adsorption and desorption constants for water on the measurement chamber wall were determined experimentally. The results show that the flow speed and the surface area are the dominant parameters affecting the stabilization time of a calibration chamber. It was also found that a more homogeneous water vapor concentration field is obtained at low pressures.
Numerical simulation and experimental verification of extended source interferometer
Hou, Yinlong; Li, Lin; Wang, Shanshan; Wang, Xiao; Zang, Haijun; Zhu, Qiudong
2013-12-01
Compared with the classical point source interferometer, an extended source interferometer can suppress coherent noise from the environment and the system, decrease dust-scattering effects, and reduce the high-frequency error of the reference surface. Numerical simulation and experimental verification of an extended source interferometer are discussed in this paper. In order to provide guidance for the experiment, the extended source interferometer is modeled using the optical design software Zemax. Matlab code automatically adjusts the field parameters of the optical system and conveniently collects a series of interferometric data; Dynamic Data Exchange (DDE) is used to connect Zemax and Matlab. The visibility of the interference fringes is then calculated by summing the collected interferometric data. Alongside the simulation, an experimental platform for the extended source interferometer was established, consisting of an extended source, an interference cavity, and an image collection system. The decrease of the high-frequency error of the reference surface and of the coherent noise of the environment is verified. The relation between the spatial coherence and the size, shape, and intensity distribution of the extended source is also verified through analysis of the fringe visibility. The simulation results agree with those from the real extended source interferometer, showing that the model reproduces the actual optical interference of the extended source interferometer quite well. The simulation platform can therefore be used to guide experiments with interferometers based on various extended sources.
Numerical results for extended field method applications. [thin plates
Donaldson, B. K.; Chander, S.
1973-01-01
This paper presents the numerical results obtained when a new method of analysis, called the extended field method, was applied to several thin plate problems including one with non-rectangular geometry, and one problem involving both beams and a plate. The numerical results show that the quality of the single plate solutions was satisfactory for all cases except those involving a freely deflecting plate corner. The results for the beam and plate structure were satisfactory even though the structure had a freely deflecting corner.
Ku, L.P.; Hendel, H.W.; Liew, S.L.; Strachan, J.D.
1990-02-01
Accurate determination of fusion neutron yields on TFTR requires that the neutron detectors be absolutely calibrated in situ, using neutron sources of known strengths. For such calibrations, numerical simulations of neutron transport can be powerful tools in the design of experiments and the study of measurement results. On TFTR, 'numerical calibration experiments' have been frequently used to complement actual detector calibrations. We present the calculational approaches and transport models used in these numerical simulations, and summarize the results from simulating the calibration of {sup 235}U fission detectors carried out in December 1988. 12 refs., 9 figs., 6 tabs.
Parameter Calibration and Numerical Analysis of Twin Shallow Tunnels
Paternesi, Alessandra; Schweiger, Helmut F.; Scarpelli, Giuseppe
2017-05-01
Prediction of displacements and lining stresses in underground openings is a challenging task, primarily because of the complexity of the ground-structure interaction problem and secondarily because of the difficulty of obtaining a reliable geotechnical characterisation of the soil or rock. In any case, especially when class A predictions fail to forecast the system behaviour, performing class B or C predictions, which rely on a higher level of knowledge of the surrounding ground, can be a useful resource for identifying and reducing model deficiencies. The case study presented in this paper deals with the construction of twin-tube shallow tunnels excavated in a stiff, fine-grained deposit. The work initially focuses on the calibration of the ground parameters against experimental data, which, together with the choice of an appropriate constitutive model, plays a major role in the assessment of tunnelling-induced deformations. Since two-dimensional analyses require initial assumptions to take the effect of the 3D excavation into account, three-dimensional finite element analyses were preferred. Comparisons between monitoring data and the results of the numerical simulations are provided. The available field data include displacement and deformation measurements for both the ground and the tunnel lining.
Extended Calibration Technique of a Four-Hole Probe for Three-Dimensional Flow Measurements
Suresh Munivenkatareddy
2016-01-01
The present paper reports the development of a non-nulling calibration technique for a cantilever-type cylindrical four-hole probe of 2.54 mm diameter for measuring three-dimensional flows. The probe is calibrated at a probe Reynolds number of 9525. The probe's operative angular range is extended using a zonal method, dividing it into three zones: center, left, and right. Different calibration coefficients are defined for each zone. The attainable angular range achieved using the zonal method is ±60 degrees in the yaw plane and -50 to +30 degrees in the pitch plane. Sensitivity analysis of all four calibration coefficients shows that the probe's pitch sensitivity is lower than its yaw sensitivity in the center zone, and the extended left and right zones have lower sensitivity than the center zone. In addition, errors due to the data reduction program for the probe are presented. The errors are found to be reasonably small in all three zones; however, the errors in the extended left and right zones have slightly larger magnitudes than those in the center zone.
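The zone-selection logic at the heart of a zonal method can be sketched as follows. This is a hypothetical illustration, not the paper's actual coefficient definitions: it assumes the zone is chosen from the raw hole pressures (at large flow angles the centre hole no longer reads the highest pressure), after which that zone's own calibration coefficient set would be applied.

```python
def select_zone(p_left, p_center, p_right):
    """Zonal lookup sketch for a four-hole probe.

    Assumption for illustration: when the centre hole reads the highest
    pressure the flow angle is moderate (center zone); otherwise the probe
    is yawed far enough that the left or right zone's calibration
    coefficients must be used instead.
    """
    if p_center >= p_left and p_center >= p_right:
        return "center"
    return "left" if p_left > p_right else "right"

zone = select_zone(p_left=1.0, p_center=2.0, p_right=0.5)
```

In a full data-reduction program, each zone would carry its own yaw, pitch, total-pressure, and static-pressure coefficient maps, interrogated only after this zone decision.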
A DIGITAL CALIBRATION ALGORITHM WITH VARIABLE-AMPLITUDE DITHERING FOR DOMAIN-EXTENDED PIPELINE ADCS
Ting Li
2014-02-01
The pseudorandom noise dither (PN dither) technique is used to measure the gain errors of a domain-extended pipeline analog-to-digital converter (ADC) and to calibrate them digitally, while the digital error correction technique corrects the comparator offsets through the use of redundancy bits. However, these techniques suffer from three disadvantages: slow convergence speed, reduction of the amplitude of the transmitted signal, and reduction of the redundancy space. A digital calibration algorithm with variable-amplitude dithering for domain-extended pipeline ADCs is used in this research to overcome these disadvantages. The proposed algorithm is implemented in a 12-bit, 100 MS/s sample-rate pipeline ADC. The simulation results show improvement in both static and dynamic performance after calibration. Moreover, the convergence speed is much faster.
Raster Scanning the Crab Nebula to Produce an Extended VHE Calibration Source
Bird, Ralph
2015-01-01
The Crab Nebula has long been the standard reference point source for very-high-energy (VHE, E $>$100 GeV) gamma-ray observatories such as VERITAS. It has enabled testing and improvement of analysis methods, validation of techniques, and has served as a calibration source. No comparable extended source is known with a high, constant flux and well understood morphology. In order to artificially generate such a source, VERITAS has performed raster scans across the Crab Nebula. By displacing the source within the field-of-view in a known pattern, it is possible to generate an extended calibration source for verification of extended source analysis techniques. The method as well as early results of this novel technique are presented.
Calibration of a numerical ionospheric model with EISCAT observations
P.-L. Blelly
A set of EISCAT UHF and VHF observations is used for calibrating a coupled fluid-kinetic model of the ionosphere. The data gathered in the period 1200-2400 UT on 24 March 1995 had various intervals of interest for such a calibration. The magnetospheric activity was very low during the afternoon, allowing for a proper examination of a case of quiet ionospheric conditions. The radars entered the auroral oval just after 1900 UT: a series of dynamic events, probably associated with rapidly moving auroral arcs, was observed until after 2200 UT. No attempt was made to model the dynamical behaviour during the 1900-2200 UT period. In contrast, the period 2200-2400 UT was characterised by quite steady precipitation; this latter period was therefore chosen for calibrating the model during precipitation events. The adjustment of the model to the four primary parameters observed by the radars (namely the electron concentration and temperature and the ion temperature and velocity) needed external inputs (solar fluxes and the magnetic activity index) and adjustments of a neutral atmospheric model in order to reach good agreement. It is shown that for the quiet ionosphere, only slight adjustments of the neutral atmosphere models are needed. In contrast, matching the observations during the precipitation event requires strong departures from the model, both for atomic oxygen and for hydrogen. However, it is argued that this could well be the result of inadequately representing the vibrational states of N_{2} during precipitation events, and that these factors have to be considered only as ad hoc corrections.
Efficiency calibration of an extended-range Ge detector by a detailed Monte Carlo simulation
Peyres, V. [Metrologia de Radiaciones Ionizantes, CIEMAT, Avda. Complutense 22, Madrid 28040 (Spain)], E-mail: Virginia.peyres@ciemat.es; Garcia-Torano, E. [Metrologia de Radiaciones Ionizantes, CIEMAT, Avda. Complutense 22, Madrid 28040 (Spain)
2007-09-21
A Monte Carlo simulation has been employed to calibrate an extended-range Ge detector in the energy range from 14 to 1800 keV. A set of point sources of monoenergetic and multi-gamma emitters was measured at 15 cm from the detector window, providing 26 experimental values to which the results of the simulations are compared. Discrepancies between simulated and experimental values are within 1 standard deviation, and relative differences are, in most cases, below 1%.
Calibrated hot deck imputation for numerical data under edit restrictions
de Waal, A.G.; Coutinho, Wieger; Shlomo, Natalie
2017-01-01
We develop a non-parametric imputation method for item non-response based on the well-known hot-deck approach. The proposed imputation method is developed for imputing numerical data in such a way that all record-level edit rules are satisfied and previously estimated or known totals are exactly preserved.
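The edit-respecting donor selection at the core of such a method can be sketched as follows. This is a simplified illustration of the hot-deck idea only: each missing value borrows the value of the nearest responding donor (by an auxiliary variable) whose value also passes the record-level edit rule. The calibration step that exactly preserves known totals, which is the paper's main contribution, is omitted here; the example data and edit rule are hypothetical.

```python
import numpy as np

def hot_deck_impute(values, missing_mask, aux, edit_ok):
    """Hot-deck imputation sketch under record-level edit rules.

    For each missing record, candidate donors are visited in order of
    closeness in the auxiliary variable; the first donor whose value
    satisfies edit_ok(record_aux, donor_value) supplies the imputation.
    """
    values = values.copy()
    donors = np.flatnonzero(~missing_mask)
    for i in np.flatnonzero(missing_mask):
        for j in donors[np.argsort(np.abs(aux[donors] - aux[i]))]:
            if edit_ok(aux[i], values[j]):
                values[i] = values[j]
                break
    return values

aux = np.array([1.0, 2.0, 3.0, 4.0, 5.0])          # auxiliary variable
vals = np.array([10.0, 20.0, np.nan, 40.0, 50.0])  # item with one non-response
missing = np.isnan(vals)
# Hypothetical edit rule: imputed value must lie within 15 units of 10*aux
imputed = hot_deck_impute(vals, missing, aux, lambda a, v: abs(v - 10 * a) <= 15)
```

Because the donor's observed value is transplanted as-is, the imputed record automatically inherits a plausible, edit-consistent value rather than a model prediction.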
Xiao-song HU; Feng-chun SUN; Xi-ming CHENG
2011-01-01
In this paper, an efficient model structure composed of a second-order resistance-capacitance network and a simple analytical open-circuit-voltage versus state-of-charge (SOC) map is applied to characterize the voltage behavior of a lithium iron phosphate battery for electric vehicles (EVs). As a result, the overpotentials of the battery can be depicted using a second-order circuit network, and the model parameterization can be realized under any battery loading profile, without a special characterization experiment. In order to ensure good robustness, extended Kalman filtering is adopted to implement the calibration process recursively. The linearization involved in the calibration algorithm is realized through recurrent derivatives in a recursive form. Validation results show that the recursively calibrated battery model can accurately delineate the battery voltage behavior under two different transient power operating conditions. A comparison with a first-order model indicates that the recursively calibrated second-order model has comparable accuracy over a major part of the battery SOC range and better performance when the SOC is relatively low.
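The forward side of such a model, the second-order RC network itself, can be sketched with explicit Euler time-stepping. All parameter values below are hypothetical round numbers, and the open-circuit voltage is held constant for simplicity, whereas the paper maps OCV against SOC.

```python
import numpy as np

def simulate_2rc(i_load, dt, ocv, r0, r1, c1, r2, c2):
    """Terminal voltage of a second-order RC equivalent-circuit battery model.

    v = OCV - i*R0 - v1 - v2, where each RC branch obeys
    dv_k/dt = -v_k / (R_k * C_k) + i / C_k   (discharge current positive).
    The two branches capture fast and slow overpotential dynamics.
    """
    v1 = v2 = 0.0
    out = []
    for i in i_load:
        v1 += dt * (-v1 / (r1 * c1) + i / c1)
        v2 += dt * (-v2 / (r2 * c2) + i / c2)
        out.append(ocv - i * r0 - v1 - v2)
    return np.array(out)

# Hypothetical profile: a 2 A discharge pulse for 300 s, then 300 s of rest
current = np.concatenate([np.full(300, 2.0), np.zeros(300)])
voltage = simulate_2rc(current, dt=1.0, ocv=3.3,
                       r0=0.01, r1=0.02, c1=1000.0, r2=0.05, c2=10000.0)
```

During the pulse the voltage sags as the overpotentials build up; during rest it relaxes back toward OCV. It is this simulated voltage that the EKF compares against the measured voltage to calibrate the circuit parameters recursively.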
Mitja Morgut
2012-01-01
The numerical predictions of the cavitating flow around two model-scale propellers in uniform inflow are presented and discussed. The simulations are carried out using a commercial CFD solver. The homogeneous model is used, and the influence of three widespread mass transfer models on the accuracy of the numerical predictions is evaluated. The mass transfer models in question share the common feature of employing empirical coefficients to adjust the mass transfer rate from water to vapour and back, which can affect the stability and accuracy of the predictions. Thus, for a fair and congruent comparison, the empirical coefficients of the different mass transfer models are first properly calibrated using an optimization strategy. The numerical results obtained with the three calibrated mass transfer models are very similar to each other for the two selected model-scale propellers. Nevertheless, a tendency to overestimate the cavity extension is observed, and consequently the thrust is not properly predicted in the most severe operational conditions.
Design, calibration and tests of an extended-range Bonner sphere spectrometer
Mitaroff, Angela; Silari, Marco
2001-01-01
Stray radiation fields outside the shielding of hadron accelerators are of a complex nature. They consist of a multiplicity of radiation components (neutrons, photons, electrons, pions, muons, ...) which extend over a wide range of energies. Since the dose equivalent in these mixed fields is mainly due to neutrons, neutron dosimetry is a particularly important task. The neutron energy in these fields ranges from thermal up to several hundreds of MeV, making dosimetry difficult. A well-known instrument for measuring neutron energy distributions from thermal energies up to about E=10 MeV is the Bonner sphere spectrometer (BSS). It consists of a set of moderating polyethylene spheres of different radii, with a thermal neutron counter in the centre. Each detector (sphere plus counter) has a maximum response at a certain energy depending on its size, but the overall response of the conventional BSS drops sharply between E=10-20 MeV. This thesis focuses on the development, the calibration and tests...
Calibration of Numerical Model for Shoreline Change Prediction Using Satellite Imagery Data
Sigit Sutikno
2015-12-01
This paper presents a method for calibrating a numerical model for shoreline change prediction using satellite imagery data on a muddy beach. Tanjung Motong beach, a muddy beach suffering high abrasion on Rangsang Island, Riau province, Indonesia, was chosen as the study area. The primary numerical modeling tool used in this research was GENESIS (GENEralized Model for Simulating Shoreline change), which has been successfully applied in many case studies of shoreline change on sandy beaches. The model was calibrated using coastlines extracted from two satellite imagery datasets, Landsat-5 TM and Landsat-8 OLI/TIRS. The extracted coastline data were analyzed with the DSAS (Digital Shoreline Analysis System) tool to obtain the rate of shoreline change from 1990 to 2014. The main purpose of the calibration process was to find appropriate values for the K1 and K2 coefficients so that the predicted shoreline change had an acceptable correlation with the output of the satellite data processing. The results show that the shoreline change prediction had a good correlation with the historical evidence at Tanjung Motong. This means that GENESIS is applicable to shoreline prediction not only on sandy beaches but also on muddy beaches.
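The rate-of-change computation that DSAS-style tools perform on the extracted coastlines can be sketched as a simple end-point rate per transect. The cross-shore distances below are hypothetical, not the Tanjung Motong data; the sign convention (negative = erosion/abrasion) follows common DSAS usage.

```python
import numpy as np

def end_point_rate(d_old, d_new, year_old, year_new):
    """End-point rate of shoreline change per transect (m/yr).

    d_old and d_new are cross-shore distances from a fixed baseline to the
    shoreline at each transect; negative rates indicate erosion.
    """
    return (np.asarray(d_new) - np.asarray(d_old)) / (year_new - year_old)

# Hypothetical baseline-to-shoreline distances (m) at 5 transects
d_1990 = np.array([120.0, 135.0, 150.0, 160.0, 155.0])
d_2014 = np.array([ 96.0, 110.0, 140.0, 150.0, 148.0])
rates = end_point_rate(d_1990, d_2014, 1990, 2014)
```

Rates of this kind, one per transect, form the observation set against which the model's transport coefficients are tuned until the predicted shoreline change correlates acceptably with the satellite-derived record.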
Pancotti, Anthony P; Gilpin, Matthew; Hilario, Martin S
2012-03-01
With the progression of high-power electric propulsion and high thrust-to-power propulsion systems, thrust stand diagnostics require high-fidelity calibration systems that are accurate over a large range of thrust levels. Multi-mode and variable-Isp propulsion devices also require that a single stand be capable of measuring thrust impulses from 10s of uNs to 100s of mNs. While torsional thrust stand mechanics and diagnostics are capable of operating over such a large range, current pulsed calibration schemes are typically limited to a few orders of magnitude of dynamic range. In order to develop a stand with sufficient dynamic range, two separate calibration methods were examined and compared to create a combined system. Electrostatic fin (ESF) and piezoelectric impact hammer (PIH) calibration systems were simultaneously tested on a large-scale torsional thrust stand. The use of these two methods allowed the stand to be calibrated over four orders of magnitude, from 0.01 mNs to 750 mNs. The ESF system produced linear results within 0.52% from 0.01 mNs to 20 mNs, while the PIH system extended this calibration range from 10 mNs to 750 mNs with an error of 0.99%. The two calibration methods agreed within 4.51% over their overlapping range of 10-20 mNs.
Langevin, Christian D.; Hughes, Joseph D.
2010-01-01
A model with a small amount of numerical dispersion was used to represent saltwater intrusion in a homogeneous aquifer for a 10-year historical calibration period with one groundwater withdrawal location followed by a 10-year prediction period with two groundwater withdrawal locations. Time-varying groundwater concentrations at arbitrary locations in this low-dispersion model were then used as observations to calibrate a model with a greater amount of numerical dispersion. The low-dispersion model was solved using a Total Variation Diminishing numerical scheme; an implicit finite difference scheme with upstream weighting was used for the calibration simulations. Calibration focused on estimating a three-dimensional hydraulic conductivity field that was parameterized using a regular grid of pilot points in each layer and a smoothness constraint. Other model parameters (dispersivity, porosity, recharge, etc.) were fixed at the known values. The discrepancy between observed and simulated concentrations (due solely to numerical dispersion) was reduced by adjusting hydraulic conductivity through the calibration process. Within the transition zone, hydraulic conductivity tended to be lower than the true value for the calibration runs tested. The calibration process introduced lower hydraulic conductivity values to compensate for numerical dispersion and improve the match between observed and simulated concentration breakthrough curves at monitoring locations. Concentrations were underpredicted at both groundwater withdrawal locations during the 10-year prediction period.
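The numerical dispersion introduced by upstream weighting, the phenomenon the calibration had to compensate for, is easy to demonstrate on a 1D advection problem. The sketch below is a generic illustration, not the study's SEAWAT-class model: a first-order upwind scheme transports a sharp concentration front, and its truncation error smears the front exactly like an artificial dispersion term.

```python
import numpy as np

def upwind_advect(c, courant, steps):
    """First-order upwind advection of a concentration profile.

    The scheme's leading truncation error behaves like a diffusion term,
    so a sharp front is progressively smeared: this is numerical
    dispersion, analogous to the smearing of the saltwater transition
    zone by upstream weighting in the calibration model.
    """
    c = c.copy()
    for _ in range(steps):
        c[1:] = c[1:] - courant * (c[1:] - c[:-1])
    return c

n = 200
c0 = np.where(np.arange(n) < 50, 1.0, 0.0)  # sharp front at cell 50
# Courant number 0.5 for 100 steps: front centre advects 50 cells, to cell 100
c_num = upwind_advect(c0, courant=0.5, steps=100)
```

A Total Variation Diminishing scheme would keep the front much sharper at the same resolution, which is why the TVD run could serve as the low-dispersion "truth" in the calibration experiment.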
Numerical simulation of proton distribution with electric double layer in extended nanospaces.
Chang, Chih-Chang; Kazoe, Yutaka; Morikawa, Kyojiro; Mawatari, Kazuma; Yang, Ruey-Jen; Kitamori, Takehiko
2013-05-01
Understanding the properties of liquids confined in extended nanospaces (10-1000 nm) is crucial for nanofluidics. Because of confinement and surface effects, water may have specific structures and reveal unique physicochemical properties. Recently, our group developed a super-resolution laser-induced fluorescence (LIF) technique to visualize the proton distribution within the electrical double layer (EDL) in a fused-silica extended nanochannel (Kazoe, Y.; Mawatari, K.; Sugii, Y.; Kitamori, T. Anal. Chem. 2011, 83, 8152). In this study, based on the coupling of Poisson-Boltzmann theory and a site-dissociation model, the effect of the specific water properties in an extended nanochannel on the formation of the EDL was investigated by comparing numerical results with our previous experimental results. The numerical proton distribution obtained with a lower dielectric constant of approximately 17 was shown to be in good agreement with our experimental results, which confirms our previous observation of a lower water permittivity in an extended nanochannel. In addition, a higher silanol deprotonation rate in extended nanochannels was also demonstrated, which is supported by our previous NMR and streaming current measurements. The present results will be beneficial for a further understanding of interfacial chemistry, fluid physics, and electrokinetics in extended nanochannels.
Behaviour of mudflows realized in a laboratory apparatus and relative numerical calibration
Brezzi, Lorenzo; Gabrieli, Fabio; Kaitna, Roland; Cola, Simonetta
2016-04-01
Numerical simulations are nowadays indispensable tools for reproducing phenomena such as earth flows, debris flows, and mudflows. One of the most difficult and problematic phases is the choice and calibration of the parameters to be included in a model at the real scale. It is therefore useful to start from laboratory experiments that simplify the case study as much as possible, with the aim of reducing the uncertainties related to the triggering and propagation of a real flow. In this way, the geometry of the problem and the identification of the triggering mass are well known and constrained in the experimental tests as in the numerical simulations, and the focus of the study can be moved to the material parameters. This article analyzes the behavior of different mixtures of water and kaolin flowing in a laboratory channel. The simple experimental apparatus consists of a 10 dm3 prismatic container that discharges the material into a channel 2 m long and 0.16 m wide. The chute base was roughened with glued sand and inclined at an angle of 21°. Initially, we evaluated the run-out lengths and the spread and shape of the deposit for five different mixtures. A large quantity of information was obtained from three laser sensors attached to the channel and from photogrammetry, which yields a 3D model of the deposit shape at the end of the flow. Subsequently, we reproduced these physical phenomena using the numerical model Geoflow-SPH (Pastor et al., 2008; 2014), governed by a Bingham rheological law (O'Brien & Julien, 1988), and calibrated the different tests by back-analysis to assess the optimum parameters. The final goal was to understand how the parameters vary with the kaolin content of the mixtures.
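The Bingham rheological law that the back-analysis calibrates has a very compact form. The sketch below states it directly; the yield stress and plastic viscosity values are hypothetical placeholders, not the calibrated parameters of any of the five water-kaolin mixtures.

```python
import numpy as np

def bingham_stress(shear_rate, tau_y, mu_b):
    """Shear stress of a Bingham fluid.

    Below the yield stress tau_y the material does not flow; once flowing,
    tau = tau_y + mu_b * gamma_dot, i.e. stress grows linearly with shear
    rate above the yield threshold. It is tau_y and mu_b that a
    back-analysis adjusts for each kaolin concentration.
    """
    shear_rate = np.asarray(shear_rate, dtype=float)
    return np.where(shear_rate > 0.0, tau_y + mu_b * shear_rate, 0.0)

# Hypothetical parameters for a water-kaolin mixture (Pa and Pa*s)
tau = bingham_stress([0.0, 1.0, 10.0], tau_y=50.0, mu_b=5.0)
```

Increasing the kaolin content typically raises both parameters, which is the dependence the calibrated tests are meant to quantify.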
Numerical modeling of concrete hydraulic fracturing with extended finite element method
REN QingWen; DONG YuWen; YU TianTang
2009-01-01
The extended finite element method (XFEM) is a new numerical method for modeling discontinuities. Numerical modeling of concrete hydraulic fracturing by XFEM is explored. By building the virtual work principle of the fracture problem considering water pressure on the crack surface, the governing equations of XFEM for hydraulic fracture modeling are derived. The implementation of XFEM for hydraulic fracturing is presented. Finally, the method is verified with two examples, and the advantages of XFEM for hydraulic fracturing analysis are demonstrated.
Self-calibrating phase measurement based on diffraction theory and numerical simulation experiments
Zhou, Liao; Qi, Qiu; Hao, Xian
2015-02-01
To achieve a full-aperture, diffraction-limited image, a telescope's segmented primary mirror must be properly phased. Furthermore, it is crucial to detect the piston errors between individual segments with high accuracy. According to diffraction imaging theory, a symmetrically shaped aperture with an arbitrarily positioned entrance pupil focuses on the optical axis with a symmetrical diffraction pattern. By selecting a single mirror as a reference and regarding the center of its diffraction image as the calibration point, a function can be derived expressing the relationship between the piston error and the distance from the center of the interference image to the calibration point; this relationship is linear within one-half wavelength. These theoretical results are shown to be consistent with the results of a numerical simulation. Using this method, not only the piston error but also the tip-tilt error can be detected. This method is simple and effective; it yields high-accuracy measurements and requires less computation time.
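The linear piston-to-shift relationship within one-half wavelength can be reproduced with a minimal two-beam fringe simulation. The HeNe wavelength and the 100 µm fringe period below are assumptions for illustration, not values from the paper; the detector window is limited to one fringe period so the bright fringe is unique.

```python
import numpy as np

lam = 0.633e-6                      # assumed wavelength (HeNe), for illustration
period = 1.0e-4                     # assumed fringe period on the detector (m)
k_f = 2 * np.pi / period
x = np.linspace(-period / 2, period / 2, 20001)   # one period: unique maximum

def fringe_center(piston):
    """Locate the bright fringe of a two-beam pattern whose phase lag is set
    by the piston optical path difference between the two segments."""
    delta = 2 * np.pi * piston / lam
    intensity = 1.0 + np.cos(k_f * x - delta)
    return x[np.argmax(intensity)]

# piston OPD swept over one-half wavelength, the stated linear range
pistons = np.linspace(-lam / 4, lam / 4, 21)
shifts = np.array([fringe_center(p) for p in pistons])
slope, _ = np.polyfit(pistons, shifts, 1)   # expected slope: period / lam
```

The fitted slope matches the analytic value period/lam, confirming the linear piston-to-displacement mapping inside the half-wavelength capture range.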
Studies on numerical site calibration over complex terrain for wind turbines
MATSUSHITA, Daisuke; MATSUMIYA, Hikaru; HARA, Yoshinori; WATANABE, Satoshi; FURUKAWA, Akinori
2010-01-01
The estimation of wind turbine performance over complex terrain is very difficult because the standard IEC 61400-12 is adapted to flat or slightly complex topography. Moreover, the cost of constructing a meteorological mast increases as wind turbines are scaled up. We have proposed a numerical site calibration (NSC) technique to estimate the inflow velocity at the position of the wind turbine, using a CFD tool to calculate the flow field around the site. The present paper identifies problems in the NSC procedure, in which a commercial nonlinear CFD tool is used, and describes an improvement method to obtain more accurate results. It is shown that the wind turbine performance estimated using the wind speed at the meteorological mast gives good results for annual energy production.
Numerical model calibration with the use of an observed sediment mobility mapping technique.
Javernick, Luke; Redolfi, Marco; Bertoldi, Walter
2017-04-01
The use and accuracy of two-dimensional numerical models have greatly increased over the last decade, partly due to the ease of topographic data access and acquisition. This is largely due to the surge in survey technologies such as GPS, LiDAR, terrestrial laser scanners (TLS), and Structure-from-Motion (SfM). As many studies have shown, topography is often the greatest influence on a model's predictive accuracy. Recently, studies have shown that the use of accurate topographic datasets for numerical modeling yields appreciable accuracy in both depth and inundation patterns when compared to observed data, even in highly complicated planforms such as shallow braided rivers. Model calibration is typically limited by data availability, data quality, and the user's experience. Hydraulic calibration with a fixed bed often focuses purely on depth predictions using gauge data and, more rarely, spatial depth data, velocity data, and inundation patterns. Morphological models with bed updating and erosion are often calibrated using erosion and deposition patterns, and more rarely consider sediment transport data acquired in the field. Transitioning from a hydraulic to a morphological calibration involves a considerable increase in process complexity, model parameters, assumptions, and sources of error. With morphological observed data limited to documented topographic changes, a model's 'performance' is merely based on replicating results instead of processes, and thus it is difficult to fully evaluate the model's true ability. With the increase in data acquisition and model usage, there is a need to push numerical model testing beyond traditional performance metrics and toward process evaluations. To address this need, instantaneous morphological processes must be evaluated. Flume experiments in a 24 m x 1.6 m wide channel with 1 mm sediment and a 1% slope were run to develop a braided river and fully documented with: i) highly accurate Structure-from-Motion derived topography (average errors
R. G. M. de Andrade
Full Text Available The last four decades were important for the Brazilian highway system. Financial investments were made so it could expand, and many structural solutions for bridges and viaducts were developed. In parallel, there was a significant rise in pathologies in these structures, due to a lack of maintenance procedures. Thus, this paper's main purpose is to create a short-term monitoring plan in order to check the structural behavior of a curved highway concrete bridge in current use. A bridge was chosen as a case study. A hierarchy of six numerical models is shown to validate the bridge's structural behaviour. The data acquired from the monitoring were compared with the finest models so that a calibration could be made.
Extended Commissioning and Calibration of the Dual-Beam Imaging Polarimeter
Masiero, Joseph; Harrington, David; Lin, Haosheng
2008-01-01
In our previous paper (Masiero et al. 2007) we presented the design and initial calibrations of the Dual-Beam Imaging Polarimeter (DBIP), a new optical instrument for the University of Hawaii's 2.2 m telescope on the summit of Mauna Kea, Hawaii. In this follow-up work we discuss our full-Stokes mode commissioning, including crosstalk determination and our typical observing methodology.
An Extended Chaboche's Viscoplastic Law at Finite Strains: Theoretical and Numerical Aspects
R.C.Lin; W.Brocks
2005-01-01
This paper presents a newly extended Chaboche viscoplastic law at finite strains, so that the classical Chaboche theories can be applied to the physical and numerical simulation of metals processing and to the description of the behavior of spatial metal structures. The extension is based on a new dissipation inequality at finite strains. The evolution equations are formulated in terms of the corotational rates of the logarithmic elastic strain and the strain-like internal variable conjugate to the back stress, as well as the material time derivative of the accumulated plastic strain. The stress equation is based on hyperelastic theory; therefore, the possible inconsistency with elasticity caused by hypoelastic equations is completely removed. A set of numerical examples with finite deformations is presented to demonstrate the effectiveness of the new model and numerical algorithms.
Calibration of the 4pi gamma-ray spectrometer using a new numerical simulation approach.
Nafee, Sherif S; Badawi, Mohamed S; Abdel-Moneim, Ali M; Mahmoud, Seham A
2010-09-01
The 4pi gamma-counting system is well suited to the analysis of small environmental samples of low activity because it combines the advantages of low background and high detection efficiency due to the 4pi solid angle. A new numerical simulation approach is proposed for the HPGe well-type detector geometry to calculate the full-energy peak and total efficiencies, as well as to correct for the coincidence summing effect. This method depends on a calculation of the solid angle subtended by the source to the detector at the point of entrance (Abbas, 2006a). The calculations are carried out for non-axial point and cylindrical sources inside the detector cavity. Attenuation of photons within the source itself (self-attenuation), the source container, the detector's end-cap, and the detector's dead-layer materials is also taken into account. At the Belgian Nuclear Research Centre, low-activity aqueous solutions of (60)Co and (88)Y in small vials are routinely used to calibrate a gamma-ray p-type well HPGe detector in the 60-1836 keV energy range. Efficiency values measured under such conditions are in good agreement with those obtained by the numerical simulation.
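The geometric ingredient of such efficiency calculations, the solid angle subtended by the detector, can be checked with a small Monte Carlo sketch. For an on-axis point source and a circular detector face a closed form exists, which makes the comparison easy; the distance and radius below are arbitrary illustration values, not the geometry used in the paper.

```python
import math, random

def disc_solid_angle_mc(d, r, n=400_000, seed=7):
    """Monte Carlo solid angle of a disc of radius r viewed on-axis from a
    point at distance d: sample isotropic directions, count disc crossings."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        z = rng.uniform(-1.0, 1.0)          # cos(polar angle), isotropic
        if z <= 0.0:
            continue                        # ray points away from the disc
        s = math.sqrt(1.0 - z * z)          # sin(polar angle)
        if d * s / z <= r:                  # radial offset where ray meets the plane
            hits += 1
    return 4.0 * math.pi * hits / n

def disc_solid_angle_exact(d, r):
    """Standard closed form for an on-axis point source and a disc."""
    return 2.0 * math.pi * (1.0 - d / math.sqrt(d * d + r * r))
```

Non-axial and extended sources, as treated in the paper, have no such simple closed form, which is exactly why a dedicated numerical approach is needed.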
Calibration of numerical models for small debris flows in Yosemite Valley, California, USA
Bertolo, P.; Wieczorek, G.F.
2005-01-01
This study compares documented debris flow runout distances with numerical simulations in the Yosemite Valley of California, USA, where about 15% of historical slope instability events can be classified as debris flows and debris slides (Wieczorek and Snyder, 2004). To model debris flows in the Yosemite Valley, we selected six streams with evidence of historical debris flows; three of the debris flow deposits have single channels, and the other three split into two or more channels in the fan area. From field observations, all of the debris flows involved coarse material with only very small clay content. We applied the one-dimensional DAN (Dynamic ANalysis) model (Hungr, 1995) and the two-dimensional FLO2D model (O'Brien et al., 1993) to predict and compare the runout distance and velocity of the debris flows observed in the study area. As a first step, we calibrated the parameters for the two codes through back analysis of three debris-flow channels using a trial-and-error procedure, starting with values suggested in the literature. In the second step, we applied the selected values to the other channels in order to evaluate their predictive capabilities. After parameter calibration using three debris flows, we obtained results similar to field observations. We also obtained good agreement between the two models for velocities. Both models are strongly influenced by topography: we used the 30 m cell size DTM available for the study area, which is probably not accurate enough for a highly detailed analysis but can be sufficient for a first screening. © European Geosciences Union 2005, Author(s). This work is licensed under a Creative Commons License.
The Kronig-Penney model extended to arbitrary potentials via numerical matrix mechanics
Pavelich, R. L.; Marsiglio, F.
2015-09-01
The Kronig-Penney model is a common starting point for studying the quantum mechanics of electrons in a confining periodic potential. This model uses a square-well potential; the energies and eigenstates can be obtained analytically for a single well, and then Bloch's theorem allows one to extend these solutions to the periodically repeating potential. In this work, we describe how to obtain simple numerical solutions for the eigenvalues and eigenstates for any confining one-dimensional potential within a unit cell and then extend this procedure, with virtually no extra effort, to the case of arbitrary periodically repeating potentials. In this way, one can study the band structure effects that arise from differently shaped potentials. One of these effects is the electron-hole mass asymmetry; more realistic unit cell potentials generally give rise to higher electron-hole mass asymmetries.
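The single-cell part of this procedure, diagonalizing the Hamiltonian of an arbitrary confining potential in the particle-in-a-box basis, can be sketched as follows (units ħ = m = 1; the well depth and width are arbitrary illustration values, and the extension to periodic potentials via Bloch phases is not shown).

```python
import numpy as np

def spectrum(V, L=1.0, nbasis=40, ngrid=2001):
    """Eigenvalues of H = -(1/2) d^2/dx^2 + V(x) on [0, L] with hard walls,
    expanded in the infinite-square-well basis (hbar = m = 1)."""
    x = np.linspace(0.0, L, ngrid)
    n = np.arange(1, nbasis + 1)
    phi = np.sqrt(2.0 / L) * np.sin(np.outer(n, x) * np.pi / L)  # basis functions
    H = np.diag(0.5 * (n * np.pi / L) ** 2)     # kinetic term is diagonal here
    w = np.full(ngrid, x[1] - x[0])             # trapezoidal quadrature weights
    w[0] = w[-1] = 0.5 * (x[1] - x[0])
    H += (phi * (V(x) * w)) @ phi.T             # <m|V|n> by numerical quadrature
    return np.linalg.eigvalsh(H)

free = spectrum(lambda x: np.zeros_like(x))                              # empty box
well = spectrum(lambda x: np.where(abs(x - 0.5) < 1/6, -50.0, 0.0))      # square well
```

For V = 0 the spectrum reduces to the exact box energies n²π²/2, a convenient sanity check, and any attractive well must lower the ground state below that value.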
Numerical study of rotor-stator interactions in a hydraulic turbine with Foam-extend
Cappato, Romain; Guibault, François; Devals, Christophe; Nennemann, Bernd
2016-11-01
In the development of high-head hydraulic turbines, vibrations are one of the critical problems. In Francis turbines, pressure fluctuations occur at the interface between the blades of the runner and the guide vanes. This rotor-stator interaction can be responsible for fatigue failures and cracks. Although the flow inside the turbomachinery is complex and its unsteadiness makes it difficult to model, the choice of an appropriate setup enables the study of this phenomenon. This study validates a numerical setup of the Foam-extend open-source software for rotor-stator simulations. Pressure fluctuation results show good correspondence with experimental data.
Extended calibration range for prompt photon emission in ion beam irradiation
Bellini, F. [Dipartimento di Fisica, Sapienza Università di Roma, Roma (Italy); INFN Sezione di Roma, Roma (Italy); Boehlen, T.T.; Chin, M.P.W. [CERN, Geneva (Switzerland); Collamati, F. [Dipartimento di Fisica, Sapienza Università di Roma, Roma (Italy); INFN Sezione di Roma, Roma (Italy); De Lucia, E. [Laboratori Nazionali di Frascati dell' INFN, Frascati (Italy); Faccini, R., E-mail: riccardo.faccini@roma1.infn.it [Dipartimento di Fisica, Sapienza Università di Roma, Roma (Italy); INFN Sezione di Roma, Roma (Italy); Ferrari, A. [CERN, Geneva (Switzerland); Lanza, L. [Dipartimento di Fisica, Sapienza Università di Roma, Roma (Italy); INFN Sezione di Roma, Roma (Italy); Mancini-Terracciano, C. [CERN, Geneva (Switzerland); Dipartimento di Fisica, Università Roma Tre, Roma (Italy); Marafini, M. [Museo Storico della Fisica e Centro Studi e Ricerche “E. Fermi”, Roma (Italy); INFN Sezione di Roma, Roma (Italy); Mattei, I. [Dipartimento di Fisica, Università Roma Tre, Roma (Italy); Laboratori Nazionali di Frascati dell' INFN, Frascati (Italy); Morganti, S. [INFN Sezione di Roma, Roma (Italy); Ortega, P.G. [CERN, Geneva (Switzerland); Patera, V. [Dipartimento di Scienze di Base e Applicate per Ingegneria, Sapienza Università di Roma, Roma (Italy); INFN Sezione di Roma, Roma (Italy); Piersanti, L. [Dipartimento di Scienze di Base e Applicate per Ingegneria, Sapienza Università di Roma, Roma (Italy); Laboratori Nazionali di Frascati dell' INFN, Frascati (Italy); Russomando, A. [Center for Life Nano Science@Sapienza, Istituto Italiano di Tecnologia, Roma (Italy); INFN Sezione di Roma, Roma (Italy); Sala, P.R. [INFN Sezione di Milano, Milano (Italy); and others
2014-05-01
Monitoring the dose delivered during proton and carbon ion therapy is still a matter of research. Among the possible solutions, several exploit the measurement of the single photon emission from nuclear decays induced by the irradiation. To fully characterize such emission the detectors need development, since the energy spectrum spans the range above the MeV that is not traditionally used in medical applications. On the other hand, a deeper understanding of the reactions involving gamma production is needed in order to improve the physics models of Monte Carlo codes, relevant for an accurate prediction of the prompt-gamma energy spectrum. This paper describes a calibration technique tailored for the energy range of interest and reanalyzes the data of the interaction of an 80 MeV/u fully stripped carbon ion beam with a poly(methyl methacrylate) target. By adopting the FLUKA simulation with the appropriate calibration and resolution, a significant improvement in the agreement between data and simulation is reported.
Extending calibration-free force measurements to optically-trapped rod-shaped samples
Català, Frederic; Marsà, Ferran; Montes-Usategui, Mario; Farré, Arnau; Martín-Badosa, Estela
2017-02-01
Optical trapping has become an optimal choice for biological research at the microscale due to its non-invasive performance and accessibility for quantitative studies, especially on the forces involved in biological processes. However, reliable force measurements depend on the calibration of the optical traps, which is different for each experiment and hence requires high control of the local variables, especially of the trapped object geometry. Many biological samples have an elongated, rod-like shape, such as chromosomes, intracellular organelles (e.g., peroxisomes), membrane tubules, certain microalgae, and a wide variety of bacteria and parasites. This type of samples often requires several optical traps to stabilize and orient them in the correct spatial direction, making it more difficult to determine the total force applied. Here, we manipulate glass microcylinders with holographic optical tweezers and show the accurate measurement of drag forces by calibration-free direct detection of beam momentum. The agreement between our results and slender-body hydrodynamic theoretical calculations indicates potential for this force-sensing method in studying elongated, rod-shaped specimens.
Jiangqi Long
2015-01-01
Full Text Available The total weight of the Extended-Range Electric Vehicle (E-REV) is excessive, which affects rear-end collision safety. Using numerical simulation, a lightweighting method is designed to reduce the weight of the E-REV body and key parts based on rear-end collision failure analysis. To calculate and optimize vehicle safety performance, a simulation model of E-REV rear-end collision safety is built using finite element analysis. The lightweight design method for the drive battery pack is analyzed, and the bending and torsional modes of the E-REV before and after lightweighting are compared to evaluate its rear-end collision safety performance. The simulation results for the optimized E-REV safety structure are verified by both numerical simulation and an experimental crash test of the entire vehicle.
New method for computer numerical control machine tool calibration: Relay method
LIU Huanlao; SHI Hanming; LI Bin; ZHOU Huichen
2007-01-01
The relay measurement method, which uses the KGM cross-grid measurement system to identify volumetric errors on the planes of computer numerical control (CNC) machine tools, is verified through experimental tests. During the process, all position errors on the entire plane table are measured by the equipment, which is limited to a small field. All errors are obtained by first measuring the error of the basic position near the origin. On the basis of that positional error, the positional errors far away from the origin are measured. By this analogy, the error information for positional points on the entire plane can be obtained. The process outlined above is called the relay method. Test results indicate that the accuracy and repeatability are high, and the method can be used to calibrate geometric errors on the plane of CNC machine tools after backlash errors have been well compensated.
Visualized analysis of mixed numeric and categorical data via extended self-organizing map.
Hsu, Chung-Chian; Lin, Shu-Han
2012-01-01
Many real-world datasets are of mixed types, having numeric and categorical attributes. Even though difficult, analyzing mixed-type datasets is important. In this paper, we propose an extended self-organizing map (SOM), called MixSOM, which utilizes a data structure distance hierarchy to facilitate the handling of numeric and categorical values in a direct, unified manner. Moreover, the extended model regularizes the prototype distance between neighboring neurons in proportion to their map distance so that structures of the clusters can be portrayed better on the map. Extensive experiments on several synthetic and real-world datasets are conducted to demonstrate the capability of the model and to compare MixSOM with several existing models including Kohonen's SOM, the generalized SOM and visualization-induced SOM. The results show that MixSOM is superior to the other models in reflecting the structure of the mixed-type data and facilitates further analysis of the data such as exploration at various levels of granularity.
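The distance-hierarchy idea that lets MixSOM treat categorical values quantitatively can be sketched with a toy concept tree. The categories below are hypothetical, and the actual model allows weighted links; here every link has weight 1.

```python
# Toy distance hierarchy: each categorical value is a leaf of a concept tree;
# the distance between two values is the number of links on the path joining them.
parents = {                       # hypothetical product hierarchy
    "apple": "fruit", "banana": "fruit",
    "beef": "meat", "pork": "meat",
    "fruit": "food", "meat": "food",
    "food": None,
}

def path_to_root(v):
    path = []
    while v is not None:
        path.append(v)
        v = parents[v]
    return path

def cat_distance(a, b):
    """Link-count distance between two values via their nearest common ancestor."""
    pa, pb = path_to_root(a), path_to_root(b)
    lca = next(node for node in pa if node in pb)   # first shared ancestor
    return pa.index(lca) + pb.index(lca)

def mixed_distance(num_a, cat_a, num_b, cat_b, num_scale=1.0, cat_scale=4.0):
    """Combine a scaled numeric difference with the hierarchy distance,
    giving one unified distance for mixed-type records."""
    return abs(num_a - num_b) / num_scale + cat_distance(cat_a, cat_b) / cat_scale
```

Siblings under the same parent come out closer than values in different branches, which is the property that lets the map portray cluster structure for categorical attributes.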
Baczewski, Andrew D; Bond, Stephen D
2013-07-28
Generalized Langevin dynamics (GLD) arise in the modeling of a number of systems, ranging from structured fluids that exhibit a viscoelastic mechanical response, to biological systems, and other media that exhibit anomalous diffusive phenomena. Molecular dynamics (MD) simulations that include GLD in conjunction with external and/or pairwise forces require the development of numerical integrators that are efficient, stable, and have known convergence properties. In this article, we derive a family of extended variable integrators for the Generalized Langevin equation with a positive Prony series memory kernel. Using stability and error analysis, we identify a superlative choice of parameters and implement the corresponding numerical algorithm in the LAMMPS MD software package. Salient features of the algorithm include exact conservation of the first and second moments of the equilibrium velocity distribution in some important cases, stable behavior in the limit of conventional Langevin dynamics, and the use of a convolution-free formalism that obviates the need for explicit storage of the time history of particle velocities. Capability is demonstrated with respect to accuracy in numerous canonical examples, stability in certain limits, and an exemplary application in which the effect of a harmonic confining potential is mapped onto a memory kernel.
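The extended-variable idea can be illustrated in its simplest deterministic (zero-temperature) limit: a single Prony mode K(t) = c·exp(-t/tau) is replaced by one auxiliary variable obeying a local ODE, so no velocity history needs to be stored. This sketch is not the LAMMPS integrator derived in the paper; the semi-implicit Euler update and all parameter values are illustrative.

```python
def gle_relax(c=1.0, tau=1.0, m=1.0, v0=1.0, dt=1e-3, t_end=10.0):
    """Deterministic (zero-temperature) GLE with one Prony mode
    K(t) = c*exp(-t/tau), via one auxiliary variable z:
        m dv/dt = z,      dz/dt = -c*v - z/tau,
    so that z(t) = -int_0^t c*exp(-(t-s)/tau) v(s) ds, i.e. the memory
    friction, without storing the velocity history."""
    v, z = v0, 0.0
    for _ in range(int(round(t_end / dt))):
        z = (z - dt * c * v) / (1.0 + dt / tau)   # implicit in z for stability
        v = v + dt * z / m
    return v
```

The velocity decays through damped oscillations, as expected for an exponentially correlated friction kernel; the full stochastic integrator adds a colored-noise term consistent with the kernel via fluctuation-dissipation.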
Xia Xiaozhou
2013-01-01
Full Text Available In the framework of the extended finite element method (XFEM), an exponential discontinuity function is introduced to reflect the discontinuous character of the crack, and the crack-tip enrichment function is built from a triangular basis function and a linear polar-radius function to describe the displacement field around an elastoplastic crack tip. The linear polar-radius form is chosen to reduce the singularity induced by the plastic yield zone at the crack tip, and the triangular basis form describes how the displacement varies with the polar angle around the tip. Based on a displacement model containing these enrichment functions, the incremental iterative form of the elastoplastic extended finite element method is derived from the virtual work principle. For a non-uniformly hardening material such as concrete, a plastic flow rule containing a cross term, based on the least energy dissipation principle, is adopted in order to avoid the asymmetric stiffness matrix induced by non-associated plastic flow. Finally, numerical examples show the validity of the elastoplastic XFEM constructed in this paper.
Kazolea, M.; Delis, A. I.; Synolakis, C. E.
2014-08-01
A new methodology is presented to handle wave breaking over complex bathymetries in extended two-dimensional Boussinesq-type (BT) models which are solved by an unstructured well-balanced finite volume (FV) scheme. The numerical model solves the 2D extended BT equations proposed by Nwogu (1993), recast in conservation law form with a hyperbolic flux identical to that of the Non-linear Shallow Water (NSW) equations. Certain criteria, along with their proper implementation, are established to characterize breaking waves. Once breaking waves are recognized, we switch locally in the computational domain from the BT to NSW equations by suppressing the dispersive terms in the vicinity of the wave fronts. Thus, the shock-capturing features of the FV scheme enable an intrinsic representation of the breaking waves, which are handled as shocks by the NSW equations. An additional methodology is presented on how to perform a stable switching between the BT and NSW equations within the unstructured FV framework. Extensive validations are presented, demonstrating the performance of the proposed wave breaking treatment, along with some comparisons with other well-established wave breaking mechanisms that have been proposed for BT models.
Extended numerical modeling of impurity neoclassical transport in tokamak edge plasmas
Inoue, H.; Yamoto, S.; Hatayama, A. [Graduate School of Science and Technology, Keio University, Hiyoshi, Yokohama (Japan); Homma, Y. [Graduate School of Science and Technology, Keio University, Hiyoshi, Yokohama (Japan); Research Fellow of Japan Society for the Promotion of Science, Tokyo (Japan)
2016-08-15
Understanding of impurity transport in tokamaks is an important issue for reducing impurity contamination in fusion core plasmas. Recently, a new kinetic numerical scheme for impurity classical/neoclassical transport has been developed. This numerical scheme makes it possible to include classical self-diffusion (CL SD), classical inward pinch (CL IWP), and the classical temperature screening effect (CL TSE) of impurity ions. However, impurity neoclassical transport has been modeled only in the case where background plasmas are in the Pfirsch-Schlüter (PS) regime. The purpose of this study is to extend our previous model to a wider range of collisionality regimes, i.e., not only the PS regime but also the plateau regime. As in the previous study, a kinetic model with the Binary Collision Monte-Carlo model (BCM) has been adopted. We focus on the modeling of neoclassical self-diffusion (NC SD) and the neoclassical inward pinch (NC IWP). In order to simulate the neoclassical transport with the BCM, the velocity distribution of background plasma ions has been modeled as a deformed Maxwell distribution which includes the plasma density gradient. Some test simulations have been done. As for NC SD of impurity ions, our scheme reproduces the dependence on the collisionality parameter over a wide range of collisionality regimes. As for NC IWP, in cases where test impurity ions and background ions are in the PS and plateau regimes, parameter dependences have been reproduced. (copyright 2016 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim)
Extending Semi-numeric Reionisation Models to the First Stars and Galaxies
Koh, Daegene
2016-01-01
Semi-numeric methods have made it possible to efficiently model the epoch of reionisation (EoR). While most implementations involve a reduction to a simple three-parameter model, we introduce a new mass-dependent ionising efficiency parameter that folds in physical parameters constrained by the latest numerical simulations. This new parameterization enables the effective modeling of a broad range of host halo masses containing ionising sources, extending from the smallest Population III host halos with $M \sim 10^6 M_\odot$, which are often ignored, to the rarest cosmic peaks with $M \sim 10^{12} M_\odot$ during the EoR. We compare the resulting ionising histories with a typical three-parameter model and also with the latest constraints from the Planck mission. Our model results in an optical depth due to Thomson scattering, $\tau_{\mathrm{e}}$ = 0.057, that is consistent with Planck. The largest difference in our model is shown in the resulting bubble size distributions, which peak at lower charac...
Cross-Wire Calibration for Freehand 3D Ultrasonography: Measurement and Numerical Issues
J. Jan
2005-06-01
Full Text Available 3D freehand ultrasound is an imaging technique which is gradually finding clinical applications. A position sensor is attached to a conventional ultrasound probe, so that B-scans are acquired along with their relative locations. This allows the B-scans to be inserted into a 3D regular voxel array, which can then be visualized using arbitrary-plane slicing, and volume or surface rendering. A key requirement for correct reconstruction is the calibration: determining the position and orientation of the B-scans with respect to the position sensor's receiver. Following calibration, interpolation in the set of irregularly spaced B-scans is required to reconstruct a regular voxel array. This text describes a freehand measurement of 2D ultrasonic data, an approach to the calibration problem, and several numerical issues concerned with the calibration and reconstruction.
Hernández-Caraballo, Edwin A; Rivas, Francklin; de Hernández, Rita M Avila
2005-02-01
A generalized regression artificial neural network (GRANN) was developed and evaluated for modeling cadmium's nonlinear calibration curve in order to extend its upper concentration limit from 4.0 microg L-1 up to 22.0 microg L-1. This type of neural network presents important advantages over the more popular backpropagation counterpart which are worth exploiting in analytical applications, namely: (1) a smaller number of variables has to be optimized, with the subsequent reduction in "development hassle"; and (2) shorter development times, thanks to the fact that the adjustment of the weights (the artificial synapses) is a non-iterative, one-pass process. A backpropagation artificial neural network (BPANN), a second-order polynomial, and some less frequently employed polynomial and exponential functions (e.g., Gaussian, Lorentzian, and Boltzmann) were also evaluated for comparison purposes. The quality of the fit of the various models, assessed by calculating the root mean square of the percentage deviations, was as follows: GRANN > Boltzmann > second-order polynomial > BPANN > Gauss > Lorentz. The accuracy and precision of the models were further estimated through the determination of cadmium in the certified reference material "Trace Metals in Drinking Water" (High Purity Standards, Lot No. 490915), which has a certified cadmium concentration (12.00+/-0.06 microg L-1) that lies in the nonlinear regime of the calibration curve. Only the models generated by the GRANN and BPANN accurately predicted the concentrations of a series of solutions, prepared by serial dilution of the CRM, with cadmium concentrations below and above the maximum linear calibration limit (4.0 microg L-1). Extension of the working range by the proposed methodology represents an attractive alternative from the analytical point of view, since it results in less specimen manipulation and consequently reduced contamination risks without compromising either the accuracy or the precision of the...
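The record above describes the network only in prose; a minimal numerical sketch of a generalized regression network (Nadaraya-Watson kernel regression) makes the one-pass property concrete. The calibration numbers below are invented for illustration, not the paper's cadmium data.

```python
import numpy as np

def grnn_predict(x_train, y_train, x_query, sigma=0.05):
    """Generalized regression NN: the stored training pairs act as the
    'weights', so there is no iterative training -- only the kernel
    width sigma has to be chosen."""
    x_train = np.asarray(x_train, dtype=float)
    y_train = np.asarray(y_train, dtype=float)
    preds = []
    for xq in np.atleast_1d(x_query):
        w = np.exp(-((xq - x_train) ** 2) / (2.0 * sigma ** 2))
        preds.append(np.sum(w * y_train) / np.sum(w))
    return np.array(preds)

# Hypothetical nonlinear calibration curve: the signal rolls off at
# high concentration, as in an extended working range.
conc = np.array([0.0, 2.0, 4.0, 8.0, 12.0, 16.0, 22.0])       # micrograms per litre
signal = np.array([0.00, 0.10, 0.19, 0.33, 0.43, 0.50, 0.56])

# Invert the curve: estimate concentration from a measured signal.
est = grnn_predict(signal, conc, [0.43], sigma=0.05)
```

With a narrow kernel the estimate reproduces the stored training response exactly; widening sigma trades memorization for smoothing, which is essentially the single tuning decision the abstract alludes to.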
Numerical Modelling of Extended Leak-Off Test with a Pre-Existing Fracture
Lavrov, A.; Larsen, I.; Bauer, A.
2016-04-01
Extended leak-off test (XLOT) is one of the few techniques available for stress measurements in oil and gas wells. Interpretation of the test is often difficult since the results depend on a multitude of factors, including the presence of natural or drilling-induced fractures in the near-well area. Coupled numerical modelling of XLOT has been performed to investigate the pressure behaviour during the flowback phase as well as the effect of a pre-existing fracture on the test results in a low-permeability formation. Essential features of XLOT known from field measurements are captured by the model, including the saw-tooth shape of the pressure vs injected volume curve, and the change of slope in the pressure vs time curve during flowback used by operators as an indicator of the bottomhole pressure reaching the minimum in situ stress. Simulations with a pre-existing fracture running from the borehole wall in the radial direction have revealed that the results of XLOT are quite sensitive to the orientation of the pre-existing fracture. In particular, the fracture initiation pressure and the formation breakdown pressure increase steadily with decreasing angle between the fracture and the minimum in situ stress. Our findings seem to invalidate the use of the fracture initiation pressure and the formation breakdown pressure for stress measurements or rock strength evaluation purposes.
Numerical Device Modeling, Analysis, and Optimization of Extended-SWIR HgCdTe Infrared Detectors
Schuster, J.; DeWames, R. E.; DeCuir, E. A.; Bellotti, E.; Dhar, N.; Wijewarnasuriya, P. S.
2016-09-01
Imaging in the extended short-wavelength infrared (eSWIR) spectral band (1.7-3.0 μm) for astronomy applications is an area of significant interest. However, these applications require infrared detectors with extremely low dark current (less than 0.01 electrons per pixel per second for certain applications). In these detectors, sources of dark current that may limit the overall system performance are fundamental and/or defect-related mechanisms. Non-optimized growth/device processing may present material point defects within the HgCdTe bandgap leading to Shockley-Read-Hall dominated dark current. While realizing contributions to the dark current from only fundamental mechanisms should be the goal for attaining optimal device performance, it may not be readily feasible with current technology and/or resources. In this regard, the U.S. Army Research Laboratory performed physics-based, two- and three-dimensional numerical modeling of HgCdTe photovoltaic infrared detectors designed for operation in the eSWIR spectral band. The underlying impetus for this capability and study originates with a desire to reach fundamental performance limits via intelligent device design.
Calibration of a Numerical Model for Heat Transfer and Fluid Flow in an Extruder
Hofstätter, Thomas; Pedersen, David Bue; Nielsen, Jakob Skov
2016-01-01
This paper discusses experiments performed in order to validate simulations on a fused deposition modelling (FDM) extruder. The nozzle has been simulated in terms of heat transfer and fluid flow. In order to calibrate and validate these simulations, experiments were performed giving a significant look into the physical behaviour of the nozzle, heating and cooling systems. Experiments on the model were performed at different sub-mm diameters of the extruder. Physical parameters of the model, especially temperature-dependent parameters, were set into analytical relationships in order to receive dynamical parameters. This research sets the foundation for further research within melted extrusion based additive manufacturing. The heating process of the extruder will be described and a note on the material feeding will be given.
Numerical modeling of extended short wave infrared InGaAs focal plane arrays
Glasmann, Andreu; Wen, Hanqing; Bellotti, Enrico
2016-05-01
Indium gallium arsenide (In1-xGaxAs) is an ideal material choice for short wave infrared (SWIR) imaging due to its low dark current and excellent collection efficiency. By increasing the indium composition from 53% to 83%, it is possible to decrease the energy gap from 0.74 eV to 0.47 eV and consequently increase the cutoff wavelength from 1.7 μm to 2.63 μm for extended short wavelength (ESWIR) sensing. In this work, we apply our well-established numerical modeling methodology to the ESWIR InGaAs system to determine the intrinsic performance of pixel detectors. Furthermore, we investigate the effects of different buffer/cap materials. To accomplish this, we have developed composition-dependent models for In1-xGaxAs, In1-xAlxAs, and InAs1-yPy. Using a Green's function formalism, we calculate the intrinsic recombination coefficients (Auger, radiative) to model the diffusion-limited behavior of the absorbing layer under ideal conditions. Our simulations indicate that, for a given total thickness of the buffer and absorbing layer, structures utilizing a linearly graded small-gap InGaAs buffer will produce two orders of magnitude more dark current than those with a wide gap, such as InAlAs or InAsP. Furthermore, when compared with experimental results for ESWIR photodiodes and arrays, we estimate that roughly 1.5 orders of magnitude of reduction in dark current remain before reaching diffusion-limited behavior.
Buonanno, Alessandra; Pfeiffer, Harald P; Scheel, Mark A; Buchman, Luisa T; Kidder, Lawrence E
2009-01-01
We calibrate the effective-one-body (EOB) model to an accurate numerical simulation of an equal-mass, non-spinning binary black-hole coalescence produced by the Caltech-Cornell collaboration. Aligning the EOB and numerical waveforms at low frequency over a time interval of ~1000M, and taking into account the uncertainties in the numerical simulation, we investigate the significance and degeneracy of the EOB adjustable parameters during inspiral, plunge and merger, and determine the minimum number of EOB adjustable parameters that achieves phase and amplitude agreements on the order of the numerical error. We find that phase and fractional amplitude differences between the numerical and EOB values of the dominant gravitational wave mode h_{22} can be reduced to 0.02 radians and 2%, respectively, until a time 26M before merger, and to 0.1 radians and 10%, respectively, at a time 16M after merger (during ringdown). Using LIGO, Enhanced LIGO and Advanced LIGO noise curves, we find that the overlap between the EO...
A numerical approach for solving an extended Fisher-Kolmogorov-Petrovskii-Piskunov equation
Khuri, S. A.; Sayfy, A.
2010-02-01
In the present paper a numerical method, based on finite differences and spline collocation, is presented for the numerical solution of a generalized Fisher integro-differential equation. A composite weighted trapezoidal rule is employed to handle the numerical integrations, which results in a closed-form difference scheme. A number of test examples are solved to assess the accuracy of the method. The numerical solutions obtained indicate that the approach is reliable and yields results compatible with the exact solutions and consistent with other existing numerical methods. Convergence and stability of the scheme have also been discussed.
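As a hedged illustration of the ingredients named above (a closed-form difference scheme plus composite trapezoidal quadrature), here is a sketch for the classical Fisher equation u_t = u_xx + u(1 - u); the paper's spline collocation and integro-differential term are not reproduced.

```python
import numpy as np

nx, dt, nsteps = 41, 2.5e-4, 800
x = np.linspace(0.0, 1.0, nx)
dx = x[1] - x[0]
u = np.where(x < 0.25, 1.0, 0.0)      # step initial data -> travelling front

for _ in range(nsteps):               # explicit scheme, stable for dt <= dx**2 / 2
    lap = (np.roll(u, -1) - 2.0 * u + np.roll(u, 1)) / dx**2
    un = u + dt * (lap + u * (1.0 - u))
    un[0], un[-1] = 1.0, 0.0          # Dirichlet ends: u(0) = 1, u(1) = 0
    u = un

# Composite trapezoidal rule, as in the quadrature part of such schemes
mass = np.trapz(u, x)
```

The front invades the u = 0 region, so the trapezoidal "mass" grows from its initial value of about 0.24 toward the domain length, while u stays inside [0, 1] under the stated time-step restriction.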
Some Remarks on the Calibration and Validation of Numerical Water Quality Models
Larsen, Torben
1997-01-01
It is a general experience that complete deterministic water quality models for aquatic systems most often show surprisingly poor agreement when it comes to comparison between model estimates and measurements in the actual system. Often this discrepancy is misunderstood as a lack of complexity and/or an incomplete formulation of the processes involved. But in this introduction to a debate it is argued that the explanation usually lies in the high complexity of the models in relation to the limited data available for the calibration of model constants. Two examples are given.
The Large Scale Bias of Dark Matter Halos: Numerical Calibration and Model Tests
Tinker, Jeremy L; Kravtsov, Andrey V; Klypin, Anatoly; Warren, Michael S; Yepes, Gustavo; Gottlöber, Stefan
2010-01-01
We measure the clustering of dark matter halos in a large set of collisionless cosmological simulations of the flat LCDM cosmology. Halos are identified using the spherical overdensity algorithm, which finds the mass around isolated peaks in the density field such that the mean density is Delta times the background. We calibrate fitting functions for the large scale bias that are adaptable to any value of Delta we examine. We find a ~6% scatter about our best fit bias relation. Our fitting functions couple to the halo mass functions of Tinker et al. (2008) such that the bias of all dark matter is normalized to unity. We demonstrate that the bias of massive, rare halos is higher than that predicted in the modified ellipsoidal collapse model of Sheth, Mo, & Tormen (2001), and approaches the predictions of the spherical collapse model for the rarest halos. Halo bias results based on friends-of-friends halos identified with linking length 0.2 are systematically lower than for halos with the canonical Delta=200 o...
Numerical Methods for Solution of the Extended Linear Quadratic Control Problem
Jørgensen, John Bagterp; Frison, Gianluca; Gade-Nielsen, Nicolai Fog
2012-01-01
...to the Karush-Kuhn-Tucker systems that constitute the majority of computational work in constrained nonlinear and linear model predictive control problems solved by efficient MPC-tailored interior-point and active-set algorithms. We state various methods of solving the extended linear quadratic control problem...
Data for calibration and validation of numerical models at SFR Nuclear Waste Repository
Axelsson, Carl-Lennart [Golder Associates AB (Sweden)]
1997-12-01
...chemical data within different parts of the SFR facility. Estimated errors in the flow measurements are also discussed. A steady-state model should be calibrated on the latest measurement values, with the given accuracy for the respective stations. 8 refs, 18 figs, 3 tabs
Numerical flow models and their calibration using tracer based ages: Chapter 10
Sanford, W.
2013-01-01
Any estimate of 'age' of a groundwater sample based on environmental tracers requires some form of geochemical model to interpret the tracer chemistry (Chapter 3) and is, therefore, referred to in this chapter as a tracer model age. The tracer model age of a groundwater sample can be useful for obtaining information on the residence time and replenishment rate of an aquifer system, but that type of data is most useful when it can be incorporated with all other information that is known about the groundwater system under study. Groundwater flow models are constructed of aquifer systems because they are usually the best way of incorporating all of the known information about the system in the context of a mathematical framework that constrains the model to follow the known laws of physics and chemistry as they apply to groundwater flow and transport. It is important that the purpose or objective of the study be identified first before choosing the type and complexity of the model to be constructed, and to make sure such a model is necessary. The purpose of a modelling study is most often to characterize the system within a numerical framework, such that the hydrological responses of the system can be tested under potential stresses that might be imposed given future development scenarios. As this manual discusses dating as it applies to old groundwater, most readers are likely to be interested in studying regional groundwater flow systems and their water resource potential.
Omira, Rachid; Ramalho, Ricardo S.; Quartau, Rui; Ramalho, Inês; Madeira, José; Baptista, Maria Ana
2017-04-01
Volcanic Ocean Islands are very prominent and dynamic features involving several constructive and destructive phases during their life-cycles. Large-scale gravitational flank collapses are one of the most destructive processes and can present a major source of hazard, since it has been shown that these events are capable of triggering megatsunamis with significant coastal impact. The Fogo volcanic island, Cape Verde, presents evidence for giant edifice mass-wasting, as attested by both onshore and offshore evidence. A recent study by Ramalho et al. (2015) revealed the presence of tsunamigenic deposits that attest the generation of a megatsunami with devastating impact on the nearby Santiago Island, following Fogo's catastrophic collapse. Evidence from northern Santiago implies local minimum run-ups of 270 m, providing a unique physical framework to test collapse-triggered tsunami numerical simulations. In this study, we investigate the tsunamigenic potential associated with Fogo's flank collapse, and its impact on the Islands of the Cape Verde archipelago using field evidence-calibrated numerical simulations. We first reconstruct the pre-event island morphology, and then employ a multilayer numerical model to simulate the flank failure flow towards and under the sea, the ensuing tsunami generation, propagation and coastal impact. We use a digital elevation model that considers the coastline configuration and the sea level at the time of the event. Preliminary numerical modeling results suggest that collapsed volumes of 90-150 km3, in one single event, generate numerical solutions that are compatible with field evidence. Our simulations suggest that Fogo's collapse triggered a megatsunami that reached the coast of Santiago in 8 min, and with wave heights in excess of 250 m. The tsunami waves propagated with lower amplitudes towards the Cape Verde Islands located northward of Fogo. This study will contribute to more realistically assess the scale of risks associated
Tang, Tie-Qiao; Huang, Hai-Jun; Shang, Hua-Yan
2017-02-01
In this paper, we propose a macro traffic flow model to explore the effects of the driver's bounded rationality on the evolution of traffic waves (which include shock and rarefaction waves) and of a small perturbation, and on the fuel consumption and emissions (CO, HC and NOx) during the evolution process. The numerical results illustrate that considering the driver's bounded rationality prominently smooths the wavefront of the traffic waves and improves the stability of traffic flow, showing that the driver's bounded rationality has positive impacts on traffic flow. However, it reduces the fuel consumption and emissions only upstream of the rarefaction wave while enhancing them in other situations; that is, the driver's bounded rationality has positive impacts on the fuel consumption and emissions only upstream of the rarefaction wave, and negative effects otherwise. In addition, the numerical results show that the driver's bounded rationality has no prominent impact on the total fuel consumption and emissions during the whole evolution of a small perturbation.
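For readers unfamiliar with the wave types this record discusses, a standard Godunov scheme for the classical LWR traffic model reproduces shock and rarefaction behaviour. This is a generic sketch with the normalized flux q(rho) = rho(1 - rho), not the authors' bounded-rationality model.

```python
import numpy as np

def flux(rho):
    """Normalized LWR traffic flux q(rho) = rho * (1 - rho)."""
    return rho * (1.0 - rho)

def godunov_flux(ul, ur):
    # Exact Riemann interface flux for a concave flux peaking at rho = 0.5
    if ul <= ur:
        return min(flux(ul), flux(ur))
    if ur <= 0.5 <= ul:
        return flux(0.5)
    return max(flux(ul), flux(ur))

nx, dt, nsteps = 100, 0.004, 250      # CFL: dt <= dx / max|q'(rho)| = dx
dx = 1.0 / nx
rho = np.where(np.arange(nx) < nx // 2, 0.8, 0.2)  # dense -> light: rarefaction

for _ in range(nsteps):
    f = np.array([godunov_flux(rho[i], rho[(i + 1) % nx]) for i in range(nx)])
    rho = rho - dt / dx * (f - np.roll(f, 1))       # periodic: mass conserved
```

The dense-to-light jump spreads into a rarefaction fan (diverging characteristic speeds), while the light-to-dense jump at the periodic wrap-around steepens into a shock; total vehicle density is conserved exactly on the periodic domain.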
Mertz, D F; Swisher, C C; Franzen, J L; Neuffer, F O; Lutz, H
2000-06-01
Sediments of the Eckfeld maar (Eifel, Germany) bear a well-preserved Eocene fauna and flora. Biostratigraphically, Eckfeld corresponds to the Middle Eocene mammal reference level MP (Mammals Paleogene) 13 of the ELMA (European Land Mammal Age) Geiseltalian. In the maar crater, basalt fragments were drilled, representing explosion crater eruption products. By 40Ar/39Ar dating of the basalt, for the first time a direct numerical calibration mark for an Eocene European mammal locality has been established. The Eckfeld basalt inverse isochron date of 44.3 +/- 0.4 Ma suggests an age for the Geiseltalian/Robiacian boundary at 44 Ma and, together with the 1995 time scale of Berggren et al., a time span ranging from 49 to 44 Ma for the Geiseltalian and from 44 to 37 Ma for the Robiacian, respectively. Additional 40Ar/39Ar dating on a genetically related basalt occurrence close to the maar confirms a period of volcanism of ca. 0.6 m.y. in the Eckfeld area, matching the oldest Eocene volcanic activity of the Hocheifel volcanic field.
Krane, M.; Dybbs, A.
1987-01-01
To monitor the high-intensity heat flux conditions that occur in the space shuttle main engine (SSME), it is necessary to use specifically designed heat flux sensors. These sensors, which are of the Gardon type, are exposed on the measuring face to high-intensity radiative and convective heat fluxes and on the other face to convective cooling. To improve the calibration and measurement accuracy of these gauges, researchers are studying the effect that the thermal boundary conditions have on gauge performance. In particular, they are studying how convective cooling affects the temperature field inside the sensor and the measured heat flux. The first phase of this study involves a numerical study of these effects; subsequent phases will involve experimental verification. A computer model of the heat transfer around a Gardon-type heat flux sensor was developed. The two specific geometries being considered are: (1) a heat flux sensor mounted on a flat plate; and (2) a heat flux sensor mounted at the stagnation point of a circular cylinder. Both of these configurations are representative of the use of heat flux sensors in the components of the SSME. The purpose of the analysis is to obtain the temperature distribution as a function of the boundary conditions.
Cutanda Henriquez, Vicente; Juhl, Peter Møller; Barrera Figueroa, Salvador
2009-01-01
Secondary calibration of microphones in free field is performed by placing the microphone under calibration in an anechoic chamber with a sound source, and exposing it to a controlled sound field. A calibrated microphone is also measured as a reference. While the two measurements are usually made consecutively, a variation of this procedure, where the microphones are measured simultaneously, is considered more advantageous from the metrological point of view. However, it must be guaranteed that the two microphones receive the same excitation from the source, although their positions are some distance apart to avoid acoustic interaction. As a part of the project Euromet-792, aiming to investigate and improve methods for secondary free-field calibration of microphones, a sound source suitable for simultaneous secondary free-field calibration has been designed using the Boundary Element Method...
Yuan, Xuefei
2012-07-01
Numerical simulations of the four-field extended magnetohydrodynamics (MHD) equations with hyper-resistivity terms present a difficult challenge because of demanding spatial resolution requirements. A time-dependent sequence of r-refinement adaptive grids obtained from solving a single Monge-Ampère (MA) equation addresses the high-resolution requirements near the x-point for numerical simulation of the magnetic reconnection problem. The MHD equations are transformed from Cartesian coordinates to solution-defined curvilinear coordinates. After the application of an implicit scheme to the time-dependent problem, the parallel Newton-Krylov-Schwarz (NKS) algorithm is used to solve the system at each time step. Convergence and accuracy studies show that the curvilinear solution requires less computational effort than a pure Cartesian treatment. This is due both to the more optimal placement of the grid points and to the improved convergence of the implicit solver, nonlinearly and linearly. The latter effect, which is significant (more than an order of magnitude in the number of inner linear iterations for equivalent accuracy), does not yet seem to be widely appreciated.
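In one dimension the Monge-Ampère grid-adaptation idea reduces to equidistributing a monitor function. The following hypothetical sketch (not the paper's 2D MA solver) clusters a fixed budget of grid points around a sharp feature, playing the role of the x-point region.

```python
import numpy as np

def equidistribute(x_uniform, monitor):
    """1D analogue of r-refinement: move a fixed number of grid points
    so that each cell carries an equal share of the monitor-function
    integral, clustering points where `monitor` is large."""
    w = np.maximum(np.asarray(monitor, dtype=float), 1e-12)
    # cumulative trapezoidal integral of the monitor, normalized to [0, 1]
    cum = np.concatenate(
        ([0.0], np.cumsum(0.5 * (w[1:] + w[:-1]) * np.diff(x_uniform))))
    cum /= cum[-1]
    targets = np.linspace(0.0, 1.0, len(x_uniform))
    return np.interp(targets, cum, x_uniform)

x = np.linspace(-1.0, 1.0, 101)
w = 1.0 + 50.0 * np.exp(-(x / 0.05) ** 2)   # sharp feature at x = 0
x_new = equidistribute(x, w)
```

The endpoints stay fixed and the ordering of the points is preserved, but cells near the feature become far smaller than cells in the smooth region, which is why the same point count buys much higher local resolution.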
Wang Yuntao
2015-06-01
Based on the Reynolds-averaged Navier–Stokes (RANS) equations and structured grid technology, calibration and validation of the γ-Reθ transition model are performed with a fifth-order weighted compact nonlinear scheme (WCNS). The purpose of the present work is to improve the numerical accuracy of aerodynamic characteristics simulation of low-speed flow with a transition model, building on the study of high-order numerical methods. Firstly, the empirical correlation functions involved in the γ-Reθ transition model are modified and calibrated with experimental data for turbulent flat plates. Then, grid convergence is studied on the NLR-7301 two-element airfoil with the modified empirical correlation. Finally, the modified empirical correlation is validated on the NLR-7301 two-element airfoil and a high-lift trapezoidal wing in terms of transition location, velocity profile in the boundary layer, surface pressure coefficient and aerodynamic characteristics. The numerical results illustrate that the numerical accuracy of the transition length and of the skin friction behind the transition location is improved with the modified empirical correlation function, and that the accuracy of aerodynamic characteristics prediction for typical transport configurations in the low-speed range clearly increases.
Dausman, A.; Langevin, C.; Sukop, M.; Walsh, V.
2006-12-01
The South District Wastewater Treatment Plant (SDWWTP), located in southeastern Miami-Dade County about 1 mi west of the Biscayne Bay coastline, is the largest capacity deep-well injection plant in the United States. Currently, about 100 Mgal/d of partially treated, essentially fresh (less than 1000 mg/L total dissolved solids) effluent is injected through 17 wells (each approximately 2500 ft below land surface) into the highly transmissive, lower-temperature, saline Boulder Zone composed of highly fractured dolomite. A thin confining unit called the Delray Dolomite, which is 8-16 ft thick, overlies the intended injection zone at the site. Although the Delray Dolomite has a vertical hydraulic conductivity estimated between 0.001 and 0.00001 ft/d, well casings for 10 of the 17 wells do not extend beneath the unit. A 700-ft-thick middle confining unit, with estimated vertical hydraulic conductivities between 0.1 and 28 ft/d, overlies the Delray Dolomite and separates it from the Upper Floridan aquifer. Protected by the Safe Drinking Water Act (SDWA), the Upper Floridan aquifer contains water that is less than 10,000 mg/L total dissolved solids. In southern Florida, this aquifer is used for reverse osmosis, blending with other waters, and as a reservoir for aquifer storage and recovery. At the SDWWTP, ammonia concentrations that exceed background conditions have been observed in monitoring wells open in and above the middle confining unit, indicating upward vertical migration of effluent, possibly toward the Upper Floridan aquifer. The U.S. Geological Survey currently is developing a variable-density groundwater flow and solute transport model for the Floridan aquifer system in Miami-Dade County. This model includes the injection of treated wastewater at the SDWWTP. The developed numerical model uses SEAWAT, a code that calculates variable- density flow as a function of salinity, to capture the buoyancy effects at the site and along the coast. Simulation efforts have
Nogueiro, Pedro; Silva, Luís Simões da; Bento, Rita; Simões, Rui
2007-01-01
In this paper, a hysteretic model with pinching is presented that is able to realistically reproduce the cyclic response of generic steel joints. The computer implementation and adaptation of the model as a spring element within the computer code SeismoStruct is then described. The model is subsequently calibrated using a series of experimental test results for steel joints subjected to cyclic loading. Finally, typical parameters for the various joint configurations are proposed.
8s, a numerical simulator of the challenging optical calibration of the E-ELT adaptive mirror M4
Briguglio, Runa; Pariani, Giorgio; Xompero, Marco; Riccardi, Armando; Tintori, Matteo; Lazzarini, Paolo; Spanò, Paolo
2016-07-01
8s stands for Optical Test TOwer Simulator (with 8 read as in Italian, 'otto'): it is a simulation tool for the optical calibration of the E-ELT deformable mirror M4 on its test facility. It has been developed to identify possible criticalities in the procedure, evaluate the solutions and estimate the sensitivity to environmental noise. The simulation system is composed of the finite element model of the tower, the analytic influence functions of the actuators, and the ray-tracing propagation of the laser beam through the optical surfaces. The tool delivers simulated phasemaps of M4, associated with the current system status: actuator commands, optics alignment and position, beam vignetting, bench temperature and vibrations. It is possible to simulate a single step of the optical test of M4 by changing the system parameters according to a calibration procedure and to collect the associated phasemap for performance evaluation. In this paper we describe the simulation package and outline the proposed calibration procedure of M4.
M. Boumaza
2015-07-01
Transient convection heat transfer is of fundamental interest in many industrial and environmental situations, as well as in electronic devices and the security of energy systems. Transient fluid flow problems are among the more difficult to analyze and yet are very often encountered in modern technology. The main objective of this research project is to carry out a theoretical and numerical analysis of transient convective heat transfer in vertical flows, where the thermal field is due to different kinds of variation, in time and space, of some boundary conditions, such as wall temperature or wall heat flux. This is achieved by the development of a mathematical model and its resolution by suitable numerical methods, as well as by performing various sensitivity analyses. These objectives are achieved through a theoretical investigation of the effects of wall and fluid axial conduction, physical properties and heat capacity of the pipe wall on transient downward mixed convection in a circular duct experiencing a sudden change in the applied heat flux on the outside surface of a central zone.
Bjelić Mišo B.
2016-01-01
Simulation models of welding processes allow us to predict the influence of welding parameters on the temperature field during welding and, through the temperature field, their influence on the weld geometry and microstructure. This article presents a numerical, finite-difference based model of heat transfer during welding of thin sheets. Unfortunately, the accuracy of the model depends on many parameters which cannot be accurately prescribed. In order to solve this problem, we have used the simulated annealing optimization method in combination with the presented numerical model. This way, we were able to determine uncertain values of the heat source parameters, arc efficiency, emissivity and enhanced conductivity. The calibration procedure was made using thermocouple measurements of temperatures during welding of P355GH steel. The obtained results were used as input for a simulation run. The results of the simulation showed that the presented calibration procedure can significantly improve the reliability of the heat transfer model. [National CEEPUS Office of the Czech Republic (project CIII-HR-0108-07-1314) and the Ministry of Education and Science of the Republic of Serbia (project TR37020)]
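To make the calibration idea in this record concrete, here is a minimal simulated-annealing sketch that recovers two parameters of a toy heat model from synthetic "thermocouple" data. The surrogate model, parameter names and numbers are invented stand-ins, not the article's finite-difference weld model.

```python
import math
import random

def model_temperature(t, efficiency, conductivity):
    # Toy surrogate: a peak temperature rise scaled by arc efficiency,
    # decaying at an effective-conductivity rate (stand-in for the real
    # finite-difference simulation).
    return 20.0 + 1500.0 * efficiency * math.exp(-conductivity * t)

def misfit(params, data):
    eff, cond = params
    return sum((model_temperature(t, eff, cond) - T) ** 2 for t, T in data)

def anneal(data, start, steps=20000, t0=100.0, seed=1):
    """Simulated annealing: accept worse candidates with probability
    exp(-delta/T), cooling T linearly, so the search can escape local
    minima before settling into a greedy descent."""
    rng = random.Random(seed)
    cur, cur_cost = list(start), misfit(start, data)
    best, best_cost = list(start), cur_cost
    for k in range(steps):
        temp = t0 * (1.0 - k / steps) + 1e-9
        cand = [cur[0] + rng.gauss(0.0, 0.02), cur[1] + rng.gauss(0.0, 0.002)]
        cand[0] = min(max(cand[0], 0.05), 1.0)   # keep arc efficiency physical
        cand[1] = min(max(cand[1], 0.001), 1.0)
        cost = misfit(cand, data)
        if cost < cur_cost or rng.random() < math.exp(-(cost - cur_cost) / temp):
            cur, cur_cost = cand, cost
            if cost < best_cost:
                best, best_cost = list(cand), cost
    return best, best_cost

# Synthetic measurements generated with known parameters (0.8, 0.05),
# so recovering values near them checks the calibration loop.
data = [(t, model_temperature(t, 0.8, 0.05)) for t in range(0, 60, 5)]
(eff, cond), cost = anneal(data, start=(0.5, 0.02))
```

Because the misfit is computed against measured (here synthetic) temperatures, exactly the same loop works with any forward model dropped in place of `model_temperature`, which is the structure the article describes.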
Brun, J.; Reynard-Carette, C.; Carette, M. [Aix Marseille Universite, CNRS, Universite de Toulon, IM2NP UMR7334, 13397, Marseille (France)]; Tarchalski, M.; Pytel, K.; Jagielski, J. [National Center of the Nuclear Research (Poland)]; Lyoussi, A.; Fourmentel, D.; Villard, J.F. [CEA, DEN, DER, Instrumentation Sensors and Dosimetry Laboratory, Cadarache, 13108 Saint-Paul-Lez-Durance (France)]
2015-07-01
The nuclear radiation energy deposition rate (usually expressed in W.g{sup -1}) is a key parameter for the thermal design of experiments, on materials and nuclear fuel, carried out in experimental channels of irradiation reactors such as the French OSIRIS reactor in Saclay or the Polish MARIA reactor. In particular, quantification of the nuclear heating allows prediction of the heat and thermal conditions induced in the irradiation devices and/or structural materials. Various sensors are used to quantify this parameter, in particular radiometric calorimeters, also called in-pile calorimeters. Two main kinds of in-pile calorimeter exist, each with a specific design: the single-cell calorimeter and the differential calorimeter. The present work follows these two calorimeter kinds from their out-of-pile calibration step (transient and steady experiments, respectively) to a comparison between numerical and experimental results obtained from two irradiation campaigns (in the MARIA and OSIRIS reactors, respectively). The main aim of this paper is to propose a steady numerical approach to estimate the single-cell calorimeter response under irradiation conditions. (authors)
Špillar, Václav; Dolejš, David
2015-12-01
Mechanical crystal-melt interactions in magmatic systems, by separation or accumulation of crystals or by extraction of interstitial melt, are expected to modify the spatial distribution of crystals observed as phenocrysts in igneous rocks. Textural analysis of porphyritic products can thus provide a quantitative means of interpreting the magnitude of crystal accumulation or melt loss and of reconstructing the initial crystal percentage at which the process occurred. We present a new three-dimensional numerical model that evaluates the effects of crystal accumulation (or interstitial melt removal) on the spatial distribution of crystals. Both processes lead to increasing apparent crystallinity but also to increasing spatial ordering, expressed by the clustering index (R). The trend of progressive crystal packing deviates from the random-texture trend produced by static crystal nucleation and growth, and it is universal for any texture with a straight log-linear crystal size distribution. For sparse crystal suspensions (5 vol.% crystals, R = 1.03), up to 97% of the melt can be extracted, corresponding to a new crystallinity of 65 vol.% and R = 1.32, when the rheological threshold of crystal interlocking is reached. For initially crystal-rich suspensions the compaction path is shorter, because the initial crystal population is more aggregated and reaches the limit of interlocking sooner. Crystal suspensions with ~ 35 vol.% crystals cannot be compacted without mechanical failure. These results illustrate that the onset of the rheological threshold of magma immobility strongly depends on the spatial configuration of crystals in the mush: the primary rigid percolation threshold (~ 35 vol.% crystals) corresponds to a touching or interlocking crystal framework produced by in situ closed-system crystallization, whereas the secondary rigid percolation threshold (~ 35 to ~ 75 vol.% crystals) can be reached by compaction, which is particularly spatially efficient when acting on
Qingdong Zeng
2015-10-01
Fluid-solid coupling is ubiquitous in underground fluid flow and has a significant influence on the development of oil and gas reservoirs. To investigate these phenomena, a coupled mathematical model of solid deformation and fluid flow in fractured porous media is established. In this study, the discrete fracture model (DFM) is applied to capture fluid flow in the fractured porous media; it represents fractures explicitly and avoids calculating shape factors for cross flow. In addition, the extended finite element method (XFEM) is applied to capture the solid deformation due to the discontinuities caused by fractures. More importantly, the model captures the change of fracture aperture during the simulation and adjusts the fluid flow in the fractures accordingly. The final linear equation set is derived and solved for a 2D plane strain problem. Results show that the combination of the discrete fracture model and the extended finite element method is well suited for simulating coupled deformation and fluid flow in fractured porous media.
Parlangeau, Camille; Lacombe, Olivier; Schueller, Sylvie; Daniel, Jean-Marc
2016-04-01
The inversion of calcite twin data is a powerful tool to reconstruct the paleostresses sustained by carbonate rocks during their geological history. Following Etchecopar's (1984) pioneering work, this study presents a new technique for the inversion of calcite twin data that reconstructs the 5 parameters of the deviatoric stress tensor. In order to determine the applicability domain of the technique and to estimate the uncertainties in the reconstructed stress tensors, we first carried out tests on numerically generated calcite twin data, examining the separability of superimposed stress tensors with various degrees of similarity and the influence of optical bias, heterogeneities and the occurrence of different grain size classes, as met in natural samples. For monophase datasets with homogeneous grain size, the errors on the different stress parameters (orientation of principal stress axes, stress ratio and differential stresses) are negligible except for the differential stress (error of 5%). In cases displaying distinct grain sizes, misfits remain negligible but may reach 20% for the differential stress if the applied differential stress is greater than 60-65 MPa. Incorporation of optical bias increases uncertainties slightly, up to 25% for the differential stress, 5% for the stress ratio and 8° for the orientation of the principal stress axes. For polyphase datasets with homogeneous grain size, the misfit on the orientation of the principal stress axes increases up to 10°, the stress ratio remains well constrained, and the misfit on the differential stress reaches 20% (applied differential stress > 70 MPa). Incorporation of optical bias increases the misfit on the orientation of the principal stress axes (average: 6-8°; maximum: 17°), on the stress ratio (average: 2%; maximum: 26%) and on the differential stress (average: 15%; maximum: 30%). These tests demonstrate that it is better to analyze twin data from subsets of
Mohamed S. Badawi
2017-03-01
The 4π NaI(Tl) γ-ray detectors consist of a well cavity with a cylindrical cross section and an enclosing measurement geometry with a large detection angle. This leads to an exceptionally high efficiency and a significant coincidence summing effect, much larger than for a single cylindrical or coaxial detector, especially in very low activity measurements. In the present work, the detection effective solid angle, as well as the full-energy peak and total efficiencies of well-type detectors, were calculated by a new numerical simulation method (NSM) and by the ANGLE4 software. To obtain the coincidence summing correction factors with these methods, the coincident emission of photons was modeled mathematically, based on analytical equations and complex integrations over the radioactive volumetric sources, including the self-attenuation factor. The full-energy peak efficiencies and correction factors were measured using 152Eu; an exact adjustment of the detector efficiency curve is required, because neglecting the coincidence summing effect makes the results inconsistent. These phenomena arise jointly in the efficiency calibration process and the coincidence summing corrections. The full-energy peak and total efficiencies from the two methods agree within a discrepancy of 10%. The discrepancy between the simulated, ANGLE4 and measured full-energy peak efficiencies, after correction for the coincidence summing effect, did not exceed 14% on average. This technique can therefore easily be applied in establishing the efficiency calibration curves of well-type detectors.
Badawi, Mohamed S.; Jovanovic, Slobodan I.; Thabet, Abouzeid A.; El-Khatib, Ahmed M.; Dlabac, Aleksandar D.; Salem, Bohaysa A.; Gouda, Mona M.; Mihaljevic, Nikola N.; Almugren, Kholud S.; Abbas, Mahmoud I.
2017-03-01
The 4π NaI(Tl) γ-ray detectors consist of a well cavity with a cylindrical cross section and an enclosing measurement geometry with a large detection angle. This leads to an exceptionally high efficiency and a significant coincidence summing effect, much larger than for a single cylindrical or coaxial detector, especially in very low activity measurements. In the present work, the detection effective solid angle, as well as the full-energy peak and total efficiencies of well-type detectors, were calculated by a new numerical simulation method (NSM) and by the ANGLE4 software. To obtain the coincidence summing correction factors with these methods, the coincident emission of photons was modeled mathematically, based on analytical equations and complex integrations over the radioactive volumetric sources, including the self-attenuation factor. The full-energy peak efficiencies and correction factors were measured using 152Eu; an exact adjustment of the detector efficiency curve is required, because neglecting the coincidence summing effect makes the results inconsistent. These phenomena arise jointly in the efficiency calibration process and the coincidence summing corrections. The full-energy peak and total efficiencies from the two methods agree within a discrepancy of 10%. The discrepancy between the simulated, ANGLE4 and measured full-energy peak efficiencies, after correction for the coincidence summing effect, did not exceed 14% on average. This technique can therefore easily be applied in establishing the efficiency calibration curves of well-type detectors.
Fukushima, Toshio
2012-04-01
By extending the exponent of floating-point numbers with an additional integer as the power index of a large radix, we compute fully normalized associated Legendre functions (ALF) by recursion without underflow problems. The new method enables us to evaluate ALFs of extremely high degree, such as 2^32 = 4,294,967,296, which corresponds to around 1 cm resolution on the Earth's surface. By limiting the application of exponent extension to a few working variables in the recursion, choosing a suitable large power of 2 as the radix, and embedding the basic floating-point arithmetic procedures with exponent extension directly in the program computing the recurrence formulas, we achieve the evaluation of ALFs in the double-precision environment at the cost of around a 10% increase in computational time per single ALF. This formulation enables meaningful execution of spherical harmonic synthesis and/or analysis of arbitrary degree and order.
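A minimal sketch of the extended-exponent idea: a value is carried as a pair (fraction, index) meaning x·B^i with a large radix B = 2^960, so the sectorial seed of the fully normalized ALF recursion survives far below the double underflow limit. The function names are illustrative, not those of the paper:

```python
import math

BIG = 2.0 ** 960      # large radix B; its power index is kept as an integer
BIGI = 2.0 ** -960

def xnorm(x, i):
    """Renormalize an extended-exponent pair (x, i) representing x * BIG**i."""
    ax = abs(x)
    if ax >= BIG:
        return x * BIGI, i + 1
    if 0.0 < ax < BIGI:
        return x * BIG, i - 1
    return x, i

def x2f(x, i):
    """Convert back to an ordinary double (may under/overflow for large |i|)."""
    return x if i == 0 else x * (BIG ** i)

def pmm_x(m, theta):
    """Sectorial seed bar{P}_mm(theta) of the fully normalized ALF recursion,
    computed with extended-exponent arithmetic to avoid underflow."""
    s = math.sin(theta)
    x, i = 1.0, 0
    for k in range(1, m + 1):
        x, i = xnorm(x * math.sqrt((2 * k + 1) / (2 * k)) * s, i)
    return x, i
```

For m in the tens of thousands the plain double `sin(theta)**m` underflows to zero, while the pair (x, i) retains the value with a negative index i.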
Fukushima, Toshio
2017-06-01
Reviewed are recently developed methods for the numerical integration of the gravitational field of general two- or three-dimensional bodies with arbitrary shape and mass density distribution: (i) an axisymmetric infinitely-thin disc (Fukushima 2016a, MNRAS, 456, 3702), (ii) a general infinitely-thin plate (Fukushima 2016b, MNRAS, 459, 3825), (iii) a plane-symmetric and axisymmetric ring-like object (Fukushima 2016c, AJ, 152, 35), (iv) an axisymmetric thick disc (Fukushima 2016d, MNRAS, 462, 2138), and (v) a general three-dimensional body (Fukushima 2016e, MNRAS, 463, 1500). The key techniques employed are (a) the split quadrature method using the double exponential rule (Takahashi and Mori, 1973, Numer. Math., 21, 206), (b) the precise and fast computation of complete elliptic integrals (Fukushima 2015, J. Comp. Appl. Math., 282, 71), (c) Ridder's algorithm of numerical differentiation (Ridder 1982, Adv. Eng. Softw., 4, 75), (d) the recursive computation of the zonal toroidal harmonics, and (e) the integration variable transformation to local spherical polar coordinates. These devices successfully regularize the Newton kernel in the integrands so as to provide accurate integral values. For example, the general 3D potential is regularly integrated as Φ(\vec{x}) = - G \int_0^∞ ( \int_{-1}^1 ( \int_0^{2π} ρ(\vec{x}+\vec{q}) dψ ) dγ ) q dq, where \vec{q} = q (√{1-γ^2} cos ψ, √{1-γ^2} sin ψ, γ) is the position vector relative to \vec{x}, the point at which the potential is evaluated. As a result, the new methods compute the potential and acceleration vector very accurately. In fact, the axisymmetric integration reproduces the Miyamoto-Nagai potential to 14 correct digits. The developed methods are applied to the gravitational field study of galaxies and protoplanetary discs. Among them, the investigation of the rotation curve of M33 supports a disc-like structure of the dark matter with a double-power-law surface
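Technique (a), the double exponential (tanh-sinh) rule, can be sketched as follows; the node count and step size here are arbitrary illustrative choices, not the values used in the cited papers. Nodes cluster double-exponentially at the endpoints, which is what tames integrable singularities such as the regularized Newton kernel:

```python
import math

def tanh_sinh(f, a, b, n=60, h=0.1):
    """Double-exponential (tanh-sinh) quadrature of f over (a, b).
    Nodes cluster at the endpoints, so integrable endpoint
    singularities are handled without special casing."""
    c, d = (b - a) / 2.0, (b + a) / 2.0
    total = 0.0
    for k in range(-n, n + 1):
        t = k * h
        u = 0.5 * math.pi * math.sinh(t)
        x = math.tanh(u)
        if abs(x) >= 1.0:      # node saturated to an endpoint in doubles: skip
            continue
        w = 0.5 * math.pi * math.cosh(t) / math.cosh(u) ** 2
        total += w * f(c * x + d)
    return c * h * total
```

With these defaults the rule integrates 1/sqrt(x) over (0, 1), singular at the lower endpoint, to high accuracy without any special handling.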
Barrera Figueroa, Salvador; Torras Rosell, Antoni; Jacobsen, Finn
2013-01-01
Measurement microphones are typically calibrated in a free field at frequencies up to 50 kHz. This is a sufficiently high frequency for most sound measurement applications related to noise assessment. However, other applications, such as the measurement of noise emitted by ultrasound cleanin...
Groth, Clinton P. T.; Roe, Philip L.
1998-01-01
Six months of funding was received for the proposed three-year research program (funding for the period from March 1, 1997 to August 31, 1997). Although the official starting date for the project was March 1, 1997, no funding was received until July 1997. In the funded research period, considerable progress was made on Phase I of the proposed research program. The initial research efforts concentrated on applying the 10-, 20-, and 35-moment Gaussian-based closures to a series of standard two-dimensional non-reacting single-species test flow problems, such as the flat plate, Couette, channel, and rearward-facing step flows, and to some other two-dimensional flows with geometries similar to those encountered in chemical-vapor deposition (CVD) reactors. Eigensystem analyses of these systems in two spatial dimensions were carried out, and efficient approximate Riemann solvers were formulated using the resulting eigenstructures. Formulations to include rotational non-equilibrium effects in the moment closure models for the treatment of polyatomic gases were explored, as the original closure models were developed strictly for gases composed of monatomic molecules. The development of a software library and computer code for solving relaxing hyperbolic systems in two spatial dimensions of the type arising from the closure models was also initiated. The software makes use of high-resolution upwind finite-volume schemes, multi-stage point-implicit time stepping, and automatic adaptive mesh refinement (AMR) to solve the governing conservation equations for the moment closures. The initial phase of the code development was completed, and a numerical investigation of the solutions of the 10-moment closure model for the simple two-dimensional test cases mentioned above was initiated. Predictions of the 10-moment model were compared to available theoretical solutions and the results of direct-simulation Monte Carlo
AboulNaga, M.M.; Alteraifi, A.M.
1999-07-01
This paper investigates the airflow patterns and behavior of a combined roof and extended solar wall-roof chimney incorporated into a typical room with an inlet and outlet. Numerical simulations using the fluid dynamics software package FIDAP are employed to describe and analyze the airflow patterns inside the room and in the extended solar wall-roof chimneys. FIDAP analyses and results for the airflow streamlines, velocity vectors, and temperature distributions are presented. Maximum velocity vectors and temperatures and smooth streamlines, indicating better performance, were found at a separation of 0.25 m. At this separation, in both wall and roof solar chimneys, the maximum chimney outlet flow rate and smooth streamlines were obtained when the wall chimney is 2.00 m high, which corresponds to a wall chimney inlet of 1.60 m. Results show that the maximum relative speed in the combined chimneys is higher than in the solar roof chimney alone. These findings suggest that the exploitation of an extended solar roof-wall chimney could enhance nighttime natural ventilation and the cooling of buildings. The system is limited to cooling low-rise buildings in hot-arid regions such as Al-Ain City, UAE, where energy use is enormous.
Maruo, Katsuhiko; Oota, Tomohiro; Tsurugi, Mitsuhiro; Nakagawa, Takehiro; Arimoto, Hidenobu; Hayakawa, Mineji; Tamura, Mamoru; Ozaki, Yukihiro; Yamada, Yukio
2006-12-01
We have applied a new methodology for noninvasive continuous blood glucose monitoring, proposed in our previous paper, to patients in the ICU (intensive care unit), where strict control of blood glucose levels is required. The new methodology can build calibration models essentially from numerical simulation, whereas the conventional methodology requires pre-experiments, such as sugar tolerance tests, which in most cases are impossible to perform on ICU patients. The in vivo experiments in this study consisted of two stages: the first stage was conducted on healthy subjects as a preliminary experiment, and the second stage on ICU patients. The prediction performance of the first stage was a correlation coefficient (r) of 0.71 and a standard error of prediction (SEP) of 28.7 mg/dL. Of the 323 total data points, 71.5% were in the A zone, 28.5% in the B zone, and none in the C, D, and E zones of the Clarke error-grid analysis. The prediction performance of the second stage was an r of 0.97 and an SEP of 27.2 mg/dL. Of the 304 total data points, 80.3% were in the A zone, 19.7% in the B zone, and none in the C, D, and E zones. These prediction results suggest that the new methodology has the potential to realize a noninvasive blood glucose monitoring system using near-infrared spectroscopy (NIRS) in ICUs. Although the total performance of the present monitoring system has not yet reached a satisfactory level as a stand-alone system, it can be developed as a complement to the conventional system used in ICUs for routine blood glucose management, which checks the blood glucose levels of patients every few hours.
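The figures of merit reported above, the correlation coefficient r and the standard error of prediction SEP, can be computed as below. Whether the authors bias-correct the residuals in their SEP is an assumption here; the bias-corrected form is a common convention in calibration work:

```python
import math

def pearson_r(pred, ref):
    """Pearson correlation coefficient between predictions and references."""
    n = len(pred)
    mp, mr = sum(pred) / n, sum(ref) / n
    cov = sum((p - mp) * (q - mr) for p, q in zip(pred, ref))
    sp = math.sqrt(sum((p - mp) ** 2 for p in pred))
    sr = math.sqrt(sum((q - mr) ** 2 for q in ref))
    return cov / (sp * sr)

def sep(pred, ref):
    """Standard error of prediction: RMS of the bias-corrected residuals."""
    n = len(pred)
    res = [p - q for p, q in zip(pred, ref)]
    bias = sum(res) / n
    return math.sqrt(sum((e - bias) ** 2 for e in res) / (n - 1))
```

Note that a constant offset between prediction and reference inflates neither r nor this SEP, which is why both are usually reported together with an error-grid analysis.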
Mohammad H. Jabbari
2013-01-01
Using the one-dimensional Beji & Nadaoka extended Boussinesq equation, a numerical study of solitary waves over submerged breakwaters has been conducted. Two obstacles over the seabed inside a channel, one of rectangular and one of circular geometry, have been considered for solitary waves passing over them. Since these bars possess sharp vertical edges, they cannot be modeled directly by Boussinesq equations. Thus, steeply sloped lines over a short span have replaced the vertical sides, and the interactions of the waves with the circular and rectangular obstacles during propagation, including reflection, transmission, and dispersion, have been investigated. In this numerical simulation, a finite element scheme has been used for spatial discretization. Linear elements with linear interpolation functions have been utilized for the velocity components and the water surface elevation. For time integration, a fourth-order Adams-Bashforth-Moulton predictor-corrector method has been applied. Results indicate that neglecting the vertical edges and ignoring the vortex shedding would have a minimal effect on the propagating waves and on reflected waves with weak nonlinearity.
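The fourth-order Adams-Bashforth-Moulton predictor-corrector used for time integration can be sketched for a scalar ODE as follows. Bootstrapping the first three steps with classical RK4 is a common choice and an assumption here, not something stated in the abstract:

```python
def abm4(f, t0, y0, h, n):
    """Integrate y' = f(t, y) over n steps of size h with the 4th-order
    Adams-Bashforth-Moulton predictor-corrector (RK4 start-up)."""
    def rk4(t, y):
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h / 2 * k1)
        k3 = f(t + h / 2, y + h / 2 * k2)
        k4 = f(t + h, y + h * k3)
        return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

    ts, ys = [t0], [y0]
    for _ in range(3):                       # three RK4 start-up steps
        ys.append(rk4(ts[-1], ys[-1]))
        ts.append(ts[-1] + h)
    fs = [f(t, y) for t, y in zip(ts, ys)]
    for i in range(3, n):
        # Adams-Bashforth 4-step predictor
        yp = ys[i] + h / 24 * (55 * fs[i] - 59 * fs[i - 1]
                               + 37 * fs[i - 2] - 9 * fs[i - 3])
        tn = ts[i] + h
        # Adams-Moulton corrector, evaluated at the predicted value
        yc = ys[i] + h / 24 * (9 * f(tn, yp) + 19 * fs[i]
                               - 5 * fs[i - 1] + fs[i - 2])
        ts.append(tn)
        ys.append(yc)
        fs.append(f(tn, yc))
    return ts, ys
```

In the paper the same predictor-corrector advances the vector of nodal velocity and surface-elevation unknowns produced by the finite element discretization rather than a scalar.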
NUMERICAL MODELLING OF ROCK FALL USING EXTENDED DDA
陈光齐
2003-01-01
The reasonable design of protective structures to mitigate rock fall hazards depends on knowledge of the motion of falling stones, such as their falling paths, velocities, jump heights and distances. Numerical simulation is an effective way to gain such knowledge. In this paper, discontinuous deformation analysis (DDA) is applied to rock fall analysis. In order to obtain more reliable results, the following improvements and extensions are made to the original DDA. (1) The problem of block expansion due to rigid-body rotation error is solved. (2) The drag resistance from air and plants is modeled, so that the simulated velocities of falling stones agree well with in-situ experiments. (3) Energy loss due to block collisions is taken into account, so that the simulated jump heights and distances agree well with experiments, even for slopes with a very soft surface layer. An application example is presented to show that the extended DDA is effective and useful in rock fall analysis. The presented method is therefore expected to find wide use in slope stability analysis.
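A toy one-dimensional analogue of extensions (2) and (3), quadratic air drag during flight plus a restitution-type energy loss at each impact, might look like this. All coefficients are illustrative stand-ins, not DDA parameters:

```python
def rockfall_1d(h0, dt=1e-4, g=9.81, c_drag=0.02, e_rest=0.6, n_bounce=3):
    """Drop a stone from height h0 and return the apex height reached
    after each of n_bounce rebounds. Quadratic drag opposes the motion;
    each impact scales the velocity by the restitution factor e_rest."""
    y, v = h0, 0.0
    heights = []
    for _ in range(n_bounce):
        while True:                          # fall until impact
            a = -g - c_drag * v * abs(v)     # drag always opposes motion
            v += a * dt
            y += v * dt
            if y <= 0.0 and v < 0.0:
                break
        y = 0.0
        v = -e_rest * v                      # collision energy loss
        apex = 0.0
        while v > 0.0:                       # rebound: rise to apex
            a = -g - c_drag * v * abs(v)
            v += a * dt
            y += v * dt
            apex = max(apex, y)
        heights.append(apex)
    return heights
```

Without drag the first rebound apex would be e_rest^2 * h0; with drag it falls short of that, and successive apexes decay, mirroring the damped jump heights the extended DDA reproduces.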
Yucel, I.; Onen, A.; Yilmaz, K. K.; Gochis, D. J.
2015-04-01
A fully-distributed, multi-physics, multi-scale hydrologic and hydraulic modeling system, WRF-Hydro, is used to assess the potential for skillful flood forecasting based on precipitation inputs derived from the Weather Research and Forecasting (WRF) model and the EUMETSAT Multi-sensor Precipitation Estimates (MPEs). As in past studies, it was found that WRF precipitation forecast errors related to model initial conditions are reduced when the three-dimensional variational data assimilation (3DVAR) scheme is used in the WRF simulations. A comparative evaluation of the impact of MPE versus WRF precipitation estimates, both with and without data assimilation, in driving WRF-Hydro simulated streamflow is then made. Ten rainfall-runoff events that occurred in the Black Sea Region were used for testing and evaluation. Given the availability of streamflow data across the rainfall-runoff events, calibration is performed only on the Bartin sub-basin using two events, and the calibrated parameters are then transferred to the three neighboring ungauged sub-basins in the study area. The remaining events from all sub-basins are then used to evaluate the performance of the WRF-Hydro system with the calibrated parameters. Following model calibration, the WRF-Hydro system was capable of skillfully reproducing observed flood hydrographs in terms of the runoff volume and the overall shape of the hydrograph. Streamflow simulation skill was significantly improved for those WRF simulations in which storm precipitation was accurately depicted with respect to timing, location and amount. Accurate streamflow simulations were more evident in WRF simulations with the 3DVAR scheme than without it. Because of the substantial dry bias of MPE compared with surface rain gauges, streamflow derived using this precipitation product is in general very poor. Overall, root mean squared errors for runoff were reduced by
TWSTFT Link Calibration Report
2015-09-01
box calibrator with unknown but constant total delay during a calibration tour. Total Delay: the total electrical delay from the antenna phase center...to the UTCp, including all the devices/cables that the satellite and clock signals pass through. It numerically equals the sum of all the sub-delays...PTB. To average out the diurnal effects and measurement noise, 5-7 days of continuous measurements are required. 3 Setups at the Lab(k) The setup
Heydorn, Kaj; Anglov, Thomas
2002-01-01
Methods recommended by the International Standardization Organisation and Eurachem are not satisfactory for the correct estimation of calibration uncertainty. A novel approach is introduced and tested on actual calibration data for the determination of Pb by ICP-AES. The improved calibration unce...
Bieniasz, Leslaw K.; Østerby, Ole; Britz, Dieter
1995-01-01
We extend the analysis of the stepwise numerical stability of the classic explicit, fully implicit and Crank-Nicolson finite difference algorithms for electrochemical kinetic simulations to the multipoint gradient approximations at the electrode. The discussion is based on the matrix method of stability analysis.
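The stepwise stability of the three classic schemes can be illustrated with their von Neumann amplification factors. This sketch treats the plain diffusion equation u_t = u_xx with a Fourier-mode analysis, a simpler companion to the matrix method applied in the paper, not the paper's electrochemical systems themselves:

```python
import math

def amplification(scheme, lam, theta):
    """Von Neumann amplification factor of the theta-weighted finite
    difference scheme for u_t = u_xx, for the Fourier mode exp(i*j*theta),
    with lam = dt/dx^2."""
    s = 4.0 * lam * math.sin(theta / 2.0) ** 2
    w = {"explicit": 0.0, "crank-nicolson": 0.5, "implicit": 1.0}[scheme]
    return (1.0 - (1.0 - w) * s) / (1.0 + w * s)

def is_stable(scheme, lam, samples=512):
    """Stable iff |g| <= 1 for every Fourier mode."""
    return all(
        abs(amplification(scheme, lam, math.pi * k / samples)) <= 1.0 + 1e-12
        for k in range(1, samples + 1)
    )
```

The familiar conclusions drop out: the explicit scheme is stable only for lam <= 1/2, while the fully implicit and Crank-Nicolson schemes are unconditionally stable.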
Hamzawy, Ayman; Grozdanov, Dimitar N.; Badawi, Mohamed S.; Aliyev, Fuad A.; Thabet, Abouzeid A.; Abbas, Mahmoud I.; Ruskov, Ivan N.; El-Khatib, Ahmed M.; Kopatch, Yuri N.; Gouda, Mona M.
2016-11-01
Scintillation crystals are commonly used for the detection of energetic photons at room temperature in high energy and nuclear physics research, non-destructive materials testing, safeguards, nuclear treaty verification, geological exploration, and medical imaging. New designs and constructions of radioactive beam facilities are therefore coming on-line in these branches of science. Many researchers are investigating the efficiency of γ-ray detectors in order to improve the models and techniques used to deal with the most pressing problems in physics research today. In the present work, a new integrative and uncomplicated numerical simulation method (NSM) is used to compute the full-energy (photo) peak efficiency of a regular hexagonal prism NaI(Tl) gamma-ray detector using radioactive point sources situated non-axially within its front surface boundaries. This simulation method is based on the efficiency transfer method. Most of the mathematical formulas in this work are derived analytically and solved numerically. The core of the NSM is the calculation of the effective solid angle for radioactive point sources situated non-axially at different distances from the front surface of the detector. The attenuation of the γ-rays through the detector's material and any other materials between the source and the detector is taken into account. A remarkable agreement between the experimental results and those calculated by the present formalism has been observed.
Caldeira, Alexandre D.; Dias, Artur F.; Claro, Luiz H.; Vieira, Wilson J. [Centro Tecnico Aeroespacial (CTA-IEAv), Sao Jose dos Campos, SP (Brazil). Inst. de Estudos Avancados
2000-07-01
This work is part of a research project that involves experimental and theoretical-numerical activities for the calibration and use of a Long Counter detector in the facilities of the IEAv/CTA electron linear accelerator. The version of the detector considered in this work was based on the De Pangher and Nichols design, and is intended to determine the intensity of the fast neutron sources that will be produced and to serve as a secondary standard for other neutron detectors over a wide energy range. The objective is to obtain general information on the neutrons scattered in the experimental environment through preliminary theoretical calculations, using the MCNP and DOORS program systems, based on the Monte Carlo method and on the discrete ordinates method, respectively. The results are used to determine the dimensions of the experimental room for which the influence of neutron scattering on the experimental measurements is as small as possible. (author)
Abbas, Mahmoud I., E-mail: mabbas@physicist.net [Physics Department, Faculty of Science, Alexandria University, 21511 Alexandria (Egypt); Badawi, M.S. [Physics Department, Faculty of Science, Alexandria University, 21511 Alexandria (Egypt); Ruskov, I.N. [Frank Laboratory of Neutron Physics, Joint Institute for Nuclear Research, 141980 Dubna (Russian Federation); Institute for Nuclear Research and Nuclear Energy, Bulgarian Academy of Sciences, 1784 Sofia (Bulgaria); El-Khatib, A.M. [Physics Department, Faculty of Science, Alexandria University, 21511 Alexandria (Egypt); Grozdanov, D.N. [Frank Laboratory of Neutron Physics, Joint Institute for Nuclear Research, 141980 Dubna (Russian Federation); Institute for Nuclear Research and Nuclear Energy, Bulgarian Academy of Sciences, 1784 Sofia (Bulgaria); Thabet, A.A. [Department of Medical Equipment Technology, Faculty of Allied Medical Sciences, Pharos University in Alexandria (Egypt); Kopatch, Yu.N. [Frank Laboratory of Neutron Physics, Joint Institute for Nuclear Research, 141980 Dubna (Russian Federation); Gouda, M.M. [Physics Department, Faculty of Science, Alexandria University, 21511 Alexandria (Egypt); Skoy, V.R. [Frank Laboratory of Neutron Physics, Joint Institute for Nuclear Research, 141980 Dubna (Russian Federation)
2015-01-21
Gamma-ray detector systems are important instruments in a broad range of science, and new setups are continually being developed. The most recent step in the evolution of detectors for nuclear spectroscopy is the construction of large arrays of detectors of different shapes (for example, conical, pentagonal, hexagonal, etc.) and sizes, whereby the performance and the efficiency can be increased. In this work, a new direct numerical method (NAM), in an integral form and based on the efficiency transfer (ET) method, is used to calculate the full-energy peak efficiency of a single hexagonal NaI(Tl) detector. The core of this ET method is the algorithms and calculations of the effective solid angle ratios for an isotropically irradiating point gamma-source situated coaxially at different distances from the detector's front-end surface, taking into account the attenuation of the gamma-rays in the detector's material, end-cap and the other materials between the gamma-source and the detector. The full-energy peak efficiency values calculated by the NAM are found to be in good agreement with the measured experimental data.
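For the simplest configuration, an on-axis point source facing a circular detector window, the solid angle has a closed form that any numerical effective-solid-angle calculation should reproduce, and the efficiency-transfer step then reduces to a ratio of solid angles. This is a hedged sketch of that idea only: it ignores the hexagonal geometry, attenuation and end-cap effects treated in the paper:

```python
import math

def solid_angle_disc(d, R):
    """Closed-form solid angle subtended at an on-axis point source a
    distance d from a circular detector face of radius R."""
    return 2.0 * math.pi * (1.0 - d / math.hypot(d, R))

def solid_angle_numeric(d, R, n=100000):
    """The same quantity by midpoint integration of sin(t) over the cone
    half-angle, mimicking a numerical effective-solid-angle computation."""
    alpha = math.atan2(R, d)
    h = alpha / n
    return 2.0 * math.pi * sum(math.sin((k + 0.5) * h) for k in range(n)) * h

def efficiency_transfer(eff_ref, omega_ref, omega_new):
    """Bare efficiency-transfer estimate: scale a reference efficiency by
    the ratio of effective solid angles (attenuation factors omitted)."""
    return eff_ref * (omega_new / omega_ref)
```

In the full method the plain solid angles are replaced by effective solid angles weighted by attenuation through the crystal and intervening materials.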
TIME CALIBRATED OSCILLOSCOPE SWEEP CIRCUIT
Smith, V.L.; Carstensen, H.K.
1959-11-24
An improved time calibrated sweep circuit is presented, which extends the range of usefulness of conventional oscilloscopes as utilized for time calibrated display applications in accordance with U. S. Patent No. 2,832,002. Principal novelty resides in the provision of a pair of separate signal paths, each of which is phase and amplitude adjustable, to connect a high-frequency calibration oscillator to the output of a sawtooth generator also connected to the respective horizontal deflection plates of an oscilloscope cathode ray tube. The amplitude and phase of the calibration oscillator signals in the two signal paths are adjusted to balance out feedthrough currents capacitively coupled at high frequencies of the calibration oscillator from each horizontal deflection plate to the vertical plates of the cathode ray tube.
Fukushima, Toshio
2012-11-01
We confirm that the first-, second-, and third-order derivatives of the fully-normalized Legendre polynomials (LP) and associated Legendre functions (ALF) of arbitrary degree and order can be correctly evaluated by means of the non-singular fixed-degree formulas (Bosch in Phys Chem Earth 25:655-659, 2000) in ordinary IEEE754 arithmetic, provided the values of the fully-normalized LP and ALF are obtained without underflow problems, e.g., using the extended-range arithmetic we recently developed (Fukushima in J Geod 86:271-285, 2012). We also observe the same correctness for the popular but singular fixed-order formulas unless (1) the order of differentiation is greater than the order of harmonics and (2) the point of evaluation is close to the poles. The new formulation using the fixed-order formulas runs at negligible extra computational time, i.e., a 3-5% increase in computational time per single ALF compared with the standard algorithm without the exponent extension. This enables the practical computation of low-order derivatives of spherical harmonics of arbitrary degree and order.
van Weeren, R J; Hardcastle, M J; Shimwell, T W; Rafferty, D A; Sabater, J; Heald, G; Sridhar, S S; Dijkema, T J; Brunetti, G; Brüggen, M; Andrade-Santos, F; Ogrean, G A; Röttgering, H J A; Dawson, W A; Forman, W R; de Gasperin, F; Jones, C; Miley, G K; Rudnick, L; Sarazin, C L; Bonafede, A; Best, P N; Bîrzan, L; Cassano, R; Chyży, K T; Croston, J H; Ensslin, T; Ferrari, C; Hoeft, M; Horellou, C; Jarvis, M J; Kraft, R P; Mevius, M; Intema, H T; Murray, S S; Orrú, E; Pizzo, R; Simionescu, A; Stroe, A; van der Tol, S; White, G J
2016-01-01
LOFAR, the Low-Frequency Array, is a powerful new radio telescope operating between 10 and 240 MHz. LOFAR allows detailed, sensitive, high-resolution studies of the low-frequency radio sky. At the same time, LOFAR also provides excellent short-baseline coverage to map diffuse extended emission. However, producing high-quality deep images is challenging due to the presence of direction-dependent calibration errors, caused by imperfect knowledge of the station beam shapes and the ionosphere. Furthermore, the large data volume and the presence of station clock errors present additional difficulties. In this paper we present a new calibration scheme, which we name facet calibration, to obtain deep high-resolution LOFAR High Band Antenna images using the Dutch part of the array. This scheme solves and corrects the direction-dependent errors in a number of facets that cover the observed field of view. Facet calibration provides close to thermal-noise-limited images for a typical 8 hr observing run at $\sim$ 5 arcsec resolu...
Radiocarbon calibration - past, present and future
Plicht, J. van der E-mail: plicht@phys.rug.nl
2004-08-01
Calibration of the Radiocarbon timescale is traditionally based on tree-rings dated by dendrochronology. At present, the tree-ring curve dates back to about 9900 BC. Beyond this limit, marine datasets extend the present calibration curve INTCAL98 to about 15 600 years ago. Since 1998, a wealth of AMS measurements became available, covering the complete ¹⁴C dating range. No calibration curve can presently be recommended for the older part of the dating range until discrepancies are resolved.
Pastil, Luisa; Ventosa, Edgar A; Mingozzi, Ines; Dondi, Francesco
2006-05-01
A new procedure is presented for determining the calibration function able to relate retention and operative parameters to the molecular weight of the species in thermal field-flow fractionation (ThFFF) under thermal field programming (TFP) conditions. The procedure involves determining the average values of the retention parameters under TFP and determining a numerical function related to the temperature variations that occur during TFP. The calibration parameters are obtained by a procedure fitting the retention and operative parameters that hold true at the beginning of the TFP. The procedure is closely related to the one previously developed to calibrate the retention time axis under TFP ThFFF and, together, they constitute a full calibration procedure. Experimental validation was performed with reference to polystyrene (PS)-decalin and PS-THF systems. The calibration functions obtained here were compared to those derived by the classical procedure at constant thermal field ThFFF to obtain the calibration function at variable cold wall temperatures. Excellent agreement was found in all cases, proving the "universality" of the ThFFF calibration concept, i.e. it is independent of the particular system on which it was determined and can thus be extended to ThFFF operating under TFP. The new procedure is simpler than the classical one since it requires less precision in setting the instrumentation and can be completed with fewer experiments. Potential applications of the method are discussed.
An extended car-following model at signalized intersections
Yu, Shaowei; Shi, Zhongke
2014-08-01
To better simulate car-following behaviors when the traffic light is red, car-following data for three successive cars at a signalized intersection in Jinan, China were collected using a newly proposed data acquisition method and then analyzed to select the input variables of the extended car-following model. An extended car-following model considering two leading cars' accelerations was proposed on the basis of the full velocity difference model, then calibrated and verified with field data; a comparative model was also proposed and calibrated in the light of the GM model. The results indicate that the extended car-following model fits the measured data well and that its fitting precision is superior to that of the comparative model, with the mean absolute error reduced by 22.83%. Finally, a theoretical car-following model considering multiple leading cars' accelerations was put forward, which is potentially applicable to vehicle automation systems and vehicle safety early-warning systems; linear stability analysis and numerical simulations were then conducted to analyze some observed physical features of realistic traffic.
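A full velocity difference (FVD) model extended with the two leaders' acceleration terms can be sketched as follows. The coefficient values, the tanh-shaped optimal velocity function, and the specific form of the acceleration coupling are illustrative assumptions for this sketch, not the calibrated values from the paper.

```python
import numpy as np

# Illustrative (not calibrated) coefficients
KAPPA, LAM = 0.41, 0.5       # sensitivities of the two classic FVD terms
BETA1, BETA2 = 0.3, 0.1      # assumed weights of the two leaders' accelerations

def optimal_velocity(gap):
    """Common tanh-shaped optimal velocity function (illustrative constants)."""
    return 6.75 + 7.91 * np.tanh(0.13 * (gap - 5.0) - 1.57)

def follower_acceleration(gap, v, dv, a_lead1, a_lead2):
    """dv/dt of the following car: classic FVD terms (relaxation toward the
    optimal velocity plus velocity difference) extended with the accelerations
    of the nearest and second-nearest leading cars."""
    return (KAPPA * (optimal_velocity(gap) - v)
            + LAM * dv
            + BETA1 * a_lead1
            + BETA2 * a_lead2)
```

With a large gap and a slow follower the model accelerates; with a small gap and a fast follower it brakes, and a braking leader (negative a_lead1) strengthens the deceleration.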
Traceable Pyrgeometer Calibrations
Dooraghi, Mike; Kutchenreiter, Mark; Reda, Ibrahim; Habte, Aron; Sengupta, Manajit; Andreas, Afshin; Newman, Martina
2016-05-02
This poster presents the development, implementation, and operation of the Broadband Outdoor Radiometer Calibrations (BORCAL) Longwave (LW) system at the Southern Great Plains Radiometric Calibration Facility for the calibration of pyrgeometers that provide traceability to the World Infrared Standard Group.
Calibration of sound calibrators: an overview
Milhomem, T. A. B.; Soares, Z. M. D.
2016-07-01
This paper presents an overview of calibration of sound calibrators. Initially, traditional calibration methods are presented. Following, the international standard IEC 60942 is discussed emphasizing parameters, target measurement uncertainty and criteria for conformance to the requirements of the standard. Last, Regional Metrology Organizations comparisons are summarized.
On chromatic and geometrical calibration
Folm-Hansen, Jørgen
1999-01-01
The main subject of the present thesis is different methods for the geometrical and chromatic calibration of cameras in various environments. For the monochromatic issues of the calibration we present the acquisition of monochrome images, the classic monochrome aberrations and the various sources of non-uniformity of the illumination of the image plane. Only the image deforming aberrations and the non-uniformity of illumination are included in the calibration models. The topics of the pinhole camera model and the extension to the Direct Linear Transform (DLT) are described. It is shown how the DLT can be extended with non-linear models of the common lens aberrations/errors, some of them caused by manufacturing defects like decentering and thin prism distortion. The relation between a warping and the non-linear defects is shown. The issue of making a good resampling of an image by using...
Krueger, Joel; Szanto, Thomas
2016-01-01
Until recently, philosophers and psychologists conceived of emotions as brain- and body-bound affairs. But researchers have started to challenge this internalist and individualist orthodoxy. A rapidly growing body of work suggests that some emotions incorporate external resources and thus extend beyond the neurophysiological confines of organisms; some even argue that emotions can be socially extended and shared by multiple agents. Call this the extended emotions thesis (ExE). In this article, we consider different ways of understanding ExE in philosophy, psychology, and the cognitive sciences. First, we outline the background of the debate and discuss different argumentative strategies for ExE. In particular, we distinguish ExE from cognate but more moderate claims about the embodied and situated nature of cognition and emotion (Section 1). We then dwell upon two dimensions of ExE: emotions...
Müller, Ingo
1993-01-01
Physicists firmly believe that the differential equations of nature should be hyperbolic so as to exclude action at a distance; yet the equations of irreversible thermodynamics - those of Navier-Stokes and Fourier - are parabolic. This incompatibility between the expectation of physicists and the classical laws of thermodynamics has prompted the formulation of extended thermodynamics. After describing the motifs and early evolution of this new branch of irreversible thermodynamics, the authors apply the theory to monatomic gases, mixtures of gases, relativistic gases, and "gases" of phonons and photons. The discussion brings into perspective the various phenomena called second sound, such as heat propagation, propagation of shear stress and concentration, and the second sound in liquid helium. The formal mathematical structure of extended thermodynamics is exposed and the theory is shown to be fully compatible with the kinetic theory of gases. The study closes with the testing of extended thermodynamics thro...
BUNDLE ADJUSTMENTS CCD CAMERA CALIBRATION BASED ON COLLINEARITY EQUATION
Liu Changying; Yu Zhijing; Che Rensheng; Ye Dong; Huang Qingcheng; Yang Dingning
2004-01-01
A solid-template CCD camera calibration method using bundle adjustment based on the collinearity equation is presented, considering the characteristics of large-dimension on-line spatial measurement. In the method, a more comprehensive camera model is adopted, based on the pinhole model extended with distortion corrections. In the calibration process, calibration precision is improved by imaging at different locations in the whole measurement space, multiple imaging at the same location, and bundle adjustment optimization. The calibration experiment proves that the method is able to fulfill the calibration requirements of CCD cameras applied to vision measurement.
SURF Model Calibration Strategy
Menikoff, Ralph [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2017-03-10
SURF and SURFplus are high explosive reactive burn models for shock initiation and propagation of detonation waves. They are engineering models motivated by the ignition & growth concept of hot spots and, for SURFplus, a second slow reaction for the energy release from carbon clustering. A key feature of the SURF model is that there is a partial decoupling between model parameters and detonation properties. This enables reduced sets of independent parameters to be calibrated sequentially for the initiation and propagation regimes. Here we focus on a methodology for fitting the initiation parameters to Pop plot data based on 1-D simulations to compute a numerical Pop plot. In addition, the strategy for fitting the remaining parameters for the propagation regime and failure diameter is discussed.
Remarks on numerical semigroups
Torres, F
1995-01-01
We extend results on Weierstrass semigroups at ramified points of double coverings of curves to any numerical semigroup whose genus is large enough. As an application we strengthen the properties concerning Weierstrass weights in [To].
Franceschi, Alessandro
2014-01-01
This book is a clear, detailed and practical guide to designing and deploying your Puppet architecture, with informative examples that highlight and explain concepts in a focused manner. It is designed for users who already have good experience with Puppet, and will surprise experienced users with innovative topics that explore how to design, implement, adapt, and deploy a Puppet architecture. The key to extending Puppet is the development of types and providers, for which you must be familiar with Ruby.
Menouillard, T
2007-09-15
Computerized simulation is nowadays an integral part of the design and validation processes of mechanical structures. Simulation tools are increasingly capable, allowing a very accurate description of the phenomena. Moreover, these tools are not limited to linear mechanics but are developed to describe more difficult behaviours, for instance structural damage, which is of interest to the safety domain. A dynamic or static load can thus lead to damage, a crack and then rupture of the structure. Fast dynamics allows the simulation of 'fast' phenomena such as explosions, shocks and impacts on structures. The application domain is varied; it concerns, for instance, the study of the lifetime and accident scenarios of the nuclear reactor vessel. It is then very interesting, for fast dynamics codes, to be able to anticipate such phenomena in a robust and stable way: the assessment of damage in the structure and the simulation of crack propagation form an essential stake. The extended finite element method has the advantage of breaking away from mesh generation and from field projection during crack propagation. Effectively, the crack is described kinematically by an appropriate strategy of enrichment with supplementary degrees of freedom. Difficulties in connecting the spatial discretization of this method with the temporal discretization of an explicit calculation scheme have then been revealed; these difficulties are the diagonal writing of the mass matrix and the associated stability time step. Two methods of mass matrix diagonalization based on kinetic energy conservation are presented here, together with studies of critical time steps for various enriched finite elements. The interest revealed here is that the time step is no more penalizing than that of the standard finite element problem. Comparisons with numerical simulations on another code allow the validation of the theoretical work. A crack propagation test in mixed mode has been exploited in order to verify the simulation.
A computer game's player is experiencing not only the game as a designer-made artefact, but also a multitude of social and cultural practices and contexts of both computer game play and everyday life. As a truly multidisciplinary anthology, Extending Experiences sheds new light on the mesh of possibilities and influences the player engages with. Part one, Experiential Structures of Play, considers some of the key concepts commonly used to address the experience of a computer game player. The second part, Bordering Play, discusses conceptual and practical overlaps of games and everyday life...
Calibration of Nanopositioning Stages
Ning Tan
2015-12-01
Accuracy is one of the most important criteria for the performance evaluation of micro- and nanorobots or systems. Nanopositioning stages are used to achieve high positioning resolution and accuracy for a wide and growing scope of applications. However, their positioning accuracy and repeatability are not well known and are difficult to guarantee, which induces many drawbacks for many applications. For example, in the mechanical characterisation of biological samples, it is difficult to perform several cycles in a repeatable way so as not to induce negative influences on the study. It also prevents one from accurately controlling a tool with respect to a sample without adding additional sensors for closed-loop control. This paper aims at quantifying the positioning repeatability and accuracy based on the ISO 9283:1998 standard, and at analyzing factors influencing positioning accuracy, using a case study of a 1-DoF (degree of freedom) nanopositioning stage. The influence of thermal drift is notably quantified. Performance improvements of the nanopositioning stage are then investigated through robot calibration (i.e., an open-loop approach). Two models (static and adaptive) are proposed to compensate for both geometric errors and thermal drift. Validation experiments are conducted over a long period (several days), showing that the accuracy of the stage is improved from the typical micrometer range to 400 nm using the static model and even down to 100 nm using the adaptive model. In addition, we extend the 1-DoF calibration to multi-DoF with a case study of a 2-DoF nanopositioning robot. Results demonstrate that the model efficiently improved the 2D accuracy from 1400 nm to 200 nm.
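The idea of a static calibration model compensating both geometric error and thermal drift can be sketched with a linear least-squares fit. The linear error form, the coefficient values, and the variable names below are illustrative assumptions, not the model actually identified in the paper.

```python
import numpy as np

# Sketch: static calibration of a 1-DoF stage. Assume the positioning error
# follows err = a*x + b + c*(T - T0): a linear geometric (gain) term, an
# offset, and a linear thermal-drift term. Fit the coefficients by least
# squares on synthetic measurements, then compensate.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 100e-6, 50)              # commanded positions (m)
T = 20.0 + rng.normal(0.0, 0.5, x.size)       # stage temperature (deg C)

# Synthetic "true" error: 0.1% gain error, 50 nm offset, 80 nm/K drift
true_err = 1e-3 * x + 50e-9 + 80e-9 * (T - 20.0)
meas_err = true_err + rng.normal(0.0, 5e-9, x.size)   # 5 nm sensor noise

# Design matrix [x, 1, T - T0] and least-squares coefficient estimate
A = np.column_stack([x, np.ones_like(x), T - 20.0])
coef, *_ = np.linalg.lstsq(A, meas_err, rcond=None)

# Error remaining after open-loop compensation with the fitted model
residual = meas_err - A @ coef
```

After compensation the residual shrinks to roughly the sensor-noise level, which mirrors the micrometer-to-hundreds-of-nanometers improvement the abstract reports.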
Parallel Calibration for Sensor Array Radio Interferometers
Brossard, Martin; Pesavento, Marius; Boyer, Rémy; Larzabal, Pascal; Wijnholds, Stefan J
2016-01-01
In order to meet the theoretically achievable imaging performance, calibration of modern radio interferometers is a mandatory challenge, especially at low frequencies. In this perspective, we propose a novel parallel iterative multi-wavelength calibration algorithm. The proposed algorithm estimates the apparent directions of the calibration sources, the directional and undirectional complex gains of the array elements and their noise powers, with a reasonable computational complexity. Furthermore, the algorithm takes into account the specific variation of the aforementioned parameter values across wavelength. Realistic numerical simulations reveal that the proposed scheme outperforms the mono-wavelength calibration scheme and approaches the derived constrained Cramér-Rao bound even in the presence of non-calibration sources at unknown directions, in a computationally efficient manner.
Trinocular Calibration Method Based on Binocular Calibration
CAO Dan-Dan
2012-10-01
In order to solve the self-occlusion problem in plane-based multi-camera calibration systems and expand the measurement range, a tri-camera vision system based on binocular calibration is proposed. The three cameras are grouped into two pairs, with the common camera taken as the reference to build the global coordinate system. Global calibration is realized by calibrating the measured absolute distance against the true absolute distance. The MRE (mean relative error) of the global calibration of the two camera pairs in the experiments can be as low as 0.277% and 0.328% respectively. Experimental results show that this method is feasible, simple and effective, and has high precision.
Calibration of space instruments at the Metrology Light Source
Klein, R., E-mail: roman.klein@ptb.de; Fliegauf, R.; Gottwald, A.; Kolbe, M.; Paustian, W.; Reichel, T.; Richter, M.; Thornagel, R.; Ulm, G. [Physikalisch-Technische Bundesanstalt (PTB), Berlin (Germany)
2016-07-27
PTB has more than 20 years of experience in the calibration of space-based instruments using synchrotron radiation to cover the UV, VUV and X-ray spectral ranges. New instrumentation at the electron storage ring Metrology Light Source (MLS) opens up extended calibration possibilities within this framework. In particular, the set-up of a large vacuum vessel that can accommodate entire space instruments opens up new prospects. Moreover, a new facility for the calibration of radiation transfer source standards with a considerably extended spectral range has been put into operation. In addition, the characterization and calibration of single components, e.g. mirrors, filters, gratings, and detectors, is continued.
Carrara-Augustenborg, Claudia
2012-01-01
There is no consensus yet regarding a conceptualization of consciousness able to accommodate all the features of such a complex phenomenon. Different theoretical and empirical models lend strength both to the occurrence of a non-accessible informational broadcast and to the mobilization of specific brain areas responsible for the emergence of the individual's explicit and variable access to given segments of such broadcast. Rather than advocating one model over others, this chapter proposes to broaden the conceptualization of consciousness by letting it embrace both mechanisms. Within such an extended framework, I propose conceptual and functional distinctions between consciousness (global broadcast of information), awareness (the individual's ability to access the content of such broadcast) and unconsciousness (focally isolated neural activations). My hypothesis is that a demarcation in terms...
Numerical Modeling of Piezoelectric Transducers Using Physical Parameters
Cappon, H.; Keesman, K.J.
2012-01-01
Design of ultrasonic equipment is frequently facilitated with numerical models. These numerical models, however, need a calibration step, because usually not all characteristics of the materials used are known. Characterization of material properties combined with numerical simulations and experiments...
Sensor modelling and camera calibration for close-range photogrammetry
Luhmann, Thomas; Fraser, Clive; Maas, Hans-Gerd
2016-05-01
Metric calibration is a critical prerequisite to the application of modern, mostly consumer-grade digital cameras for close-range photogrammetric measurement. This paper reviews aspects of sensor modelling and photogrammetric calibration, with attention being focussed on techniques of automated self-calibration. Following an initial overview of the history and the state of the art, selected topics of current interest within calibration for close-range photogrammetry are addressed. These include sensor modelling, with standard, extended and generic calibration models being summarised, along with non-traditional camera systems. Self-calibration via both targeted planar arrays and targetless scenes amenable to SfM-based exterior orientation are then discussed, after which aspects of calibration and measurement accuracy are covered. Whereas camera self-calibration is largely a mature technology, there is always scope for additional research to enhance the models and processes employed with the many camera systems nowadays utilised in close-range photogrammetry.
Astrid-2 EMMA Magnetic Calibration
Merayo, José M.G.; Brauer, Peter; Risbo, Torben
1998-01-01
The Swedish micro-satellite Astrid-2 contains a tri-axial fluxgate magnetometer with the sensor co-located with a Technical University of Denmark (DTU) star camera for absolute attitude, extended about 0.9 m on a hinged boom. The magnetometer is part of the RIT EMMA electric and magnetic fields experiment built as a collaboration between the DTU Department of Automation and the Department of Plasma Physics, the Alfvén Laboratory, Royal Institute of Technology (RIT), Stockholm. The final magnetic calibration of the Astrid-2 satellite was done at the Lovoe Magnetic Observatory under the Geological Survey of Sweden. The relation between the magnetometer orthogonalized axes and the star camera optical axes was determined from the observed stellar coordinates related to the Earth magnetic field from the Magnetic Observatory. The magnetic calibration of the magnetometer integrated into the flight-configured satellite was done in the (almost...
Brax, Philippe
2015-01-01
We extend the chameleon models by considering Scalar-Fluid theories where the coupling between matter and the scalar field can be represented by a quadratic effective potential with density-dependent minimum and mass. In this context, we study the effects of the scalar field on Solar System tests of gravity and show that models passing these stringent constraints can still induce large modifications of Newton's law on galactic scales. On these scales we analyse models which could lead to a percent deviation of Newton's law outside the virial radius. We then model the dark matter halo as a Navarro-Frenk-White profile and explicitly find that the fifth force can give large contributions around the galactic core in a particular model where the scalar field mass is constant and the minimum of its potential varies linearly with the matter density. At cosmological distances, we find that this model does not alter the growth of large scale structures and therefore would be best tested on galactic scales, where inter...
Design Analysis for Optimal Calibration of Diffusivity in Reactive Multilayers
Vohra, Manav; Weihs, Timothy P; Knio, Omar M
2016-01-01
Calibration of the uncertain Arrhenius diffusion parameters for quantifying mixing rates in Zr-Al nanolaminate foils was performed in a Bayesian setting [Vohra et al., 2014]. The parameters were inferred in a low temperature regime characterized by homogeneous ignition and a high temperature regime characterized by self-propagating reactions in the multilayers. In this work, we extend the analysis to find optimal experimental designs that would provide the best data for inference. We employ a rigorous framework that quantifies the expected information gain in an experiment, and find the optimal design conditions using numerical techniques of Monte Carlo, sparse quadrature, and polynomial chaos surrogates. For the low temperature regime, we find the optimal foil heating rate and pulse duration, and confirm through simulation that the optimal design indeed leads to sharper posterior distributions of the diffusion parameters. For the high temperature regime, we demonstrate potential for increase in the expecte...
Gonta, Igor; Williams, Earle
1994-05-01
Benjamin Franklin devised a simple yet intriguing device to measure electrification in the atmosphere during conditions of foul weather. He constructed a system of bells, one of which was attached to a conductor that was suspended vertically above his house. The device is illustrated in a well-known painting of Franklin (Cohen, 1985). The elevated conductor acquired a potential due to the electric field in the atmosphere and caused a brass ball to oscillate between two bells. The purpose of this study is to extend Franklin's idea by constructing a set of 'chimes' which will operate both in fair and in foul weather conditions. In addition, a mathematical relationship will be established between the frequency of oscillation of a metallic sphere in a simplified geometry and the potential on one plate due to the electrification of the atmosphere. Thus it will be possible to calibrate the 'Franklin Chimes' and to obtain a nearly instantaneous measurement of the potential of the elevated conductor in both fair and foul weather conditions.
Fixsen, D. J.; Chuss, D. T.; Kogut, Alan; Mirel, Paul; Wollack, E. J.
2016-07-01
The FIRAS instrument demonstrated the use of an external calibrator to compare the sky to an instrumented blackbody. The PIXIE calibrator is improved from -35 dB to -65 dB. Another significant improvement is the ability to insert the calibrator into either input of the FTS. This allows detection and correction of additional errors, reduces the effective calibration noise by a factor of 2, eliminates an entire class of systematics and allows continuous observations. This paper presents the design and use of the PIXIE calibrator.
Calibration of Geodetic Instruments
Marek Bajtala
2005-06-01
The problems of metrology and the requirements of unification, correctness and reproducibility of standards belong to the preferred requirements of theory and technical practice in geodesy. Requirements on the control and verification of measuring instruments and equipment increase, and the importance and timeliness of calibration come to the foreground. The paper covers: calibration possibilities for the length scales (of electronic rangefinders) and angle scales (of horizontal circles) of geodetic instruments; calibration of electronic rangefinders on a linear comparative baseline in the terrain; the primary standard of the plane angle, the optical traverse, and its exploitation for the calibration of the horizontal circles of theodolites; the calibration equipment of the Institute of Slovak Metrology in Bratislava; and the calibration process and results from the calibration of horizontal circles of selected geodetic instruments.
Overnight Index Rate: Model, calibration and simulation
Olga Yashkir; Yuri Yashkir
2014-01-01
In this study, the extended Overnight Index Rate (OIR) model is presented. The fitting function for the probability distribution of the OIR daily returns is based on three different Gaussian distributions which provide modelling of the narrow central peak and the wide fat-tailed component. The calibration algorithm for the model is developed and investigated using the historical OIR data.
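The three-Gaussian fit of the daily-return distribution can be sketched with a plain EM loop. In this sketch the component means are pinned at zero so that the components model a narrow central peak and progressively wider fat-tail parts; that zero-mean restriction, the initial values, and the function names are assumptions for illustration, not the paper's calibration algorithm.

```python
import numpy as np

def fit_scale_mixture(r, sigmas_init, weights_init, n_iter=200):
    """Fit a zero-mean three-component Gaussian mixture to returns r by EM.
    Returns the component weights and standard deviations."""
    sig = np.array(sigmas_init, float)
    w = np.array(weights_init, float)
    for _ in range(n_iter):
        # E-step: responsibility of each component for each observed return
        dens = w * np.exp(-0.5 * (r[:, None] / sig) ** 2) / (sig * np.sqrt(2 * np.pi))
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: update mixture weights and per-component variances
        w = resp.mean(axis=0)
        sig = np.sqrt((resp * r[:, None] ** 2).sum(axis=0) / resp.sum(axis=0))
    return w, sig
```

On synthetic returns drawn from a known three-sigma mixture, the loop recovers the narrow, medium and wide components, which is the qualitative behavior the calibrated OIR model relies on.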
Calibration and equivalency analysis of image plate scanners
Williams, G. Jackson, E-mail: williams270@llnl.gov; Maddox, Brian R.; Chen, Hui [Lawrence Livermore National Laboratory, 7000 East Avenue, Livermore, California 94550 (United States); Kojima, Sadaoki [Institute of Laser Engineering, Osaka University, Yamada-oka, 2-6, Suita, Osaka 565-0871 (Japan); Millecchia, Matthew [Laboratory for Laser Energetics, University of Rochester, 250 East River Road, Rochester, New York 14623 (United States)
2014-11-15
A universal procedure was developed to calibrate image plate scanners using radioisotope sources. Techniques to calibrate scanners and sources, as well as cross-calibrate scanner models, are described to convert image plate dosage into physical units. This allows for the direct comparison of quantitative data between any facility and scanner. An empirical relation was also derived to establish sensitivity response settings for arbitrary gain settings. In practice, these methods may be extended to any image plate scanning system.
Extending LMS to Support IRT-Based Assessment Test Calibration
Fotaris, Panagiotis; Mastoras, Theodoros; Mavridis, Ioannis; Manitsaris, Athanasios
Developing unambiguous and challenging assessment material for measuring educational attainment is a time-consuming, labor-intensive process. As a result, Computer Aided Assessment (CAA) tools are becoming widely adopted in academic environments in an effort to improve the assessment quality and deliver reliable results of examinee performance. This paper introduces a methodological and architectural framework which embeds a CAA tool in a Learning Management System (LMS) so as to assist test developers in refining items to constitute assessment tests. An Item Response Theory (IRT) based analysis is applied to a dynamic assessment profile provided by the LMS. Test developers define a set of validity rules for the statistical indices given by the IRT analysis. By applying those rules, the LMS can detect items with various discrepancies, which are then flagged for review of their content. Repeatedly executing the aforementioned procedure can improve the overall efficiency of the testing process.
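The rule-based flagging step can be sketched as a filter over per-item IRT indices. The thresholds, field names, and the use of 2PL/3PL indices (discrimination a, difficulty b, pseudo-guessing c) are illustrative assumptions; a real LMS integration would take both the indices and the validity rules from the IRT analysis configured by the test developers.

```python
# Sketch: apply test-developer validity rules to IRT item statistics and
# flag items whose content should be reviewed. All thresholds are assumed.
def flag_items(items, a_min=0.5, b_range=(-3.0, 3.0), c_max=0.35):
    """items: list of dicts with keys 'id', 'a' (discrimination),
    'b' (difficulty), and optionally 'c' (pseudo-guessing)."""
    flagged = []
    for it in items:
        reasons = []
        if it["a"] < a_min:
            reasons.append("low discrimination")
        if not (b_range[0] <= it["b"] <= b_range[1]):
            reasons.append("extreme difficulty")
        if it.get("c", 0.0) > c_max:
            reasons.append("high guessing")
        if reasons:
            flagged.append((it["id"], reasons))
    return flagged
```

Running this after each assessment cycle reproduces the iterative refine-and-review loop the framework describes.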
Distributed Radio Interferometric Calibration
Yatawatta, Sarod
2015-01-01
Increasing data volumes delivered by a new generation of radio interferometers require computationally efficient and robust calibration algorithms. In this paper, we propose distributed calibration as a way of improving both computational cost and robustness in calibration. We exploit the data parallelism across frequency that is inherent in radio astronomical observations that are recorded as multiple channels at different frequencies. Moreover, we also exploit the smoothness of the variation of calibration parameters across frequency. Data parallelism enables us to distribute the computing load across a network of compute agents. Smoothness in frequency enables us to reformulate calibration as a consensus optimization problem. With this formulation, we enable the flow of information between compute agents calibrating data at different frequencies, without actually passing the data, and thereby improve robustness. We present simulation results to show the feasibility as well as the advantages of distribute...
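The consensus idea can be illustrated with a toy fusion step: each agent holds a local gain estimate at its own frequency, and a shared smoothness model (here a low-order polynomial across frequency) pulls the local estimates toward agreement without the agents exchanging raw data. This is a simplified stand-in for the paper's consensus optimization, with the polynomial order, step size, and round count chosen arbitrarily for illustration.

```python
import numpy as np

def consensus_smooth(freqs, local_gains, order=2, n_rounds=5, rho=0.5):
    """Toy consensus step: repeatedly fit a low-order polynomial across
    frequency (the shared 'consensus' model) and relax each per-channel
    estimate toward it. Only the estimates, not the data, are shared."""
    g = np.array(local_gains, float)
    for _ in range(n_rounds):
        coef = np.polyfit(freqs, g, order)       # global smooth model
        g = (1 - rho) * g + rho * np.polyval(coef, freqs)
    return g
```

Because the true gains vary smoothly with frequency, the fused estimates end up closer to the truth than the independent per-channel solutions, which is the robustness gain the abstract describes.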
Mureşan, Ioana Cristina; Bâlc, Roxana
2017-07-01
This paper presents an experimental investigation of a statically, monotonically loaded extended end-plate connection with preloaded high-strength bolts, carried out at the Laboratory of the Faculty of Civil Engineering in Cluj-Napoca. A finite element model using the software package Abaqus [1] was developed in parallel. In order to calibrate the numerical model, the results were analyzed on the basis of moment-rotation curves, the stress distribution state and the failure mode of the connection. A study was then conducted on the numerical model by using high strength steel (HSS) and changing the stiffness and strength characteristics of some elements. Validation of the numerical modeling was performed against the experimental results, and good agreement exists in general.
A Novel Calibrator for Electronic Transformers Based on IEC 61850
Baoxiang PAN
2013-01-01
An electronic transformer must be calibrated before it is put into service. To address problems encountered in the actual calibration process, a novel electronic transformer calibrator is designed. In principle, the system adopts both the direct method and the difference method, the two most common approaches to electronic transformer calibration; this extends the system's range of application and improves its reliability. In the system design, based on virtual instrument technology, LabVIEW and the WinPcap toolkit are used to develop the application software, and the system can calibrate electronic transformers that follow the IEC 61850 standard. In the calculation of ratio and phase error based on the fast Fourier transform, a new window function is introduced, which improves the calibration accuracy in the presence of frequency variation. This research provides theoretical support and a practical reference for the development of intelligent calibrators for electronic transformers.
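The windowed-FFT ratio/phase-error computation described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: a Hann window stands in for the (unspecified) new window function, and all signal parameters are illustrative.

```python
import numpy as np

def ratio_phase_error(ref, sec, rated_ratio, fs, f0=50.0):
    """Ratio error (%) and phase error (rad) of a transformer channel,
    from sampled primary (ref) and secondary (sec) waveforms.

    A Hann window (stand-in for the paper's custom window) limits
    spectral leakage when the grid frequency drifts away from f0."""
    n = len(ref)
    w = np.hanning(n)
    R = np.fft.rfft(ref * w)              # windowed FFT, reference channel
    S = np.fft.rfft(sec * w)              # windowed FFT, device under test
    k = int(round(f0 * n / fs))           # FFT bin nearest the fundamental
    # Standard definitions: eps = (K_r*U2 - U1)/U1 * 100%, dphi = phi2 - phi1
    eps = 100.0 * (rated_ratio * np.abs(S[k]) - np.abs(R[k])) / np.abs(R[k])
    dphi = np.angle(S[k]) - np.angle(R[k])
    return eps, dphi

# Synthetic check: secondary is 0.1% high in ratio and 1 mrad behind
fs, n = 10000.0, 2000
t = np.arange(n) / fs
ref = np.sin(2 * np.pi * 50.0 * t)
sec = (1.001 / 100.0) * np.sin(2 * np.pi * 50.0 * t - 0.001)
eps, dphi = ratio_phase_error(ref, sec, rated_ratio=100.0, fs=fs)
```

Because the same window is applied to both channels, window-induced amplitude scaling and phase offsets cancel in the ratio and in the phase difference.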
Increased Automation in Stereo Camera Calibration Techniques
Brandi House
2006-08-01
Robotic vision has become a very popular field in recent years due to the numerous promising applications it may enhance. However, errors within the cameras and in their perception of their environment can cause applications in robotics to fail. To help correct these internal and external imperfections, stereo camera calibrations are performed. There are currently many accurate methods of camera calibration available; however, most or all of them are time consuming and labor intensive. This research seeks to automate the most labor intensive aspects of a popular calibration technique developed by Jean-Yves Bouguet. His process requires manual selection of the extreme corners of a checkerboard pattern. The modified process uses embedded LEDs in the checkerboard pattern to act as active fiducials. Images are captured of the checkerboard with the LEDs on and off in rapid succession. The difference of the two images automatically highlights the location of the four extreme corners, and these corner locations take the place of the manual selections. With this modification to the calibration routine, upwards of eighty mouse clicks are eliminated per stereo calibration. Preliminary test results indicate that accuracy is not substantially affected by the modified procedure. Improved automation to camera calibration procedures may finally penetrate the barriers to the use of calibration in practice.
Kent, S. M.
2016-05-01
This paper presents a broad overview of the many issues involved in calibrating astronomical data, covering the full electromagnetic spectrum from radio waves to gamma rays, and considering both ground-based and space-based missions. These issues include the science drivers for absolute and relative calibration, the physics behind calibration and the mechanisms used to transfer it from the laboratory to an astronomical source, the need for networks of calibrated astronomical standards, and some of the challenges faced by large surveys and missions.
Sequential intrinsic and extrinsic geometry calibration in fluoro CT imaging with a mobile C-arm
Cheryauka, Arvi; Breham, Sebastien; Christensen, Wayne
2006-03-01
Design of C-arm equipment with 3D imaging capabilities involves retrieval of repeatable gantry positioning information along the acquisition trajectory. Inaccurate retrieval or improper use of positioning information may degrade the reconstruction results, produce image artifacts, or indicate false structures. Geometry misrepresentation can also lead to errors in the relative pose assessment of the anatomy of interest and interventional tools. Comprehensive C-arm gantry calibration with an extended set of misalignment and motion parameters suffers from ambiguity caused by parameter cross-correlation and from significant computational complexity. We deploy the concept of a waterfall calibration comprising sequential intrinsic and extrinsic geometry delineation steps. Following the image-based framework, the first step in our method is intrinsic calibration, which delineates the geometry of the X-ray tube-detector assembly. Extrinsic parameters define the motion of the C-arm assembly in 3D space and relate the camera and world coordinate systems. We formulate both intrinsic and extrinsic calibration problems in vectorized form with total variation constraints. The proposed method has been verified by numerical design and validated by experimental studies. Sequential delineation of intrinsic and extrinsic geometries has demonstrated very efficient performance. The method eliminates the cross-correlation between cone-beam projection parameters, provides significantly better accuracy and computational speed, simplifies the structure of the calibration targets used, and avoids unnecessary workflow and image processing steps. It appears adequate for quality and cost derivations in interventional surgery settings using a mobile C-arm.
Handheld temperature calibrator
Martella, Melanie
2003-01-01
... you sign on. What are you waiting for? JOFRA ETC Series dry-block calibrators from AMETEK Test & Calibration Instruments, Largo, FL, are small enough to be handheld and feature easy-to-read displays, multiple bore blocks, programmable test setup, RS-232 communications, and software. Two versions are available: the ETC 125A that ranges from -10[degrees]C to 125[d...
Markham, Brian; Morfitt, Ron; Kvaran, Geir; Biggar, Stuart; Leisso, Nathan; Czapla-Myers, Jeff
2011-01-01
Goals: (1) Present an overview of the pre-launch radiance, reflectance & uniformity calibration of the Operational Land Imager (OLI) (1a) Transfer to orbit/heliostat (1b) Linearity (2) Discuss on-orbit plans for radiance, reflectance and uniformity calibration of the OLI
WFPC2 Polarization Calibration
Biretta, J.; McMaster, M.
1997-12-01
We derive a detailed calibration for WFPC2 polarization data which is accurate to about 1.5%. We begin by computing polarizer flats, and show how they are applied to data. A physical model for the polarization effects of the WFPC2 optics is then created using Mueller matrices. This model includes corrections for the instrumental polarization (diattenuation and phase retardance) of the pick-off mirror, as well as the high cross-polarization transmission of the polarizer filter. We compare this model against the on-orbit observations of polarization calibrators, and show it predicts relative counts in the different polarizer/aperture settings to 1.5% RMS accuracy. We then show how this model can be used to calibrate GO data, and present two WWW tools which allow observers to easily calibrate their data. Detailed examples are given illustrating the calibration and display of WFPC2 polarization data. In closing we describe future plans and possible improvements.
Sandia WIPP calibration traceability
Schuhen, M.D. [Sandia National Labs., Albuquerque, NM (United States); Dean, T.A. [RE/SPEC, Inc., Albuquerque, NM (United States)
1996-05-01
This report summarizes the work performed to establish calibration traceability for the instrumentation used by Sandia National Laboratories at the Waste Isolation Pilot Plant (WIPP) during testing from 1980-1985. Identifying the calibration traceability is an important part of establishing a pedigree for the data and is part of the qualification of existing data. In general, the requirement states that the calibration of measuring and test equipment must have a valid relationship to nationally recognized standards, or the basis for the calibration must be documented. Sandia recognized that just establishing calibration traceability would not necessarily mean that all QA requirements were met during the certification of test instrumentation. To address this concern, the assessment was expanded to include various activities.
Numerical Methods for Multilattices
Abdulle, Assyr; Shapeev, Alexander V
2011-01-01
Among the efficient numerical methods based on atomistic models, the quasicontinuum (QC) method has attracted growing interest in recent years. The QC method was first developed for crystalline materials with Bravais lattice and was later extended to multilattices (Tadmor et al., 1999). Another existing numerical approach to modeling multilattices is homogenization. In the present paper we review the existing numerical methods for multilattices and propose another concurrent macro-to-micro method in the homogenization framework. We give a unified mathematical formulation of the new and the existing methods and show their equivalence. We then consider extensions of the proposed method to time-dependent problems and to random materials.
Astronomical calibration of the Maastrichtian (Late Cretaceous)
Husson, Dorothée; Galbrun, Bruno; Laskar, Jacques;
2011-01-01
Recent improvements to astronomical modeling of the Solar System have contributed to important refinements of the Cenozoic time scale through astronomical calibration of sedimentary series. We extend this astronomical calibration into the Cretaceous, on the basis of the 405 ka orbital eccentricity......, with the presence of cycles corresponding to forcing by precession, obliquity and orbital eccentricity variations. Identification of these cycles leads to the definition of a detailed cyclostratigraphic frame covering nearly 8 Ma, from the upper Campanian to the Cretaceous/Paleogene (K/Pg) boundary. Durations...
Cumulative sum quality control for calibrated breast density measurements
Heine, John J.; Cao Ke; Beam, Craig [Cancer Prevention and Control Division, Moffitt Cancer Center, 12902 Magnolia Drive, Tampa, Florida 33612 (United States); Division of Epidemiology and Biostatistics, School of Public Health, University of Illinois at Chicago, 1603 W. Taylor St., Chicago, Illinois 60612 (United States)
2009-12-15
Purpose: Breast density is a significant breast cancer risk factor. Although various methods are used to estimate breast density, there is no standard measurement for this important factor. The authors are developing a breast density standardization method for use in full field digital mammography (FFDM). The approach calibrates for interpatient acquisition technique differences. The calibration produces a normalized breast density pixel value scale. The method relies on first generating a baseline (BL) calibration dataset, which required extensive phantom imaging. Standardizing prospective mammograms with calibration data generated in the past could introduce unanticipated error in the standardized output if the calibration dataset is no longer valid. Methods: Sample points from the BL calibration dataset were imaged approximately biweekly over an extended timeframe. These serial samples were used to evaluate the BL dataset reproducibility and quantify the serial calibration accuracy. The cumulative sum (Cusum) quality control method was used to evaluate the serial sampling. Results: There is considerable drift in the serial sample points from the BL calibration dataset that is x-ray beam dependent. Systematic deviation from the BL dataset caused significant calibration errors. This system drift was not captured with routine system quality control measures. Cusum analysis indicated that the drift is a sign of system wear and eventual x-ray tube failure. Conclusions: The BL calibration dataset must be monitored and periodically updated, when necessary, to account for sustained system variations to maintain the calibration accuracy.
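The tabular (two one-sided) Cusum scheme used in quality control of this kind can be sketched as follows. This is a generic textbook formulation, not the authors' FFDM implementation; the reference value k, decision interval h, and the data are illustrative.

```python
def tabular_cusum(x, target, sigma, k=0.5, h=5.0):
    """One-sided tabular CUSUM pair (C+, C-) flagging sustained drift of
    serial calibration samples away from a baseline target.

    k (reference value) and h (decision interval) are expressed in units
    of the baseline standard deviation sigma; 0.5 and 5 are common defaults."""
    cp = cm = 0.0
    alarms = []
    for i, xi in enumerate(x):
        z = (xi - target) / sigma
        cp = max(0.0, cp + z - k)     # accumulates upward shifts
        cm = max(0.0, cm - z - k)     # accumulates downward shifts
        if cp > h or cm > h:
            alarms.append(i)          # sustained drift detected at index i
    return alarms

# In-control samples followed by a sustained +1.5 sigma drift at index 20
x = [0.0] * 20 + [1.5] * 20
alarms = tabular_cusum(x, target=0.0, sigma=1.0)
```

Unlike a simple control chart, the cumulative sums respond to small but persistent shifts, which is why the scheme catches slow system wear that routine per-point quality checks miss.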
Frederix, Rikkert
2015-01-01
We consider improving POWHEG+MINLO simulations, so as to also render them NLO accurate in the description of observables receiving contributions from events with lower parton multiplicity than present in their underlying NLO calculation. On a conceptual level we follow the strategy of the so-called MINLO' programs. Whereas the existing MINLO' framework requires explicit analytic input from higher order resummation, here we derive an effective numerical approximation to these ingredients, by imposing unitarity. This offers a way of extending the MINLO' method to more complex processes, complementary to the known route which uses explicit computations of high-accuracy resummation inputs. Specifically, we have focused on Higgs-plus-two-jet production (HJJ) and related processes. We also consider how one can cover three units of multiplicity at NLO accuracy, i.e. we consider how the HJJ-MINLO simulation may yield NLO accuracy for inclusive H, HJ, and HJJ quantities. We perform a feasibility study assessing the po...
Segment Based Camera Calibration
Ma Songde; Wei Guoqing; et al.
1993-01-01
The basic idea of calibrating a camera system in previous approaches is to determine camera parameters by using a set of known 3D points as calibration reference. In this paper, we present a method of camera calibration in which camera parameters are determined by a set of 3D lines. A set of constraints on camera parameters is derived in terms of perspective line mapping. From these constraints, the same perspective transformation matrix as that for point mapping can be computed linearly. The minimum number of calibration lines is 6. This result generalizes that of Liu, Huang and Faugeras [12] for camera location determination, in which at least 8 line correspondences are required for linear computation of camera location. Since line segments in an image can be located more easily and accurately than points, the use of lines as calibration reference tends to ease the computation in image preprocessing and to improve calibration accuracy. Experimental results on the calibration along with stereo reconstruction are reported.
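The linear computation from line correspondences can be sketched with the standard formulation: if l is the homogeneous image line of a 3D line, then l^T P X = 0 for every point X on that line, so two points per line yield two linear equations in the twelve entries of P, and six lines determine P up to scale. The synthetic data and solver details below are illustrative, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)
P_true = rng.standard_normal((3, 4))      # hypothetical ground-truth camera

rows = []
for _ in range(6):                        # the minimum of 6 calibration lines
    X1 = np.append(rng.standard_normal(3), 1.0)   # two homogeneous 3D points
    X2 = np.append(rng.standard_normal(3), 1.0)   # spanning each 3D line
    l = np.cross(P_true @ X1, P_true @ X2)        # image line through projections
    # l^T P X = 0 for both points: coefficient of P[i,j] is l[i]*X[j],
    # which in row-major vec(P) order is exactly kron(l, X)
    rows.append(np.kron(l, X1))
    rows.append(np.kron(l, X2))

A = np.asarray(rows)                      # 12 x 12 homogeneous system
_, _, Vt = np.linalg.svd(A)
P_est = Vt[-1].reshape(3, 4)              # null vector = vec(P) up to scale

# Fix the arbitrary sign and scale before comparing with the ground truth
P_est *= np.sign(P_est[0, 0]) * np.sign(P_true[0, 0])
P_est *= np.linalg.norm(P_true) / np.linalg.norm(P_est)
err = np.max(np.abs(P_est - P_true))
```

With noise-free synthetic lines the recovered matrix matches the ground truth to numerical precision; with real image lines one would solve the same system in a least-squares sense.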
A Review of Sensor Calibration Monitoring for Calibration Interval Extension in Nuclear Power Plants
Coble, Jamie B.; Meyer, Ryan M.; Ramuhalli, Pradeep; Bond, Leonard J.; Hashemian, Hash; Shumaker, Brent; Cummins, Dara
2012-08-31
Currently in the United States, periodic sensor recalibration is required for all safety-related sensors, typically occurring at every refueling outage, and it has emerged as a critical path item for shortening outage duration in some plants. Online monitoring can be employed to identify those sensors that require calibration, allowing for calibration of only those sensors that need it. International application of calibration monitoring, such as at the Sizewell B plant in United Kingdom, has shown that sensors may operate for eight years, or longer, within calibration tolerances. This issue is expected to also be important as the United States looks to the next generation of reactor designs (such as small modular reactors and advanced concepts), given the anticipated longer refueling cycles, proposed advanced sensors, and digital instrumentation and control systems. The U.S. Nuclear Regulatory Commission (NRC) accepted the general concept of online monitoring for sensor calibration monitoring in 2000, but no U.S. plants have been granted the necessary license amendment to apply it. This report presents a state-of-the-art assessment of online calibration monitoring in the nuclear power industry, including sensors, calibration practice, and online monitoring algorithms. This assessment identifies key research needs and gaps that prohibit integration of the NRC-approved online calibration monitoring system in the U.S. nuclear industry. Several needs are identified, including the quantification of uncertainty in online calibration assessment; accurate determination of calibration acceptance criteria and quantification of the effect of acceptance criteria variability on system performance; and assessment of the feasibility of using virtual sensor estimates to replace identified faulty sensors in order to extend operation to the next convenient maintenance opportunity. Understanding the degradation of sensors and the impact of this degradation on signals is key to
Gómez Arranz, Paula; Vesth, Allan
This report describes the site calibration carried out at Østerild during a given period. The site calibration was performed with two Windcube WLS7 (v1) lidars at ten measurement heights. The lidar is not a sensor approved by the current version of the IEC 61400-12-1 [1] and therefore the site...... calibration with lidars does not comply with the standard. However, the measurements are carried out following the guidelines of IEC 61400-12-1 where possible, but with some deviations presented in the following chapters....
Fernandez Garcia, Sergio; Villanueva, Héctor
This report presents the result of the lidar to lidar calibration performed for ground-based lidar. Calibration is here understood as the establishment of a relation between the reference lidar wind speed measurements with measurement uncertainties provided by measurement standard and corresponding...... lidar wind speed indications with associated measurement uncertainties. The lidar calibration concerns the 10 minute mean wind speed measurements. The comparison of the lidar measurements of the wind direction with that from the reference lidar measurements are given for information only....
Georgieva Yankova, Ginka; Courtney, Michael
This report presents the result of the lidar to lidar calibration performed for ground-based lidar. Calibration is here understood as the establishment of a relation between the reference lidar wind speed measurements with measurement uncertainties provided by measurement standard and corresponding...... lidar wind speed indications with associated measurement uncertainties. The lidar calibration concerns the 10 minute mean wind speed measurements. The comparison of the lidar measurements of the wind direction with that from the reference lidar measurements are given for information only....
Calibration Fixture For Anemometer Probes
Lewis, Charles R.; Nagel, Robert T.
1993-01-01
Fixture facilitates calibration of three-dimensional sideflow thermal anemometer probes. With fixture, probe oriented at number of angles throughout its design range. Readings calibrated as function of orientation in airflow. Calibration repeatable and verifiable.
Design and Implementation of A Circuit Board Calibration System
Bai Hang
2016-01-01
With the development of science and technology, traditional manual detection methods cannot meet the requirements of modern equipment testing and calibration. Starting from actual demand, a circuit board calibration system is put forward that realizes automatic testing and calibration of circuit boards. The system's main functions, such as automatic testing, self-test, and monitoring, are summarized. The hardware, including the industrial computer system and the calibration adapter, is introduced. The development platform, the program design approach, and the structure of the software are then described in detail. Automatic calibration of specific circuit boards is realized. Because the system is general and easy to extend and upgrade, its development ideas and experience can be applied to similar automatic circuit board testing systems.
Courtney, Michael
Nacelle mounted, forward looking wind lidars are beginning to be used to provide reference wind speed measurements for the power performance testing of wind turbines. In such applications, a formal calibration procedure with a corresponding uncertainty assessment will be necessary. This report...... presents four concepts for performing such a nacelle lidar calibration. Of the four methods, two are found to be immediately relevant and are pursued in some detail. The first of these is a line of sight calibration method in which both lines of sight (for a two beam lidar) are individually calibrated...... a representative distribution of radial wind speeds. An alternative method is to place the nacelle lidar on the ground and incline the beams upwards to bisect a mast equipped with reference instrumentation at a known height and range. This method will be easier and faster to implement and execute but the beam...
U.S. Environmental Protection Agency — an UV calibration curve for SRHA quantitation. This dataset is associated with the following publication: Chang, X., and D. Bouchard. Surfactant-Wrapped Multiwalled...
Federal Laboratory Consortium — This facility is for low altitude subsonic altimeter system calibrations of air vehicles. Mission is a direct support of the AFFTC mission. Postflight data merge is...
Patterson E.
2010-06-01
Full Text Available The results are presented using the procedure outlined by the Standardisation Project for Optical Techniques of Strain measurement to calibrate a digital image correlation system. The process involves comparing the experimental data obtained with the optical measurement system to the theoretical values for a specially designed specimen. The standard states the criteria which must be met in order to achieve successful calibration, in addition to quantifying the measurement uncertainty in the system. The system was evaluated at three different displacement load levels, generating strain ranges from 289 µstrain to 2110 µstrain. At the 289 µstrain range, the calibration uncertainty was found to be 14.1 µstrain, and at the 2110 µstrain range it was found to be 28.9 µstrain. This calibration procedure was performed without painting a speckle pattern on the surface of the metal. Instead, the specimen surface was prepared using different grades of grit paper to produce the desired texture.
C. Ahlers; H. Liu
2000-03-12
The purpose of this Analysis/Model Report (AMR) is to document the Calibrated Properties Model that provides calibrated parameter sets for unsaturated zone (UZ) flow and transport process models for the Yucca Mountain Site Characterization Project (YMP). This work was performed in accordance with the ''AMR Development Plan for U0035 Calibrated Properties Model REV00''. These calibrated property sets include matrix and fracture parameters for the UZ Flow and Transport Model (UZ Model), drift seepage models, drift-scale and mountain-scale coupled-processes models, and Total System Performance Assessment (TSPA) models, as well as for Performance Assessment (PA) and other participating national laboratories and government agencies. These process models provide the necessary framework to test conceptual hypotheses of flow and transport at different scales and predict flow and transport behavior under a variety of climatic and thermal-loading conditions.
Traceable Pyrgeometer Calibrations
Dooraghi, Mike; Kutchenreiter, Mark; Reda, Ibrahim; Habte, Aron; Sengupta, Manajit; Andreas, Afshin; Newman, Martina; Webb, Craig
2016-05-02
This presentation provides a high-level overview of the progress on the Broadband Outdoor Radiometer Calibrations for all shortwave and longwave radiometers that are deployed by the Atmospheric Radiation Measurement program.
Khabaza, I M
1960-01-01
Numerical Analysis is an elementary introduction to numerical analysis, its applications, limitations, and pitfalls. Methods suitable for digital computers are emphasized, but some desk computations are also described. Topics covered range from the use of digital computers in numerical work to errors in computations using desk machines, finite difference methods, and numerical solution of ordinary differential equations. This book comprises eight chapters and begins with an overview of the importance of digital computers in numerical analysis, followed by a discussion on errors in comput
Scanner calibration revisited.
Pozhitkov, Alexander E
2010-07-01
Calibration of a microarray scanner is critical for accurate interpretation of microarray results. Shi et al. (BMC Bioinformatics, 2005, 6, Art. No. S11 Suppl. 2.) reported usage of a Full Moon BioSystems slide for calibration. Inspired by the Shi et al. work, we have calibrated microarray scanners in our previous research. We were puzzled however, that most of the signal intensities from a biological sample fell below the sensitivity threshold level determined by the calibration slide. This conundrum led us to re-investigate the quality of calibration provided by the Full Moon BioSystems slide as well as the accuracy of the analysis performed by Shi et al. Signal intensities were recorded on three different microarray scanners at various photomultiplier gain levels using the same calibration slide from Full Moon BioSystems. Data analysis was conducted on raw signal intensities without normalization or transformation of any kind. Weighted least-squares method was used to fit the data. We found that initial analysis performed by Shi et al. did not take into account autofluorescence of the Full Moon BioSystems slide, which led to a grossly distorted microarray scanner response. Our analysis revealed that a power-law function, which is explicitly accounting for the slide autofluorescence, perfectly described a relationship between signal intensities and fluorophore quantities. Microarray scanners respond in a much less distorted fashion than was reported by Shi et al. Full Moon BioSystems calibration slides are inadequate for performing calibration. We recommend against using these slides.
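The corrected response model described above, a power law plus the slide's autofluorescence, can be illustrated with synthetic data. All parameter values are illustrative, and the simple log-log fit below stands in for the authors' weighted least-squares procedure.

```python
import numpy as np

# Synthetic scanner response: power law plus slide autofluorescence.
# The a, b, c values are illustrative, not taken from the paper.
q = np.logspace(0, 3, 20)                 # fluorophore quantity (arb. units)
a_true, b_true, c_auto = 2.0, 0.9, 50.0   # c_auto = slide autofluorescence
signal = a_true * q**b_true + c_auto

# Step 1: estimate the autofluorescence floor; in practice this would be
# the mean signal of blank (zero-dye) spots, here it is simply known
c_est = 50.0

# Step 2: with the offset removed, the power law is a line in log-log space
b_est, log_a = np.polyfit(np.log(q), np.log(signal - c_est), 1)
a_est = np.exp(log_a)
```

Omitting the offset c (the mistake attributed to the earlier analysis) would bend the log-log plot at low intensities and grossly distort the inferred scanner response.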
Approximation Behooves Calibration
da Silva Ribeiro, André Manuel; Poulsen, Rolf
2013-01-01
Calibration based on an expansion approximation for option prices in the Heston stochastic volatility model gives stable, accurate, and fast results for S&P500-index option data over the period 2005–2009.
Cruzalebes, P; Sacuto, S; Bonneau, D; 10.1051/0004-6361/200913686
2010-01-01
Context. Accurate long-baseline interferometric measurements require careful calibration with reference stars. Small calibrators with high angular diameter accuracy ensure the true visibility uncertainty to be dominated by the measurement errors. Aims. We review some indirect methods for estimating angular diameter, using various types of input data. Each diameter estimate, obtained for the test-case calibrator star lambda Gru, is compared with the value 2.71 mas found in the Bordé calibrator catalogue published in 2002. Methods. Angular size estimations from spectral type, spectral index, in-band magnitude, broadband photometry, and spectrophotometry give close estimates of the angular diameter, with slightly variable uncertainties. Fits on photometry and spectrophotometry need physical atmosphere models with "plausible" stellar parameters. Angular diameter uncertainties were estimated by means of residual bootstrapping confidence intervals. All numerical results and graphical outputs presented in this pap...
Energy calibration via correlation
Maier, Daniel
2015-01-01
The main task of an energy calibration is to find a relation between pulse-height values and the corresponding energies. Doing this for each pulse-height channel individually requires an elaborate input spectrum with excellent counting statistics and a sophisticated data analysis. This work presents an easy-to-handle energy calibration process which can operate reliably on calibration measurements with low counting statistics. The method uses a parameter-based model for the energy calibration and determines the optimal parameters of the model by finding the best correlation between the measured pulse-height spectrum and multiple synthetic pulse-height spectra constructed with different sets of calibration parameters. A CdTe-based semiconductor detector and the line emissions of a 241Am source were used to test the performance of the correlation method in terms of systematic calibration errors for different counting statistics. Up to energies of 60 keV, systematic errors were measured to be le...
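The correlation method described above can be sketched as a grid search over calibration parameters, picking the (gain, offset) pair whose synthetic spectrum correlates best with the measurement. The detector model, resolution, and all numeric values below are illustrative, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(1)
channels = np.arange(1024)
gain_true, offset_true = 0.06, 2.0       # keV/channel and keV, illustrative
lines_keV = [13.9, 17.8, 26.3, 59.5]     # prominent 241Am emission lines

def synthetic_spectrum(gain, offset):
    """Expected pulse-height spectrum for one candidate (gain, offset)."""
    energy = channels * gain + offset
    spec = np.zeros(len(channels))
    for e in lines_keV:
        spec += np.exp(-0.5 * ((energy - e) / 0.3) ** 2)   # 0.3 keV resolution
    return spec

# A noisy "measurement" with low counting statistics
measured = rng.poisson(50 * synthetic_spectrum(gain_true, offset_true))

# Grid search: best correlation between measurement and synthetic spectra
best = max(
    (np.corrcoef(measured, synthetic_spectrum(g, o))[0, 1], g, o)
    for g in np.linspace(0.05, 0.07, 41)
    for o in np.linspace(0.0, 4.0, 41)
)
_, gain_est, offset_est = best
```

Because the correlation uses the whole spectrum at once rather than fitting each peak channel-by-channel, the estimate stays usable even when the per-channel counts are low.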
Courtney, M.
2013-01-15
Nacelle mounted, forward looking wind lidars are beginning to be used to provide reference wind speed measurements for the power performance testing of wind turbines. In such applications, a formal calibration procedure with a corresponding uncertainty assessment will be necessary. This report presents four concepts for performing such a nacelle lidar calibration. Of the four methods, two are found to be immediately relevant and are pursued in some detail. The first of these is a line of sight calibration method in which both lines of sight (for a two beam lidar) are individually calibrated by accurately aligning the beam to pass close to a reference wind speed sensor. A testing procedure is presented, reporting requirements outlined and the uncertainty of the method analysed. It is seen that the main limitation of the line of sight calibration method is the time required to obtain a representative distribution of radial wind speeds. An alternative method is to place the nacelle lidar on the ground and incline the beams upwards to bisect a mast equipped with reference instrumentation at a known height and range. This method will be easier and faster to implement and execute but the beam inclination introduces extra uncertainties. A procedure for conducting such a calibration is presented and initial indications of the uncertainties given. A discussion of the merits and weaknesses of the two methods is given together with some proposals for the next important steps to be taken in this work. (Author)
Ferraris, Chiara F; Geiker, Mette Rica; Martys, Nicos S
2007-01-01
inapplicable here. This paper presents the analysis of a modified parallel plate rheometer for measuring cement mortar and proposes a methodology for calibration using standard oils and numerical simulation of the flow. A lattice Boltzmann method was used to simulate the flow in the modified rheometer, thus...
Numerical simulation of dusty plasmas
Winske, D.
1995-09-01
The numerical simulation of physical processes in dusty plasmas is reviewed, with emphasis on recent results and unresolved issues. Three areas of research are discussed: grain charging, weak dust-plasma interactions, and strong dust-plasma interactions. For each area, we review the basic concepts that are tested by simulations, present some appropriate examples, and examine numerical issues associated with extending present work.
An Extended Keyword Extraction Method
Hong, Bao; Zhen, Deng
Among the numerous Chinese keyword extraction methods, Chinese linguistic characteristics have received little consideration, which works against improving the precision of Chinese keyword extraction. An extended term-frequency-based method (Extended TF) is proposed in this paper, which combines Chinese linguistic characteristics with the basic TF method. Unary, binary and ternary grammars for candidate keyword extraction, as well as other linguistic features, are all taken into account. The method establishes a classification model using a support vector machine. Tests show that the proposed extraction method improves keyword precision and recall significantly. We applied the keywords extracted by the Extended TF method to text file classification. Results show that the keywords extracted by the proposed method contributed greatly to raising the precision of text file classification.
Huentemeyer, Petra; Dingus, Brenda
2009-01-01
The High-Altitude Water Cherenkov (HAWC) Experiment is a second-generation high-sensitivity gamma-ray and cosmic-ray detector that builds on the experience and technology of the Milagro observatory. Like Milagro, HAWC utilizes the water Cherenkov technique to measure extensive air showers. Instead of a pond filled with water (as in Milagro), an array of closely packed water tanks is used. The event direction will be reconstructed using the times when the PMTs in each tank are triggered. Therefore, the timing calibration will be crucial for reaching an angular resolution as low as 0.25 degrees. We propose to use a laser calibration system, patterned after the calibration system in Milagro. Like Milagro, the HAWC optical calibration system will use ~1 ns laser light pulses. Unlike Milagro, the PMTs are optically isolated and require their own optical fiber calibration. For HAWC the laser light pulses will be directed through a series of optical fan-outs and fibers to illuminate the PMTs in approximately one half o...
Calibration Under Uncertainty.
Swiler, Laura Painton; Trucano, Timothy Guy
2005-03-01
This report is a white paper summarizing the literature and different approaches to the problem of calibrating computer model parameters in the face of model uncertainty. Model calibration is often formulated as finding the parameters that minimize the squared difference between the model-computed data (the predicted data) and the actual experimental data. This approach does not allow for explicit treatment of uncertainty or error in the model itself: the model is considered the "true" deterministic representation of reality. While this approach does have utility, it is far from an accurate mathematical treatment of the true model calibration problem in which both the computed data and experimental data have error bars. This year, we examined methods to perform calibration accounting for the error in both the computer model and the data, as well as improving our understanding of its meaning for model predictability. We call this approach Calibration under Uncertainty (CUU). This talk presents our current thinking on CUU. We outline some current approaches in the literature, and discuss the Bayesian approach to CUU in detail.
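The deterministic formulation the report critiques, calibration as minimizing the squared difference between model-computed and experimental data, can be illustrated with a toy model that is linear in its parameters, so the minimizer has a closed form. The model and data here are illustrative, not taken from the report.

```python
import numpy as np

# 'Experimental' data: quadratic response with measurement noise (illustrative)
rng = np.random.default_rng(1)
t = np.linspace(0, 10, 50)
y_exp = 1.5 + 0.8 * t - 0.05 * t**2 + rng.normal(0, 0.1, t.size)

# The model-computed data is linear in the parameters theta = (a, b, c):
# y_model(t) = a + b*t + c*t^2, so the least-squares calibration
# argmin_theta ||y_model - y_exp||^2 has a closed-form solution.
A = np.column_stack([np.ones_like(t), t, t**2])
theta, residuals, *_ = np.linalg.lstsq(A, y_exp, rcond=None)
print("calibrated parameters:", theta)
```

Note that this treats the model as the exact ("true") representation of reality; the CUU approach discussed in the report instead puts error bars on both the model and the data.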
Photometric calibrations for 21st century science
Kent, Stephen; /Fermilab; Kaiser, Mary Elizabeth; /Johns Hopkins U.; Deustua, Susana E.; /Baltimore, Space Telescope Sci.; Smith, J.Allyn; /Austin Peay State U.; Adelman, Saul; /Citadel Military Coll.; Allam, Sahar S.; /Fermilab; Baptista, Brian; /Indiana U.; Bohlin, Ralph C.; /Baltimore, Space Telescope Sci.; Clem, James L.; /Louisiana State U.; Conley, Alex; /Colorado U.; Edelstein, Jerry; /UC, Berkeley, Space Sci. Dept. /NOAO, Tucson
2009-02-01
The answers to fundamental science questions in astrophysics, ranging from the history of the expansion of the universe to the sizes of nearby stars, hinge on our ability to make precise measurements of diverse astronomical objects. As our knowledge of the underlying physics of objects improves along with advances in detectors and instrumentation, the limits on our capability to extract science from measurements are set, not by our lack of understanding of the nature of these objects, but rather by the most mundane of all issues: the precision with which we can calibrate observations in physical units. In principle, photometric calibration is a solved problem - laboratory reference standards such as blackbody furnaces achieve precisions well in excess of those needed for astrophysics. In practice, however, transferring the calibration from these laboratory standards to astronomical objects of interest is far from trivial - the transfer must reach outside the atmosphere, extend over 4{pi} steradians of sky, cover a wide range of wavelengths, and span an enormous dynamic range in intensity. Virtually all spectrophotometric observations today are calibrated against one or more stellar reference sources, such as Vega, which are themselves tied back to laboratory standards in a variety of ways. This system's accuracy is not uniform. Selected regions of the electromagnetic spectrum are calibrated extremely well, but discontinuities of a few percent still exist, e.g., between the optical and infrared. Independently, model stellar atmospheres are used to calibrate the spectra of selected white dwarf stars, e.g. the HST system, but the ultimate accuracy of this system should be verified against laboratory sources. Our traditional standard star systems, while sufficient until now, need to be improved and extended in order to serve future astrophysics experiments. This white paper calls for a program to improve upon and expand the current networks of
L. Barazzetti
2012-09-01
In photogrammetry a camera is considered calibrated if its interior orientation parameters are known. These encompass the principal distance, the principal point position and some Additional Parameters used to model possible systematic errors. The current state of the art for automated camera calibration relies on the use of coded targets to accurately determine the image correspondences. This paper presents a new methodology for the efficient and rigorous photogrammetric calibration of digital cameras which no longer requires the use of targets. A set of images depicting a scene with good texture is sufficient for the extraction of natural corresponding image points. These are automatically matched with feature-based approaches and robust estimation techniques. The successive photogrammetric bundle adjustment retrieves the unknown camera parameters and their theoretical accuracies. Examples, considerations and comparisons with real data and different case studies are illustrated to show the potentialities of the proposed methodology.
Calibration Systems Final Report
Myers, Tanya L.; Broocks, Bryan T.; Phillips, Mark C.
2006-02-01
The Calibration Systems project at Pacific Northwest National Laboratory (PNNL) is aimed towards developing and demonstrating compact Quantum Cascade (QC) laser-based calibration systems for infrared imaging systems. These on-board systems will improve the calibration technology for passive sensors, which enable stand-off detection for the proliferation or use of weapons of mass destruction, by replacing on-board blackbodies with QC laser-based systems. This alternative technology can minimize the impact on instrument size and weight while improving the quality of instruments for a variety of missions. The potential of replacing flight blackbodies is made feasible by the high output, stability, and repeatability of the QC laser spectral radiance.
Bird, A.J.; Barlow, E.J.; Tikkanen, T. [Southampton Univ., School of Physics and Astronomy (United Kingdom); Bazzano, A.; Del Santo, M.; Ubertini, P. [Istituto di Astrofisica Spaziale e Fisica Cosmica - IASF/CNR, Roma (Italy); Blondel, C.; Laurent, P.; Lebrun, F. [CEA Saclay - Sap, 91 - Gif sur Yvette (France); Di Cocco, G.; Malaguti, E. [Istituto di Astrofisica Spaziale e Fisica-Bologna - IASF/CNR (Italy); Gabriele, M.; La Rosa, G.; Segreto, A. [Istituto di Astrofisica Spaziale e Fisica- IASF/CNR, Palermo (Italy); Quadrini, E. [Istituto di Astrofisica Spaziale e Fisica-Cosmica, EASF/CNR, Milano (Italy); Volkmer, R. [Institut fur Astronomie und Astrophysik, Tubingen (Germany)
2003-11-01
We present an overview of results obtained from IBIS ground calibrations. The spectral and spatial characteristics of the detector planes and surrounding passive materials have been determined through a series of calibration campaigns. Measurements of pixel gain, energy resolution, detection uniformity, efficiency and imaging capability are presented. The key results obtained from the ground calibration have been: - optimization of the instrument tunable parameters, - determination of energy linearity for all detection modes, - determination of energy resolution as a function of energy through the range 20 keV - 3 MeV, - demonstration of imaging capability in each mode, - measurement of intrinsic detector non-uniformity and understanding of the effects of passive materials surrounding the detector plane, and - discovery (and closure) of various leakage paths through the passive shielding system.
Radio interferometric gain calibration as a complex optimization problem
Smirnov, Oleg
2015-01-01
Recent developments in optimization theory have extended some traditional algorithms for least-squares optimization of real-valued functions (Gauss-Newton, Levenberg-Marquardt, etc.) into the domain of complex functions of a complex variable. This employs a formalism called the Wirtinger derivative, and derives a full-complex Jacobian counterpart to the conventional real Jacobian. We apply these developments to the problem of radio interferometric gain calibration, and show how the general complex Jacobian formalism, when combined with conventional optimization approaches, yields a whole new family of calibration algorithms, including those for the polarized and direction-dependent gain regime. We further extend the Wirtinger calculus to an operator-based matrix calculus for describing the polarized calibration regime. Using approximate matrix inversion results in computationally efficient implementations; we show that some recently proposed calibration algorithms such as StefCal and peeling can be understood...
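The Wirtinger formalism mentioned above treats a complex variable and its conjugate as formally independent. In standard notation (these are the standard textbook definitions, not equations taken from this paper):

```latex
\frac{\partial f}{\partial z} = \frac{1}{2}\left(\frac{\partial f}{\partial x} - i\,\frac{\partial f}{\partial y}\right),
\qquad
\frac{\partial f}{\partial \bar z} = \frac{1}{2}\left(\frac{\partial f}{\partial x} + i\,\frac{\partial f}{\partial y}\right),
\qquad z = x + iy .
```

For a real-valued least-squares cost f(z, z̄) that is not analytic in z, the steepest-descent direction is given by the conjugate Wirtinger gradient ∂f/∂z̄, which is what the full-complex Jacobian counterparts of Gauss-Newton and Levenberg-Marquardt build on.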
Rao, G Shanker
2006-01-01
About the Book: This book provides an introduction to Numerical Analysis for students of Mathematics and Engineering. The book is designed in accordance with the common core syllabus of Numerical Analysis of the universities of Andhra Pradesh and also the syllabus prescribed in most Indian universities. Salient features: approximate and numerical solutions of algebraic and transcendental equations; interpolation of functions; numerical differentiation and integration; and numerical solution of ordinary differential equations. The last three chapters deal with curve fitting, eigenvalues and eigenvectors of a matrix, and regression analysis. Each chapter is supplemented with a number of worked-out examples as well as a number of problems to be solved by the students. This helps in a better understanding of the subject. Contents: Errors; Solution of Algebraic and Transcendental Equations; Finite Differences; Interpolation with Equal Intervals; Interpolation with Unequal Int...
Frederick Schauer
2017-09-01
Objective: to study the notion and essence of calibration of legal judgments, the possibilities of using it in law-enforcement activity, and to explore the costs and advantages of its use. Methods: a dialectic approach to the cognition of social phenomena, which enables analyzing them in historical development and functioning in the context of the integrity of objective and subjective factors; this determined the choice of the following research methods: formal-legal and comparative-legal methods, and sociological methods of cognitive psychology and philosophy. Results: In ordinary life, people who assess other people's judgments typically take into account the other judgments of those they are assessing in order to calibrate the judgment presently being assessed. The restaurant and hotel rating website TripAdvisor is exemplary because it facilitates calibration by providing access to a rater's previous ratings. Such information allows a user to see whether a particular rating comes from a rater who is enthusiastic about every place she patronizes or instead from someone who is incessantly hard to please. And even when less systematized, as in assessing a letter of recommendation or college transcript, calibration by recourse to the decisional history of those whose judgments are being assessed is ubiquitous. Yet despite the ubiquity and utility of such calibration, the legal system seems perversely to reject it. Appellate courts do not openly adjust their standard of review based on the previous judgments of the judge whose decision they are reviewing; nor do judges in reviewing legislative or administrative decisions, magistrates in evaluating search warrant representations, or jurors in assessing witness perception. In most legal domains calibration by reference to the prior decisions of the reviewee is invisible, either because it does not exist or because reviewing bodies are unwilling to admit using what they in fact know and employ. Scientific novelty: for the first
Iterative Magnetometer Calibration
Sedlak, Joseph
2006-01-01
This paper presents an iterative method for three-axis magnetometer (TAM) calibration that makes use of three existing utilities recently incorporated into the attitude ground support system used at NASA's Goddard Space Flight Center. The method combines attitude-independent and attitude-dependent calibration algorithms with a new spinning spacecraft Kalman filter to solve for biases, scale factors, nonorthogonal corrections to the alignment, and the orthogonal sensor alignment. The method is particularly well-suited to spin-stabilized spacecraft, but may also be useful for three-axis stabilized missions given sufficient data to provide observability.
On generalized extending modules
ZENG Qing-yi
2007-01-01
A module M is called generalized extending if for any submodule N of M, there is a direct summand K of M such that N≤K and K/N is singular. Any extending module and any singular module are generalized extending. Any homomorphic image of a generalized extending module is generalized extending. Any direct sum of a singular (uniform) module and a semi-simple module is generalized extending. A ring R is a right Co-H-ring if and only if all right R-modules are generalized extending modules.
Radioactive standards and calibration methods for contamination monitoring instruments
Yoshida, Makoto [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment
1997-06-01
Contamination monitoring in facilities handling unsealed radioactive materials is one of the most important procedures for radiation protection, as is radiation dose monitoring. For proper contamination monitoring, radiation measuring instruments should not only be suitable to the purpose of monitoring, but also be well calibrated for the quantities to be measured. In the calibration of contamination monitoring instruments, reference activities of suitable quality need to be used. They are supplied in different forms such as extended sources, radioactive solutions or radioactive gases. These reference activities must be traceable to the national standards or equivalent standards. On the other hand, the appropriate calibration methods must be applied for each type of contamination monitoring instrument. In this paper, the concepts of calibration for contamination monitoring instruments, reference sources, determination methods of reference quantities and practical calibration methods of contamination monitoring instruments are described, including the procedures carried out at the Japan Atomic Energy Research Institute and some relevant experimental data. (G.K.)
Chirally extended quantum chromodynamics
Brower, R C; Tan, C I; Richard C Brower; Yue Shen; Chung-I Tan
1994-01-01
We propose an extended Quantum Chromodynamics (XQCD) Lagrangian in which the fermions are coupled to elementary scalar σ and π fields through a Yukawa coupling which preserves chiral invariance. Our principal motivation is to find a new lattice formulation for QCD which avoids the source of critical slowing down usually encountered as the bare quark mass is tuned to the chiral limit. The phase diagram and the weak coupling limit for XQCD are studied. They suggest a conjecture that the continuum limit of XQCD is the same as the continuum limit of the conventional lattice formulation of QCD. As examples of such universality, we present the large N solutions of two prototype models for XQCD, in which the masses of the spurious pion and sigma resonance go to infinity with the cut-off. Even if the universality conjecture turns out to be false, we believe that XQCD will still be useful as a low-energy effective action for QCD phenomenology on the lattice. Numerical simulations are recommended to further investiga...
朱杰; 方从启
2013-01-01
Based on the theory of non-uniform corrosion expansion, a method for calculating the effect of reinforcement rust expansion is proposed, and a finite element model of corrosion-induced cracking of the concrete cover is established using the extended finite element method (XFEM). The numerical analysis shows that XFEM combined with a cohesive crack model can effectively simulate concrete cracking and crack propagation without remeshing. The presence of a pre-crack suppresses crack initiation but accelerates crack propagation through the cover; initiation starts at the pre-crack tips rather than at the steel-concrete corrosion layer interface. For initially undamaged structures, crack initiation positions are symmetrically distributed within a certain range of the corrosion layer interface; the greater the distance between the crack tips and the interface, the less the elements are affected by rust expansion. Final penetration of the cover results from the combined action of the rust-expansion displacement and the forces produced by corrosion products infiltrating the cracks, with the crack propagation angle tending toward 120°. Raising the concrete grade and increasing the cover thickness can effectively delay the initiation and growth of corrosion-induced cracks and thus improve structural durability.
Smart Calibration of Excavators
Bro, Marie; Døring, Kasper; Ellekilde, Lars-Peter
2005-01-01
Excavators dig holes. But where is the bucket? The purpose of this report is to treat four different problems concerning calibrations of position indicators for excavators in operation at concrete construction sites. All four problems are related to the question of how to determine the precise ge...
Calibration with Absolute Shrinkage
Øjelund, Henrik; Madsen, Henrik; Thyregod, Poul
2001-01-01
is suggested to cope with the singular design matrix most often seen in chemometric calibration. Furthermore, the proposed algorithm may be generalized to all convex norms of the form Σ_j |β_j|^γ where γ ≥ 1, i.e. a method that continuously varies from ridge regression...
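A minimal sketch of the penalty family mentioned above: the bridge penalty Σ_j |β_j|^γ, which is the lasso/absolute-shrinkage penalty at γ = 1 and the ridge penalty at γ = 2. The function name and example coefficients below are illustrative, not from the paper.

```python
import numpy as np

def bridge_penalty(beta, gamma):
    """Convex penalty sum_j |beta_j|**gamma (convex for gamma >= 1).
    gamma=1 gives the lasso/absolute-shrinkage penalty, gamma=2 the ridge penalty."""
    if gamma < 1:
        raise ValueError("penalty is convex only for gamma >= 1")
    return np.sum(np.abs(beta) ** gamma)

beta = np.array([0.5, -1.0, 2.0])
print(bridge_penalty(beta, 1.0))  # 3.5  (absolute shrinkage)
print(bridge_penalty(beta, 2.0))  # 5.25 (ridge)
```

Varying γ continuously between 1 and 2 interpolates between the two regimes, which is the sense in which the method "continuously varies from ridge regression".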
Calibrating Communication Competencies
Surges Tatum, Donna
2016-11-01
The Many-faceted Rasch measurement model is used in the creation of a diagnostic instrument by which communication competencies can be calibrated, the severity of observers/raters can be determined, the ability of speakers measured, and comparisons made between various groups.
NVLAP calibration laboratory program
Cigler, J.L.
1993-12-31
This paper presents an overview of the progress up to April 1993 in the development of the Calibration Laboratories Accreditation Program within the framework of the National Voluntary Laboratory Accreditation Program (NVLAP) at the National Institute of Standards and Technology (NIST).
CALIBRATION OF PHOSWICH DETECTORS
LEEGTE, HKW; KOLDENHOF, EE; BOONSTRA, AL; WILSCHUT, HW
1992-01-01
Two important aspects for the calibration of phoswich detector arrays have been investigated. It is shown that common gate ADCs can be used: The loss in particle identification due to fluctuations in the gate timing in multi-hit events can be corrected for by a simple procedure using the measured ti
Measurement System & Calibration report
Kock, Carsten Weber; Vesth, Allan
This Measurement System & Calibration report describes DTU's measurement system installed at a specific wind turbine. A major part of the sensors has been installed by others (see [1]); the rest of the sensors have been installed by DTU. The results of the measurements, described in this report...
Entropic calibration revisited
Brody, Dorje C. [Blackett Laboratory, Imperial College, London SW7 2BZ (United Kingdom)]. E-mail: d.brody@imperial.ac.uk; Buckley, Ian R.C. [Centre for Quantitative Finance, Imperial College, London SW7 2AZ (United Kingdom); Constantinou, Irene C. [Blackett Laboratory, Imperial College, London SW7 2BZ (United Kingdom); Meister, Bernhard K. [Blackett Laboratory, Imperial College, London SW7 2BZ (United Kingdom)
2005-04-11
The entropic calibration of the risk-neutral density function is effective in recovering the strike dependence of options, but encounters difficulties in determining the relevant greeks. By use of put-call reversal we apply the entropic method to the time reversed economy, which allows us to obtain the spot price dependence of options and the relevant greeks.
John F. Schabron; Joseph F. Rovani; Susan S. Sorini
2007-03-31
The Clean Air Mercury Rule (CAMR) which was published in the Federal Register on May 18, 2005, requires that calibration of mercury continuous emissions monitors (CEMs) be performed with NIST-traceable standards. Western Research Institute (WRI) is working closely with the Electric Power Research Institute (EPRI), the National Institute of Standards and Technology (NIST), and the Environmental Protection Agency (EPA) to facilitate the development of the experimental criteria for a NIST traceability protocol for dynamic elemental mercury vapor generators. The traceability protocol will be written by EPA. Traceability will be based on the actual analysis of the output of each calibration unit at several concentration levels ranging from about 2-40 ug/m{sup 3}, and this analysis will be directly traceable to analyses by NIST using isotope dilution inductively coupled plasma/mass spectrometry (ID ICP/MS) through a chain of analyses linking the calibration unit in the power plant to the NIST ID ICP/MS. Prior to this project, NIST did not provide a recommended mercury vapor pressure equation or list mercury vapor pressure in its vapor pressure database. The NIST Physical and Chemical Properties Division in Boulder, Colorado was subcontracted under this project to study the issue in detail and to recommend a mercury vapor pressure equation that the vendors of mercury vapor pressure calibration units can use to calculate the elemental mercury vapor concentration in an equilibrium chamber at a particular temperature. As part of this study, a preliminary evaluation of calibration units from five vendors was made. The work was performed by NIST in Gaithersburg, MD and Joe Rovani from WRI who traveled to NIST as a Visiting Scientist.
Unnikrishnan, A; Manoj, N.T.
Various numerical models used to study the dynamics and horizontal distribution of salinity in the Mandovi-Zuari estuaries, Goa, India are discussed in this chapter. Earlier, a one-dimensional network model was developed for representing the complex...
Scott, L Ridgway
2011-01-01
Computational science is fundamentally changing how technological questions are addressed. The design of aircraft, automobiles, and even racing sailboats is now done by computational simulation. The mathematical foundation of this new approach is numerical analysis, which studies algorithms for computing expressions defined with real numbers. Emphasizing the theory behind the computation, this book provides a rigorous and self-contained introduction to numerical analysis and presents the advanced mathematics that underpin industrial software, including complete details that are missing from m
Field calibration of cup anemometers
Schmidt Paulsen, Uwe; Mortensen, Niels Gylling; Hansen, Jens Carsten
2007-01-01
A field calibration method and results are described along with the experience gained with the method. The cup anemometers to be calibrated are mounted in a row on a 10-m high rig and calibrated in the free wind against a reference cup anemometer. The method has been reported [1] to improve...... the statistical bias on the data relative to calibrations carried out in a wind tunnel. The methodology is sufficiently accurate for calibration of cup anemometers used for wind resource assessments and provides a simple, reliable and cost-effective solution to cup anemometer calibration, especially suited...
Calibrating echelle spectrographs with Fabry-Perot etalons
Bauer, Florian F; Reiners, Ansgar
2015-01-01
Over the past decades hollow-cathode lamps have been calibration standards for spectroscopic measurements. Advancing to cm/s radial velocity precisions with the next generation of instruments requires more suitable calibration sources with more lines and less dynamic range problems. Fabry-Perot interferometers provide a regular and dense grid of lines and homogeneous amplitudes making them good candidates for next generation calibrators. We investigate the usefulness of Fabry-Perot etalons in wavelength calibration, present an algorithm to incorporate the etalon spectrum in the wavelength solution and examine potential problems. The quasi periodic pattern of Fabry-Perot lines is used along with a hollow-cathode lamp to anchor the numerous spectral features on an absolute scale. We test our method with the HARPS spectrograph and compare our wavelength solution to the one derived from a laser frequency comb. The combined hollow-cathode lamp/etalon calibration overcomes large distortion (50 m/s) in the wavelengt...
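The regular, dense grid of etalon lines referred to above follows from the Fabry-Perot resonance condition m·λ_m = 2nL at normal incidence. A sketch with assumed (not HARPS-specific) cavity parameters:

```python
import numpy as np

# Fabry-Perot transmission maxima satisfy m * lambda_m = 2 * n * L
# (normal incidence, integer order m).
n = 1.0        # refractive index of the gap (assumed air-spaced etalon)
L = 7.5e-3     # cavity length in metres (illustrative value)

# All resonance orders whose wavelengths fall in an illustrative 500-700 nm band
lam_min, lam_max = 500e-9, 700e-9
m = np.arange(np.ceil(2 * n * L / lam_max), np.floor(2 * n * L / lam_min) + 1)
peaks = 2 * n * L / m    # peak wavelengths, a quasi-periodic grid

print(f"{peaks.size} etalon lines between 500 and 700 nm")
print(f"free spectral range near 600 nm: {(600e-9)**2 / (2*n*L) * 1e12:.1f} pm")
```

Because the grid is only quasi-periodic (the spacing in wavelength varies with λ² / 2nL), the absolute line positions are anchored to a hollow-cathode lamp, as the abstract describes.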
John Schabron; Eric Kalberer; Joseph Rovani; Mark Sanderson; Ryan Boysen; William Schuster
2009-03-11
U.S. Environmental Protection Agency (EPA) Performance Specification 12 in the Clean Air Mercury Rule (CAMR) states that a mercury CEM must be calibrated with National Institute for Standards and Technology (NIST)-traceable standards. In early 2009, a NIST traceable standard for elemental mercury CEM calibration still does not exist. Despite the vacature of CAMR by a Federal appeals court in early 2008, a NIST traceable standard is still needed for whatever regulation is implemented in the future. Thermo Fisher is a major vendor providing complete integrated mercury continuous emissions monitoring (CEM) systems to the industry. WRI is participating with EPA, EPRI, NIST, and Thermo Fisher towards the development of the criteria that will be used in the traceability protocols to be issued by EPA. An initial draft of an elemental mercury calibration traceability protocol was distributed for comment to the participating research groups and vendors on a limited basis in early May 2007. In August 2007, EPA issued an interim traceability protocol for elemental mercury calibrators. Various working drafts of the new interim traceability protocols were distributed in late 2008 and early 2009 to participants in the Mercury Standards Working Committee project. The protocols include sections on qualification and certification. The qualification section describes in general terms tests that must be conducted by the calibrator vendors to demonstrate that their calibration equipment meets the minimum requirements to be established by EPA for use in CAMR monitoring. Variables to be examined include linearity, ambient temperature, back pressure, ambient pressure, line voltage, and effects of shipping. None of the procedures were described in detail in the draft interim documents; however they describe what EPA would like to eventually develop. WRI is providing the data and results to EPA for use in developing revised experimental procedures and realistic acceptance criteria based on
The Astro-E/XRS Blocking Filter Calibration
Audley, Michael D.; Arnaud, Keith A.; Gendreau, Keith C.; Boyce, Kevin R.; Fleetwood, Charles M.; Kelley, Richard L.; Keski-Kuha, Ritva A.; Porter, F. Scott; Stahle, Caroline K.; Szymkowiak, Andrew E.
1999-01-01
We describe the transmission calibration of the Astro-E XRS blocking filters. The XRS instrument has five aluminized polyimide blocking filters. These filters are located at thermal stages ranging from 200 K to 60 mK. They are each about 1000 Å thick. XRS will have high energy resolution, which will enable it to see some of the extended fine structure around the oxygen and aluminum K edges of these filters. Thus, we are conducting a high spectral resolution calibration of the filters near these energies to resolve out extended fine structure and absorption lines.
Design of the ERIS calibration unit
Dolci, Mauro; Valentini, Angelo; Di Rico, Gianluca; Esposito, Simone; Ferruzzi, Debora; Riccardi, Armando; Spanò, Paolo; Antichi, Jacopo
2016-08-01
The Enhanced Resolution Imager and Spectrograph (ERIS) is a new-generation instrument for the Cassegrain focus of the ESO UT4/VLT, aimed at performing AO-assisted imaging and medium resolution spectroscopy in the 1-5 micron wavelength range. ERIS consists of the 1-5 micron imaging camera NIX, the 1-2.5 micron integral field spectrograph SPIFFIER (a modified version of SPIFFI, currently operating on SINFONI), the AO module and the internal Calibration Unit (ERIS CU). The purpose of this unit is to provide facilities to calibrate the scientific instruments in the 1-2.5 micron range and to perform troubleshooting and periodic maintenance tests of the AO module (e.g. NGS and LGS WFS internal calibrations and functionalities, ERIS differential flexures) in the 0.5-1 μm range. The ERIS CU must therefore be designed to provide, over the full 0.5-2.5 μm range, the following capabilities: 1) illumination of both the telescope focal plane and the telescope pupil with a high degree of uniformity; 2) artificial point-like and extended sources onto the telescope focal plane, with high accuracy in both positioning and FWHM; 3) wavelength calibration; 4) high stability of these characteristics. In this paper the design of the ERIS CU, and the solutions adopted to fulfill all these requirements, are described. The ERIS CU construction is foreseen to start at the end of 2016.
The Calibration Home Base for Imaging Spectrometers
Johannes Felix Simon Brachmann
2016-08-01
The Calibration Home Base (CHB) is an optical laboratory designed for the calibration of imaging spectrometers for the VNIR/SWIR wavelength range. Radiometric, spectral and geometric calibration as well as the characterization of sensor signal dependency on polarization are realized in a precise and highly automated fashion. This allows a wide range of time-consuming measurements to be carried out in an efficient way. The implementation of ISO 9001 standards in all procedures ensures a traceable quality of results. Spectral measurements in the wavelength range 380-1000 nm are performed to a wavelength uncertainty of ±0.1 nm, while an uncertainty of ±0.2 nm is reached in the wavelength range 1000-2500 nm. Geometric measurements are performed at increments of 1.7 µrad across track and 7.6 µrad along track. Radiometric measurements reach an absolute uncertainty of ±3% (k=1). Sensor artifacts, such as those caused by stray light, will be characterizable and correctable in the near future. For now, the CHB is suitable for the characterization of pushbroom sensors, spectrometers and cameras. However, it is planned to extend the CHB's capabilities in the near future such that snapshot hyperspectral imagers can be characterized as well. The calibration services of the CHB are open to third-party customers from research institutes as well as industry.
Polymers for Traveling Wave Ion Mobility Spectrometry Calibration
Duez, Quentin; Chirot, Fabien; Liénard, Romain; Josse, Thomas; Choi, ChangMin; Coulembier, Olivier; Dugourd, Philippe; Cornil, Jérôme; Gerbaux, Pascal; De Winter, Julien
2017-07-01
One of the main issues when using traveling wave ion mobility spectrometry (TWIMS) for the determination of collision cross-sections (CCS) concerns the need for a robust calibration procedure built from reference ions of known CCS. Here, we implement synthetic polymer ions as CCS calibrants in positive ion mode. Based on their intrinsic polydispersity, polymers offer in a single sample the opportunity to generate, upon electrospray ionization, numerous ions covering a broad mass range and a large CCS window for different charge states at a time. In addition, the key advantage of polymer ions as CCS calibrants lies in the robustness of their gas-phase structure with respect to the instrumental conditions, making them less prone to collision-induced unfolding (CIU) than protein ions. In this paper, we present a CCS calibration procedure using sodium-cationized polylactide (PLA) and polyethylene glycol (PEG) as calibrants, with reference CCS determined on a home-made drift tube. Our calibration procedure is further validated by applying the polymer calibration to determine the CCS of numerous different ions for which CCS are reported in the literature.
The Calibration Reference Data System
Greenfield, P.; Miller, T.
2016-07-01
We describe a software architecture and implementation for using rules to determine which calibration files are appropriate for calibrating a given observation. This new system, the Calibration Reference Data System (CRDS), replaces what had been previously used for the Hubble Space Telescope (HST) calibration pipelines, the Calibration Database System (CDBS). CRDS will be used for the James Webb Space Telescope (JWST) calibration pipelines, and is currently being used for HST calibration pipelines. CRDS can be easily generalized for use in similar applications that need a rules-based system for selecting the appropriate item for a given dataset; we give some examples of such generalizations that will likely be used for JWST. The core functionality of the Calibration Reference Data System is available under an Open Source license. CRDS is briefly contrasted with a sampling of other similar systems used at other observatories.
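The rules-based selection that CRDS performs can be illustrated with a toy matcher. The rule format, instrument names, and file names below are invented for illustration and are not CRDS's actual syntax:

```python
def select_reference(rules, metadata):
    """Return the first reference file whose match criteria all hold for the dataset."""
    for criteria, ref_file in rules:
        if all(metadata.get(key) == value for key, value in criteria.items()):
            return ref_file
    raise LookupError("no matching reference file")

# Hypothetical rules, ordered most-specific first so the generic entry is a fallback.
rules = [
    ({"instrument": "NIRCAM", "filter": "F200W"}, "nircam_f200w_flat.fits"),
    ({"instrument": "NIRCAM"}, "nircam_default_flat.fits"),
]
best = select_reference(rules, {"instrument": "NIRCAM", "filter": "F200W"})
```

Ordering rules from specific to generic is one simple way to get the "most appropriate item for a given dataset" behavior the abstract describes.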
Calibration Against the Moon. I: A Disk-Resolved Lunar Model for Absolute Reflectance Calibration
2010-01-01
the (nearly) full Moon for calibration. The Hillier et al. empirical representation described the mean solar phase functions for observation... full Moon. Thermal emission contributes up to 2-3% of the signal at 2.26 µm, and the thermal component rapidly diminishes at shorter... ROLO chip dataset and to a sample of the ROLO imagery near full Moon. The final model was extended to the shortwave IR (0.35-2.45 µm) and is able
Ejsing Jørgensen, Hans; Mikkelsen, T.; Streicher, J.
1997-01-01
A series of atmospheric aerosol diffusion experiments combined with lidar detection was conducted to evaluate and calibrate an existing retrieval algorithm for aerosol backscatter lidar systems. The calibration experiments made use of two (almost) identical mini-lidar systems for aerosol cloud detection to test the reproducibility and uncertainty of lidars. Lidar data were obtained from both single-ended and double-ended lidar configurations. A backstop was introduced in one of the experiments, and a new method was developed whereby information obtained from the backstop can be used in the inversion algorithm. Independent in-situ aerosol plume concentrations were obtained from a simultaneous tracer gas experiment with SF6, and comparisons with the two lidars were made. The study shows that the reproducibility of the lidars is within 15%, including measurements from both sides of a plume.
HIRDLS monochromator calibration equipment
Hepplewhite, Christopher L.; Barnett, John J.; Djotni, Karim; Whitney, John G.; Bracken, Justain N.; Wolfenden, Roger; Row, Frederick; Palmer, Christopher W. P.; Watkins, Robert E. J.; Knight, Rodney J.; Gray, Peter F.; Hammond, Geoffory
2003-11-01
A specially designed and built monochromator was developed for the spectral calibration of the HIRDLS instrument. The High Resolution Dynamics Limb Sounder (HIRDLS) is a precision infra-red remote sensing instrument with very tight requirements on the knowledge of the response to received radiation. A high performance, vacuum compatible monochromator, was developed with a wavelength range from 4 to 20 microns to encompass that of the HIRDLS instrument. The monochromator is integrated into a collimating system which is shared with a set of tiny broad band sources used for independent spatial response measurements (reported elsewhere). This paper describes the design and implementation of the monochromator and the performance obtained during the period of calibration of the HIRDLS instrument at Oxford University in 2002.
Optical tweezers absolute calibration
Dutra, R S; Neto, P A Maia; Nussenzveig, H M
2014-01-01
Optical tweezers are highly versatile laser traps for neutral microparticles, with fundamental applications in physics and in single molecule cell biology. Force measurements are performed by converting the stiffness response to displacement of trapped transparent microspheres, employed as force transducers. Usually, calibration is indirect, by comparison with fluid drag forces. This can lead to discrepancies by sizable factors. Progress achieved in a program aiming at absolute calibration, conducted over the past fifteen years, is briefly reviewed. Here we overcome its last major obstacle, a theoretical overestimation of the peak stiffness, within the most employed range for applications, and we perform experimental validation. The discrepancy is traced to the effect of primary aberrations of the optical system, which are now included in the theory. All required experimental parameters are readily accessible. Astigmatism, the dominant effect, is measured by analyzing reflected images of the focused laser spo...
In situ ``artificial plasma'' calibration of tokamak magnetic sensors
Shiraki, D.; Levesque, J. P.; Bialek, J.; Byrne, P. J.; DeBono, B. A.; Mauel, M. E.; Maurer, D. A.; Navratil, G. A.; Pedersen, T. S.; Rath, N.
2013-06-01
A unique in situ calibration technique has been used to spatially calibrate and characterize the extensive new magnetic diagnostic set and close-fitting conducting wall of the High Beta Tokamak-Extended Pulse (HBT-EP) experiment. A new set of 216 Mirnov coils has recently been installed inside the vacuum chamber of the device for high-resolution measurements of magnetohydrodynamic phenomena including the effects of eddy currents in the nearby conducting wall. The spatial positions of these sensors are calibrated by energizing several large in situ calibration coils in turn, and using measurements of the magnetic fields produced by the various coils to solve for each sensor's position. Since the calibration coils are built near the nominal location of the plasma current centroid, the technique is referred to as an "artificial plasma" calibration. The fitting procedure for the sensor positions is described, and results of the spatial calibration are compared with those based on metrology. The time response of the sensors is compared with the evolution of the artificial plasma current to deduce the eddy current contribution to each signal. This is compared with simulations using the VALEN electromagnetic code, and the modeled copper thickness profiles of the HBT-EP conducting wall are adjusted to better match experimental measurements of the eddy current decay. Finally, the multiple coils of the artificial plasma system are also used to directly calibrate a non-uniformly wound Fourier Rogowski coil on HBT-EP.
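The position-solving step described above (energize known coils, measure fields, fit each sensor's location) can be sketched as a nonlinear least-squares problem. This toy 2D version uses an infinite-straight-wire field model and invented coil positions rather than HBT-EP's actual coil geometry:

```python
import numpy as np

MU0_OVER_2PI = 2e-7  # mu0 / (2*pi), SI units

def wire_field(sensor, wire, current=100.0):
    """|B| of an infinite straight wire: mu0 * I / (2*pi*r)."""
    return MU0_OVER_2PI * current / np.linalg.norm(sensor - wire)

# Toy 2D stand-ins for the in-situ calibration coils and one unknown sensor.
wires = np.array([[0.0, 0.0], [0.3, 0.0], [0.0, 0.3], [0.3, 0.3]])
true_pos = np.array([0.45, 0.10])
measured = np.array([wire_field(true_pos, w) for w in wires])  # one coil energized at a time

def residual(p):
    return np.array([wire_field(p, w) for w in wires]) - measured

pos = np.array([0.40, 0.20])  # initial guess, e.g. from metrology
for _ in range(50):           # Gauss-Newton on the field mismatch
    r = residual(pos)
    J = np.zeros((len(wires), 2))
    for k in range(2):
        dp = np.zeros(2)
        dp[k] = 1e-7
        J[:, k] = (residual(pos + dp) - r) / 1e-7  # forward-difference Jacobian
    pos = pos - np.linalg.pinv(J) @ r
```

With four non-collinear coils the field magnitudes fix the sensor's distances to four known points, so the 2D position is overdetermined and the fit converges from a reasonable metrology-based guess.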
Calibration Facilities for NIF
Perry, T.S.
2000-06-15
The calibration facilities will be dynamic and will change to meet the needs of experiments. Small sources, such as the Manson Source should be available to everyone at any time. Carrying out experiments at Omega is providing ample opportunity for practice in pre-shot preparation. Hopefully, the needs that are demonstrated in these experiments will assure the development of (or keep in service) facilities at each of the laboratories that will be essential for in-house preparation for experiments at NIF.
Mesoscale hybrid calibration artifact
Tran, Hy D.; Claudet, Andre A.; Oliver, Andrew D.
2010-09-07
A mesoscale calibration artifact, also called a hybrid artifact, suitable for hybrid dimensional measurement, and a method for making the artifact. The hybrid artifact has structural characteristics that make it suitable for dimensional measurement in both vision-based systems and touch-probe-based systems. The hybrid artifact employs the intersection of bulk-micromachined planes to fabricate edges that are sharp to the nanometer level and intersecting planes with crystal-lattice-defined angles.
Astrid-2 SSC ASU Magnetic Calibration
Primdahl, Fritz
1997-01-01
Report of the intercalibration between the star camera and the fluxgate magnetometer onboard the ASTRID-2 satellite. This calibration was performed during the night of 15-16 May 1997 at the Lovö magnetic observatory.
The MIRI Medium Resolution Spectrometer calibration pipeline
Labiano, A; Bailey, J I; Beard, S; Dicken, D; García-Marín, M; Geers, V; Glasse, A; Glauser, A; Gordon, K; Justtanont, K; Klaassen, P; Lahuis, F; Law, D; Morrison, J; Müller, M; Rieke, G; Vandenbussche, B; Wright, G
2016-01-01
The Mid-Infrared Instrument (MIRI) Medium Resolution Spectrometer (MRS) is the only mid-IR integral field spectrometer on board the James Webb Space Telescope. The complexity of the MRS requires a very specialized pipeline, with some specific steps not present in the pipelines of other JWST instruments, such as fringe corrections and wavelength offsets, with different algorithms for point-source or extended-source data. The MRS pipeline also has two variants: the baseline pipeline, optimized for most foreseen science cases, and the optimal pipeline, where extra steps will be needed for specific science cases. This paper provides a comprehensive description of the MRS Calibration Pipeline from uncalibrated slope images to final scientific products, with brief descriptions of its algorithms, input and output data, and the accessory data and calibration data products necessary to run the pipeline.
Calibration of Underwater Sound Transducers
H.R.S. Sastry
1983-07-01
The techniques for calibrating underwater sound transducers under far-field, near-field and closed-environment conditions are reviewed in this paper. The design of an acoustic calibration tank is described. The facilities available at the Naval Physical & Oceanographic Laboratory, Cochin, for the calibration of transducers are also listed.
Brezinski, C
2012-01-01
Numerical analysis has witnessed many significant developments in the 20th century. This book brings together 16 papers dealing with historical developments, survey papers and papers on recent trends in selected areas of numerical analysis, such as: approximation and interpolation, solution of linear systems and eigenvalue problems, iterative methods, quadrature rules, and the solution of ordinary, partial and integral equations. The papers are reprinted from the 7-volume project of the Journal of Computational and Applied Mathematics (/homepage/sac/cam/na2000/index.html).
Internet-based calibration of a multifunction calibrator
BUNTING BACA,LISA A.; DUDA JR.,LEONARD E.; WALKER,RUSSELL M.; OLDHAM,NILE; PARKER,MARK
2000-04-17
A new way of providing calibration services is evolving which employs the Internet to expand present capabilities and make the calibration process more interactive. Sandia National Laboratories and the National Institute of Standards and Technology are collaborating to set up and demonstrate a remote calibration of multifunction calibrators using this Internet-based technique that is becoming known as e-calibration. This paper describes the measurement philosophy and the Internet resources that can provide real-time audio/video/data exchange, consultation and training, as well as web-accessible test procedures, software and calibration reports. The communication system utilizes commercial hardware and software that should be easy to integrate into most calibration laboratories.
CERN radiation protection (RP) calibration facilities
Pozzi, Fabio
2016-04-14
, the facility was commissioned by measuring the calibration quantities of interest, e.g. H*(10), as a function of the source-to-detector distance. In the case of neutron measurements, a comparison with the Monte Carlo results was carried out; in fact, neutron scattering can be an important issue, and the Monte Carlo method can contribute to its estimation and optimization. Neutron calibrations often need to be performed at neutron energies or spectra very different from those generated by the radioactive sources employed in standard calibration laboratories. Unfortunately, fields with a broad neutron spectrum extending to a few GeV are very rare, and the scientific community is calling for worldwide sharing of the existing facilities. The CERN RP group has been managing the CERN-EU high-energy Reference Field (CERF) facility for 20 years, a calibration field unique of its kind. CERF is a workplace field that reproduces the neutron spectrum encountered in the vicinity of high-energy accelerators and at commercial flight altitudes. Within the context of providing a well-characterized workplace field to the scientific community, Monte Carlo simulations were performed with the present development version of the FLUKA code. The simulations were compared with experimental measurements, showing promising results for the future ISO accreditation of the facility as a workplace reference facility. Even though the accreditation process is fairly long, the work achieved so far is setting the bases to start this process in the right way.
Geometric calibration of the circle-plus-arc trajectory.
Hoppe, Stefan; Noo, Frédéric; Dennerlein, Frank; Lauritsch, Günter; Hornegger, Joachim
2007-12-07
In this paper, a novel geometric calibration method for C-arm cone-beam scanners is presented which allows the calibration of the circle-plus-arc trajectory. The main idea is the separation of the trajectory into two circular segments (circle segment and arc segment) which are calibrated independently. This separation makes it possible to reuse a calibration phantom which has been successfully applied in clinical environments to calibrate numerous routinely used C-arm systems. For each trajectory segment, the phantom is placed in an optimal position. The two calibration results are then combined by computing the transformation the phantom underwent between the independent calibration runs. This combination can be done in a post-processing step by using standard linear algebra. The method is not limited to circle-plus-arc trajectories and works for any calibration procedure in which the phantom has a preferred orientation with respect to a trajectory segment. Results are presented for both simulated as well as real data acquired with a C-arm system. We also present the first image reconstruction results for the circle-plus-arc trajectory using real C-arm data.
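The post-processing combination step (recovering the rigid transformation the phantom underwent between the two independent calibration runs) is a classic registration problem solvable with standard linear algebra. A minimal sketch using the Kabsch algorithm, with synthetic marker positions standing in for the real phantom data:

```python
import numpy as np

def rigid_transform(P, Q):
    """Least-squares rotation R and translation t with Q ≈ R @ P + t (Kabsch)."""
    cP = P.mean(axis=1, keepdims=True)
    cQ = Q.mean(axis=1, keepdims=True)
    H = (P - cP) @ (Q - cQ).T                   # 3x3 covariance of centered points
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
    D = np.diag([1.0, 1.0, d])
    R = Vt.T @ D @ U.T
    t = cQ - R @ cP
    return R, t

# Synthetic phantom marker positions as seen by the two segment calibrations.
rng = np.random.default_rng(0)
P = rng.normal(size=(3, 8))                      # run 1 (circle segment)
R_true = np.array([[0.0, -1.0, 0.0],
                   [1.0,  0.0, 0.0],
                   [0.0,  0.0, 1.0]])            # 90° rotation about z
Q = R_true @ P + np.array([[1.0], [2.0], [3.0]])  # run 2 (arc segment)
R, t = rigid_transform(P, Q)
```

Applying the recovered (R, t) to the circle-segment result expresses both segment calibrations in a common frame, which is the essence of the combination step described above.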
1992-12-01
mathematical physics. ABSTRACT - We consider a new method for the numerical solution both of nonlinear systems of equations and of complementarity... Università di Roma "La Sapienza", 00185 Roma, Italy. Maria Cristina Recchioni, Istituto Nazionale di Alta Matematica "F. Severi", piazzale Aldo Moro 5, 00185
Baker, John G.
2009-01-01
Recent advances in numerical relativity have fueled an explosion of progress in understanding the predictions of Einstein's theory of gravity, General Relativity, for the strong field dynamics, the gravitational radiation wave forms, and consequently the state of the remnant produced from the merger of compact binary objects. I will review recent results from the field, focusing on mergers of two black holes.
Sozio, Gerry
2009-01-01
Senior secondary students cover numerical integration techniques in their mathematics courses. In particular, students would be familiar with the "midpoint rule," the elementary "trapezoidal rule" and "Simpson's rule." This article derives these techniques by methods which secondary students may not be familiar with and an approach that…
Migliore, Juan
2012-01-01
An unpublished example due to Joe Harris from 1983 (or earlier) gave two smooth space curves with the same Hilbert function, but one of the curves was arithmetically Cohen-Macaulay (ACM) and the other was not. Starting with an arbitrary homogeneous ideal in any number of variables, we give two constructions, each of which produces, in a finite number of steps, an ideal with the Hilbert function of a codimension two ACM subscheme. We call the subscheme associated to such an ideal "numerically ACM." We study the connections between these two constructions, and in particular show that they produce ideals with the same Hilbert function. We call the resulting ideal from either construction a "numerical Macaulification" of the original ideal. Specializing to the case where the ideals are unmixed of codimension two, we show that (a) every even liaison class, $\\mathcal L$, contains numerically ACM subschemes, (b) the subset, $\\mathcal M$, of numerically ACM subschemes in $\\mathcal L$ has, by itself, a Lazarsfeld-Rao ...
Comparative study of camera calibration algorithms with application to spacecraft navigation
Poelzleitner, Wolfgang; Ulm, Michael
1994-10-01
This paper deals with the problem of camera calibration based on 3D feature measurements. It occurs in industrial 3D measurement systems, as well as in autonomous navigation systems, where the estimation of motion parameters is required. We have selected the problem of extrinsic calibration (exterior orientation) of a camera that is looking at flat or almost flat surfaces (or terrain). This situation causes numerical and stability problems to many of the known calibration methods. To study the impact of flatness of the reference surface (or calibration target) on the calibration errors we have done a comparative study using sixteen available calibration procedures. The major emphasis was on robustness with respect to 3D measurement errors and sensitivity to flatness. A new calibration method is also investigated, which can be used independently of whether the calibration reference surface is flat, almost flat, or rugged.
Cruzalèbes, P.; Jorissen, A.; Sacuto, S.; Bonneau, D.
2010-06-01
Context. Accurate long-baseline interferometric measurements require careful calibration with reference stars. Small calibrators with high angular diameter accuracy ensure the true visibility uncertainty to be dominated by the measurement errors. Aims: We review some indirect methods for estimating angular diameter, using various types of input data. Each diameter estimate, obtained for the test-case calibrator star λ Gru, is compared with the value 2.71 mas found in the Bordé calibrator catalogue published in 2002. Methods: Angular size estimations from spectral type, spectral index, in-band magnitude, broadband photometry, and spectrophotometry give close estimates of the angular diameter, with slightly variable uncertainties. Fits on photometry and spectrophotometry need physical atmosphere models with "plausible" stellar parameters. Angular diameter uncertainties were estimated by means of residual bootstrapping confidence intervals. All numerical results and graphical outputs presented in this paper were obtained using the routines developed under PV-WAVE®, which compose the modular software suite SPIDAST, created to calibrate and interpret spectroscopic and interferometric measurements, particularly those obtained with VLTI-AMBER. Results: The final angular diameter estimate 2.70 mas of λ Gru, with 68% confidence interval 2.65-2.81 mas, is obtained by fit of the MARCS model on the ISO-SWS 2.38-27.5 μm spectrum, with the stellar parameters Te = 4250 K, log g = 2.0, z = 0.0 dex, M = 1.0 M⊙, and ξ_t = 2.0 km s-1.
Calibrating ensemble reliability whilst preserving spatial structure
Jonathan Flowerdew
2014-03-01
Ensemble forecasts aim to improve decision-making by predicting a set of possible outcomes. Ideally, these would provide probabilities which are both sharp and reliable. In practice, the models, data assimilation and ensemble perturbation systems are all imperfect, leading to deficiencies in the predicted probabilities. This paper presents an ensemble post-processing scheme which directly targets local reliability, calibrating both climatology and ensemble dispersion in one coherent operation. It makes minimal assumptions about the underlying statistical distributions, aiming to extract as much information as possible from the original dynamic forecasts and support statistically awkward variables such as precipitation. The output is a set of ensemble members preserving the spatial, temporal and inter-variable structure from the raw forecasts, which should be beneficial to downstream applications such as hydrological models. The calibration is tested on three leading 15-day ensemble systems, and on their aggregation into a simple multimodel ensemble. Results are presented for 12 h, 1° scale over Europe for a range of surface variables, including precipitation. The scheme is very effective at removing unreliability from the raw forecasts, whilst generally preserving or improving statistical resolution. In most cases, these benefits extend to the rarest events at each location within the 2-yr verification period. The reliability and resolution are generally equivalent or superior to those achieved using a Local Quantile-Quantile Transform, an established calibration method which generalises bias correction. The value of preserving spatial structure is demonstrated by the fact that 3×3 averages derived from grid-scale precipitation calibration perform almost as well as direct calibration at 3×3 scale, and much better than a similar test neglecting the spatial relationships. Some remaining issues are discussed regarding the finite size of the output
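The Local Quantile-Quantile Transform used as the comparison baseline is, at its core, quantile mapping between the forecast and observed climatologies. A minimal single-site sketch on toy data (not the paper's full scheme, and with invented climatologies):

```python
import numpy as np

def qq_transform(forecast, hist_fcst, hist_obs):
    """Map each forecast value to the observed-climatology quantile matching
    its rank within the historical forecast climatology."""
    hist_fcst = np.sort(hist_fcst)
    hist_obs = np.sort(hist_obs)
    # empirical CDF position of each forecast value within hist_fcst
    q = np.searchsorted(hist_fcst, forecast) / len(hist_fcst)
    return np.quantile(hist_obs, np.clip(q, 0.0, 1.0))

# Toy climatologies: the model runs 2x too warm relative to observations.
rng = np.random.default_rng(1)
hist_obs = rng.normal(10.0, 2.0, size=5000)
hist_fcst = 2.0 * hist_obs                      # biased model climatology
calibrated = qq_transform(np.array([20.0]), hist_fcst, hist_obs)
```

A raw forecast of 20.0 sits at the model's median, so it maps back to roughly 10.0, the observed median; applying this independently at each grid point is what discards the spatial structure the paper's scheme is designed to preserve.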
John Schabron; Joseph Rovani; Mark Sanderson
2008-02-29
Mercury continuous emissions monitoring systems (CEMS) are being implemented in over 800 coal-fired power plant stacks. The power industry desires to conduct at least a full year of monitoring before the formal monitoring and reporting requirement begins on January 1, 2009. It is important for the industry to have available reliable, turnkey equipment from CEM vendors. Western Research Institute (WRI) is working closely with the Electric Power Research Institute (EPRI), the National Institute of Standards and Technology (NIST), and the Environmental Protection Agency (EPA) to facilitate the development of the experimental criteria for a NIST traceability protocol for dynamic elemental mercury vapor generators. The generators are used to calibrate mercury CEMs at power plant sites. The Clean Air Mercury Rule (CAMR), which was published in the Federal Register on May 18, 2005, requires that calibration be performed with NIST-traceable standards (Federal Register 2007). Traceability procedures will be defined by EPA. An initial draft traceability protocol was issued by EPA in May 2007 for comment. In August 2007, EPA issued an interim traceability protocol for elemental mercury generators (EPA 2007). The protocol is based on the actual analysis of the output of each calibration unit at several concentration levels, ranging initially from about 2-40 µg/m³ elemental mercury, and in the future down to 0.2 µg/m³, and this analysis will be directly traceable to analyses by NIST. The document is divided into two separate sections. The first deals with the qualification of generators by the vendors for use in mercury CEM calibration. The second describes the procedure that the vendors must use to certify the generator models that meet the qualification specifications. The NIST-traceable certification is performance based, traceable to analysis using isotope dilution inductively coupled plasma/mass spectrometry performed by NIST in Gaithersburg, MD.
An Extended Prolog Architecture for Integrated Symbolic and Numerical Executions
1988-06-14
clauses. Our current experimental system is a Xenologic model X-1 [5] co-processor with a Sun 3/160 host. The X-1 is an improved, commercial... stimulating interactions with Professor William Kahan, Robert Owen of Bipolar Integrated Technology Incorporated, the staff at Xenologic Incorporated, and
Numerical Solution of the Extended Nonlinear Schrodinger Equation
2006-09-01
processing environment. With this system, matrix calculations are distributed from a client machine to multiple "workers" on a "Beowulf" cluster and... processed in parallel. The new software will also run on a stand-alone MATLAB version 7 client (without connections to the Beowulf) as a conventional MATLAB... Processing without Diffraction: With diffraction disabled, the application shows a modest speed-up when 8 workers on the cluster are used (see below
Calibration of the Herschel SPIRE Fourier Transform Spectrometer
Swinyard, B M; Hopwood, R; Valtchanov, I; Lu, N; Fulton, T; Benielli, D; Imhof, P; Marchili, N; Baluteau, J -P; Bendo, G J; Ferlet, M; Griffin, M J; Lim, T L; Makiwa, G; Naylor, D A; Orton, G S; Papageorgiou, A; Pearson, C P; Schulz, B; Sidher, S D; Spencer, L D; van der Wiel, M H D; Wu, R
2014-01-01
The Herschel SPIRE instrument consists of an imaging photometric camera and an imaging Fourier Transform Spectrometer (FTS), both operating over a frequency range of 450-1550 GHz. In this paper, we briefly review the FTS design, operation, and data reduction, and describe in detail the approach taken to relative calibration (removal of instrument signatures) and absolute calibration against standard astronomical sources. The calibration scheme assumes a spatially extended source and uses the Herschel telescope as primary calibrator. Conversion from extended to point-source calibration is carried out using observations of the planet Uranus. The model of the telescope emission is shown to be accurate to within 6% and repeatable to better than 0.06% and, by comparison with models of Mars and Neptune, the Uranus model is shown to be accurate to within 3%. Multiple observations of a number of point-like sources show that the repeatability of the calibration is better than 1%, if the effects of the satellite absolu...
Self-Calibrating Pressure Transducer
Lueck, Dale E. (Inventor)
2006-01-01
A self-calibrating pressure transducer is disclosed. The device uses an embedded zirconia membrane which pumps a determined quantity of oxygen into the device. The associated pressure can be determined, and thus the transducer pressure readings can be calibrated. The zirconia membrane obtains oxygen from the surrounding environment when possible; otherwise, an oxygen reservoir or other source is utilized. In another embodiment, a reversible fuel cell assembly is used to pump oxygen and hydrogen into the system. Since a known amount of gas is pumped across the cell, the pressure produced can be determined, and thus the device can be calibrated. An isolation valve system is used to allow the device to be calibrated in situ. Calibration is optionally automated so that calibration can be continuously monitored. The device is preferably a fully integrated MEMS device. Since the device can be calibrated without removing it from the process, reductions in costs and down time are realized.
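The calibration principle above (a known quantity of pumped gas produces a computable reference pressure) reduces, to first order, to the ideal gas law; a sketch with hypothetical numbers, ignoring real-gas and temperature-gradient effects:

```python
R_GAS = 8.314  # universal gas constant, J / (mol*K)

def reference_pressure(n_mol, volume_m3, temp_k):
    """Ideal-gas pressure created by pumping n_mol of gas into a sealed, known volume."""
    return n_mol * R_GAS * temp_k / volume_m3

# Hypothetical numbers: 1e-6 mol of O2 pumped into a 1 cm^3 isolated cavity at 300 K.
p_ref = reference_pressure(1e-6, 1e-6, 300.0)  # pressure in pascals
```

Comparing the transducer's reading against p_ref at several pumped quantities yields the in-situ calibration curve without removing the device from the process.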
A SVD-based method to assess the uniqueness and accuracy of SPECT geometrical calibration.
Ma, Tianyu; Yao, Rutao; Shao, Yiping; Zhou, Rong
2009-12-01
Geometrical calibration is critical to obtaining high resolution and artifact-free reconstructed image for SPECT and CT systems. Most published calibration methods use analytical approach to determine the uniqueness condition for a specific calibration problem, and the calibration accuracy is often evaluated through empirical studies. In this work, we present a general method to assess the characteristics of both the uniqueness and the quantitative accuracy of the calibration. The method uses a singular value decomposition (SVD) based approach to analyze the Jacobian matrix from a least-square cost function for the calibration. With this method, the uniqueness of the calibration can be identified by assessing the nonsingularity of the Jacobian matrix, and the estimation accuracy of the calibration parameters can be quantified by analyzing the SVD components. A direct application of this method is that the efficacy of a calibration configuration can be quantitatively evaluated by choosing a figure-of-merit, e.g., the minimum required number of projection samplings to achieve desired calibration accuracy. The proposed method was validated with a slit-slat SPECT system through numerical simulation studies and experimental measurements with point sources and an ultra-micro hot-rod phantom. The predicted calibration accuracy from the numerical studies was confirmed by the experimental point source calibrations at approximately 0.1 mm for both the center of rotation (COR) estimation of a rotation stage and the slit aperture position (SAP) estimation of a slit-slat collimator by an optimized system calibration protocol. The reconstructed images of a hot rod phantom showed satisfactory spatial resolution with a proper calibration and showed visible resolution degradation with artificially introduced 0.3 mm COR estimation error. The proposed method can be applied to other SPECT and CT imaging systems to analyze calibration method assessment and calibration protocol
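The Jacobian-SVD assessment described above can be illustrated on a toy calibration model; the two-parameter forward model below (a COR-like offset and a SAP-like offset projected over view angles) is invented for illustration and is not the paper's system model:

```python
import numpy as np

def numeric_jacobian(f, p, eps=1e-6):
    """Forward-difference Jacobian of the residual vector f at parameters p."""
    f0 = f(p)
    J = np.zeros((f0.size, p.size))
    for k in range(p.size):
        dp = np.zeros_like(p)
        dp[k] = eps
        J[:, k] = (f(p + dp) - f0) / eps
    return J

# Toy forward model: projection residuals depend on two calibration
# parameters (a COR-like offset c and a SAP-like offset a) over 12 views.
angles = np.linspace(0.0, 2.0 * np.pi, 12, endpoint=False)

def residuals(p):
    c, a = p
    return c * np.cos(angles) + a * np.sin(angles)

J = numeric_jacobian(residuals, np.array([0.0, 0.0]))
s = np.linalg.svd(J, compute_uv=False)
# No tiny singular values -> nonsingular Jacobian -> the calibration is unique,
# and 1/s quantifies how measurement noise inflates each parameter estimate.
unique = s.min() > 1e-8 * s.max()
```

A near-zero singular value would flag a parameter combination the projection samplings cannot constrain, which is exactly the uniqueness check (and the sampling figure-of-merit) the abstract describes.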
Ozone measurement systems: associated instrumentation and calibration
J. Bellido
2006-01-01
The harmful effects produced by ozone have led to extensive regulation to define and establish ambient air quality goals, based on common methods and criteria. Atmospheric pollution surveillance networks are systems extended worldwide, and the technology applied to ozone measurement is nowadays quite standardized. The aim of this paper is to give a general view of the most common systems used for ozone measurement in ambient air from a practical point of view. The instrumentation used and the usual calibration methods are described.
Sezar Gülbaz
2015-01-01
The land development and increase in urbanization in a watershed affect water quantity and water quality. On one hand, urbanization provokes the adjustment of the geomorphic structure of the streams and ultimately raises the peak flow rate, which causes floods; on the other hand, it diminishes water quality, which results in an increase in Total Suspended Solids (TSS). Consequently, sediment accumulation downstream of urban areas is observed, which is not preferred for a longer life of dams. In order to overcome the sediment accumulation problem in dams, the amount of TSS in streams and in watersheds should be taken under control. Low Impact Development (LID) is a Best Management Practice (BMP) which may be used for this purpose. It is a land planning and engineering design method applied to manage storm water runoff in order to reduce flooding and simultaneously improve water quality. LID includes techniques to predict suspended solid loads in surface runoff generated over impervious urban surfaces. In this study, the impact of LID-BMPs on surface runoff and TSS is investigated by employing a calibrated hydrodynamic model for Sazlidere Watershed, which is located in Istanbul, Turkey. For this purpose, a calibrated hydrodynamic model was developed using the Environmental Protection Agency Storm Water Management Model (EPA SWMM). For model calibration and validation, we set up a rain gauge and a flow meter in the field and obtained rainfall and flow rate data. We then selected several LID types, such as retention basins, vegetative swales and permeable pavement, and obtained their influence on peak flow rate and on pollutant buildup and washoff for TSS. Consequently, we observe the possible effects of LID on surface runoff and TSS in Sazlidere Watershed.
Dynamic Torque Calibration Unit
Agronin, Michael L.; Marchetto, Carl A.
1989-01-01
Proposed dynamic torque calibration unit (DTCU) measures torque in rotary actuator components such as motors, bearings, gear trains, and flex couplings. Unique because designed specifically for testing components under low rates. Measures torque in device under test during controlled steady rotation or oscillation. Rotor oriented vertically, supported by upper angular-contact bearing and lower radial-contact bearing that floats axially to prevent thermal expansion from loading bearings. High-load capacity air bearing available to replace ball bearings when higher load capacity or reduction in rate noise required.
ALTEA: The instrument calibration
Zaconte, V. [INFN and University of Rome Tor Vergata, Department of Physics, Via della Ricerca Scientifica 1, 00133 Rome (Italy)], E-mail: livio.narici@roma2.infn.it; Belli, F.; Bidoli, V.; Casolino, M.; Di Fino, L.; Narici, L.; Picozza, P.; Rinaldi, A. [INFN and University of Rome Tor Vergata, Department of Physics, Via della Ricerca Scientifica 1, 00133 Rome (Italy); Sannita, W.G. [DISM, University of Genova, Genova (Italy); Department of Psychiatry, SUNY, Stony Brook, NY (United States); Finetti, N.; Nurzia, G.; Rantucci, E.; Scrimaglio, R.; Segreto, E. [Department of Physics, University and INFN, L' Aquila (Italy); Schardt, D. [GSI/Biophysik, Darmstadt (Germany)
2008-05-15
The ALTEA program is an international and multi-disciplinary project aimed at studying particle radiation in the space environment and its effects on astronauts' brain functions, such as the anomalous perception of light flashes first reported during the Apollo missions. The ALTEA space facility includes a particle detector made of six silicon telescopes and has been onboard the International Space Station (ISS) since July 2006. In this paper, the detector calibration at the heavy-ion synchrotron SIS18 at GSI Darmstadt will be presented and compared to the Geant 3 Monte Carlo simulation. Finally, the results of a neural network analysis that was used for ion discrimination on fragmentation data will also be presented.
2016-06-06
ELC – Extended Life Coolant; SCA – Supplemental Coolant Additive; SOW – Scope of Work; SwRI – Southwest Research Institute; TARDEC – Tank Automotive... ethylene or propylene glycol and 35% extended life coolant #1 (ELC1) with a balance of water. At a higher ELC1 content of 45% or 50%, the mass loss... UNCLASSIFIED. Extended Life Coolant Testing, Interim Report TFLRF No. 478, by Gregory A. T. Hansen, Edwin A
Extended icosahedral structures
Jaric, Marko V
1989-01-01
Extended Icosahedral Structures discusses the concepts about crystal structures with extended icosahedral symmetry. This book is organized into six chapters that focus on actual modeling of extended icosahedral crystal structures. This text first presents a tiling approach to the modeling of icosahedral quasiperiodic crystals. It then describes the models for icosahedral alloys based on random connections between icosahedral units, with particular emphasis on diffraction properties. Other chapters examine the glassy structures with only icosahedral orientational order and the extent of tra
Fine tuning consensus optimization for distributed radio interferometric calibration
Yatawatta, Sarod
2016-01-01
We recently proposed the use of consensus optimization as a viable and effective way to improve the quality of calibration of radio interferometric data. We showed that it is possible to obtain far more accurate calibration solutions and also to distribute the compute load across a network of computers by using this technique. A crucial aspect in any consensus optimization problem is the selection of the penalty parameter used in the alternating direction method of multipliers (ADMM) iterations. This affects the convergence speed as well as the accuracy. In this paper, we use the Hessian of the cost function used in calibration to appropriately select this penalty. We extend our results to a multi-directional calibration setting, where we propose to use a penalty scaled by the squared intensity of each direction.
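The role of the ADMM penalty parameter described above can be illustrated on a toy consensus problem (the problem, data and parameter values below are illustrative, not the paper's calibration setup):

```python
import numpy as np

def consensus_admm(local_data, rho=1.0, iters=100):
    """Consensus ADMM for the toy problem
        minimize sum_i 0.5*(x_i - a_i)^2  subject to x_i = z,
    whose solution is the global mean of the a_i.  The penalty
    parameter rho controls the convergence speed, which is the
    quantity the paper tunes via the Hessian of the cost."""
    a = np.asarray(local_data, dtype=float)
    x = np.zeros_like(a)          # local variables (one per "worker")
    u = np.zeros_like(a)          # scaled dual variables
    z = 0.0                       # consensus variable
    for _ in range(iters):
        x = (a + rho * (z - u)) / (1.0 + rho)   # local proximal steps
        z = np.mean(x + u)                      # consensus (averaging) step
        u = u + x - z                           # dual update
    return z

data = [1.0, 2.0, 3.0, 10.0]
z_star = consensus_admm(data, rho=1.0)
```

Re-running with a much smaller or larger `rho` typically slows convergence markedly, which is the trade-off a Hessian-informed penalty selection addresses.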
M. Wei
2013-09-01
Full Text Available A number of real-time ocean model forecasts were carried out successfully at the Naval Research Laboratory (NRL) to provide modeling support and numerical guidance to the CARTHE GLAD at-sea experiment during summer 2012. Two RELO ensembles and three single models using NCOM and HYCOM with different resolutions were run. A calibrated ensemble system with enhanced spread and reliability was developed to better support this experiment. The calibrated ensemble is found to outperform the un-calibrated ensemble in forecasting accuracy, skill, and reliability for all the variables and observation spaces evaluated. The metrics used in this paper include RMS error, anomaly correlation, PECA, Brier score, spread reliability, and the Talagrand rank histogram. It is also found that even the un-calibrated ensemble outperforms the single forecast from the model with the same resolution. The advantages of the ensembles extend further to the Lagrangian framework. In contrast to a single model forecast, the RELO ensemble provides not only the most likely Lagrangian trajectory for a particle in the ocean, but also an uncertainty estimate that directly reflects the complicated ocean dynamics, which is valuable for decision makers. The examples show that the calibrated ensemble, with more reliability, can capture trajectories in different, even opposite, directions, which would be missed by the un-calibrated ensemble. The ensembles are applied to compute the repelling and attracting Lagrangian coherent structures (LCSs), and the uncertainties of the LCSs, which are hard to obtain from a single model forecast, are estimated. It is found that the spatial scales of the LCSs depend on the model resolution. The model with the highest resolution produces the finest, small-scale LCS structures, while the model with the lowest resolution generates only large-scale LCSs. The repelling and attracting LCSs are found to intersect at many locations and create complex mesoscale
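Two of the verification metrics named above can be sketched in a few lines; the data are hypothetical and the function bodies follow the standard textbook definitions, not NRL's implementation:

```python
import numpy as np

def brier_score(prob_forecasts, outcomes):
    """Mean squared difference between forecast probabilities and
    binary outcomes (0 = perfect, larger = worse)."""
    p = np.asarray(prob_forecasts, float)
    o = np.asarray(outcomes, float)
    return np.mean((p - o) ** 2)

def rank_histogram(ensemble, observations):
    """Talagrand rank histogram: for each case, the rank of the
    observation within the sorted ensemble members.  A flat histogram
    indicates a reliable (well-calibrated) ensemble."""
    ens = np.sort(np.asarray(ensemble, float), axis=1)  # (cases, members)
    obs = np.asarray(observations, float)
    ranks = np.sum(ens < obs[:, None], axis=1)          # 0 .. n_members
    return np.bincount(ranks, minlength=ens.shape[1] + 1)

bs = brier_score([0.9, 0.1, 0.8], [1, 0, 1])
hist = rank_histogram([[1.0, 2.0, 3.0], [0.5, 1.5, 2.5]], [2.5, 0.2])
```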
Workflow to numerically reproduce laboratory ultrasonic datasets
A. Biryukov; N. Tisato; G. Grasselli
2014-01-01
The risks and uncertainties related to the storage of high-level radioactive waste (HLRW) can be reduced thanks to focused studies and investigations. HLRWs are going to be placed in deep geological repositories, enveloped in an engineered bentonite barrier, whose physical conditions are subject to change throughout the lifespan of the infrastructure. Seismic tomography can be employed to monitor its physical state and integrity. The design of the seismic monitoring system can be optimized via conducting and analyzing numerical simulations of wave propagation in representative repository geometry. However, the quality of the numerical results relies on their initial calibration. The main aim of this paper is to provide a workflow to calibrate numerical tools employing laboratory ultrasonic datasets. The finite difference code SOFI2D was employed to model ultrasonic waves propagating through a laboratory sample. Specifically, the input velocity model was calibrated to achieve a best match between experimental and numerical ultrasonic traces. Likely due to the imperfections of the contact surfaces, the resultant velocities of P- and S-wave propagation tend to be noticeably lower than those a priori assigned. Then, the calibrated model was employed to estimate the attenuation in a montmorillonite sample. The obtained low quality factors (Q) suggest that pronounced inelastic behavior of the clay has to be taken into account in geophysical modeling and analysis. Consequently, this contribution should be considered as a first step towards the creation of a numerical tool to evaluate wave propagation in nuclear waste repositories.
Numerical differential protection
Ziegler, Gerhard
2012-01-01
Differential protection is a fast and selective method of protection against short-circuits. It is applied in many variants for electrical machines, transformers, busbars, and electric lines. Initially this book covers the theory and fundamentals of analog and numerical differential protection. Current transformers are treated in detail including transient behaviour, impact on protection performance, and practical dimensioning. An extended chapter is dedicated to signal transmission for line protection, in particular, modern digital communication and GPS timing. The emphasis is then pla
A Simple Accelerometer Calibrator
Salam, R. A.; Islamy, M. R. F.; Munir, M. M.; Latief, H.; Irsyam, M.; Khairurrijal
2016-08-01
A high possibility of earthquakes could lead to a high number of victims. Earthquakes can also cause other hazards such as tsunamis, landslides, etc. A system that can detect earthquake occurrence is therefore required. One possible way to detect earthquakes is a vibration sensor system using an accelerometer. However, the output of such a system is usually in the form of acceleration data, so a calibrator for the accelerometer used to sense the vibration is needed. In this study, a simple accelerometer calibrator has been developed using a 12 V DC motor, an optocoupler, a Liquid Crystal Display (LCD) and an AVR 328 microcontroller as the controller. The system uses Pulse Width Modulation (PWM) from the microcontroller to control the motor rotational speed and hence the vibration frequency. The vibration frequency is read by the optocoupler, and those data are used as feedback to the system. The results show that the system can control the rotational speed and the vibration frequencies in accordance with the defined PWM.
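A minimal sketch of the closed-loop idea, with the motor idealized as a linear duty-to-frequency map (`k_motor`, `gain` and the target frequency are illustrative assumptions, not values from the paper):

```python
def calibrator_control(target_freq_hz, k_motor=2.0, gain=0.05, steps=200):
    """Toy closed-loop sketch of the calibrator: the microcontroller
    adjusts the PWM duty cycle (0..1) until the rotation frequency read
    back by the optocoupler matches the target.  The motor is idealized
    as freq = k_motor * duty."""
    duty = 0.0
    for _ in range(steps):
        measured = k_motor * duty          # optocoupler feedback (ideal motor)
        error = target_freq_hz - measured
        duty += gain * error               # proportional correction
        duty = min(max(duty, 0.0), 1.0)    # saturate the duty cycle
    return duty, k_motor * duty

duty, freq = calibrator_control(1.2)
```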
Calibration of a slim-hole density sonde using MCNPX
Won, Byeongho; Hwang, Seho; Shin, Jehyun; Kim, Jongman
2014-05-01
The density log is a well logging tool that continuously records the bulk density of the formation. It is widely applied in a variety of fields such as petroleum exploitation, mineral exploration, and geotechnical surveys. The density log is normally applied to open holes, but difficult conditions such as cased boreholes, variation of the borehole diameter, borehole fluid salinity, and stand-off are frequently encountered, so density correction curves for the various borehole conditions are needed. The primary calibration curve supplied by the manufacturer is used for the formation density calculation. For density logs used in the oil industry, calibration curves for various borehole environments are applied to the density correction, but commonly used slim-hole density logging sondes normally have only a calibration curve for the variation of borehole diameter. In order to correct for the various borehole environmental conditions, it is necessary to derive the primary calibration curve of the density sonde using numerical modeling. Numerical modeling serves as a low-cost substitute for experimental test pits. We have performed numerical modeling using MCNPX, a Monte Carlo code that records the average behavior of radiation particles. In this study, the primary calibration curve of the FDGS (Formation Density Gamma Sonde) for slim boreholes was matched using a 100 mCi 137Cs gamma source. On the basis of this work, correction curves for various borehole environments were produced.
2005-01-01
A means or artefact for calibrating the height/depth or Z axis of a microscope, such as a confocal microscope, an interference microscope or a Scanning Electron Microscope. The artefact comprises a number of tapering or pie-shaped, parallel surfaces each extending from a central axis, whereby all...
A Bayesian Estimator for Linear Calibration Error Effects in Thermal Remote Sensing
Morgan, J A
2005-01-01
The Bayesian Land Surface Temperature estimator previously developed has been extended to include the effects of imperfectly known gain and offset calibration errors. It is possible to treat both gain and offset as nuisance parameters and, by integrating over an uninformative range for their magnitudes, eliminate the dependence of surface temperature and emissivity estimates upon the exact calibration error.
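The nuisance-parameter integration can be illustrated with a toy two-channel measurement model; the channel radiances, prior ranges and noise level below are invented for illustration, while the real estimator uses physical radiance models:

```python
import numpy as np

def marginal_posterior(y, T_grid, channels, sigma=0.05,
                       gains=np.linspace(0.9, 1.1, 41),
                       offsets=np.linspace(-0.1, 0.1, 41)):
    """Toy version of the abstract's idea: treat an unknown gain g and
    offset b as nuisance parameters with flat priors over a stated
    range and integrate them out numerically.  Each channel k obeys
    y[k] = g * channels[k](T) + b + noise."""
    gg, bb = np.meshgrid(gains, offsets, indexing="ij")
    post = np.zeros_like(T_grid, dtype=float)
    for i, T in enumerate(T_grid):
        like = np.ones_like(gg)
        for k, radiance in enumerate(channels):
            like *= np.exp(-0.5 * ((y[k] - gg * radiance(T) - bb) / sigma) ** 2)
        post[i] = like.sum()              # flat-prior marginalization over (g, b)
    return post / post.sum()

channels = [lambda T: 0.01 * T, lambda T: 5e-5 * T**2]   # toy radiance models
y = [f(300.0) for f in channels]                          # truth: T=300 K, g=1, b=0
T_grid = np.linspace(280.0, 320.0, 81)
post = marginal_posterior(y, T_grid, channels)
```

Because the two channels respond differently to temperature, the gain/offset degeneracy is broken and the marginal posterior concentrates near the true temperature despite the unknown calibration errors.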
Engelbracht, C W; Su, K Y L; Rho, J; Rieke, G H; Muzerolle, J; Padgett, D L; Hines, D C; Gordon, K D; Fadda, D; Noriega-Crespo, A; Kelly, D M; Latter, W B; Hinz, J L; Misselt, K A; Morrison, J E; Stansberry, J A; Shupe, D L; Stolovy, S; Wheaton, Wm A; Young, E T; Neugebauer, G; Wachter, S; Pérez-González, P G; Frayer, D T; Marleau, F R
2007-01-01
We present the stellar calibrator sample and the conversion from instrumental to physical units for the 24 micron channel of the Multiband Imaging Photometer for Spitzer (MIPS). The primary calibrators are A stars, and the calibration factor based on those stars is 4.54*10^{-2} MJy sr^{-1} (DN/s)^{-1}, with a nominal uncertainty of 2%. We discuss the data-reduction procedures required to attain this accuracy; without these procedures, the calibration factor obtained using the automated pipeline at the Spitzer Science Center is 1.6% +/- 0.6% lower. We extend this work to predict 24 micron flux densities for a sample of 238 stars which covers a larger range of flux densities and spectral types. We present a total of 348 measurements of 141 stars at 24 micron. This sample covers a factor of ~460 in 24 micron flux density, from 8.6 mJy up to 4.0 Jy. We show that the calibration is linear over that range with respect to target flux and background level. The calibration is based on observations made using 3-second e...
Quantum Extended Supersymmetries
Grigore, D R; Grigore, Dan Radu; Scharf, Gunter
2003-01-01
We analyse some quantum multiplets associated with extended supersymmetries. We study in detail the general form of the causal (anti)commutation relations. The condition of positivity of the scalar product imposes severe restrictions on the (quantum) model. It is problematic whether one can find quantum extensions of the standard model with extended supersymmetries.
A reference material for dynamic displacement calibration
Davighi, A.; Hack, E.; Patterson, E.; Whelan, M.
2010-06-01
Calibration of displacement and strain measurement systems is an essential step in providing traceability and confidence in stress and strain distributions obtained from experiment and used to validate simulations employed in engineering design. Reference materials provide a simple, well-defined distribution of the measured quantity that can be traced to an international standard and can be used to assess the uncertainty associated with the measurement system. Previous work has established a reference material and procedure for calibrating optical systems for measuring static, in-plane strain distributions and also demonstrated its use. A new effort is in progress to extend this work to the measurement of three-dimensional displacement distributions induced by cyclic and dynamic loading, including transients and large-scale deformation. The first step in this effort has been to define both the essential and desirable attributes of a reference material for calibrating systems capable of measurements of dynamic displacement and strain. An international consortium of research laboratories, system designers, manufacturers and end-users has identified a list of attributes and members of the experimental mechanics community have been asked to weight the importance of these attributes. The attributes are being utilised to evaluate candidate designs for the reference material which have been generated through a series of brain-storming sessions within the consortium.
Extended Theories of Gravitation
Fatibene Lorenzo
2013-09-01
Full Text Available Extended theories of gravitation are naturally singled out by an analysis inspired by the Ehlers-Pirani-Schild framework. In this framework the structure of spacetime is described by a Weyl geometry which is enforced by dynamics. Standard General Relativity is just one possible theory within the class of extended theories of gravitation. All Palatini f(R) theories are also shown to be extended theories of gravitation. This more general setting allows a more general interpretation scheme and more general possible couplings between gravity and matter. The definitions and constructions of extended theories will be reviewed. A general interpretation scheme will be considered for extended theories and some examples will be considered.
DOA estimation and mutual coupling calibration with the SAGE algorithm
Xiong Kunlai; Liu Zhangmeng; Liu Zheng; Jiang Wenli
2014-01-01
In this paper, a novel algorithm is presented for direction of arrival (DOA) estimation and array self-calibration in the presence of unknown mutual coupling. In order to highlight the relationship between the array output and mutual coupling coefficients, we present a novel model of the array output with the unknown mutual coupling coefficients. Based on this model, we use the space alternating generalized expectation-maximization (SAGE) algorithm to jointly estimate the DOA parameters and the mutual coupling coefficients. Unlike many existing counterparts, our method requires neither calibration sources nor initial calibration information. At the same time, our proposed method inherits the characteristics of good convergence and high estimation precision of the SAGE algorithm. By numerical experiments we demonstrate that our proposed method outperforms the existing method for DOA estimation and mutual coupling calibration.
DOA estimation and mutual coupling calibration with the SAGE algorithm
Xiong Kunlai
2014-12-01
Full Text Available In this paper, a novel algorithm is presented for direction of arrival (DOA estimation and array self-calibration in the presence of unknown mutual coupling. In order to highlight the relationship between the array output and mutual coupling coefficients, we present a novel model of the array output with the unknown mutual coupling coefficients. Based on this model, we use the space alternating generalized expectation-maximization (SAGE algorithm to jointly estimate the DOA parameters and the mutual coupling coefficients. Unlike many existing counterparts, our method requires neither calibration sources nor initial calibration information. At the same time, our proposed method inherits the characteristics of good convergence and high estimation precision of the SAGE algorithm. By numerical experiments we demonstrate that our proposed method outperforms the existing method for DOA estimation and mutual coupling calibration.
Optical Tweezer Assembly and Calibration
Collins, Timothy M.
2004-01-01
An Optical Tweezer, as the name implies, is a useful tool for precision manipulation of micro and nano scale objects. Using the principle of electromagnetic radiation pressure, an optical tweezer employs a tightly focused laser beam to trap and position objects of various shapes and sizes. These devices can trap micrometer and nanometer sized objects. An exciting possibility for optical tweezers is their future potential to manipulate and assemble micro and nano sized sensors. A typical optical tweezer makes use of the following components: laser, mirrors, lenses, a high quality microscope, stage, Charge Coupled Device (CCD) camera, TV monitor and Position Sensitive Detectors (PSDs). The laser wavelength employed is typically in the visible or infrared spectrum. The laser beam is directed via mirrors and lenses into the microscope. It is then tightly focused by a high magnification, high numerical aperture microscope objective into the sample slide, which is mounted on a translating stage. The sample slide contains a sealed, small volume of fluid that the objects are suspended in. The most common objects trapped by optical tweezers are dielectric spheres. When trapped, a sphere will literally snap into and center itself in the laser beam. The PSDs are mounted in such a way as to receive the backscatter after the beam has passed through the trap. PSDs used with the Differential Interference Contrast (DIC) technique provide highly precise data. Most optical tweezers employ lasers with power levels ranging from 10 to 100 milliwatts. Typical forces exerted on trapped objects are in the pico-newton range. When PSDs are employed, object movement can be resolved on a nanometer scale in a time range of milliseconds. Such accuracy, however, can only be utilized by calibrating the optical tweezer. Fortunately, an optical tweezer can be modeled accurately as a simple spring, which allows Hooke's Law to be used. My goal this summer at NASA Glenn Research Center is the assembly and
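The spring model mentioned above admits a classic calibration route: by the equipartition theorem, the variance of the trapped bead's position fixes the trap stiffness. A sketch with simulated position data (the stiffness and temperature values are illustrative, not from this project):

```python
import numpy as np

def trap_stiffness(positions_m, temperature_K=295.0):
    """Equipartition calibration of an optical trap: model the trap as
    a Hookean spring (F = -k x); then <x^2> = kB*T/k, so the stiffness
    k follows from the variance of the recorded bead positions."""
    kB = 1.380649e-23                     # Boltzmann constant, J/K
    return kB * temperature_K / np.var(positions_m)

rng = np.random.default_rng(0)
k_true = 1.0e-5                           # N/m, a typical-order trap stiffness
kB, T = 1.380649e-23, 295.0
# Simulated PSD position record of a bead in thermal equilibrium:
x = rng.normal(0.0, np.sqrt(kB * T / k_true), size=200_000)
k_est = trap_stiffness(x, T)
```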
The Landsat Data Continuity Mission Operational Land Imager (OLI) Radiometric Calibration
Markham, Brian L.; Dabney, Philip W.; Murphy-Morris, Jeanine E.; Knight, Edward J.; Kvaran, Geir; Barsi, Julia A.
2010-01-01
The Operational Land Imager (OLI) on the Landsat Data Continuity Mission (LDCM) has a comprehensive radiometric characterization and calibration program beginning with the instrument design and extending through integration and test, on-orbit operations and science data processing. Key instrument design features for radiometric calibration include dual solar diffusers and multi-lamped on-board calibrators. The radiometric calibration transfer procedure from NIST standards has multiple checks on the radiometric scale throughout the process and uses a heliostat as part of the transfer to orbit of the radiometric calibration. On-orbit lunar imaging will be used to track the instrument's stability, and side-slither maneuvers will be used in addition to the solar diffuser to flat-field across the thousands of detectors per band. A Calibration Validation Team is continuously involved in the process from design to operations. This team uses an Image Assessment System (IAS), part of the ground system, to characterize and calibrate the on-orbit data.
Skew redundant MEMS IMU calibration using a Kalman filter
Jafari, M.; Sahebjameyan, M.; Moshiri, B.; Najafabadi, T. A.
2015-10-01
In this paper, a novel calibration procedure for skew redundant inertial measurement units (SRIMUs) based on micro-electro-mechanical systems (MEMS) is proposed. A general model of the SRIMU measurements is derived which contains the effects of bias, scale factor error and misalignments. For more accuracy, the effects of the lever arms of the accelerometers to the center of the table are modeled and compensated in the calibration procedure. Two separate Kalman filters (KFs) are proposed to perform the estimation of error parameters for gyroscopes and accelerometers. The predictive error minimization (PEM) stochastic modeling method is used to simultaneously model the effect of bias instability and random walk noise on the calibration Kalman filters to diminish biased estimations. The proposed procedure is simulated numerically, and the experimental results are as expected. The calibration maneuvers are applied using a two-axis angle turntable in a way that the persistency of excitation (PE) condition for parameter estimation is met. For this purpose, a trapezoidal calibration profile is utilized to excite the different deterministic error parameters of the accelerometers and a pulse profile is used for the gyroscopes. Furthermore, to evaluate the performance of the proposed KF calibration method, a conventional least squares (LS) calibration procedure is derived for the SRIMUs, and the simulation and experimental results compare the functionality of the two proposed methods.
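The recursive prediction-correction idea can be sketched for a single accelerometer axis with a constant scale-factor error and bias; this is a deliberately stripped-down stand-in for the paper's full SRIMU model, with illustrative profile and noise values:

```python
import numpy as np

def calibrate_kf(true_inputs, measurements, r=0.01):
    """Linear Kalman filter estimating a constant scale-factor error s
    and bias b of one accelerometer from a known calibration profile:
    z = (1 + s)*a + b + noise, i.e. z - a = s*a + b."""
    x = np.zeros(2)                       # state: [s, b]
    P = np.eye(2)                         # initial state covariance
    for a, z in zip(true_inputs, measurements):
        H = np.array([a, 1.0])            # measurement row for z - a
        y = z - a - H @ x                 # innovation
        S = H @ P @ H + r                 # innovation variance
        K = P @ H / S                     # Kalman gain
        x = x + K * y                     # correction
        P = P - np.outer(K, H @ P)        # covariance update
    return x

rng = np.random.default_rng(1)
# Excitation at +g, -g and 0 so both s and b are observable (PE condition):
a_profile = np.tile([9.81, -9.81, 0.0], 400)
z = (1 + 0.02) * a_profile + 0.05 + rng.normal(0.0, 0.1, a_profile.size)
s_hat, b_hat = calibrate_kf(a_profile, z, r=0.1**2)
```

Removing the zero and negative-g segments from the profile destroys observability of the bias/scale-factor split, which is why the paper's calibration maneuvers are designed around the persistency-of-excitation condition.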
Douglas, M R; Lukic, S; Reinbacher, R; Douglas, Michael R.; Karp, Robert L.; Lukic, Sergio; Reinbacher, Rene
2006-01-01
We develop numerical methods for approximating Ricci flat metrics on Calabi-Yau hypersurfaces in projective spaces. Our approach is based on finding balanced metrics, and builds on recent theoretical work by Donaldson. We illustrate our methods in detail for a one parameter family of quintics. We also suggest several ways to extend our results.
Internal Water Vapor Photoacoustic Calibration
Pilgrim, Jeffrey S.
2009-01-01
Water vapor absorption is ubiquitous in the infrared wavelength range where photoacoustic trace gas detectors operate. This technique allows for discontinuous wavelength tuning by temperature-jumping a laser diode from one range to another within a time span suitable for photoacoustic calibration. The use of an internal calibration eliminates the need for external calibrated reference gases. Commercial applications include an improvement of photoacoustic spectrometers in all fields of use.
Spontaneous Breakup of Extended Monodisperse Polymer Melts
Rasmussen, Henrik K.; Yu, Kaijia
2011-01-01
We apply continuum-mechanics-based numerical modeling to study the dynamics of extended monodisperse polymer melts during relaxation. The computations are within the ideas of the microstructural "interchain pressure" theory. The computations show a delayed necking resulting in a rupture...
Q-Method Extended Kalman Filter
Zanetti, Renato; Ainscough, Thomas; Christian, John; Spanos, Pol D.
2012-01-01
A new algorithm is proposed that smoothly integrates non-linear estimation of the attitude quaternion using Davenport's q-method and estimation of non-attitude states through an extended Kalman filter. The new method is compared to a similar existing algorithm, showing its similarities and differences. The validity of the proposed approach is confirmed through numerical simulations.
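Davenport's q-method itself is compact enough to sketch: the optimal attitude quaternion maximizing Wahba's gain function is the eigenvector of a 4x4 matrix with the largest eigenvalue (quaternion stored here as [x, y, z, w]; the demo data are illustrative):

```python
import numpy as np

def q_method(body_vecs, ref_vecs, weights=None):
    """Davenport's q-method: build the attitude profile matrix B from
    weighted body/reference vector pairs, assemble the 4x4 Davenport
    matrix K, and return its dominant eigenvector as the optimal
    attitude quaternion [x, y, z, w]."""
    b = np.asarray(body_vecs, float)
    r = np.asarray(ref_vecs, float)
    w = np.ones(len(b)) if weights is None else np.asarray(weights, float)
    B = sum(wi * np.outer(bi, ri) for wi, bi, ri in zip(w, b, r))
    S = B + B.T
    sigma = np.trace(B)
    z = np.array([B[1, 2] - B[2, 1], B[2, 0] - B[0, 2], B[0, 1] - B[1, 0]])
    K = np.zeros((4, 4))
    K[:3, :3] = S - sigma * np.eye(3)
    K[:3, 3] = z
    K[3, :3] = z
    K[3, 3] = sigma
    vals, vecs = np.linalg.eigh(K)
    return vecs[:, np.argmax(vals)]       # eigenvector of largest eigenvalue

# Demo: reference vectors seen after a 90-degree frame rotation about z.
body = [[0.0, -1.0, 0.0], [1.0, 0.0, 0.0]]
refs = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]
q = q_method(body, refs)
```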
Fugal, Mario
2012-10-01
In order to create precision magnets for an experiment at Oak Ridge National Laboratory, a new reverse engineering method has been proposed that uses the magnetic scalar potential to solve for the currents necessary to produce the desired field. To make the magnet it is proposed to use a copper-coated G10 form, upon which a drill, mounted on a robotic arm, will carve wires. The accuracy required in the manufacturing of the wires exceeds nominal robot capabilities. However, owing to their rigidity as well as their precision servo motors and harmonic gear drives, there are robots capable of meeting this requirement with proper calibration. Improving the accuracy of an RX130 robot to within 35 microns (the accuracy necessary for the wires) is the goal of this project. Using feedback from a displacement sensor or camera, together with inverse kinematics, it is possible to achieve this accuracy.
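The sensor-feedback correction loop can be sketched with a toy "robot" that has a repeatable pose error; the error model, loop gain, and the 35-micron tolerance expressed in millimetres are illustrative assumptions:

```python
def correct_with_feedback(target_mm, robot, gain=0.8, tol_mm=0.035, max_iter=50):
    """Sketch of the feedback idea: command the robot, measure the
    actual position with an external displacement sensor, and iterate
    on the command until the residual error is below the 35-micron
    (0.035 mm) goal.  'robot' stands in for the real arm plus sensor."""
    cmd = target_mm
    for _ in range(max_iter):
        measured = robot(cmd)
        err = target_mm - measured
        if abs(err) < tol_mm:
            return cmd, measured
        cmd += gain * err                  # move the command toward the target
    return cmd, robot(cmd)

# Toy robot with a repeatable 0.3 mm kinematic error at this pose:
robot = lambda cmd: 0.999 * cmd + 0.3
cmd, measured = correct_with_feedback(100.0, robot)
```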
Radiological Calibration and Standards Facility
Federal Laboratory Consortium — PNNL maintains a state-of-the-art Radiological Calibration and Standards Laboratory on the Hanford Site at Richland, Washington. Laboratory staff provide expertise...
Field calibration of cup anemometers
Kristensen, L.; Jensen, G.; Hansen, A.
2001-01-01
An outdoor calibration facility for cup anemometers, where the signals from 10 anemometers, of which at least one is a reference, can be recorded simultaneously, has been established. The results are discussed with special emphasis on the statistical significance of the calibration expressions. It is concluded that the method has the advantage that many anemometers can be calibrated accurately with a minimum of work and cost. The obvious disadvantage is that the calibration of a set of anemometers may take more than one month in order to have wind speeds covering a sufficiently large magnitude range.
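A cup anemometer calibration expression is conventionally the linear relation U = A + B f between reference wind speed and rotation frequency; a least-squares fit with coefficient standard errors (synthetic data, illustrative coefficients) shows how the statistical significance of such an expression can be assessed:

```python
import numpy as np

def fit_calibration(freq_hz, ref_speed_ms):
    """Ordinary least-squares fit of U = A + B*f, returning the
    coefficients and their standard errors so the significance of the
    calibration expression can be judged."""
    f = np.asarray(freq_hz, float)
    U = np.asarray(ref_speed_ms, float)
    X = np.column_stack([np.ones_like(f), f])      # design matrix [1, f]
    coef, *_ = np.linalg.lstsq(X, U, rcond=None)
    resid = U - X @ coef
    s2 = resid @ resid / (len(f) - 2)              # residual variance
    cov = s2 * np.linalg.inv(X.T @ X)              # coefficient covariance
    return coef, np.sqrt(np.diag(cov))

f = np.array([5.0, 10.0, 15.0, 20.0, 25.0, 30.0])
U = 0.2 + 0.62 * f + np.array([0.01, -0.02, 0.0, 0.02, -0.01, 0.0])
(A, B), (se_A, se_B) = fit_calibration(f, U)
```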
Uncertainty modelling and code calibration for composite materials
Toft, Henrik Stensgaard; Branner, Kim; Mishnaevsky, Leon, Jr
2013-01-01
between risk of failure and cost of the structure. Consideration related to calibration of partial safety factors for composite material is described, including the probability of failure, format for the partial safety factor method and weight factors for different load cases. In a numerical example...
Visual Aids Improve Diagnostic Inferences and Metacognitive Judgment Calibration
Rocio eGarcia-Retamero
2015-07-01
Full Text Available Visual aids can improve comprehension of risks associated with medical treatments, screenings, and lifestyles. Do visual aids also help decision makers accurately assess their risk comprehension? That is, do visual aids help them become well calibrated? To address these questions, we investigated the benefits of visual aids displaying numerical information and measured the accuracy of self-assessment of diagnostic inferences (i.e., metacognitive judgment calibration), controlling for individual differences in numeracy. Participants included 108 patients who made diagnostic inferences about three medical tests on the basis of information about the sensitivity and false-positive rate of the tests and disease prevalence. Half of the patients received the information in numbers without a visual aid, while the other half received numbers along with a grid representing the numerical information. In the numerical condition, many patients, especially those with low numeracy, misinterpreted the predictive value of the tests and profoundly overestimated the accuracy of their inferences. Metacognitive judgment calibration mediated the relationship between numeracy and accuracy of diagnostic inferences. In contrast, in the visual aid condition, patients at all levels of numeracy showed high levels of inferential accuracy and metacognitive judgment calibration. Results indicate that accurate metacognitive assessment may explain the beneficial effects of visual aids and numeracy, a result that accords with theory suggesting that metacognition is an essential part of risk literacy. We conclude that well-designed risk communications can inform patients about health-relevant numerical information while helping them assess the quality of their own risk comprehension.
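The diagnostic inference the patients faced is Bayes' rule for the positive predictive value, which a grid visual aid conveys with natural frequencies. A sketch with illustrative test characteristics (not the paper's actual stimuli):

```python
def positive_predictive_value(sensitivity, false_positive_rate, prevalence):
    """Bayes' rule for the diagnostic inference: probability of disease
    given a positive test, from the test's sensitivity, its
    false-positive rate, and the disease prevalence."""
    true_pos = sensitivity * prevalence
    false_pos = false_positive_rate * (1.0 - prevalence)
    return true_pos / (true_pos + false_pos)

# The same numbers as a 1000-person grid (the visual-aid idea):
# prevalence 1%, sensitivity 90%, false-positive rate 9% gives
# 9 true positives against roughly 89 false positives, so the
# predictive value of a positive test is only about 9%.
ppv = positive_predictive_value(0.90, 0.09, 0.01)
```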
Calibration and validation of full-field techniques
Thalmann R.
2010-06-01
Full Text Available We review basic metrological terms related to the use of measurement equipment for verification of numerical model calculations. We address three challenges that are faced when performing measurements in experimental mechanics with optical techniques: the calibration of a measuring instrument that (i) measures strain values, (ii) provides full-field data, and (iii) is dynamic.
Semi-Empirical Calibration of the Integral Equation Model for Co-Polarized L-Band Backscattering
Nicolas Baghdadi
2015-10-01
Full Text Available The objective of this paper is to extend the semi-empirical calibration of the backscattering Integral Equation Model (IEM), initially proposed for Synthetic Aperture Radar (SAR) data at C- and X-bands, to SAR data at L-band. A large dataset of radar signal and in situ measurements (soil moisture and surface roughness) over bare soil surfaces was used. This dataset was collected over numerous agricultural study sites in France, Luxembourg, Belgium, Germany and Italy using various SAR sensors (AIRSAR, SIR-C, JERS-1, PALSAR-1, ESAR). Results showed slightly better simulations with the exponential autocorrelation function than with the Gaussian function, and with HH than with VV. Using the exponential autocorrelation function, the mean difference between experimental data and IEM simulations is +0.4 dB in HH and −1.2 dB in VV, with a Root Mean Square Error (RMSE) of about 3.5 dB. In order to improve the modeling results of the IEM for better use in the inversion of SAR data, a semi-empirical calibration of the IEM was performed at L-band, replacing the correlation length derived from field experiments by a fitting parameter. Better agreement was observed between the backscattering coefficient provided by the SAR and that simulated by the calibrated version of the IEM (RMSE of about 2.2 dB).
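The semi-empirical calibration step, replacing a hard-to-measure model input by a best-fit value minimizing the RMSE against observations, can be sketched generically; the forward model and data below are toys, not the IEM:

```python
import numpy as np

def calibrate_parameter(model, param_grid, x_obs, y_obs):
    """Grid-search version of the calibration step: pick the value of a
    model input (here standing in for the correlation length) that
    minimizes the RMSE between simulated and observed backscatter."""
    best, best_rmse = None, np.inf
    for p in param_grid:
        rmse = np.sqrt(np.mean((model(x_obs, p) - y_obs) ** 2))
        if rmse < best_rmse:
            best, best_rmse = p, rmse
    return best, best_rmse

# Toy forward model: backscatter (dB) vs. soil moisture, with parameter L.
model = lambda mv, L: -20.0 + 0.5 * mv + 2.0 * np.log10(L)
mv = np.linspace(5.0, 35.0, 13)
y = model(mv, 4.0) + 0.1 * np.sin(mv)        # synthetic "SAR" data, L_true = 4
L_fit, rmse = calibrate_parameter(model, np.linspace(1.0, 8.0, 141), mv, y)
```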
New Attitude Sensor Alignment Calibration Algorithms
Hashmall, Joseph A.; Sedlak, Joseph E.; Harman, Richard (Technical Monitor)
2002-01-01
Accurate spacecraft attitudes may only be obtained if the primary attitude sensors are well calibrated. Launch shock, relaxation of gravitational stresses and similar effects often produce large enough alignment shifts so that on-orbit alignment calibration is necessary if attitude accuracy requirements are to be met. A variety of attitude sensor alignment algorithms have been developed to meet the need for on-orbit calibration. Two new algorithms are presented here: ALICAL and ALIQUEST. Each of these has advantages in particular circumstances. ALICAL is an attitude independent algorithm that uses near simultaneous measurements from two or more sensors to produce accurate sensor alignments. For each set of simultaneous observations the attitude is overdetermined. The information content of the extra degrees of freedom can be combined over numerous sets to provide the sensor alignments. ALIQUEST is an attitude dependent algorithm that combines sensor and attitude data into a loss function that has the same mathematical form as the Wahba problem. Alignments can then be determined using any of the algorithms (such as the QUEST quaternion estimator) that have been developed to solve the Wahba problem for attitude. Results from the use of these methods on active missions are presented.
朱金台; 董晓龙; 林文明; 朱迪
2013-01-01
Rotating Fan-beam SCATterometer (RFSCAT) is a new radar scatterometer system for ocean surface vector wind measurement. Compared with other available scatterometers, RFSCAT can provide more combinations of azimuth and incidence angles for a single surface resolution cell. To achieve the required wind vector accuracy, the scatterometer's measurement of the backscattering coefficient (σ0) must be calibrated to within a few tenths of a decibel. In this paper, a method for external calibration of RFSCAT is proposed, based on the system parameters of the scatterometer onboard the Chinese French Oceanography SATellite (CFOSAT), and is verified by simulations. QuikSCAT L2A data and SIR (image reconstruction) data over several large homogeneous areas are then analyzed to check the stability and azimuthal dependence of σ0 over these areas. A new calibration mask is generated and will be used as a reference for the calibration of RFSCAT.
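The extended-target calibration idea reduces to simple arithmetic: over a radiometrically stable area, the mean difference between the measured and reference σ0 (in dB) is the calibration offset. A hypothetical sketch (not the CFOSAT processing code; the numbers are illustrative only):

```python
import numpy as np

def calibration_offset_db(sigma0_meas_db, sigma0_ref_db):
    """Mean difference (dB) between measured sigma0 and a reference mask."""
    return float(np.mean(np.asarray(sigma0_meas_db) - np.asarray(sigma0_ref_db)))

# simulate 500 looks over a stable target whose reference value is -6.5 dB,
# with an injected 0.3 dB instrument bias and 0.1 dB measurement noise
rng = np.random.default_rng(3)
ref = np.full(500, -6.5)
meas = ref + 0.3 + 0.1 * rng.standard_normal(ref.size)
offset = calibration_offset_db(meas, ref)
```

Binning the same differences by antenna azimuth is one way to check the azimuthal dependence mentioned above.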
Numerical Propulsion System Simulation
Naiman, Cynthia
2006-01-01
The NASA Glenn Research Center, in partnership with the aerospace industry, other government agencies, and academia, is leading the effort to develop an advanced multidisciplinary analysis environment for aerospace propulsion systems called the Numerical Propulsion System Simulation (NPSS). NPSS is a framework for performing analysis of complex systems. The initial development of NPSS focused on the analysis and design of airbreathing aircraft engines, but the resulting NPSS framework may be applied to any system, for example: aerospace, rockets, hypersonics, power and propulsion, fuel cells, ground based power, and even human system modeling. NPSS provides increased flexibility for the user, which reduces the total development time and cost. It is currently being extended to support the NASA Aeronautics Research Mission Directorate Fundamental Aeronautics Program and the Advanced Virtual Engine Test Cell (AVETeC). NPSS focuses on the integration of multiple disciplines such as aerodynamics, structure, and heat transfer with numerical zooming on component codes. Zooming is the coupling of analyses at various levels of detail. NPSS development includes capabilities to facilitate collaborative engineering. The NPSS will provide improved tools to develop custom components and to use capability for zooming to higher fidelity codes, coupling to multidiscipline codes, transmitting secure data, and distributing simulations across different platforms. These powerful capabilities extend NPSS from a zero-dimensional simulation tool to a multi-fidelity, multidiscipline system-level simulation tool for the full development life cycle.
The dialogically extended mind
Fusaroli, Riccardo; Gangopadhyay, Nivedita; Tylén, Kristian
2014-01-01
A growing conceptual and empirical literature is advancing the idea that language extends our cognitive skills. One of the most influential positions holds that language, qua material symbols, facilitates individual thought processes by virtue of its material properties. Extending upon this model, we argue that language enhances our cognitive capabilities in a much more radical way: the skilful engagement of public material symbols facilitates evolutionarily unprecedented modes of collective perception, action and reasoning (interpersonal synergies), creating dialogically extended minds. We relate our approach to other ideas about collective minds and review a number of empirical studies to identify the mechanisms enabling the constitution of interpersonal cognitive systems.
The Extended Enterprise concept
Larsen, Lars Bjørn; Vesterager, Johan; Gobbi, Chiara
1999-01-01
This paper provides an overview of the work that has been done regarding the Extended Enterprise concept in the Common Concept team of Globeman 21, including references to results and deliverables concerning the development of the Extended Enterprise concept. The first section presents the basic concept picture from Globeman21, which illustrates the Globeman21 way of realising the Extended Enterprise concept. The second section presents the Globeman21 EE concept in a life cycle perspective, which to a large extent is based on the thoughts and ideas behind GERAM (ISO/DIS 15704).
Sin, Gürkan; Van Hulle, Stijn W H; De Pauw, Dirk J W; van Griensven, Ann; Vanrolleghem, Peter A
2005-07-01
Modelling activated sludge systems has gained an increasing momentum after the introduction of activated sludge models (ASMs) in 1987. Application of dynamic models for full-scale systems requires essentially a calibration of the chosen ASM to the case under study. Numerous full-scale model applications have been performed so far which were mostly based on ad hoc approaches and expert knowledge. Further, each modelling study has followed a different calibration approach: e.g. different influent wastewater characterization methods, different kinetic parameter estimation methods, different selection of parameters to be calibrated, different priorities within the calibration steps, etc. In short, there was no standard approach in performing the calibration study, which makes it difficult, if not impossible, to (1) compare different calibrations of ASMs with each other and (2) perform internal quality checks for each calibration study. To address these concerns, systematic calibration protocols have recently been proposed to bring guidance to the modeling of activated sludge systems and in particular to the calibration of full-scale models. In this contribution four existing calibration approaches (BIOMATH, HSG, STOWA and WERF) will be critically discussed using a SWOT (Strengths, Weaknesses, Opportunities, Threats) analysis. It will also be assessed in what way these approaches can be further developed in view of further improving the quality of ASM calibration. In this respect, the potential of automating some steps of the calibration procedure by use of mathematical algorithms is highlighted.
Jacques, Ian
1987-01-01
This book is primarily intended for undergraduates in mathematics, the physical sciences and engineering. It introduces students to most of the techniques forming the core component of courses in numerical analysis. The text is divided into eight chapters which are largely self-contained. However, with a subject as intricately woven as mathematics, there is inevitably some interdependence between them. The level of difficulty varies and, although emphasis is firmly placed on the methods themselves rather than their analysis, we have not hesitated to include theoretical material when we consider it to be sufficiently interesting. However, it should be possible to omit those parts that do seem daunting while still being able to follow the worked examples and to tackle the exercises accompanying each section. Familiarity with the basic results of analysis and linear algebra is assumed since these are normally taught in first courses on mathematical methods. For reference purposes a list of theorems used in the t...
AN ALTERNATIVE CALIBRATION OF CR-39 DETECTORS FOR RADON DETECTION BEYOND THE SATURATION LIMIT.
Franci, Daniele; Aureli, Tommaso; Cardellini, Francesco
2016-12-01
Time-integrated measurements of indoor radon levels are commonly carried out using solid-state nuclear track detectors (SSNTDs), due to the numerous advantages offered by this radiation detection technique. However, the use of SSNTDs also presents some problems that may affect the accuracy of the results. The effect of overlapping tracks often results in underestimation of the detected track density, which reduces the counting efficiency as radon exposure increases. This article addresses the effect of overlapping tracks by proposing an alternative calibration technique based on measurement of the fraction of the detector surface covered by alpha tracks. The method has been tested against a set of Monte Carlo data and then applied to a set of experimental data collected at the radon chamber of the Istituto Nazionale di Metrologia delle Radiazioni Ionizzanti, at the ENEA centre in Casaccia, using CR-39 detectors. It has been proved that the method extends the detectable range of radon exposure far beyond the intrinsic limit imposed by the standard calibration based on track density.
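The abstract does not give the calibration formula, but a standard Boolean (Poisson) overlap model is one way to relate the covered-area fraction F to the true track density ρ: F = 1 − exp(−ρa) for mean track area a, and this relation remains invertible where individual-track counting saturates. A sketch under that assumed model:

```python
import numpy as np

def density_from_coverage(covered_fraction, mean_track_area):
    """Invert F = 1 - exp(-rho * a) for the true track density rho."""
    return -np.log(1.0 - covered_fraction) / mean_track_area

a = 5e-5                                   # cm^2, assumed mean track area
rho_true = np.array([1e3, 1e4, 5e4])       # tracks per cm^2
F = 1.0 - np.exp(-rho_true * a)            # coverage predicted by the model
rho_est = density_from_coverage(F, a)
```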
Radio interferometric gain calibration as a complex optimization problem
Smirnov, O. M.; Tasse, C.
2015-05-01
Recent developments in optimization theory have extended some traditional algorithms for least-squares optimization of real-valued functions (Gauss-Newton, Levenberg-Marquardt, etc.) into the domain of complex functions of a complex variable. This employs a formalism called the Wirtinger derivative, and derives a full-complex Jacobian counterpart to the conventional real Jacobian. We apply these developments to the problem of radio interferometric gain calibration, and show how the general complex Jacobian formalism, when combined with conventional optimization approaches, yields a whole new family of calibration algorithms, including those for the polarized and direction-dependent gain regime. We further extend the Wirtinger calculus to an operator-based matrix calculus for describing the polarized calibration regime. Using approximate matrix inversion results in computationally efficient implementations; we show that some recently proposed calibration algorithms such as STEFCAL and peeling can be understood as special cases of this, and place them in the context of the general formalism. Finally, we present an implementation and some applied results of COHJONES, another specialized direction-dependent calibration algorithm derived from the formalism.
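As an illustration of the gain calibration problem this formalism addresses, a StEFCal-style alternating solve for the unpolarized, direction-independent case can be sketched in a few lines (this follows the published StEFCal update rule, not the COHJONES implementation):

```python
import numpy as np

def stefcal(V, M, n_iter=200, tol=1e-12):
    """StEFCal-style alternating gain solve for V ~ diag(g) M diag(g)^H."""
    n = V.shape[0]
    g = np.ones(n, dtype=complex)
    for it in range(n_iter):
        g_old = g.copy()
        for p in range(n):
            z = g_old * M[:, p]              # model column scaled by current gains
            g[p] = np.vdot(V[:, p], z) / np.vdot(z, z)
        if it % 2 == 1:                      # even/odd averaging stabilizes the iteration
            g = 0.5 * (g + g_old)
            if np.linalg.norm(g - g_old) < tol * np.linalg.norm(g):
                break
    return g

# synthetic check: recover known gains (up to the usual global phase ambiguity)
rng = np.random.default_rng(1)
n = 8
g_true = (1 + 0.1 * rng.standard_normal(n)) * np.exp(1j * rng.uniform(-1, 1, n))
M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
M = M + M.conj().T                           # Hermitian model visibilities
V = np.diag(g_true) @ M @ np.diag(g_true.conj())
g_est = stefcal(V, M)
phase = np.exp(1j * np.angle(g_est[0] * np.conj(g_true[0])))
```

Each per-antenna update is a one-dimensional complex least-squares solve, which is why the abstract can describe StEFCal as a special case of the general complex-Jacobian machinery.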
Rational extended thermodynamics
Müller, Ingo
1998-01-01
Ordinary thermodynamics provides reliable results when the thermodynamic fields are smooth, in the sense that there are no steep gradients and no rapid changes. In fluids and gases this is the domain of the equations of Navier-Stokes and Fourier. Extended thermodynamics becomes relevant for rapidly varying and strongly inhomogeneous processes. Thus the propagation of high-frequency waves, the shape of shock waves, and the regression of small-scale fluctuations are governed by extended thermodynamics. The field equations of ordinary thermodynamics are parabolic, while extended thermodynamics is governed by hyperbolic systems. The main ingredients of extended thermodynamics are • field equations of balance type, • constitutive quantities depending on the present local state and • entropy as a concave function of the state variables. This set of assumptions leads to first order quasi-linear symmetric hyperbolic systems of field equations; it guarantees the well-posedness of initial value problems and f...
Model calibration and validation of an impact test simulation
Hemez, F. M. (François M.); Wilson, A. C. (Amanda C.); Havrilla, G. N. (George N.)
2001-01-01
This paper illustrates the methodology being developed at Los Alamos National Laboratory for the validation of numerical simulations for engineering structural dynamics. The application involves the transmission of a shock wave through an assembly that consists of a steel cylinder and a layer of elastomeric (hyper-foam) material. The assembly is mounted on an impact table to generate the shock wave. The input acceleration and three output accelerations are measured. The main objective of the experiment is to develop a finite element representation of the system capable of reproducing the test data with acceptable accuracy. Foam layers of various thicknesses and several drop heights are considered during impact testing. Each experiment is replicated several times to estimate the experimental variability. Instead of focusing on the calibration of input parameters for a single configuration, the numerical model is validated for its ability to predict the response of three different configurations (various combinations of foam thickness and drop height). Design of Experiments is implemented to perform parametric and statistical variance studies. Surrogate models are developed to replace the computationally expensive numerical simulation. Variables of the finite element model are separated into calibration variables and control variables. The models are calibrated to provide numerical simulations that correctly reproduce the statistical variation of the test configurations. The calibration step also provides inference for the parameters of a high strain-rate dependent material model of the hyper-foam. After calibration, the validity of the numerical simulation is assessed through its ability to predict the response of a fourth test setup.
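The surrogate-model idea can be illustrated with a toy one-parameter example: fit a cheap polynomial to a handful of design points of an "expensive" simulation, then calibrate the parameter against a measured response. All functions and values here are hypothetical stand-ins, not the Los Alamos models:

```python
import numpy as np

def sim(theta):
    """Stand-in for the computationally expensive finite element simulation."""
    return np.sin(theta) + 0.5 * theta

design = np.linspace(0.0, 2.0, 9)                    # design-of-experiments points
surrogate = np.poly1d(np.polyfit(design, sim(design), 5))  # cheap polynomial surrogate

y_meas = sim(1.3)                                    # pretend this came from a test
grid = np.linspace(0.0, 2.0, 2001)
theta_cal = grid[np.argmin((surrogate(grid) - y_meas) ** 2)]  # calibrated parameter
```

Once the surrogate is fitted, parametric and variance studies can sweep `theta` thousands of times at negligible cost, which is the point of replacing the simulation.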
Symmetric Extended Ockham Algebras
T.S. Blyth; Jie Fang
2003-01-01
The variety eO of extended Ockham algebras consists of those Ockham algebras equipped with an additional endomorphism k such that the unary operations f and k commute. Here, we consider the eO-algebras which have a property of symmetry. We show that there are thirty-two non-isomorphic subdirectly irreducible symmetric extended MS-algebras and give a complete description of them. 2000 Mathematics Subject Classification: 06D15, 06D30.
Tectonic calibrations in molecular dating
Ullasa KODANDARAMAIAH
2011-01-01
Molecular dating techniques require the use of calibrations, which are usually fossil-based or geological vicariance-based. Fossil calibrations have been criticised because they result only in minimum age estimates. Based on a historical biogeographic perspective, I suggest that vicariance-based calibrations are more dangerous. Almost all analytical methods in historical biogeography are strongly biased towards inferring vicariance, hence vicariance identified through such methods is unreliable. Other studies, especially of groups found on Gondwanan fragments, have simply assumed vicariance. Although it was previously believed that vicariance was the predominant mode of speciation, mounting evidence now indicates that speciation by dispersal is common, dominating vicariance in several groups. Moreover, the possibility of speciation having occurred before the said geological event cannot be precluded. Thus, geological calibrations can under- or overestimate times, whereas fossil calibrations always result in minimum estimates. Another major drawback of vicariant calibrations is the problem of circular reasoning when the resulting estimates are used to infer ages of biogeographic events. I argue that fossil-based dating is a superior alternative to vicariance, primarily because the strongest assumption in the latter, that speciation was caused by the said geological process, is more often than not the most tenuous. When authors prefer to use a combination of fossil and vicariant calibrations, one suggestion is to report results both with and without inclusion of the geological constraints. Relying solely on vicariant calibrations should be strictly avoided.
UVIS G280 Wavelength Calibration
Bushouse, Howard
2009-07-01
Wavelength calibration of the UVIS G280 grism will be established using observations of the Wolf Rayet star WR14. Accompanying direct exposures will provide wavelength zeropoints for dispersed exposures. The calibrations will be obtained at the central position of each CCD chip and at the center of the UVIS field. No additional field-dependent variations will be obtained.
Rizvi, H.M.
1999-12-03
The data obtained from these tests determine the dose rate of the two cobalt sources in SRTC. Building 774-A houses one of these sources while the other resides in room C-067 of Building 773-A. The data from this experiment show the following: (1) The dose rate of the No. 2 cobalt source in Building 774-A measured 1.073 x 10{sup 5} rad/h (June 17, 1999). The dose rate of the Shepherd Model 109 Gamma cobalt source in Building 773-A measured 9.27 x 10{sup 5} rad/h (June 25, 1999). These rates come from placing the graduated cylinder containing the dosimeter solution in the center of the irradiation chamber. (2) Two calibration tests in the 774-A source placed the graduated cylinder with the dosimeter solution approximately 1.5 inches off center in the axial direction. This movement of the sample reduced the measured dose rate by 0.92%, from 1.083 x 10{sup 5} rad/h to 1.073 x 10{sup 5} rad/h. (3) A similar test in the cobalt source in 773-A placed the graduated cylinder approximately 2.0 inches off center in the axial direction. This change in position reduced the measured dose rate by 10.34%, from 1.036 x 10{sup 6} rad/h to 9.27 x 10{sup 5} rad/h. This testing used chemical dosimetry to measure the dose rate of a radioactive source. In this method, one determines the dose by the chemical change that takes place in the dosimeter. For this calibration experiment, the author used a Fricke (ferrous ammonium sulfate) dosimeter. This solution works well for dose rates up to 10{sup 7} rad/h. During irradiation of the Fricke dosimeter solution, the Fe{sup 2+} ions are oxidized to Fe{sup 3+}. When this occurs, the solution acquires a slightly darker tint (not visible to the human eye). To determine the magnitude of the change in Fe ions, one places the solution in a UV-VIS spectrophotometer, which measures the absorbance of the solution. Dividing the absorbance by the total time (in minutes) of exposure yields the dose rate.
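The arithmetic behind Fricke dosimetry can be sketched as follows; the molar absorptivity, radiation chemical yield (G-value) and solution density used here are standard literature values for the Fricke system, not figures taken from this report:

```python
# Hedged sketch of Fricke (ferrous sulfate) dosimetry arithmetic.
# Assumed literature constants:
#   epsilon : molar absorptivity of Fe(3+) at 304 nm, ~2187 L/(mol*cm) at 25 C
#   G       : radiation chemical yield G(Fe3+), ~1.61e-6 mol/J
#   rho     : density of the dosimeter solution, ~1.024 kg/L

def fricke_dose_rate_rad_per_h(delta_absorbance, exposure_min,
                               path_cm=1.0, epsilon=2187.0,
                               G=1.61e-6, rho=1.024):
    delta_c = delta_absorbance / (epsilon * path_cm)  # mol/L of Fe(3+) formed
    dose_gray = delta_c / (rho * G)                   # absorbed dose, J/kg
    dose_rad = dose_gray * 100.0                      # 1 Gy = 100 rad
    return dose_rad * 60.0 / exposure_min             # rad/h

# e.g. an absorbance change of 0.5 over a one-minute exposure
rate = fricke_dose_rate_rad_per_h(0.5, 1.0)
```

With these assumed constants, the example absorbance change corresponds to a rate of order 10{sup 5}-10{sup 6} rad/h, the same range as the source measurements reported above.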
Calibration of Gurson-type models for porous sheet metals with anisotropic non-quadratic plasticity
Gologanu, M.; Kami, A.; Comsa, D. S.; Banabic, D.
2016-08-01
The growth and coalescence of voids in sheet metals are not only the main active mechanisms in the final stages of fracture in a necking band, but they also contribute to the forming limits via changes in the normal directions to the yield surface. A widely accepted method to include void effects is the development of a Gurson-type model for the appropriate yield criterion, based on an approximate limit analysis of a unit cell containing a single spherical, spheroidal or ellipsoidal void. We have recently [2] obtained dissipation functions and Gurson-type models for porous sheet metals with ellipsoidal voids and anisotropic non-quadratic plasticity, including yield criteria based on linear transformations (Yld91 and Yld2004-18p) and a pure plane stress yield criterion (BBC2005). These Gurson-type models contain several parameters that depend on the void and cell geometries and on the selected yield criterion. Best results are obtained when these key parameters are calibrated via numerical simulations using the same unit cell and a few representative loading conditions. The single most important such loading condition corresponds to a pure hydrostatic macroscopic stress (pure pressure), and the corresponding velocity field found during the solution of the limit analysis problem describes the expansion of the cavity. However, for the case of sheet metals, the condition of plane stress precludes macroscopic stresses with large triaxiality (the ratio of mean stress to equivalent stress), including the pure hydrostatic case. Also, pure plane stress yield criteria like BBC2005 must first be extended to 3D stresses before attempting to develop a Gurson-type model, and such extensions are purely phenomenological with no due account for the out-of-plane anisotropic properties of the sheet. Therefore, we propose a new calibration method for Gurson-type models that uses only boundary conditions compatible with the plane stress requirement. For each such boundary condition we use
Automated calibration of multistatic arrays
Henderer, Bruce
2017-03-14
A method is disclosed for calibrating a multistatic array having a plurality of transmitter and receiver pairs spaced from one another along a predetermined path and relative to a plurality of bin locations, and further being spaced at a fixed distance from a stationary calibration implement. A clock reference pulse may be generated, and each of the transmitters and receivers of each transmitter/receiver pair turned on at a monotonically increasing time delay interval relative to the clock reference pulse. The transmitters and receivers may be used such that a previously calibrated transmitter or receiver of a given transmitter/receiver pair is paired with a subsequently un-calibrated transmitter or receiver of the immediately following transmitter/receiver pair, thereby calibrating the transmitter or receiver of that following pair.
Liquid Krypton Calorimeter Calibration Software
Hughes, Christina Lindsay
2013-01-01
Calibration of the liquid krypton calorimeter (LKr) of the NA62 experiment is managed by a set of standalone programs, or an online calibration driver. These programs are similar to those used by NA48, but have been updated to utilize classes and translated to C++ while maintaining a common functionality. A set of classes developed to handle communication with hardware was used to develop the three standalone programs as well as the main driver program for online calibration between bursts. The main calibration driver has been designed to respond to run control commands and receive burst data, both transmitted via DIM. In order to facilitate the process of reading in calibration parameters, a serializable class has been introduced, allowing the replacement of standard text files with XML configuration files.
The Advanced LIGO Photon Calibrators
Karki, S; Kandhasamy, S; Abbott, B P; Abbott, T D; Anders, E H; Berliner, J; Betzwieser, J; Daveloza, H P; Cahillane, C; Canete, L; Conley, C; Gleason, J R; Goetz, E; Kissel, J S; Izumi, K; Mendell, G; Quetschke, V; Rodruck, M; Sachdev, S; Sadecki, T; Schwinberg, P B; Sottile, A; Wade, M; Weinstein, A J; West, M; Savage, R L
2016-01-01
The two interferometers of the Laser Interferometer Gravitational-wave Observatory (LIGO) recently detected gravitational waves from the mergers of binary black hole systems. Accurate calibration of the output of these detectors was crucial for the observation of these events, and the extraction of parameters of the sources. The principal tools used to calibrate the responses of the second-generation (Advanced) LIGO detectors to gravitational waves are systems based on radiation pressure and referred to as Photon Calibrators. These systems, which were completely redesigned for Advanced LIGO, include several significant upgrades that enable them to meet the calibration requirements of second-generation gravitational wave detectors in the new era of gravitational-wave astronomy. We report on the design, implementation, and operation of these Advanced LIGO Photon Calibrators that are currently providing fiducial displacements on the order of $10^{-18}$ m/$\sqrt{\textrm{Hz}}$ with accuracy and precision of better ...
The Advanced LIGO photon calibrators
Karki, S.; Tuyenbayev, D.; Kandhasamy, S.; Abbott, B. P.; Abbott, T. D.; Anders, E. H.; Berliner, J.; Betzwieser, J.; Cahillane, C.; Canete, L.; Conley, C.; Daveloza, H. P.; De Lillo, N.; Gleason, J. R.; Goetz, E.; Izumi, K.; Kissel, J. S.; Mendell, G.; Quetschke, V.; Rodruck, M.; Sachdev, S.; Sadecki, T.; Schwinberg, P. B.; Sottile, A.; Wade, M.; Weinstein, A. J.; West, M.; Savage, R. L.
2016-11-01
The two interferometers of the Laser Interferometer Gravitational-wave Observatory (LIGO) recently detected gravitational waves from the mergers of binary black hole systems. Accurate calibration of the output of these detectors was crucial for the observation of these events and the extraction of parameters of the sources. The principal tools used to calibrate the responses of the second-generation (Advanced) LIGO detectors to gravitational waves are systems based on radiation pressure and referred to as photon calibrators. These systems, which were completely redesigned for Advanced LIGO, include several significant upgrades that enable them to meet the calibration requirements of second-generation gravitational wave detectors in the new era of gravitational-wave astronomy. We report on the design, implementation, and operation of these Advanced LIGO photon calibrators that are currently providing fiducial displacements on the order of 10^{-18} m/√Hz with accuracy and precision of better than 1%.
Antenna Calibration and Measurement Equipment
Rochblatt, David J.; Cortes, Manuel Vazquez
2012-01-01
A document describes the Antenna Calibration & Measurement Equipment (ACME) system that will provide the Deep Space Network (DSN) with instrumentation enabling a trained RF engineer at each complex to perform antenna calibration measurements and to generate antenna calibration data. This data includes continuous-scan auto-bore-based data acquisition with all-sky data gathering in support of 4th order pointing model generation requirements. Other data includes antenna subreflector focus, system noise temperature and tipping curves, antenna efficiency, system linearity reports, and instrument calibration. The ACME system design is based on the on-the-fly (OTF) mapping technique and architecture. ACME has contributed to the improved RF performance of the DSN by approximately a factor of two. It improved the pointing performance of the DSN antennas and the productivity of its personnel and calibration engineers.
Mercury Continuous Emmission Monitor Calibration
John Schabron; Eric Kalberer; Ryan Boysen; William Schuster; Joseph Rovani
2009-03-12
Mercury continuous emissions monitoring systems (CEMs) are being implemented in over 800 coal-fired power plant stacks throughout the U.S. Western Research Institute (WRI) is working closely with the Electric Power Research Institute (EPRI), the National Institute of Standards and Technology (NIST), and the Environmental Protection Agency (EPA) to facilitate the development of the experimental criteria for a NIST traceability protocol for dynamic elemental mercury vapor calibrators/generators. These devices are used to calibrate mercury CEMs at power plant sites. The Clean Air Mercury Rule (CAMR), which was published in the Federal Register on May 18, 2005 and vacated by a Federal appeals court in early 2008, required that calibration be performed with NIST-traceable standards. Despite the vacatur, mercury emissions regulations in the future will require NIST-traceable calibration standards, and EPA does not want to interrupt the effort towards developing NIST traceability protocols. The traceability procedures will be defined by EPA. An initial draft traceability protocol was issued by EPA in May 2007 for comment. In August 2007, EPA issued a conceptual interim traceability protocol for elemental mercury calibrators. The protocol is based on the actual analysis of the output of each calibration unit at several concentration levels, ranging initially from about 2-40 {micro}g/m{sup 3} elemental mercury, and in the future down to 0.2 {micro}g/m{sup 3}, and this analysis will be directly traceable to analyses by NIST. The EPA traceability protocol document is divided into two separate sections. The first deals with the qualification of calibrator models by the vendors for use in mercury CEM calibration. The second describes the procedure that the vendors must use to certify the calibrators that meet the qualification specifications. The NIST traceable certification is performance based, traceable to analysis using isotope dilution inductively coupled plasma
Japyassú, Hilton F; Laland, Kevin N
2017-05-01
There is a tension between the conception of cognition as a central nervous system (CNS) process and a view of cognition as extending towards the body or the contiguous environment. The centralised conception requires large or complex nervous systems to cope with complex environments. Conversely, the extended conception involves the outsourcing of information processing to the body or environment, thus making fewer demands on the processing power of the CNS. The evolution of extended cognition should be particularly favoured among small, generalist predators such as spiders, and here, we review the literature to evaluate the fit of empirical data with these contrasting models of cognition. Spiders do not seem to be cognitively limited, displaying a large diversity of learning processes, from habituation to contextual learning, including a sense of numerosity. To tease apart the central from the extended cognition, we apply the mutual manipulability criterion, testing the existence of reciprocal causal links between the putative elements of the system. We conclude that the web threads and configurations are integral parts of the cognitive systems. The extension of cognition to the web helps to explain some puzzling features of spider behaviour and seems to promote evolvability within the group, enhancing innovation through cognitive connectivity to variable habitat features. Graded changes in relative brain size could also be explained by outsourcing information processing to environmental features. More generally, niche-constructed structures emerge as prime candidates for extending animal cognition, generating the selective pressures that help to shape the evolving cognitive system.
Ševkušić-Mandić Slavica G.
2002-01-01
The paper presents the results of a pilot project evaluation, carried out as an action investigation whose aim was to provide a better quality extended day for primary school students. The project included the training of teachers involved in the extended day program, the design of special activities performed by teachers with children once a week, as well as changes to and equipping of the premises where children stay. The aims of the program were the conception and performance of activities in a less formal way than during regular instructional days, linking learning at school and acquired knowledge to everyday experiences, and work on contents contributing to the development of children's interests and creativity. The program was carried out in a Belgrade primary school during the 2001/2002 academic year, comprising students of 1st and 2nd grades (N=77). The effects of the program were monitored throughout the academic year (observation and teachers' reports on accomplished workshops) and at the end of the academic year (teachers' and students' opinions of the program, academic achievement and creativity of students attending the extended day program compared with students not attending it). Findings about positive effects of the program on students' broadening of interests and willingness to express themselves creatively indicate unequivocally that there is a need for developing special extended day programs. The extended day program is an opportunity for school to exert greater educational influence that has yet to be tapped.
Extending the unambiguous velocity range using multiple carrier frequencies
Zhang, Z.; Jakobsson, A.; Nikolov, Svetoslav;
2005-01-01
Typically, velocity estimators based on the estimation of the Doppler shift will suffer from a limited unambiguous velocity range. Proposed are two novel multiple carrier based velocity estimators extending the velocity range above the Nyquist velocity limit. Numerical simulations indicate...
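The multiple-carrier idea can be illustrated with a two-carrier phase-unwrapping toy model. This is a generic dual-frequency disambiguation sketch, not necessarily the estimators proposed in the paper, and all numbers are illustrative: per-carrier Doppler phases alias on ±π, but their difference behaves like an estimate at the much lower difference frequency f1 − f2, whose unambiguous range is larger by a factor f1/(f1 − f2).

```python
import numpy as np

c = 1540.0               # m/s, speed of sound in tissue (assumed)
T = 1e-4                 # s, pulse repetition interval (assumed)
f1, f2 = 5.0e6, 4.0e6    # Hz, the two carrier frequencies (assumed)

def wrap(phi):
    """Wrap a phase into (-pi, pi]."""
    return (phi + np.pi) % (2 * np.pi) - np.pi

def true_phase(v, f):
    """Inter-pulse Doppler phase: phi = 4*pi*f*v*T / c."""
    return 4 * np.pi * f * v * T / c

v_true = 1.2                          # m/s, above the single-carrier limit
v_nyq1 = c / (4 * f1 * T)             # single-carrier Nyquist limit: 0.77 m/s
phi1 = wrap(true_phase(v_true, f1))   # what each carrier actually observes
phi2 = wrap(true_phase(v_true, f2))

# coarse but unambiguous estimate from the difference phase
v_coarse = wrap(phi1 - phi2) * c / (4 * np.pi * (f1 - f2) * T)
# refine: unwrap the precise f1 phase using the coarse estimate
n = np.round((true_phase(v_coarse, f1) - phi1) / (2 * np.pi))
v_est = (phi1 + 2 * np.pi * n) * c / (4 * np.pi * f1 * T)
```

The difference frequency of 1 MHz extends the unambiguous range here to about 3.85 m/s, five times the single-carrier limit, at the cost of the coarse estimate's higher noise sensitivity (which the fine unwrapping step mitigates).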
Mexican national pyranometer network calibration
Valdes, M.; Villarreal, L.; Estevez, H.; Riveros, D.
2013-12-01
In order to take advantage of solar radiation as an alternative energy source, it is necessary to evaluate its spatial and temporal availability. The Mexican National Meteorological Service (SMN) has a network of 136 meteorological stations, each coupled with a pyranometer for measuring global solar radiation. Some of these stations had not been calibrated in several years. The Mexican Department of Energy (SENER), in order to have a reliable evaluation of the solar resource, funded this project to calibrate the SMN pyranometer network and validate the data. Calibration of the 136 pyranometers by the intercomparison method recommended by the World Meteorological Organization (WMO) requires lengthy observations and specific environmental conditions such as clear skies and a stable atmosphere, circumstances that determine the site and season of the calibration. The Solar Radiation Section of the Instituto de Geofísica of the Universidad Nacional Autónoma de México is a Regional Center of the WMO and is certified to carry out the calibration procedures and issue certificates. We are responsible for the recalibration of the pyranometer network of the SMN. A continuous-emission solar simulator with a 30 cm diameter exposed area was acquired to reduce the calibration time and remove the dependence on atmospheric conditions. We present the results of the calibration of 10 thermopile pyranometers and one photovoltaic cell by the intercomparison method, with more than 10000 observations each, and those obtained with the solar simulator.
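The intercomparison arithmetic itself is simple: the test instrument's responsivity is its signal divided by the irradiance measured simultaneously by a calibrated reference, averaged over many observations. A hypothetical sketch with made-up numbers:

```python
import numpy as np

rng = np.random.default_rng(2)
S_ref = 8.50                          # uV/(W m^-2), known reference responsivity
G = rng.uniform(400, 1000, 10000)     # true irradiance samples, W/m^2
V_ref = S_ref * G                     # reference pyranometer signal, uV
# test pyranometer: unknown responsivity 9.20, with 0.2% measurement noise
V_test = 9.20 * G * (1 + 0.002 * rng.standard_normal(G.size))

G_ref = V_ref / S_ref                 # irradiance inferred from the reference
S_test = np.mean(V_test / G_ref)      # calibrated responsivity of the test unit
```

Averaging over thousands of simultaneous samples is what makes the lengthy clear-sky observation periods (or, here, the solar simulator) necessary.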
Site characterization for calibration of radiometric sensors using vicarious method
Parihar, Shailesh; Rathore, L. S.; Mohapatra, M.; Sharma, A. K.; Mitra, A. K.; Bhatla, R.; Singh, R. S.; Desai, Yogdeep; Srivastava, Shailendra S.
2016-05-01
The radiometric performance of Earth observation satellite sensors varies from the ground pre-launch calibration campaign through the post-launch period, extending over the lifetime of the satellite, owing in part to launch vibrations. Calibration is therefore carried out worldwide through various methods throughout a satellite's lifetime. In India, the Indian Space Research Organisation (ISRO) calibrates the sensors of the Resourcesat-2 satellite by vicarious methods. One of these vicarious calibration methods is the reflectance-based approach, which is applied in this study to the radiometric calibration of sensors on board the Resourcesat-2 satellite. Ground-based measurements of atmospheric conditions and surface reflectance were made at the Bap, Rajasthan Calibration/Validation (Cal/Val) site. Cal/Val observations at the site were carried out with a hyperspectral spectroradiometer covering the spectral range 350 nm to 2500 nm for radiometric characterization of the site. A Sunphotometer/Ozonometer was also used to measure the atmospheric parameters. The calibrated radiance is converted to absolute at-sensor spectral reflectance and Top-Of-Atmosphere (TOA) radiance. TOA radiance was computed using the radiative transfer model `Second Simulation of the Satellite Signal in the Solar Spectrum' (6S), which can accurately simulate the effects introduced by the presence of the atmosphere along the path from Sun to target (surface) to sensor. The methodology for band-averaged reflectance retrieval and the spectral reflectance fitting process are described. The spectral reflectance and atmospheric parameters are then put into the 6S code to predict TOA radiance, which is compared with the Resourcesat-2 radiance. The spectral signature and its reflectance ratio indicate the uniformity of the site. The study thus shows that the selected site is suitable for vicarious calibration of the Resourcesat-2 sensors, and it demonstrates the procedure for similar site-selection exercises for Cal/Val analysis of other satellites over India.
Calibration of Radio Interferometers Using a Sparse DoA Estimation Framework
Brossard, Martin; Pesavento, Marius; Boyer, Rémy; Larzabal, Pascal
2016-01-01
The calibration of modern radio interferometers is a significant challenge, specifically at low frequencies. In this perspective, we propose a novel iterative calibration algorithm, which employs the popular sparse representation framework, in the regime where the propagation conditions bend the directions of the sources dissimilarly. More precisely, our algorithm is designed to estimate the apparent directions of the calibration sources, their powers, the direction-dependent and direction-independent complex gains of the array elements, and their noise powers, with a reasonable computational complexity. Numerical simulations reveal that the proposed scheme is statistically efficient at low SNR, even with additional non-calibration sources at unknown directions.
NIST Infrared Blackbody Calibration Support for Climate Change Research
Hanssen, L. M.; Zeng, J.; Mekhontsev, S.; Khromchenko, V.
2012-12-01
The National Institute of Standards and Technology (NIST) Sensor Science Division has established measurement capabilities in support of various existing and planned satellite programs that monitor key parameters for the study of climate change, such as solar irradiance, earth radiance, and atmospheric effects. These capabilities include the characterization of infrared reference blackbody sources and cavity radiometers, as well as the materials used to coat the cavity surfaces. In order to accurately measure high levels of effective emissivity and absorptance of cavities, NIST has developed a laser- and integrating-sphere-based facility, the Complete Hemispherical Infrared Laser-based Reflectometer (CHILR). The system is used for both radiometer and blackbody cavity characterization. Multiple laser sources with wavelengths ranging from 1.5 μm to 23 μm are used to perform reflectance (1 - emissivity, or absorptance) measurements of radiometer cavities. Measurements have been performed for numerous instruments, including the Internal Calibration Target (ICT) blackbody source used for calibration of the Cross-track Infrared Sounder (CrIS), and the Total Irradiance Monitor (TIM) instrument on the Solar Radiation and Climate Experiment (SORCE), both for the Joint Polar Satellite System (JPSS), as well as the Active Cavity Radiometer Irradiance Monitor (ACRIM) instrument, and blackbodies constructed for prototyping of an infrared instrument on the Climate Absolute Radiance and Refractivity Observatory (CLARREO). For a more comprehensive understanding of the measurement results, NIST has also measured samples of the coated surfaces of the cavities and associated baffles. This includes several types of reflectance measurements: specular, directional-hemispherical (diffuse), and bidirectional reflectance distribution function (BRDF). The first two are performed spectrally and provide information that enables estimation of the cavity performance where laser sources for CHILR are not available.
Calibration of Models Using Groundwater Age (Invited)
Sanford, W. E.
2009-12-01
Water-resource managers are frequently concerned with the long-term ability of a groundwater system to deliver volumes of water for both humans and ecosystems under natural and anthropogenic stresses. Analysis of how a groundwater system responds to such stresses usually involves the construction and calibration of a numerical groundwater-flow model. The calibration procedure usually involves the use of both groundwater-level and flux observations. Water-level data are often more abundant, and thus the availability of flux data can be critical, with well discharge and base flow to streams being most often available. Lack of good flux data, however, is a common occurrence, especially in more arid climates where the sustainability of the water supply may be even more in question. Environmental tracers are frequently being used to estimate the “age” of a water sample, which represents the time the water has been in the subsurface since its arrival at the water table. Groundwater ages provide flux-related information and can be used successfully to help calibrate groundwater models if porosity is well constrained, especially when there is a paucity of other flux data. As several different methods of simulating groundwater age and tracer movement are possible, a review is presented here of the advantages, disadvantages, and potential pitfalls of the various numerical and tracer methods used in model calibration. The usefulness of groundwater ages for model calibration depends on the ability both to interpret a tracer so as to obtain an apparent observed age, and to use a numerical model to obtain an equivalent simulated age observation. Different levels of simplicity and assumptions accompany different methods for calculating the equivalent simulated age observation. The advantages of computational efficiency in certain methods can be offset by error associated with the underlying assumptions. Advective travel-time calculation using path-line tracking in finite
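The simplest of the simulated-age methods mentioned above, advective travel time along a path line, can be sketched in a few lines: the age at a point is the sum of segment lengths divided by seepage velocity (Darcy flux over porosity), which is also why the abstract stresses that porosity must be well constrained. The segment values below are hypothetical; in practice they would come from particle tracking through a calibrated flow model.

```python
def advective_age(path_segments, porosity):
    """Advective travel-time sketch: cumulative time along a path line from
    the water table, with seepage velocity = Darcy flux / porosity. Note the
    simulated age scales linearly with the assumed porosity."""
    age_s = 0.0
    for length_m, darcy_flux_m_per_s in path_segments:
        seepage_velocity = darcy_flux_m_per_s / porosity  # [m/s]
        age_s += length_m / seepage_velocity
    return age_s / (365.25 * 24 * 3600.0)  # convert seconds to years

# hypothetical path line: three segments with flux decreasing with depth
segments = [(200.0, 3.0e-7), (400.0, 1.5e-7), (150.0, 5.0e-8)]
age_years = advective_age(segments, porosity=0.30)   # ~60 years
```

This ignores dispersion, diffusion, and mixing, which is exactly the kind of simplifying assumption the review weighs against computational efficiency.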
Extending Critical Performativity
Spicer, André; Alvesson, Mats; Kärreman, Dan
2016-01-01
In this article we extend the debate about critical performativity. We begin by outlining the basic tenets of critical performativity and how this has been applied in the study of management and organization. We then address recent critiques of critical performativity. We note these arguments suffer from an undue focus on intra-academic debates; engage in authoritarian theoretical policing; feign relevance through symbolic radicalism; and repackage common sense. We take these critiques as an opportunity to offer an extended model of critical performativity that involves focusing on issues...
Calibrating System for Vacuum Gauges
Meng, Jun; Yang, Xiaotian; Hao, Binggan; Hou, Shengjun; Hu, Zhenjun
2003-01-01
In order to measure the vacuum degree, many vacuum gauges will be used in the CSR vacuum system. We bought several types of vacuum gauges. Different types of vacuum gauges, or even gauges of the same type, give different measurement results under the same conditions, so they must be calibrated. It is impractical for us to send so many gauges to an outside calibration station because of the high price, so the best choice is to build a second-class calibration station for vacuum gauges ourselves (Fig. 1).
Jet energy calibration in ATLAS
Schouten, Doug
A correct energy calibration for jets is essential to the success of the ATLAS experiment. In this thesis I study a method for deriving an in situ jet energy calibration for the ATLAS detector. In particular, I show the applicability of the missing transverse energy projection fraction method. This method is shown to set the correct mean energy for jets. Pileup effects due to the high luminosities at ATLAS are also studied. I study the correlations in lateral distributions of pileup energy, as well as the luminosity dependence of the in situ calibration method.
Calibration of the Albian/Cenomanian Boundary by Ammonite Biostratigraphy: U.S. Western Interior
Anonymous
2007-01-01
Calibration of numerical ages to the geological time scale is a long scientific pursuit that requires the integration of multiple data sets. A case study of the Albian/Cenomanian stage boundary, also the Lower/Upper Cretaceous series boundary, illustrates the calibration process. The numerical age of this boundary has shifted from 96 Ma to 99 Ma over a time span of nearly fifty years. Recalibration resulted first from improvements in radiometric dating, later from inferences about ammonite phylogeny, and most recently from radiometric dates of newly discovered volcanic beds interbedded with diagnostic guide fossils. However, the calibration process continues with study of cosmopolitan dinoflagellates.
Relevance of ellipse eccentricity for camera calibration
Mordwinzew, W.; Tietz, B.; Boochs, F.; Paulus, D.
2015-05-01
The influence of ellipse eccentricity on image blocks cannot be modeled in a straightforward fashion. Instead, simulations can help make the impact visible and distinguish critical from less critical situations. In particular, this might be of importance for calibrations, as an undetected influence on the results will affect further projects in which the same camera is used. This paper therefore aims to point out the influence of ellipse eccentricities on camera calibrations, using two typical calibration bodies: a planar and a cube-shaped calibration body. In the first step, their relevance and influence on the image measurements and on the object and camera geometry is shown with numeric examples. Differences and similarities between both calibration bodies are identified and discussed. In the second step, the practical relevance of a correction is proven in a real calibration. Finally, a conclusion is drawn, followed by recommendations for handling ellipse eccentricity in practice.
Extended Interneuronal Network of the Dentate Gyrus
Gergely G. Szabo
2017-08-01
Local interneurons control principal cells within individual brain areas, but anecdotal observations indicate that interneuronal axons sometimes extend beyond strict anatomical boundaries. Here, we use the case of the dentate gyrus (DG) to show that boundary-crossing interneurons with cell bodies in CA3 and CA1 constitute a numerically significant and diverse population that relays patterns of activity generated within the CA regions back to granule cells. These results reveal the existence of a sophisticated retrograde GABAergic circuit that fundamentally extends the canonical interneuronal network.
Focus point supersymmetry in extended gauge mediation
Ding, Ran [School of Physics, Nankai University, Tianjin 300071 (China)]; Li, Tianjun [State Key Laboratory of Theoretical Physics and Kavli Institute for Theoretical Physics (KITPC), Institute of Theoretical Physics, Chinese Academy of Sciences, Beijing 100190 (China); School of Physical Electronics, University of Electronic Science and Technology of China, Chengdu 610054 (China)]; Staub, Florian [Bethe Center for Theoretical Physics & Physikalisches Institut der Universität Bonn, Nußallee 12, 53115 Bonn (Germany)]; Zhu, Bin [School of Physics, Nankai University, Tianjin 300071 (China)]
2014-03-27
We propose a small extension of the minimal gauge mediation through the combination of extended gauge mediation and conformal sequestering. We show that the focus point supersymmetry can be realized naturally, and the fine-tuning is significantly reduced compared to the minimal gauge mediation and extended gauge mediation without focus point. The Higgs boson mass is around 125 GeV, the gauginos remain light, and the gluino is likely to be detected at the next run of the LHC. However, the multi-TeV squarks are out of the reach of the LHC. The numerical calculation of the fine-tuning shows that this model remains natural.
Björn J. Döring
2013-12-01
A synthetic aperture radar (SAR) system requires external absolute calibration so that radiometric measurements can be exploited in numerous scientific and commercial applications. Besides estimating a calibration factor, metrological standards also demand the derivation of a respective calibration uncertainty. This uncertainty is currently not systematically determined. Here for the first time it is proposed to use hierarchical modeling and Bayesian statistics as a consistent method for handling and analyzing the hierarchical data typically acquired during external calibration campaigns. Through the use of Markov chain Monte Carlo simulations, a joint posterior probability can be conveniently derived from measurement data despite the necessary grouping of data samples. The applicability of the method is demonstrated through a case study: The radar reflectivity of DLR’s new C-band Kalibri transponder is derived through a series of RADARSAT-2 acquisitions and a comparison with reference point targets (corner reflectors). The systematic derivation of calibration uncertainties is seen as an important step toward traceable radiometric calibration of synthetic aperture radars.
Numerical simulations of concrete flow: A benchmark comparison
Roussel, Nicolas; Gram, Annika; Cremonesi, Massimiliano;
2016-01-01
First, we define in this paper two benchmark flows readily usable by anyone calibrating a numerical tool for concrete flow prediction. Such benchmark flows shall allow anyone to check the validity of their computational tools no matter the numerical methods and parameters they choose. Second, we compare numerical predictions of the concrete sample final shape for these two benchmark flows obtained by various research teams around the world using various numerical techniques. Our results show that all numerical techniques compared here give very similar results suggesting that numerical...
Parameterization of extended systems
Niemann, Hans Henrik
2006-01-01
The YJBK parameterization (of all stabilizing controllers) is extended to handle systems with additional sensors and/or actuators. It is shown that the closed loop transfer function is still an affine function in the YJBK parameters in the nominal case. Further, some closed-loop stability results...
Wu, J -L; Xiao, H
2015-01-01
Model-form uncertainties in complex mechanics systems are a major obstacle for predictive simulations. Reducing these uncertainties is critical for stakeholders to make risk-informed decisions based on numerical simulations. For example, Reynolds-Averaged Navier-Stokes (RANS) simulations are increasingly used in mission-critical systems involving turbulent flows. However, for many practical flows the RANS predictions have large model-form uncertainties originating from the uncertainty in the modeled Reynolds stresses. Recently, a physics-informed Bayesian framework has been proposed to quantify and reduce model-form uncertainties in RANS simulations by utilizing sparse observation data. However, in the design stage of engineering systems, measurement data are usually not available. In the present work we extend the original framework to scenarios where there are no available data on the flow to be predicted. In the proposed method, we first calibrate the model discrepancy on a related flow with available data...
Numeric invariants from multidimensional persistence
Skryzalin, Jacek [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Carlsson, Gunnar [Stanford Univ., Stanford, CA (United States)
2017-05-19
In this paper, we analyze the space of multidimensional persistence modules from the perspectives of algebraic geometry. We first build a moduli space of a certain subclass of easily analyzed multidimensional persistence modules, which we construct specifically to capture much of the information which can be gained by using multidimensional persistence over one-dimensional persistence. We argue that the global sections of this space provide interesting numeric invariants when evaluated against our subclass of multidimensional persistence modules. Lastly, we extend these global sections to the space of all multidimensional persistence modules and discuss how the resulting numeric invariants might be used to study data.
Coast guard STD calibration procedures
Freeman, R. H.; Krug, W. S.
1973-01-01
This manual describes the procedures used by the Coast Guard Oceanographic Unit (CGOU) to calibrate several Model 9040 STD systems, manufactured by Plessey Environmental Systems, currently in use within the Coast Guard...
Calibration of "Babyline" RP instruments
2015-01-01
If you have old RP instrumentation of the “Babyline” type, as shown in the photo, please contact the Radiation Protection Group (Joffrey Germa, 73171) to have the instrument checked and calibrated. Thank you. Radiation Protection Group
Astrid-2 EMMA Magnetic Calibration
Merayo, José M.G.; Brauer, Peter; Risbo, Torben
1998-01-01
experiment built as a collaboration between the DTU, Department of Automation and the Department of Plasma Physics, The Alfvenlaboratory, Royal Institute of Technology (RIT), Stockholm. The final magnetic calibration of the Astrid-2 satellite was done at the Lovoe Magnetic Observatory under the Geological...... of the magnetometer readings in each position were related to the field magnitudes from the Observatory, and a least squares fit for the 9 magnetometer calibration parameters was performed (3 offsets, 3 scale values and 3 inter-axes angles). After corrections for the magnetometer digital-to-analogue converters...... fit calibration parameters. Owing to time shortage, we did not evaluate the temperature coefficients of the flight sensor calibration parameters. However, this was done for an identical flight spare magnetometer sensor at the magnetic coil facility belonging to the Technical University of Braunschweig...
Field calibration of cup anemometers
Kristensen, L.; Jensen, G.; Hansen, A.; Kirkegaard, P.
2001-01-01
An outdoor calibration facility for cup anemometers, where the signals from 10 anemometers of which at least one is a reference can be recorded simultaneously, has been established. The results are discussed with special emphasis on the statistical significance of the calibration expressions. It is concluded that the method has the advantage that many anemometers can be calibrated accurately with a minimum of work and cost. The obvious disadvantage is that the calibration of a set of anemometers may take more than one month in order to have wind speeds covering a sufficiently large magnitude range in a wind direction sector where we can be sure that the instruments are exposed to identical, simultaneous wind flows. Another main conclusion is that statistical uncertainty must be carefully evaluated since the individual 10 minute wind-speed averages are not statistically independent. (au)
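The calibration expression for a cup anemometer is typically a straight line relating the instrument's reading to the reference speed, fitted over many simultaneous 10-minute averages. A minimal least-squares sketch with hypothetical (and, for simplicity, noiseless) data follows; as the abstract notes, real 10-minute averages are not statistically independent, so naive standard errors from such a fit would understate the true uncertainty.

```python
def linear_calibration(reference, instrument):
    """Ordinary least-squares fit of instrument = gain * reference + offset,
    the usual form of a cup anemometer calibration expression."""
    n = len(reference)
    mx = sum(reference) / n
    my = sum(instrument) / n
    sxx = sum((x - mx) ** 2 for x in reference)
    sxy = sum((x - mx) * (y - my) for x, y in zip(reference, instrument))
    gain = sxy / sxx
    offset = my - gain * mx
    return gain, offset

# hypothetical simultaneous 10-minute mean wind speeds [m/s],
# synthesized with gain 1.01 and offset 0.05 for illustration
ref  = [4.1, 5.3, 6.8, 8.2, 9.7, 11.4, 13.0]
inst = [4.191, 5.403, 6.918, 8.332, 9.847, 11.564, 13.180]
gain, offset = linear_calibration(ref, inst)
```

A real field campaign would add measurement scatter and restrict the data to the wind-direction sector in which all instruments see identical flow.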
Resonance regions of extended Mathieu equation
Semyonov, V. P.; Timofeev, A. V.
2016-02-01
One of the mechanisms of energy transfer between degrees of freedom of a dusty plasma system is based on parametric resonance. The initial stage of this process can be described by an equation similar to the Mathieu equation. This equation is studied by analytical and numerical approaches. The numerical solution of the extended Mathieu equation is obtained for a wide range of parameter values. Boundaries of resonance regions, growth rates of amplitudes, and onset times are obtained. The energy transfer between the degrees of freedom of a dusty plasma system can occur over a wide range of frequencies.
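Mapping a resonance region numerically amounts to integrating the equation for given parameters and checking whether the amplitude grows. A sketch for the standard Mathieu form x'' + (a - 2q cos 2t) x = 0 (an extended variant would add further terms to the bracket), using a plain RK4 integrator:

```python
import math

def mathieu_growth(a, q, periods=40, steps_per_period=400):
    """RK4 integration of x'' + (a - 2q cos 2t) x = 0. Returns the final
    amplitude sqrt(x^2 + v^2) after the given number of pump periods; growth
    by orders of magnitude over the 1e-3 start signals parametric resonance."""
    def deriv(t, x, v):
        return v, -(a - 2.0 * q * math.cos(2.0 * t)) * x

    h = math.pi / steps_per_period  # the pump period is pi here
    x, v, t = 1.0e-3, 0.0, 0.0
    for _ in range(periods * steps_per_period):
        k1x, k1v = deriv(t, x, v)
        k2x, k2v = deriv(t + h / 2, x + h / 2 * k1x, v + h / 2 * k1v)
        k3x, k3v = deriv(t + h / 2, x + h / 2 * k2x, v + h / 2 * k2v)
        k4x, k4v = deriv(t + h, x + h * k3x, v + h * k3v)
        x += h / 6 * (k1x + 2 * k2x + 2 * k3x + k4x)
        v += h / 6 * (k1v + 2 * k2v + 2 * k3v + k4v)
        t += h
    return math.hypot(x, v)

grow = mathieu_growth(1.0, 0.2)  # a = 1: inside the first resonance tongue
flat = mathieu_growth(3.0, 0.2)  # a = 3: between tongues, stays bounded
```

Scanning (a, q) over a grid and thresholding the final amplitude traces out exactly the tongue boundaries that the abstract refers to.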
Bayesian Calibration of Microsimulation Models.
Rutter, Carolyn M; Miglioretti, Diana L; Savarino, James E
2009-12-01
Microsimulation models that describe disease processes synthesize information from multiple sources and can be used to estimate the effects of screening and treatment on cancer incidence and mortality at a population level. These models are characterized by simulation of individual event histories for an idealized population of interest. Microsimulation models are complex and invariably include parameters that are not well informed by existing data. Therefore, a key component of model development is the choice of parameter values. Microsimulation model parameter values are selected to reproduce expected or known results through the process of model calibration. Calibration may be done by perturbing model parameters one at a time or by using a search algorithm. As an alternative, we propose a Bayesian method to calibrate microsimulation models that uses Markov chain Monte Carlo. We show that this approach converges to the target distribution and use a simulation study to demonstrate its finite-sample performance. Although computationally intensive, this approach has several advantages over previously proposed methods, including the use of statistical criteria to select parameter values, simultaneous calibration of multiple parameters to multiple data sources, incorporation of information via prior distributions, description of parameter identifiability, and the ability to obtain interval estimates of model parameters. We develop a microsimulation model for colorectal cancer and use our proposed method to calibrate model parameters. The microsimulation model provides a good fit to the calibration data. We find evidence that some parameters are identified primarily through prior distributions. Our results underscore the need to incorporate multiple sources of variability (i.e., due to calibration data, unknown parameters, and estimated parameters and predicted values) when calibrating and applying microsimulation models.
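The core of MCMC-based calibration is sampling a model parameter from its posterior given calibration data, rather than searching for a single best-fitting value. A deliberately tiny stand-in for a microsimulation model (a single event probability p, a Beta prior, and binomial calibration data; all numbers hypothetical):

```python
import math
import random

def metropolis_calibrate(k_obs, n, prior_a=2.0, prior_b=2.0,
                         iters=20000, step=0.05, seed=1):
    """Metropolis random-walk sampler for the posterior of an event
    probability p given k_obs events in n individuals, with a
    Beta(prior_a, prior_b) prior. Returns post-burn-in samples."""
    rng = random.Random(seed)

    def log_post(p):
        if not 0.0 < p < 1.0:
            return -math.inf
        return ((prior_a - 1) * math.log(p) + (prior_b - 1) * math.log(1 - p)
                + k_obs * math.log(p) + (n - k_obs) * math.log(1 - p))

    p, lp = 0.5, log_post(0.5)
    samples = []
    for i in range(iters):
        q = p + rng.gauss(0.0, step)      # random-walk proposal
        lq = log_post(q)
        if math.log(rng.random()) < lq - lp:
            p, lp = q, lq                 # accept
        if i >= iters // 2:               # discard the first half as burn-in
            samples.append(p)
    return samples

draws = metropolis_calibrate(k_obs=30, n=200)
post_mean = sum(draws) / len(draws)  # analytic posterior mean is 32/204
```

The same machinery scales to many parameters and multiple data sources, which is where the advantages listed in the abstract (identifiability diagnostics, interval estimates) come from.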
Pressures Detector Calibration and Measurement
AUTHOR|(CDS)2156315
2016-01-01
This is a report on my first and second projects (of three) in NA61. I carried out data taking and analysis in order to calibrate the pressure detectors and verify the calibration. I analyzed the data with the ROOT software, using the C++ programming language. The first part of my project was the determination of the calibration factor of the pressure sensors. Based on that result, I examined the relation between the pressure drop, the gas flow rate in the paper filter, and its diameter.
Bushouse, Howard
2009-07-01
Flux calibration, image displacement, and spectral trace of the UVIS G280 grism will be established using observations of the HST flux standard star GD71. Accompanying direct exposures will provide the image displacement measurements and wavelength zeropoints for dispersed exposures. The calibrations will be obtained at the central position of each CCD chip and at the center of the UVIS field. No additional field-dependent variations will be derived.
Infrasound Sensor Calibration and Response
2012-09-01
...functions with faster rise times. SUMMARY: We have documented past work on the determination of the calibration constant of the LANL infrasound sensor. Los Alamos National Laboratory (LANL) has operated an infrasound sensor calibration chamber that operates over a frequency range of 0.02 to 4 Hz. This chamber has...
Beam Imaging and Luminosity Calibration
Klute, Markus; Salfeld-Nebgen, Jakob
2016-01-01
We discuss a method to reconstruct two-dimensional proton bunch densities using vertex distributions accumulated during LHC beam-beam scans. The $x$-$y$ correlations in the beam shapes are studied and an alternative luminosity calibration technique is introduced. We demonstrate the method on simulated beam-beam scans and estimate the uncertainty on the luminosity calibration associated with the beam-shape reconstruction to be below 1%.
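The quantity that ties reconstructed bunch densities to a luminosity calibration is the beam overlap integral. As a sketch (not the paper's method), the snippet below evaluates it numerically for two identical, centred, uncorrelated Gaussian bunches and checks it against the closed form 1/(4*pi*sigma_x*sigma_y); the bunch widths are hypothetical.

```python
import math

def overlap_integral(sx, sy, grid=400, span=8.0):
    """Midpoint-rule evaluation of the overlap integral
    O = integral of rho1(x, y) * rho2(x, y) dx dy for two identical,
    centred 2-D Gaussian bunch densities with widths sx, sy."""
    hx = 2 * span * sx / grid
    hy = 2 * span * sy / grid
    total = 0.0
    for i in range(grid):
        x = -span * sx + (i + 0.5) * hx
        gx = math.exp(-x * x / (2 * sx * sx)) / (sx * math.sqrt(2 * math.pi))
        for j in range(grid):
            y = -span * sy + (j + 0.5) * hy
            gy = math.exp(-y * y / (2 * sy * sy)) / (sy * math.sqrt(2 * math.pi))
            rho = gx * gy          # both beams share this density here
            total += rho * rho * hx * hy
    return total

sx, sy = 0.02e-3, 0.015e-3          # hypothetical bunch widths [m]
numeric = overlap_integral(sx, sy)
analytic = 1.0 / (4.0 * math.pi * sx * sy)  # uncorrelated, equal beams
```

With $x$-$y$ correlations or non-Gaussian shapes (the paper's actual concern) the closed form no longer holds, and a numeric integral over the reconstructed densities is what remains.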
Salvini, Stefano
2014-01-01
Context. Modern radio astronomical arrays have (or will have) more than one order of magnitude more receivers than classical synthesis arrays, such as the VLA and the WSRT. This makes gain calibration a computationally demanding task. Several alternating direction implicit (ADI) approaches have therefore been proposed that reduce numerical complexity for this task from $\mathcal{O}(P^3)$ to $\mathcal{O}(P^2)$, where $P$ is the number of receive paths to be calibrated. Aims. We present an ADI method, show that it converges to the optimal solution, and assess its numerical, computational and statistical performance. We also discuss its suitability for application in self-calibration and report on its successful application in LOFAR standard pipelines. Methods. Convergence is proved by rigorous mathematical analysis using a contraction mapping. Its numerical, algorithmic, and statistical performance, as well as its suitability for application in self-calibration, are assessed using simulations. Results. Our simu...
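The $\mathcal{O}(P^2)$ structure comes from the fact that, with all other gains held fixed, each receiver gain has a closed-form least-squares update against the measured correlations $R_{ij} \approx g_i g_j^* M_{ij}$. A StEFCal-style sketch of this alternating update (toy all-ones sky model, noiseless data, pure Python complex arithmetic; not the paper's exact algorithm):

```python
import cmath
import random

def adi_gain_cal(R, M, iters=100):
    """Alternating least-squares gain calibration: each g_i is updated in
    closed form against row i of the measured correlations R, with the other
    gains frozen at their previous values. Successive iterates are averaged
    every other step to damp the characteristic amplitude oscillation."""
    P = len(R)
    g = [1.0 + 0.0j] * P
    for it in range(iters):
        g_prev = g[:]
        g = []
        for i in range(P):
            # model visibilities for row i, given the previous gains
            z = [g_prev[j].conjugate() * M[i][j] for j in range(P)]
            num = sum(R[i][j] * z[j].conjugate() for j in range(P) if j != i)
            den = sum(abs(z[j]) ** 2 for j in range(P) if j != i)
            g.append(num / den)
        if it % 2 == 1:
            g = [(a + b) / 2 for a, b in zip(g, g_prev)]
    return g

random.seed(0)
P = 6
g_true = [cmath.rect(random.uniform(0.5, 1.5), random.uniform(-1, 1))
          for _ in range(P)]
M = [[1.0 + 0.0j] * P for _ in range(P)]  # toy sky model: unit coherence
R = [[g_true[i] * g_true[j].conjugate() * M[i][j] for j in range(P)]
     for i in range(P)]
g_est = adi_gain_cal(R, M)
# gains are only recoverable up to a common phase, so compare the
# phase-invariant products g_i * conj(g_j) against the data
fit_err = max(abs(g_est[i] * g_est[j].conjugate() - R[i][j])
              for i in range(P) for j in range(P) if i != j)
```

Each sweep costs $\mathcal{O}(P^2)$, versus $\mathcal{O}(P^3)$ for a direct joint solve, which is the saving the abstract refers to.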
Calibration of shaft alignment instruments
Hemming, Bjorn
1998-09-01
Correct shaft alignment is vital for most rotating machines. Several shaft alignment instruments, ranging from dial-indicator-based to laser-based, are commercially available. At VTT Manufacturing Technology, a device for the calibration of shaft alignment instruments was developed during 1997. A feature of the developed device is its similarity to the typical use of shaft alignment instruments, i.e. the rotation of two shafts during the calibration. The benefit of the rotation is that all errors of the shaft alignment instrument, for example the deformations of the suspension bars, are included. However, the rotation significantly increases the uncertainty of calibration because of errors in the suspension of the shafts in the device. Without rotation the uncertainty of calibration is 0.001 mm for the parallel offset scale and 0.003 mm/m for the angular scale. With rotation the uncertainty of calibration is 0.002 mm for the parallel offset scale and 0.004 mm/m for the angular scale.
Essay on Option Pricing, Hedging and Calibration
da Silva Ribeiro, André Manuel
Quantitative finance is concerned with applying mathematics to financial markets. This thesis is a collection of essays that study different problems in this field: How efficient are option price approximations for calibrating a stochastic volatility model? (Chapter 2) How different is the discretely sampled realized variance from the continuously sampled realized variance? (Chapter 3) How can we do static hedging for a payoff with two assets? (Chapter 4) Can we apply fast Fourier transform methods to efficiently use interest rate Markov-functional models? Can we extend them to accommodate other types... stochastic volatility model gives stable, accurate, and fast results for S&P 500 index option data over the period 2005 to 2009. Discretely Sampled Variance Options: A Stochastic Approximation Approach. In this paper, we expand the Drimus and Farkas (2012) framework to price variance options on discretely sampled...
Direct calibration of click-counting detectors
Bohmann, M.; Kruse, R.; Sperling, J.; Silberhorn, C.; Vogel, W.
2017-03-01
We introduce and experimentally implement a method for the detector calibration of photon-number-resolving time-bin multiplexing layouts based on the measured click statistics of superconducting nanowire detectors. In particular, the quantum efficiencies, the dark count rates, and the positive operator-valued measures of these measurement schemes are directly obtained with high accuracy. The method is based on the moments of the click-counting statistics for coherent states with different coherent amplitudes. The strength of our analysis is that we can directly conclude—on a quantitative basis—that the detection strategy under study is well described by a linear response function for the light-matter interaction and that it is sensitive to the polarization of the incident light field. Moreover, our method is further extended to a two-mode detection scenario. Finally, we present possible applications for such well-characterized detectors, such as sensing of atmospheric loss channels and phase sensitive measurements.
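A much-simplified version of the coherent-state idea can be made concrete: for coherent light with mean photon number mu split over N time bins, the per-bin no-click probability is (1 - nu) * exp(-eta * mu / N), so a linear fit of ln p0 against mu yields the efficiency eta from the slope and the dark-count probability nu from the intercept. This is an illustrative reduction of the moment-based analysis, with synthetic numbers, not the authors' full POVM reconstruction.

```python
import math

def estimate_efficiency(mu_values, p_noclick, n_bins):
    """Fit ln(p0) = ln(1 - nu) - eta * mu / N by ordinary least squares and
    return (eta, nu) for a time-bin multiplexed click detector."""
    xs = mu_values
    ys = [math.log(p) for p in p_noclick]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    eta = -slope * n_bins
    nu = 1.0 - math.exp(intercept)
    return eta, nu

# synthetic data generated with eta = 0.62, nu = 0.01, N = 8 bins
N = 8
mus = [0.5, 1.0, 2.0, 4.0, 8.0]     # coherent-state mean photon numbers
p0 = [(1 - 0.01) * math.exp(-0.62 * m / N) for m in mus]
eta, nu = estimate_efficiency(mus, p0, N)
```

With real count data, p0 would carry binomial noise and the fit would come with an uncertainty, but the slope/intercept separation of efficiency and dark counts is the same.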
Essay on Option Pricing, Hedging and Calibration
da Silva Ribeiro, André Manuel
Quantitative finance is concerned with applying mathematics to financial markets. This thesis is a collection of essays that study different problems in this field: How efficient are option price approximations for calibrating a stochastic volatility model? (Chapter 2) How different is the discretely sampled realized variance from the continuously sampled realized variance? (Chapter 3) How can we do static hedging for a payoff with two assets? (Chapter 4) Can we apply fast Fourier transform methods to efficiently use interest rate Markov-functional models? Can we extend them to accommodate other types... variance. We investigated the impact of their assumptions and we present an adjustment for their formula. Our adjustment provides a better approximation for pricing discretely sampled realized variance options under different market scenarios. Static Hedging for Two-Asset Options. In this paper we derive...
A Frequency Comb calibrated Solar Atlas
Molaro, P; Monai, S; Hernandez, J I Gonzalez; Hansch, T W; Holzwarth, R; Manescau, A; Pasquini, L; Probst, R A; Rebolo, R; Steinmetz, T; Udem, Th; Wilken, T
2013-01-01
The solar spectrum is a primary reference for the study of physical processes in stars and their variation during activity cycles. In Nov 2010 an experiment with a prototype of a Laser Frequency Comb (LFC) calibration system was performed with the HARPS spectrograph of the 3.6 m ESO telescope at La Silla, during which high signal-to-noise spectra of the Moon were obtained. We exploit those Echelle spectra to study the optical integrated solar spectrum. The DAOSPEC program is used to measure solar line positions through Gaussian fitting in an automatic way. We first apply the LFC solar spectrum to characterize the CCDs of the HARPS spectrograph. The comparison of the LFC and Th-Ar calibrated spectra reveals S-type distortions on each order along the whole spectral range with an amplitude of +/-40 m/s. This confirms the pattern found by Wilken et al. (2010) on a single order and extends the detection of the distortions to the whole analyzed region, revealing that the precise shape varies with wavelength. A new da...
Calibrating M dwarf metallicities using molecular indices
Woolf, Vincent M.; Wallerstein, George
2005-01-01
We report progress in the calibration of a method to determine cool dwarf star metallicities using molecular band strength indices. The molecular band index to metallicity relation can be calibrated using chemical abundances calculated from atomic line equivalent width measurements in high resolution spectra. Building on previous work, we have measured Fe and Ti abundances in 32 additional M and K dwarf stars to extend the range of temperature and metallicity covered. A test of our analysis method using warm star - cool star binaries shows we can calculate reliable abundances for stars warmer than 3500 K. We have used abundance measurements for warmer binary or cluster companions to estimate abundances in 6 additional cool dwarfs. Adding stars measured in our previous work and others from the literature provides 76 stars with Fe abundance and CaH2 and TiO5 index measurements. The CaH2 molecular index is directly correlated with temperature. TiO5 depends on temperature and metallicity. Metallicity can be estim...
Numerical simulations of black-hole spacetimes
Chu, Tony
This thesis covers various aspects of the numerical simulation of black-hole spacetimes according to Einstein's general theory of relativity, using the Spectral Einstein Code developed by the Caltech-Cornell-CITA collaboration. The first topic is improvement of binary-black-hole initial data. One such issue is the construction of binary-black-hole initial data with nearly extremal spins that remain nearly constant during the initial relaxation in an evolution. Another concern is the inclusion of physically realistic tidal deformations of the black holes to reduce the high-frequency components of the spurious gravitational radiation content, and represents a first step in incorporating post-Newtonian results in constraint-satisfying initial data. The next topic is the evolution of black-hole binaries and the gravitational waves they emit. The first spectral simulation of two inspiralling black holes through merger and ringdown is presented, in which the black holes are nonspinning and have equal masses. This work is extended to perform the first spectral simulations of two inspiralling black holes with moderate spins and equal masses, including the merger and ringdown. Two configurations are considered, in which both spins are either anti-aligned or aligned with the orbital angular momentum. Highly accurate gravitational waveforms are computed for all these cases, and are used to calibrate waveforms in the effective-one-body model. The final topic is the behavior of quasilocal black-hole horizons in highly dynamical situations. Simulations of a rotating black hole that is distorted by a pulse of ingoing gravitational radiation are performed. Multiple marginally outer trapped surfaces are seen to appear and annihilate with each other during the evolution, and the world tubes they trace out are all dynamical horizons. The dynamical horizon and angular momentum flux laws are evaluated in this context, and the dynamical horizons are contrasted with the event horizon.
Real-Time Attitude Independent Three Axis Magnetometer Calibration
Crassidis, John L.; Lai, Kok-Lam; Harman, Richard R.
2003-01-01
In this paper new real-time approaches for three-axis magnetometer sensor calibration are derived. These approaches rely on a conversion of the magnetometer-body and geomagnetic-reference vectors into an attitude-independent observation by using scalar checking. The goal of the full calibration problem involves the determination of the magnetometer bias vector, scale factors and non-orthogonality corrections. Although the actual solution to this full calibration problem involves the minimization of a quartic loss function, the problem can be converted into a quadratic loss function by a centering approximation. This leads to a simple batch linear least squares solution. In this paper we develop alternative real-time algorithms based on both the extended Kalman filter and Unscented filter. With these real-time algorithms, a full magnetometer calibration can now be performed on-orbit during typical spacecraft mission-mode operations. Simulation results indicate that both algorithms provide accurate calibration results in real time, but the Unscented filter is more robust to large initial condition errors than the extended Kalman filter. The algorithms are also tested using actual data from the Transition Region and Coronal Explorer (TRACE).
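The centering idea behind the batch solution can be illustrated with a minimal numerical sketch (synthetic data; the bias value, reference fields, and noise-free setup are all hypothetical). The scalar check z = |B_meas|^2 - |B_ref|^2 = 2 B_meas . b - |b|^2 is quartic in the unknowns as a loss, but mean-centering the observations cancels the |b|^2 term, leaving a linear least-squares problem for the bias alone:

```python
import numpy as np

rng = np.random.default_rng(0)
b_true = np.array([0.3, -0.1, 0.2])            # true magnetometer bias (hypothetical)

# Simulated geomagnetic reference vectors and biased body-frame readings
B_ref = rng.normal(size=(200, 3))
B_meas = B_ref + b_true

# Scalar check: z_k = |B_meas_k|^2 - |B_ref_k|^2 = 2 B_meas_k . b - |b|^2
z = np.sum(B_meas**2, axis=1) - np.sum(B_ref**2, axis=1)
H = 2.0 * B_meas

# Centering: subtracting the means cancels the constant |b|^2 term,
# so the bias satisfies an ordinary linear least-squares system
z_c = z - z.mean()
H_c = H - H.mean(axis=0)
b_est, *_ = np.linalg.lstsq(H_c, z_c, rcond=None)

print(np.round(b_est, 6))
```

With noise-free data the centered system is exact, so the recovered bias matches `b_true` to machine precision; the full problem additionally estimates scale factors and non-orthogonality terms.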
Griffin, M J; Schulz, B; Amaral-Rogers, A; Bendo, G; Bock, J; Conley, A; Dowell, C D; Ferlet, M; Glenn, J; Lim, T; Pearson, C; Pohlen, M; Sibthorpe, B; Spencer, L; Swinyard, B; Valtchanov, I
2013-01-01
Photometric instruments operating at far infrared to millimetre wavelengths often have broad spectral passbands (central wavelength/bandwidth ~ 3 or less), especially those operating in space. A broad passband can result in significant variation of the beam profile and aperture efficiency across the passband, effects which thus far have not generally been taken into account in the flux calibration of such instruments. With absolute calibration uncertainties associated with the brightness of primary calibration standards now in the region of 5% or less, variation of the beam properties across the passband can be a significant contributor to the overall calibration accuracy for extended emission. We present a calibration framework which takes such variations into account for both antenna-coupled and absorber-coupled focal plane architectures. The scheme covers point source and extended source cases, and also the intermediate case of a semi-extended source profile. We apply the new method to the Herschel-SPIRE s...
Modeling of germanium detector and its sourceless calibration
Steljić Milijana
2008-01-01
The paper describes the procedure of adapting a coaxial high-precision germanium detector to a device with numerical calibration. The procedure includes the determination of detector dimensions and establishing the corresponding model of the system. In order to achieve a successful calibration of the system without the usage of standard sources, Monte Carlo simulations were performed to determine its efficiency and pulse-height response function. A detailed Monte Carlo model was developed using the MCNP-5.0 code. The obtained results have indicated that this method represents a valuable tool for the quantitative uncertainty analysis of radiation spectrometers and gamma-ray detector calibration, thus minimizing the need for the deployment of radioactive sources.
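As a loose illustration of how a Monte Carlo model yields an efficiency without reference sources (this is not the MCNP geometry; the solid-angle fraction and interaction probability are invented placeholders), one can estimate efficiency as the fraction of simulated emissions that register in the sensitive volume:

```python
import random

random.seed(0)

def simulate(n=100_000, solid_angle_frac=0.25, p_interact=0.6):
    """Toy detector-efficiency estimate: count source particles that both
    reach the detector and interact in it (placeholder probabilities)."""
    hits = 0
    for _ in range(n):
        if random.random() < solid_angle_frac:   # particle reaches the crystal
            if random.random() < p_interact:     # and deposits energy there
                hits += 1
    return hits / n

eff = simulate()
print(round(eff, 3))
```

The estimate converges to the product of the two probabilities (0.15 here), with statistical uncertainty shrinking as 1/sqrt(n); a real transport code replaces the placeholder probabilities with geometry and cross-section physics.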
Calibration of parallel kinematics machine using generalized distance error model
[No author listed]
2007-01-01
This paper focuses on accuracy enhancement of a parallel kinematics machine through kinematic calibration. In the calibration process, construction of a well-structured identification Jacobian matrix and measurement of the end-effector position and orientation are the two main difficulties. In this paper, the identification Jacobian matrix is constructed easily by numerical calculation using the unit virtual velocity method. A generalized distance error model is presented to avoid directly measuring the position and orientation, which is difficult to measure. Finally, a measurement tool is given for acquiring the data points in the calibration process. Experimental studies confirmed the effectiveness of the method. It is also shown that the proposed approach can be applied to other types of parallel manipulators.
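Constructing an identification Jacobian numerically can be sketched generically with central differences (the kinematic model below is a toy two-parameter map, not the paper's machine or its unit virtual velocity formulation):

```python
import numpy as np

def numerical_jacobian(f, p, eps=1e-6):
    """Central-difference Jacobian of a vector function f at parameters p.
    In calibration, f maps kinematic parameters to end-effector coordinates,
    so J relates small parameter errors to pose errors."""
    p = np.asarray(p, dtype=float)
    f0 = np.asarray(f(p))
    J = np.zeros((f0.size, p.size))
    for j in range(p.size):
        dp = np.zeros_like(p)
        dp[j] = eps
        J[:, j] = (np.asarray(f(p + dp)) - np.asarray(f(p - dp))) / (2 * eps)
    return J

# Toy forward model: planar point at radius p[0], angle p[1]
f = lambda p: np.array([p[0] * np.cos(p[1]), p[0] * np.sin(p[1])])
J = numerical_jacobian(f, [1.0, 0.0])
print(np.round(J, 6))
```

At (1.0, 0.0) the Jacobian is the identity, which is easy to verify by hand; the same finite-difference machinery applies to any forward kinematic model whose analytic Jacobian is awkward to derive.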
Vision system for dial gage torque wrench calibration
Aggarwal, Neelam; Doiron, Theodore D.; Sanghera, Paramjeet S.
1993-11-01
In this paper, we present the development of a fast and robust vision system which, in conjunction with the Dial Gage Calibration system developed by AKO Inc., will be used by the U.S. Army in calibrating dial gage torque wrenches. The vision system detects the change in the angular position of the dial pointer in a dial gage. The angular change is proportional to the applied torque. The input to the system is a sequence of images of the torque wrench dial gage taken at different dial pointer positions. The system then reports the angular difference between the different positions. The primary components of this vision system include modules for image acquisition, linear feature extraction and angle measurements. For each of these modules, several techniques were evaluated and the most applicable one was selected. This system has numerous other applications like vision systems to read and calibrate analog instruments.
FRW Cosmology with the Extended Chaplygin Gas
B. Pourhassan
2014-01-01
We propose an extended Chaplygin gas equation of state which recovers a barotropic fluid with quadratic equation of state. We use numerical methods to investigate the behavior of some cosmological parameters such as the scale factor, Hubble expansion parameter, energy density, and deceleration parameter. We also discuss the resulting effective equation of state parameter. Using density perturbations, we investigate the stability of the theory.
Extending DUNE: The dune-xt modules
Leibner, Tobias; Milk, René; Schindler, Felix
2016-01-01
We present our effort to extend and complement the core modules of the Distributed and Unified Numerics Environment DUNE (http://dune-project.org) by a well tested and structured collection of utilities and concepts. We describe key elements of our four modules dune-xt-common, dune-xt-grid, dune-xt-la and dune-xt-functions, which aim at further enabling the programming of generic algorithms within DUNE as well as adding an extra layer of usability and convenience.
Extended Irreversible Thermodynamics
Jou, David
2010-01-01
This is the 4th edition of the highly acclaimed monograph on Extended Irreversible Thermodynamics, a theory that goes beyond the classical theory of irreversible processes. In contrast to the classical approach, the basic variables describing the system are complemented by non-equilibrium quantities. The claims made for extended thermodynamics are confirmed by the kinetic theory of gases and statistical mechanics. The book covers a wide spectrum of applications, and also contains a thorough discussion of the foundations and the scope of the current theories on non-equilibrium thermodynamics. For this new edition, the authors critically revised existing material while taking into account the most recent developments in fast moving fields such as heat transport in micro- and nanosystems or fast solidification fronts in materials sciences. Several fundamental chapters have been revisited emphasizing physics and applications over mathematical derivations. Also, fundamental questions on the definition of non-equil...
Introduction to Extended Electrodynamics
Donev, S
1997-01-01
This paper summarizes the motivations and results obtained so far in the frame of a particular non-linearization of Classical Electrodynamics, which was called Extended Electrodynamics. The main purpose pursued with this non-linear extension of the classical Maxwell's equations is to have a reliable field-theoretical approach in describing (3+1) soliton-like electromagnetic formations, in particular, to build an extended and finite field model of free photons and photon complexes. The first chapter gives a corresponding analysis of Maxwell theory and introduces the new equations. The second chapter gives a full account of the results, including the photon-like solutions, in the vacuum case. A new concept, called scale factor, is defined and successfully used. Two ways for describing the intrinsic angular momentum are given. Interference of two photon-like solutions is also considered. The third chapter considers interaction with external fields (continuous media) on the base of establishing correspondence bet...
Extended Theories of Gravitation
Fatibene Lorenzo
2013-09-01
Within the framework of extended theories of gravitation we shall discuss physical equivalences among different formalisms and classical tests. As suggested by the Ehlers-Pirani-Schild framework, conformal invariance will be preserved and its effect on observational protocols discussed. Accordingly, we shall review standard tests, showing how Palatini f(R)-theories naturally pass solar system tests. Observation protocols will be discussed in this wider framework.
Iba, Yukito
2000-01-01
``Extended Ensemble Monte Carlo'' is a generic term that indicates a set of algorithms which are now popular in a variety of fields in physics and statistical information processing. Exchange Monte Carlo (Metropolis-Coupled Chain, Parallel Tempering), Simulated Tempering (Expanded Ensemble Monte Carlo), and Multicanonical Monte Carlo (Adaptive Umbrella Sampling) are typical members of this family. Here we give a cross-disciplinary survey of these algorithms with special emphasis on the great f...
Humbert, Richard
2010-01-01
A force acting on just part of an extended object (either a solid or a volume of a liquid) can cause all of it to move. That motion is due to the transmission of the force through the object by its material. This paper discusses how the force is distributed throughout the object by a gradient of stress or pressure in it, which creates the local…
Beloff, Laura; Klaus, Malena
2016-01-01
Artist talk / Work-in-progress What is the purpose of a machine or an artifact, like the Fly Printer, that is dislocated, that produces images that have no meaning, no instrumentality, that depict nothing in the world? The biological and the cultural are reunited in this apparatus as a possibilit...... the results. The extended version of the Fly Printer containing the technological perception and DNNs is a collaboration between Laura Beloff and Malene Theres Klaus...
2013-10-03
This package assists in genome assembly. extendFromReads takes as input a set of Illumina (e.g., MiSeq) DNA sequencing reads, a query seed sequence and a direction to extend the seed. The algorithm collects all seed-matching reads (flipping reverse-orientation hits), trims off the seed and additional sequence in the other direction, sorts the remaining sequences alphabetically, and prints them aligned without gaps from the point of seed trimming. This produces a visual display distinguishing the flanks of multi-copy seeds. A companion script hitMates.pl collects the mates of seed-hitting reads, whose alignment reveals longer extensions from the seed. The collect/trim/sort strategy was made iterative and scaled up in the script denovo.pl, for de novo contig assembly. An index is pre-built using indexReads.pl that, for each unique 21-mer found in all the reads, records its fate of extension (whether extendable, blocked by low coverage, or blocked by branching after a duplicated sequence) and other characteristics. Importantly, denovo.pl records all branchings that follow a branching contig endpoint, providing contig-extension information.
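The collect/trim/sort strategy can be sketched in a few lines (toy reads and seed; the function names here are illustrative and are not the package's Perl scripts):

```python
def revcomp(s):
    """Reverse complement of a DNA string."""
    return s.translate(str.maketrans("ACGT", "TGCA"))[::-1]

def extend_from_reads(reads, seed):
    """Collect reads containing the seed in either orientation (flipping
    reverse-orientation hits), trim the seed and everything to its left,
    and return the right-flank extensions sorted alphabetically, so that
    printing them shows an alignment from the point of seed trimming."""
    flanks = []
    for r in reads:
        for read in (r, revcomp(r)):
            i = read.find(seed)
            if i >= 0:
                flanks.append(read[i + len(seed):])
    return sorted(flanks)

reads = ["AAGGCTT", "TTGGCTA", "AAGCCAA"]   # toy reads; third matches only reversed
print(extend_from_reads(reads, "GGCT"))
```

Sorting the trimmed flanks groups identical extensions together, which is what makes the flanks of multi-copy seeds visually distinguishable in the real tool's output.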
Landsat-8 Sensor Characterization and Calibration
Brian Markham
2015-02-01
Landsat-8 was launched on 11 February 2013 with two new Earth imaging sensors to provide a continued data record with the previous Landsats. For Landsat-8, pushbroom technology was adopted, and the reflective bands and thermal bands were split into two instruments. The Operational Land Imager (OLI) is the reflective band sensor and the Thermal Infrared Sensor (TIRS) the thermal. In addition to these fundamental changes, bands were added, spectral bandpasses were refined, dynamic range and data quantization were improved, and numerous other enhancements were implemented. As in previous Landsat missions, the National Aeronautics and Space Administration (NASA) and United States Geological Survey (USGS) cooperated in the development, launch and operation of the Landsat-8 mission. One key aspect of this cooperation was in the characterization and calibration of the instruments and their data. This Special Issue documents the efforts of the joint USGS and NASA calibration team and affiliates to characterize the new sensors and their data for the benefit of the scientific and application users of the Landsat archive. A key scientific use of Landsat data is to assess changes in the land-use and land cover of the Earth's surface over the now 43-year record. [...
Calibration with empirically weighted mean subset
Øjelund, Henrik; Madsen, Henrik; Thyregod, Poul
2002-01-01
In this article a new calibration method called empirically weighted mean subset (EMS) is presented. The method is illustrated using spectral data. Using several near-infrared (NIR) benchmark data sets, EMS is compared to partial least-squares regression (PLS) and interval partial least-squares regression (iPLS). The EMS solution is obtained by calculating the weighted mean of all coefficient vectors for subsets of the same size. The weighting is proportional to SSgamma^(-omega), where SSgamma is the residual sum of squares from a linear regression with subset gamma and omega is a weighting parameter estimated using cross-validation. This construction of the weighting implies that even if some coefficients become numerically small, none will become exactly zero. An efficient algorithm has been implemented in MATLAB to calculate the EMS solution, and the source code has been made available on the Internet.
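A minimal sketch of the EMS construction (synthetic data; the subset size and omega are fixed here rather than chosen by cross-validation, and the brute-force subset enumeration stands in for the efficient algorithm):

```python
import numpy as np
from itertools import combinations

def ems_coefficients(X, y, size, omega=1.0):
    """Weighted mean of least-squares coefficient vectors over all variable
    subsets of a given size, with weights proportional to SS**(-omega),
    SS being each subset's residual sum of squares."""
    p = X.shape[1]
    betas, weights = [], []
    for subset in combinations(range(p), size):
        Xs = X[:, subset]
        b, res, *_ = np.linalg.lstsq(Xs, y, rcond=None)
        ss = float(res[0]) if res.size else float(np.sum((y - Xs @ b) ** 2))
        full = np.zeros(p)
        full[list(subset)] = b              # embed subset coefficients
        betas.append(full)
        weights.append(ss ** (-omega))
    w = np.array(weights)
    return (np.array(betas) * (w / w.sum())[:, None]).sum(axis=0)

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 4))
y = X @ np.array([1.0, 0.5, 0.0, 0.0]) + 0.1 * rng.normal(size=50)
beta = ems_coefficients(X, y, size=2, omega=2.0)
print(np.round(beta, 3))
```

The best-fitting subset dominates the weighted mean, but every subset contributes, which is why small coefficients stay small without ever becoming exactly zero.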
IMU-Based Online Kinematic Calibration of Robot Manipulator
Guanglong Du
2013-01-01
Robot calibration is a useful diagnostic method for improving the positioning accuracy in robot production and maintenance. An online robot self-calibration method based on inertial measurement unit (IMU) is presented in this paper. The method requires that the IMU is rigidly attached to the robot manipulator, which makes it possible to obtain the orientation of the manipulator with the orientation of the IMU in real time. This paper proposed an efficient approach which incorporates Factored Quaternion Algorithm (FQA) and Kalman Filter (KF) to estimate the orientation of the IMU. Then, an Extended Kalman Filter (EKF) is used to estimate kinematic parameter errors. Using this proposed orientation estimation method will result in improved reliability and accuracy in determining the orientation of the manipulator. Compared with the existing vision-based self-calibration methods, the great advantage of this method is that it does not need the complex steps, such as camera calibration, images capture, and corner detection, which make the robot calibration procedure more autonomous in a dynamic manufacturing environment. Experimental studies on a GOOGOL GRB3016 robot show that this method has better accuracy, convenience, and effectiveness than vision-based methods.
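The EKF's recursive prediction-correction loop for parameter estimation can be sketched for a single scalar parameter (toy measurement model, not the robot kinematics; all numerical values are hypothetical):

```python
import numpy as np

# Estimate a constant parameter theta from noisy measurements y = h(theta) + v.
def h(theta):
    return np.sin(theta) + 0.5 * theta        # toy measurement model

def dh(theta):
    return np.cos(theta) + 0.5                # its Jacobian (scalar)

rng = np.random.default_rng(2)
theta_true = 1.2
R, Q = 0.01, 1e-6                             # measurement / process noise variances
theta, P = 0.0, 1.0                           # initial estimate and covariance

for _ in range(300):
    y = h(theta_true) + rng.normal(scale=R**0.5)
    P += Q                                    # predict: parameter modeled as constant
    H = dh(theta)
    K = P * H / (H * P * H + R)               # Kalman gain
    theta += K * (y - h(theta))               # correct using the measurement mismatch
    P *= (1.0 - K * H)

print(round(theta, 3))
```

The mismatch between predicted and observed measurements drives the parameter toward its true value at every step, which is the same mechanism the paper applies to kinematic parameter errors (and the head's exciter/PSS calibration applies to machine models).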
Marine04 marine radiocarbon age calibration, 26-0 ka BP
Hughen, K; Baillie, M; Bard, E; Beck, J; Bertrand, C; Blackwell, P; Buck, C; Burr, G; Cutler, K; Damon, P; Edwards, R; Fairbanks, R; Friedrich, M; Guilderson, T; Kromer, B; McCormac, F; Manning, S; Bronk-Ramsey, C; Reimer, P; Reimer, R; Remmele, S; Southon, J; Stuiver, M; Talamo, S; Taylor, F; van der Plicht, J; Weyhenmeyer, C
2004-11-01
New radiocarbon calibration curves, IntCal04 and Marine04, have been constructed and internationally ratified to replace the terrestrial and marine components of IntCal98. The new calibration datasets extend an additional 2000 years, from 0-26 ka cal BP (Before Present, 0 cal BP = AD 1950), and provide much higher resolution, greater precision and more detailed structure than IntCal98. For the Marine04 curve, dendrochronologically dated tree-ring samples, converted with a box-diffusion model to marine mixed-layer ages, cover the period from 0-10.5 ka cal BP. Beyond 10.5 ka cal BP, high-resolution marine data become available from foraminifera in varved sediments and U/Th-dated corals. The marine records are corrected with site-specific 14C reservoir age information to provide a single global marine mixed-layer calibration from 10.5-26.0 ka cal BP. A substantial enhancement relative to IntCal98 is the introduction of a random walk model, which takes into account the uncertainty in both the calendar age and the radiocarbon age to calculate the underlying calibration curve. The marine datasets and calibration curve for marine samples from the surface mixed layer (Marine04) are discussed here. The tree-ring datasets, sources of uncertainty, and regional offsets are presented in detail in a companion paper by Reimer et al.
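As a schematic of how such a curve is applied to a measurement (a synthetic monotonic curve stands in for Marine04 here; real curves have wiggles and per-point uncertainties, which is exactly what the random walk model and probabilistic calibration software handle):

```python
import numpy as np

# Toy calibration curve: maps calendar age (cal BP) to conventional 14C age.
cal_age = np.linspace(0, 26000, 2601)
c14_age = cal_age * 0.95 + 400                # hypothetical smooth, monotonic curve

def calibrate(c14_measured):
    """Invert the curve by interpolation: find the calendar age whose
    curve value matches the measured 14C age (valid only because this
    toy curve is monotonic)."""
    return float(np.interp(c14_measured, c14_age, cal_age))

print(calibrate(10_000.0))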
Dong, Ren G.; Welcome, Daniel E.; McDowell, Thomas W.; Wu, John Z.
2015-01-01
While simulations of the measured biodynamic responses of the whole human body or body segments to vibration are conventionally interpreted as summaries of biodynamic measurements, and the resulting models are considered quantitative, this study looked at these simulations from a different angle: model calibration. The specific aims of this study are to review and clarify the theoretical basis for model calibration, to help formulate the criteria for calibration validation, and to help appropriately select and apply calibration methods. In addition to established vibration theory, a novel theorem of mechanical vibration is also used to enhance the understanding of the mathematical and physical principles of the calibration. Based on this enhanced understanding, a set of criteria was proposed and used to systematically examine the calibration methods. Besides theoretical analyses, a numerical testing method is also used in the examination. This study identified the basic requirements for each calibration method to obtain a unique calibration solution. This study also confirmed that the solution becomes more robust if more than sufficient calibration references are provided. Practically, however, as more references are used, more inconsistencies can arise among the measured data for representing the biodynamic properties. To help account for the relative reliabilities of the references, a baseline weighting scheme is proposed. The analyses suggest that the best choice of calibration method depends on the modeling purpose, the model structure, and the availability and reliability of representative reference data. PMID:26740726
Parallel-plate rheometer calibration using oil and lattice Boltzmann simulation
Ferraris, Chiara F; Geiker, Mette Rica; Martys, Nicos S.
2007-01-01
compute the viscosity. This paper presents a modified parallel plate rheometer, and proposes means of calibration using standard oils and numerical simulation of the flow. A lattice Boltzmann method was used to simulate the flow in the modified rheometer, thus using an accurate numerical solution in place...
GIADA improved calibration of measurement subsystems
Della Corte, V.; Rotundi, A.; Sordini, R.; Accolla, M.; Ferrari, M.; Ivanovski, S.; Lucarelli, F.; Mazzotta Epifani, E.; Palumbo, P.
2014-12-01
GIADA (Grain Impact Analyzer and Dust Accumulator) is an in-situ instrument devoted to measure the dynamical properties of the dust grains emitted by the comet. An Extended Calibration activity using the GIADA Flight Spare Model has been carried out taking into account the knowledge gained through the analyses of IDPs and cometary samples returned from comet 81P/Wild 2. GIADA consists of three measurement subsystems: Grain Detection System, an optical device measuring the optical cross-section for individual dust; Impact Sensor an aluminum plate connected to 5 piezo-sensors measuring the momentum of impacting single dust grains; Micro Balance System measuring the cumulative deposition in time of dust grains smaller than 10 μm. The results of the analyses on data acquired with the GIADA PFM and the comparison with calibration data acquired during the pre-launch campaign allowed us to improve GIADA performances and capabilities. We will report the results of the following main activities: a) definition of a correlation between the 2 GIADA Models (PFM housed in laboratory and In-Flight Model on-board ROSETTA); b) characterization of the sub-systems performances (signal elaboration, sensitivities, space environment effects); c) new calibration measurements and related curves by means of the PFM model using realistic cometary dust analogues. Acknowledgements: GIADA was built by a consortium led by the Univ. Napoli "Parthenope" & INAF-Oss. Astr. Capodimonte, IT, in collaboration with the Inst. de Astrofisica de Andalucia, ES, Selex-ES s.p.a. and SENER. GIADA is presently managed & operated by Ist. di Astrofisica e Planetologia Spaziali-INAF, IT. GIADA was funded and managed by the Agenzia Spaziale Italiana, IT, with a support of the Spanish Ministry of Education and Science MEC, ES. GIADA was developed from a University of Kent, UK, PI proposal; sci. & tech. contribution given by CISAS, IT, Lab. d'Astr. Spat., FR, and Institutions from UK, IT, FR, DE and USA. We thank
Inequalities of extended beta and extended hypergeometric functions.
Mondal, Saiful R
2017-01-01
We study the log-convexity of the extended beta functions. As a consequence, we establish Turán-type inequalities. The monotonicity, log-convexity, log-concavity of extended hypergeometric functions are deduced by using the inequalities on extended beta functions. The particular cases of those results also give the Turán-type inequalities for extended confluent and extended Gaussian hypergeometric functions. Some reverses of Turán-type inequalities are also derived.
Numerical Hydrodynamics in General Relativity
Font José A.
2003-01-01
The current status of numerical solutions for the equations of ideal general relativistic hydrodynamics is reviewed. With respect to an earlier version of the article, the present update provides additional information on numerical schemes, and extends the discussion of astrophysical simulations in general relativistic hydrodynamics. Different formulations of the equations are presented, with special mention of conservative and hyperbolic formulations well-adapted to advanced numerical methods. A large sample of available numerical schemes is discussed, paying particular attention to solution procedures based on schemes exploiting the characteristic structure of the equations through linearized Riemann solvers. A comprehensive summary of astrophysical simulations in strong gravitational fields is presented. These include gravitational collapse, accretion onto black holes, and hydrodynamical evolutions of neutron stars. The material contained in these sections highlights the numerical challenges of various representative simulations. It also follows, to some extent, the chronological development of the field, concerning advances on the formulation of the gravitational field and hydrodynamic equations and the numerical methodology designed to solve them.
Calibration of the SNO+ experiment
Maneira, J.; Falk, E.; Leming, E.; Peeters, S.; SNO+ collaboration.
2017-09-01
The main goal of the SNO+ experiment is to perform a low-background and high-isotope-mass search for neutrinoless double-beta decay, employing 780 tonnes of liquid scintillator loaded with tellurium, in its initial phase at 0.5% by mass for a total mass of 1330 kg of 130Te. The SNO+ physics program also includes measurements of geo- and reactor neutrinos, supernova and solar neutrinos. Calibrations are an essential component of the SNO+ data-taking and analysis plan. The achievement of the physics goals requires both extensive and regular calibration. This serves several goals: the measurement of several detector parameters, the validation of the simulation model and the constraint of systematic uncertainties on the reconstruction and particle identification algorithms. SNO+ faces stringent radiopurity requirements which, in turn, largely determine the materials selection, sealing and overall design of both the sources and deployment systems. In fact, to avoid frequent access to the inner volume of the detector, several permanent optical calibration systems have been developed and installed outside that volume. At the same time, the calibration source internal deployment system was re-designed as a fully sealed system, with more stringent material selection, but following the same working principle as the system used in SNO. This poster described the overall SNO+ calibration strategy, discussed the several new and innovative sources, both optical and radioactive, and covered the developments on source deployment systems.
The KLOE Online Calibration System
Pasqualucci, E.
2001-01-01
Based on the features of the KLOE online software, the online calibration system performs calibration quality checking in real time and automatically starts new calibration procedures when needed. A calibration manager process controls the system, implementing the interface to the online system, receiving information from the run control and translating its state transitions to a separate state machine. It acts as a "calibration run controller" and performs failure recovery when requested by a set of process checkers. The core of the system is a multi-threaded OO histogram server that receives histogramming commands from remote processes and operates on local ROOT histograms. A client library and C, Fortran and C++ application interface libraries allow the user to connect and define his own histograms, or read histograms owned by others, using an HBOOK-like interface. Several calibration processes running in parallel in a distributed, multi-platform environment can fill the same histograms, allowing fast external information checks. A monitor thread allows remote browsing for visual inspection. Pre-filtered data are read in non-privileged spy mode from the data acquisition system via the KLOE Integrated Dataflow. The main characteristics of the system are presented.
Finite element model calibration using frequency responses with damping equalization
Abrahamsson, T. J. S.; Kammer, D. C.
2015-10-01
Model calibration is a cornerstone of the finite element verification and validation procedure, in which the credibility of the model is substantiated by positive comparison with test data. The calibration problem, in which the minimum deviation between finite element model data and experimental data is searched for, is normally characterized as being a large scale optimization problem with many model parameters to solve for and with deviation metrics that are nonlinear in these parameters. The calibrated parameters need to be found by iterative procedures, starting from initial estimates. Sometimes these procedures get trapped in local deviation function minima and do not converge to the globally optimal calibration solution that is searched for. The reason for such traps is often the multi-modality of the problem which causes eigenmode crossover problems in the iterative variation of parameter settings. This work presents a calibration formulation which gives a smooth deviation metric with a large radius of convergence to the global minimum. A damping equalization method is suggested to avoid the mode correlation and mode pairing problems that need to be solved in many other model updating procedures. By this method, the modal damping of a test data model and the finite element model is set to be the same fraction of critical modal damping. Mode pairing for mapping of experimentally found damping to the finite element model is thus not needed. The method is combined with model reduction for efficiency and employs the Levenberg-Marquardt minimizer with randomized starts to achieve the calibration solution. The performance of the calibration procedure, including a study of parameter bias and variance under noisy data conditions, is demonstrated by two numerical examples.
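The minimizer-with-restarts idea can be sketched on a small multi-modal least-squares problem (toy model y = sin(a*x) + b, not a finite element deviation metric; a deterministic grid of starting points stands in for the randomized starts):

```python
import numpy as np

# Synthetic "test data" from a model with a multi-modal least-squares cost.
x = np.linspace(0.0, 4.0, 60)
a_true, b_true = 2.5, 0.3
y = np.sin(a_true * x) + b_true

def residual(p):
    return np.sin(p[0] * x) + p[1] - y

def jacobian(p):
    return np.column_stack([x * np.cos(p[0] * x), np.ones_like(x)])

def lm(p, lam=1e-3, iters=100):
    """Basic Levenberg-Marquardt: damped Gauss-Newton steps, with the
    damping lam decreased on accepted steps and increased on rejected ones."""
    for _ in range(iters):
        r, J = residual(p), jacobian(p)
        step = np.linalg.solve(J.T @ J + lam * np.eye(2), -J.T @ r)
        if np.sum(residual(p + step) ** 2) < np.sum(r ** 2):
            p, lam = p + step, lam / 3.0      # accept, trust the model more
        else:
            lam *= 3.0                        # reject, damp more
    return p

# Multiple starts guard against the local minima of the multi-modal cost;
# the fit with the smallest residual sum of squares wins.
starts = [np.array([a0, 0.0]) for a0 in np.linspace(0.5, 4.0, 8)]
fits = [lm(p0) for p0 in starts]
best = min(fits, key=lambda p: np.sum(residual(p) ** 2))
print(np.round(best, 3))
```

Starts in the wrong basin converge to inferior local minima, but selecting the fit with the lowest residual recovers the global solution, which is the role the randomized restarts play in the calibration procedure.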
Clusters of Monoisotopic Elements for Calibration in (TOF) Mass Spectrometry
Kolářová, Lenka; Prokeš, Lubomír; Kučera, Lukáš; Hampl, Aleš; Peňa-Méndez, Eladia; Vaňhara, Petr; Havel, Josef
2016-12-01
Precise calibration in TOF MS requires suitable and reliable standards, which are not always available for high masses. We evaluated inorganic clusters of the monoisotopic elements gold and phosphorus (Au n +/Au n - and P n +/P n -) as an alternative to peptides or proteins for the external and internal calibration of mass spectra in various experimental and instrumental scenarios. Monoisotopic gold or phosphorus clusters can be easily generated in situ from suitable precursors by laser desorption/ionization (LDI) or matrix-assisted laser desorption/ionization mass spectrometry (MALDI-MS). Their use offers numerous advantages, including simplicity of preparation, biological inertness, and exact mass determination even at lower mass resolution. We used citrate-stabilized gold nanoparticles to generate gold calibration clusters, and red phosphorus powder to generate phosphorus clusters. Both elements can be added to samples to perform internal calibration up to mass-to-charge (m/z) 10-15,000 without significantly interfering with the analyte. We demonstrated the use of the gold and phosphorus clusters in the MS analysis of complex biological samples, including microbial standards and total extracts of mouse embryonic fibroblasts. We believe that clusters of monoisotopic elements could be used as generally applicable calibrants for complex biological samples.
Extended Testability Analysis Tool
Melcher, Kevin; Maul, William A.; Fulton, Christopher
2012-01-01
The Extended Testability Analysis (ETA) Tool is a software application that supports fault management (FM) by performing testability analyses on the fault propagation model of a given system. Fault management includes the prevention of faults through robust design margins and quality assurance methods, or the mitigation of system failures. Fault management requires an understanding of the system design and operation, potential failure mechanisms within the system, and the propagation of those potential failures through the system. The purpose of the ETA Tool software is to process the testability analysis results from a commercial software program called TEAMS Designer in order to provide a detailed set of diagnostic assessment reports. The ETA Tool is a command-line process with several user-selectable report output options. The ETA Tool also extends the COTS testability analysis and enables variation studies with sensor sensitivity impacts on system diagnostics and component isolation using a single testability output. The ETA Tool can also provide extended analyses from a single set of testability output files. The following analysis reports are available to the user: (1) the Detectability Report provides a breakdown of how each tested failure mode was detected, (2) the Test Utilization Report identifies all the failure modes that each test detects, (3) the Failure Mode Isolation Report demonstrates the system's ability to discriminate between failure modes, (4) the Component Isolation Report demonstrates the system's ability to discriminate between failure modes relative to the components containing the failure modes, (5) the Sensor Sensitivity Analysis Report shows the diagnostic impact due to loss of sensor information, and (6) the Effect Mapping Report identifies failure modes that result in specified system-level effects.
J.H Sossa Azuela; R. Barrón Fernández
2007-01-01
The αβ associative memories recently developed in Ref. 10 have proven to be powerful tools for memorizing and recalling patterns when they appear distorted by noise. However, they are only useful in the binary case. In this paper we show that it is possible to extend these memories to the gray-level case. To get the desired extension, we take the original operators α and β, the foundation of the αβ memories, and propose a more general family of operators. We find t...
Sakatani, Yuho
2016-01-01
We propose a novel approach to the brane worldvolume theory based on the geometry of extended field theories; double field theory and exceptional field theory. We demonstrate the effectiveness of this approach by showing that one can reproduce the conventional bosonic string/membrane actions, and the M5-brane action in the weak field approximation. At a glance, the proposed 5-brane action without approximation looks different from the known M5-brane actions, but it is consistent with the known non-linear self-duality relation, and it may provide a new formulation of a single M5-brane action. Actions for exotic branes are also discussed.
Reduced Ambiguity Calibration for LOFAR
Yatawatta, Sarod
2012-01-01
Interferometric calibration always yields non-unique solutions. It is therefore essential to remove these ambiguities before the solutions can be used in any further modeling of the sky, the instrument, or propagation effects such as the ionosphere. We present a method for LOFAR calibration which does not yield a unitary ambiguity, especially under ionospheric distortions. We also present, in closed form, the exact ambiguities we get in our solutions. Casting this as an optimization problem, we also present conditions for this approach to work. The proposed method enables us to use the solutions obtained via calibration for further modeling of instrumental and propagation effects. We provide extensive simulation results on the performance of our method. Moreover, we also give cases where, due to degeneracy, this method fails to perform as expected, and in such cases we suggest exploiting diversity in time, space and frequency.
Reliability-Based Code Calibration
Faber, M.H.; Sørensen, John Dalsgaard
2003-01-01
The present paper addresses fundamental concepts of reliability based code calibration. First basic principles of structural reliability theory are introduced and it is shown how the results of FORM based reliability analysis may be related to partial safety factors and characteristic values....... Thereafter the code calibration problem is presented in its principal decision theoretical form and it is discussed how acceptable levels of failure probability (or target reliabilities) may be established. Furthermore suggested values for acceptable annual failure probabilities are given for ultimate...... and serviceability limit states. Finally the paper describes the Joint Committee on Structural Safety (JCSS) recommended procedure - CodeCal - for the practical implementation of reliability based code calibration of LRFD based design codes....
Generic methodology for calibrating profiling nacelle lidars
Borraccino, Antoine; Courtney, Michael; Wagner, Rozenn
is calibrated rather than a reconstructed parameter. This contribution presents a generic methodology to calibrate profiling nacelle-mounted lidars. The application of profiling lidars to wind turbine power performance and corresponding need for calibration procedures is introduced in relation to metrological...... standards. Further, two different calibration procedure concepts are described along with their strengths and weaknesses. The main steps of the generic methodology are then explained and illustrated by calibration results from two types of profiling lidars. Finally, measurement uncertainty assessment...
Flexible calibration procedure for fringe projection profilometry
Vargas, Javier; Quiroga Mellado, Juan Antonio; Terrón López, María José
2007-01-01
A novel calibration method for whole field three-dimensional shape measurement by means of fringe projection is presented. Standard calibration techniques, polynomial-and model-based, have practical limitations such as the difficulty of measuring large fields of view, the need to use precise z stages, and bad calibration results due to inaccurate calibration points. The proposed calibration procedure is a mixture of the two main standard techniques, sharing their benefits and avoiding their m...
A numerical method for determining the radial wave motion correction in plane wave couplers
Cutanda Henriquez, Vicente; Barrera Figueroa, Salvador; Torras Rosell, Antoni
2016-01-01
solution is an analytical expression that estimates the difference between the ideal plane wave sound field and a more complex lossless sound field created by a non-planar movement of the microphone’s membranes. Alternatively, a correction may be calculated numerically by introducing a full model......Microphones are used for realising the unit of sound pressure level, the pascal (Pa). Electro-acoustic reciprocity is the preferred method for the absolute determination of the sensitivity. This method can be applied in different sound fields: uniform pressure, free field or diffuse field. Pressure...... calibration, carried out in plane wave couplers, is the most extended. Here plane wave propagation is assumed. While this assumption is valid at low and mid frequencies, it fails at higher frequencies because the membrane of the microphones is not moving uniformly, and there are viscous losses. An existing...
Tank calibration; Arqueacao de tanques
Chan, Ana [PETROBRAS S.A., Rio de Janeiro, RJ (Brazil)
2003-07-01
This work presents an analysis of the ISO (International Organization for Standardization) standards for the calibration of vertical cylindrical tanks used in fiscal measurement, established in Joint Regulation No. 1 of June 19, 2000 between the ANP (National Agency of Petroleum) and INMETRO (National Institute of Metrology, Normalization and Industrial Quality). A comparison was made between the ISO standards and the standards published by the API (American Petroleum Institute) and the IP (Institute of Petroleum) up to 2001. It was concluded that the ISO standards are broader than the API and IP standards and the INMETRO methods for the calibration of vertical cylindrical tanks. (author)
Instrument Calibration and Certification Procedure
Davis, R. Wesley [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)
2016-05-31
The Amptec 640SL-2 is a 4-wire Kelvin failsafe resistance meter, designed to reliably use very low test currents for its resistance measurements. The 640SL-1 is a 2-wire version, designed to support customers using the Reynolds Industries type 311 connector. For both versions, a passive (analog) dual-function DC milliammeter/voltmeter allows the user to verify the actual 640SL output current level and the open-circuit voltage on the test leads. This procedure includes tests of essential performance parameters. Any malfunction noticed during calibration, whether specifically tested for or not, shall be corrected before calibration continues or is completed.
Performance standard for dose Calibrator
Darmawati, S
2002-01-01
A dose calibrator is an instrument used in hospitals to determine the activity of radionuclides for nuclear medicine purposes. The International Electrotechnical Commission (IEC) has published the IEC 1303:1994 standard, which can be used as guidance to test the performance of the instrument. This paper briefly describes the content of that document and explains the assessment that has been carried out to test instrument accuracy in Indonesia through intercomparison measurements. It is suggested that hospitals engage a medical physicist to perform the test for their dose calibrators. The need for a performance standard in the form of an Indonesian Standard is also touched upon.
Steinwand, Daniel R.; Maddox, Brian; Beckmann, Tim; Hamer, George
2003-01-01
Beowulf clusters can provide a cost-effective way to compute numerical models and process large amounts of remote sensing image data. Usually a Beowulf cluster is designed to accomplish a specific set of processing goals, and processing is very efficient when the problem remains inside the constraints of the original design. There are cases, however, when one might wish to compute a problem that is beyond the capacity of the local Beowulf system. In these cases, spreading the problem to multiple clusters or to other machines on the network may provide a cost-effective solution.
Solute drag on perfect and extended dislocations
Sills, R. B.; Cai, W.
2016-04-01
The drag force exerted on a moving dislocation by a field of mobile solutes is studied in the steady state. The drag force is numerically calculated as a function of the dislocation velocity for both perfect and extended dislocations. The sensitivity of the non-dimensionalized force-velocity curve to the various controlling parameters is assessed, and an approximate analytical force-velocity expression is given. A non-dimensional parameter S characterizing the strength of the solute-dislocation interaction, the background solute fraction, and the dislocation character angle are found to have the strongest influence on the force-velocity curve. Within the model considered here, a perfect screw dislocation experiences no solute drag, but an extended screw dislocation experiences a non-zero drag force that is about 10 to 30% of the drag on an extended edge dislocation. The solutes can change the spacing between the Shockley partials in both stationary and moving extended dislocations, even when the stacking fault energy remains unaltered. Under certain conditions, the solutes destabilize an extended dislocation by either collapsing it into a perfect dislocation or causing the partials to separate unboundedly. It is proposed that the latter instability may lead to the formation of large faulted areas and deformation twins in low stacking fault energy materials containing solutes, consistent with experimental observations of copper and stainless steel containing hydrogen.
Kavka, P.; Jeřábek, J.; Strouhal, L.
2016-12-01
The contribution presents a numerical model, SMODERP, that is used for calculation and prediction of surface runoff and soil erosion from agricultural land. The physically based model includes the processes of infiltration (Phillips equation), surface runoff routing (kinematic wave based equation), surface retention, surface roughness and vegetation impact on runoff. The model is being developed at the Department of Irrigation, Drainage and Landscape Engineering, Civil Engineering Faculty, CTU in Prague. A 2D version of the model was introduced in recent years. The script uses ArcGIS system tools for data preparation. The physical relations are implemented through Python scripts. The main computational part operates standalone on NumPy arrays. Flow direction is calculated by the Steepest Descent algorithm and by a multiple flow algorithm. Sheet flow is described by a modified kinematic wave equation. Parameters for five different soil textures were calibrated on a set of one hundred measurements performed on laboratory and field rainfall simulators. Spatially distributed models enable estimation not only of surface runoff but also of flow in the rills. Development of the rills is based on critical shear stress and critical velocity. For modelling of the rills a specific sub-model was created. This sub-model uses the Manning formula for flow estimation. Flow in ditches and streams is also computed. Numerical stability of the model is controlled by the Courant criterion. The spatial scale is fixed. The time step is dynamic and depends on the actual discharge. The model is used in the framework of the project "Variability of Short-term Precipitation and Runoff in Small Czech Drainage Basins and its Influence on Water Resources Management". The main goal of the project is to elaborate a methodology and online utility for deriving short-term design precipitation series, which could be utilized by a broad community of scientists, state administration as well as design planners. The methodology will account for
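The combination of a kinematic-wave sheet-flow equation with a Courant-limited dynamic time step, as used above, can be sketched in a few lines. The following is a simplified 1D illustration with invented parameter values, not the SMODERP code itself:

```python
import numpy as np

# 1D kinematic-wave sheet flow on a plane: dh/dt + dq/dx = rain, with the
# Manning relation q = (sqrt(S0)/n) * h**(5/3).  The time step is chosen
# dynamically from the Courant criterion, as in the model described above.
def simulate(rain=1e-5, S0=0.05, n_man=0.1, L=100.0, nx=200, t_end=1500.0, cfl=0.5):
    alpha = np.sqrt(S0) / n_man          # Manning coefficient: q = alpha*h^(5/3)
    dx = L / nx
    h = np.zeros(nx)                     # water depth [m]
    t = 0.0
    while t < t_end:
        q = alpha * h ** (5.0 / 3.0)
        c = alpha * (5.0 / 3.0) * h ** (2.0 / 3.0)       # kinematic celerity dq/dh
        dt = min(cfl * dx / max(c.max(), 1e-6), 1.0, t_end - t)  # Courant limit
        dqdx = np.diff(np.concatenate(([0.0], q))) / dx  # upwind, zero inflow at top
        h = np.maximum(h + dt * (rain - dqdx), 0.0)
        t += dt
    return h, alpha * h ** (5.0 / 3.0)

h, q = simulate()
print(q[-1])  # outlet discharge approaches rain * L = 1e-3 m^2/s at steady state
```

At equilibrium the outlet unit-width discharge must balance the rainfall over the whole plane, which gives a simple check of the scheme; the dynamic step keeps the Courant number below the chosen limit as the flow accelerates.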
Experimentation and direct numerical simulation of self-similar convergent detonation wave
Bozier O.
2011-01-01
The propagation of a self-similar convergent detonation wave in a TATB-based explosive composition was studied both experimentally and numerically. The device consists of a 50 mm cylinder of TATB surrounded by an HMX tube. The detonation in HMX overdrives the detonation in TATB, which adapts to the propagation velocity with a convergent front at the centerline. We measured a curvature of κ = −21.2 m−1 for a propagation velocity of 8750 m/s, which extends the knowledge of the (Dn, κ) law. A wide-range EOS/reaction-rate model inspired by the previous work of Wescott et al. was calibrated to reproduce both the run-to-detonation distance and the newly extended (Dn, κ) law for the 1D slightly curved detonation theory. 2D Direct Numerical Simulations (DNS) were made on a finely resolved mesh grid for the experimental configuration and for various driver velocities. The simulation reproduces the experimental data both qualitatively (overall detonation structure) and quantitatively (κ = −25.4 m−1).
Santos, C. Almeida; Costa, C. Oliveira; Batista, J.
2016-05-01
The paper describes a kinematic model-based solution to estimate simultaneously the calibration parameters of the vision system and the full motion (6-DOF) of large civil engineering structures, namely of long-deck suspension bridges, from a sequence of stereo images captured by digital cameras. Using an arbitrary number of images and assuming a smooth structure motion, an Iterated Extended Kalman Filter is used to recursively estimate the projection matrices of the cameras and the structure's full motion (displacement and rotation) over time, supporting structural health monitoring requirements. Results related to the performance evaluation, obtained by numerical simulation and with real experiments, are reported. The real experiments were carried out in indoor and outdoor environments using a reduced structure model to impose controlled motions. In both cases, the results obtained with a minimum setup comprising only two cameras and four non-coplanar tracking points showed highly accurate results for on-line camera calibration and structure full-motion estimation.
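The recursive prediction-correction idea behind the (Iterated) Extended Kalman Filter used above can be shown on a much smaller problem. The toy example below, with entirely illustrative dynamics and noise levels rather than the bridge/camera model, jointly estimates the state of an oscillator and one unknown model parameter (its stiffness) via state augmentation:

```python
import numpy as np

def ekf_calibrate(z, dt=0.01, r=1e-2):
    """Joint state/parameter EKF for x'' = -k*x, k unknown, position measured."""
    s = np.array([0.0, 0.0, 1.0])            # [position, velocity, k guess]
    P = np.diag([1.0, 1.0, 4.0])
    Q = np.diag([1e-8, 1e-8, 1e-6])          # small random walk keeps k adaptable
    H = np.array([[1.0, 0.0, 0.0]])
    for zi in z:
        x, v, k = s
        s = np.array([x + dt * v, v - dt * k * x, k])   # predict (explicit Euler)
        F = np.array([[1.0,     dt,  0.0],
                      [-dt * k, 1.0, -dt * x],
                      [0.0,     0.0, 1.0]])             # transition Jacobian
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + r                             # innovation covariance
        K = P @ H.T / S                                 # Kalman gain
        s = s + (K * (zi - s[0])).ravel()               # correct with the residual
        P = (np.eye(3) - K @ H) @ P
    return s

# synthetic measurements: true stiffness k = 4.0, noisy position observations
rng = np.random.default_rng(1)
dt, k_true = 0.01, 4.0
x, v, traj = 1.0, 0.0, []
for _ in range(4000):
    x, v = x + dt * v, v - dt * k_true * x
    traj.append(x + 0.05 * rng.standard_normal())
s_hat = ekf_calibrate(np.array(traj))
print(s_hat[2])   # estimated stiffness; the true value is 4.0
```

The same mismatch-driven correction at every time step is what both the camera-calibration application above and the exciter/PSS calibration use, only with much larger state vectors.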
Extending juvenility in grasses
Kaeppler, Shawn; de Leon Gatti, Natalia; Foerster, Jillian
2017-04-11
The present invention relates to compositions and methods for modulating the juvenile to adult developmental growth transition in plants, such as grasses (e.g. maize). In particular, the invention provides methods for enhancing agronomic properties in plants by modulating expression of GRMZM2G362718, GRMZM2G096016, or homologs thereof. Modulation of expression of one or more additional genes which affect juvenile to adult developmental growth transition such as Glossy15 or Cg1, in conjunction with such modulation of expression is also contemplated. Nucleic acid constructs for down-regulation of GRMZM2G362718 and/or GRMZM2G096016 are also contemplated, as are transgenic plants and products produced there from, that demonstrate altered, such as extended juvenile growth, and display associated phenotypes such as enhanced yield, improved digestibility, and increased disease resistance. Plants described herein may be used, for example, as improved forage or feed crops or in biofuel production.
Capozziello, Salvatore
2011-01-01
Extended Theories of Gravity can be considered a new paradigm to cure shortcomings of General Relativity at infrared and ultraviolet scales. They are an approach that, by preserving the undoubtedly positive results of Einstein's Theory, is aimed to address conceptual and experimental problems recently emerged in Astrophysics, Cosmology and High Energy Physics. In particular, the goal is to encompass, in a self-consistent scheme, problems like Inflation, Dark Energy, Dark Matter, Large Scale Structure and, first of all, to give at least an effective description of Quantum Gravity. We review the basic principles that any gravitational theory has to follow. The geometrical interpretation is discussed in a broad perspective in order to highlight the basic assumptions of General Relativity and its possible extensions in the general framework of gauge theories. Principles of such modifications are presented, focusing on specific classes of theories like f (R)-gravity and scalar-tensor gravity in the metric and Pala...
Christensen, Tanya Karoli; Jensen, Torben Juel; Christensen, Marie Herget
Studies of general extenders (GEs), such as Eng. and stuff like that, or something, typically find that it is a feature of youth speech, sometimes correlated with sex and class (e.g. Dubois 1992, Stubbe and Holmes 1995, Cheshire 2007, Tagliamonte and Denis 2010, Pichler and Levey 2011), but only...... that variants with sådan noget, though prevalent across the board, may be stigmatized, since they are produced mainly by young WC males, and exhibit an overall drop in frequency over time. In our paper, we will use GEs in Danish as a case study for evaluating prevailing assumptions about the relationship...... few have a design enabling them to distinguish unequivocally between age grading and communal change. In this paper, we present the results of a large-scale study of GEs in Danish, based on Copenhagen data from the LANCHART corpus, encompassing speech from three age cohorts, of which two have been...
Extended Poisson Exponential Distribution
Anum Fatima
2015-09-01
A new mixture of the Modified Exponential (ME) and Poisson distributions is introduced in this paper. Taking the maximum of Modified Exponential random variables when the sample size follows a zero-truncated Poisson distribution, we derive the new distribution, named the Extended Poisson Exponential distribution. This distribution possesses increasing and decreasing failure rates. The Poisson-Exponential, Modified Exponential and Exponential distributions are special cases of this distribution. We also investigate some mathematical properties of the distribution, along with information entropies and order statistics. The parameters are estimated using the maximum likelihood procedure. Finally, we illustrate a real data application of our distribution.
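The max-of-a-random-sample construction above can be checked numerically. The sketch below uses the plain Exponential special case mentioned in the abstract (the Modified Exponential details are not given here): if Y = max(X_1, ..., X_N) with X_i ~ Exp(rate) and N a zero-truncated Poisson(λ), then E[t^N] = (e^(λt) − 1)/(e^λ − 1) gives the closed-form CDF F_Y(y) = (exp(λ F_X(y)) − 1)/(exp(λ) − 1), which a Monte-Carlo sample should reproduce:

```python
import numpy as np

# Monte-Carlo check of the max-of-random-sample construction (Exponential
# special case): compare the empirical CDF of Y = max(X_1..X_N) against the
# closed form implied by the zero-truncated Poisson generating function.
rng = np.random.default_rng(0)
lam, rate, n_samples = 2.5, 1.0, 100_000

pool = rng.poisson(lam, size=3 * n_samples)       # zero-truncated via rejection
N = pool[pool > 0][:n_samples]
Y = np.array([rng.exponential(1.0 / rate, size=n).max() for n in N])

y = 1.5
F_X = 1.0 - np.exp(-rate * y)                     # Exponential CDF at y
F_closed = np.expm1(lam * F_X) / np.expm1(lam)    # closed-form CDF of Y
F_mc = (Y <= y).mean()                            # empirical CDF of Y
print(F_mc, F_closed)   # the two estimates agree closely
```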
Extended Ewald summation technique
Kylänpää, Ilkka; Räsänen, Esa
2016-09-01
We present a technique to improve the accuracy and to reduce the computational labor in the calculation of long-range interactions in systems with periodic boundary conditions. We extend the well-known Ewald method by using a linear combination of screening Gaussian charge distributions instead of only one. This enables us to find faster converging real-space and reciprocal space summations. The combined simplicity and efficiency of our method is demonstrated, and the scheme is readily applicable to large-scale periodic simulations, classical as well as quantum. Moreover, apart from the required a priori optimization the method is straightforward to include in most routines based on the Ewald method within, e.g., density-functional, molecular dynamics, and quantum Monte Carlo calculations.
Stars with Extended Atmospheres
Sterken, C.
2002-12-01
This Workshop consisted of a full-day meeting of the Working Group "Sterren met Uitgebreide Atmosferen" (SUA, Working Group Stars with Extended Atmospheres), a discussion group founded in 1979 by Kees de Jager, Karel van der Hucht and Pik Sin The. This loose association of astronomers and astronomy students working in the Dutch-speaking part of the Low Countries (The Netherlands and Flanders) organised at regular intervals one-day meetings at the Universities of Utrecht, Leiden, Amsterdam and Brussels. These meetings consisted of the presentation of scientific results by junior as well as senior members of the group, and by discussions between the participants. As such, the SUA meetings became a forum for the exchange of ideas, and for asking questions and advice in an informal atmosphere. Kees de Jager has been chairman of the WG SUA from the beginning in 1979 till today, as the leading source of inspiration. At the occasion of Prof. Kees de Jager's 80th birthday, we decided to collect the presented talks in written form as a Festschrift in honour of this well-respected and much beloved scientist, teacher and friend. The first three papers deal with the personality of Kees de Jager, more specifically with his role as a supervisor and mentor of young researchers and as a catalyst in the research work of his colleagues. And also about his remarkable role in the establishment of astronomy education and research at the University of Brussels. The next presentation is a very detailed review of solar research, a field in which Cees was prominently active for many years. Then follow several papers dealing with stars about which Kees is a true expert: massive stars and extended atmospheres.
Practical intraoperative stereo camera calibration.
Pratt, Philip; Bergeles, Christos; Darzi, Ara; Yang, Guang-Zhong
2014-01-01
Many of the currently available stereo endoscopes employed during minimally invasive surgical procedures have shallow depths of field. Consequently, focus settings are adjusted from time to time in order to achieve the best view of the operative workspace. Invalidating any prior calibration procedure, this presents a significant problem for image guidance applications as they typically rely on the calibrated camera parameters for a variety of geometric tasks, including triangulation, registration and scene reconstruction. While recalibration can be performed intraoperatively, this invariably results in a major disruption to workflow, and can be seen to represent a genuine barrier to the widespread adoption of image guidance technologies. The novel solution described herein constructs a model of the stereo endoscope across the continuum of focus settings, thereby reducing the number of degrees of freedom to one, such that a single view of reference geometry will determine the calibration uniquely. No special hardware or access to proprietary interfaces is required, and the method is ready for evaluation during human cases. A thorough quantitative analysis indicates that the resulting intrinsic and extrinsic parameters lead to calibrations as accurate as those derived from multiple pattern views.
Scalar Calibration of Vector Magnetometers
Merayo, José M.G.; Brauer, Peter; Primdahl, Fritz;
2000-01-01
The calibration parameters of a vector magnetometer are estimated only by the use of a scalar reference magnetometer. The method presented in this paper differs from those previously reported in its linearized parametrization. This allows the determination of three offsets or signals in the absence...
Laboratory panel and radiometer calibration
Deadman, AJ
2011-07-01
A.J. Deadman, I.D. Behnert, N.P. Fox (National Physical Laboratory (NPL), United Kingdom) and D. Griffith (Council for Scientific and Industrial Research (CSIR), South Africa). This paper presents the results...
CALIBRATION OF THE INFRARED OPTOMETER
An infrared optometer for measuring the absolute status of accommodation is subject to a constant error not associated with chromatic aberration or... on optometer accuracy as long as the pupil does not vignette the optometer beam. A modification is described for calibrating the infrared optometer... for an individual subject without using trial lenses or a subjective optometer. (Author)
Measurement System and Calibration report
Kock, Carsten Weber; Vesth, Allan
This Measurement System & Calibration report describes DTU’s measurement system installed at a specific wind turbine. A major part of the sensors has been installed by others (see [1]); the rest of the sensors have been installed by DTU. The results of the measurements, described in this report...
Measurement System and Calibration report
Vesth, Allan; Kock, Carsten Weber
The report describes power curve measurements carried out on a given wind turbine. The measurements are carried out in accordance with Ref. [1]. A site calibration has been carried out, see Ref. [2], and the measured flow correction factors for different wind directions are used in the present...
Measurement System and Calibration report
Gómez Arranz, Paula; Villanueva, Héctor
This Measurement System & Calibration report describes DTU’s measurement system installed at a specific wind turbine. A major part of the sensors has been installed by others (see [1]); the rest of the sensors have been installed by DTU. The results of the measurements, described in this repor...
Buser, R.; Fenkart, R. P.
1990-11-01
This paper presents an extended calibration of the color-magnitude and two-color diagrams and the metal-abundance parameter for the intermediate Population II and the extreme halo dwarfs observed in the Basel Palomar-Schmidt RGU three-color photometric surveys of the galaxy. The calibration covers the metallicity range between values +0.50 and -3.00. It is shown that the calibrations presented are sufficiently accurate to be useful for the future analyses of photographic survey data.
Calibrating the Johnson-Holmquist Ceramic Model for SiC using CTH
Cazamias, James
2009-06-01
The Johnson-Holmquist ceramic material model has been calibrated and successfully applied to numerically simulate ballistic events using the Lagrangian code EPIC. While the majority of the constants are "physics" based, two of the constants for the failed material response are calibrated using ballistic experiments conducted on a confined cylindrical ceramic target. The maximum strength of the failed ceramic is calibrated by matching the penetration velocity. The second constant, the equivalent plastic strain at failure under constant pressure, is calibrated using the dwell time. Use of these two constants in the CTH Eulerian hydrocode does not predict the ballistic response. This difference may be due to the phenomenological nature of the model and the different numerical schemes used by the codes. This paper determines the aforementioned material constants for SiC suitable for simulating ballistic events using CTH.
Can Asymmetry of Solar Activity be Extended into Extended Cycle?
Anonymous
2002-01-01
With the use of the Royal Greenwich Observatory data set of sunspot groups, an attempt is made to examine the north-south asymmetry of solar activity in the "extended" solar cycles. It is inferred that the asymmetry established for individual solar cycles does not extend to the "extended" cycles.
Radio Interferometric Calibration Using a Riemannian Manifold
Yatawatta, Sarod
2013-01-01
In order to cope with the increased data volumes generated by modern radio interferometers such as LOFAR (Low Frequency Array) or SKA (Square Kilometre Array), fast and efficient calibration algorithms are essential. Traditional radio interferometric calibration is performed using nonlinear optimization techniques such as the Levenberg-Marquardt algorithm in Euclidean space. In this paper, we reformulate radio interferometric calibration as a nonlinear optimization problem on a Riemannian manifold. The reformulated calibration problem is solved using the Riemannian trust-region method. We show that calibration on a Riemannian manifold has faster convergence with reduced computational cost compared to conventional calibration in Euclidean space.
Radiation calibration for LWIR Hyperspectral Imager Spectrometer
Yang, Zhixiong; Yu, Chunchao; Zheng, Wei-jian; Lei, Zhenggang; Yan, Min; Yuan, Xiaochun; Zhang, Peizhong
2014-11-01
The radiometric calibration of a LWIR hyperspectral imager spectrometer is presented. A LWIR interferometric hyperspectral imager spectrometer prototype (CHIPED-I) was developed to study laboratory radiometric calibration, and a two-point linear calibration of the spectrometer is carried out using blackbody references. First, the measured relative intensity is converted to the absolute radiance of the object. Then, the radiance of the object is converted to a brightness temperature spectrum by the brightness temperature method. The results indicate that this radiometric calibration method performs well.
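Two-point linear calibration, as described above, amounts to solving for a per-channel gain and offset from views of two blackbodies at known temperatures. The sketch below illustrates the idea with a simulated detector; the band, gain and offset values are illustrative, not CHIPED-I's:

```python
import numpy as np

# Two-point linear radiometric calibration: the detector response is modelled
# as S = G*L + O per spectral channel; two blackbody views give two equations
# per channel, solved for gain G and offset O, which then invert raw counts
# into radiance.
H_PL, C0, KB = 6.626e-34, 2.998e8, 1.381e-23   # Planck, light speed, Boltzmann

def planck(nu_cm, T):
    """Blackbody spectral radiance per wavenumber [W m^-2 sr^-1 (cm^-1)^-1]."""
    nu = nu_cm * 100.0                          # cm^-1 -> m^-1
    return 1e2 * 2 * H_PL * C0**2 * nu**3 / np.expm1(H_PL * C0 * nu / (KB * T))

nu = np.linspace(800.0, 1200.0, 5)              # LWIR band, wavenumbers in cm^-1
T_cold, T_hot = 290.0, 320.0

gain_true, offset_true = 7.5e4, 120.0           # "unknown" instrument response
S_cold = gain_true * planck(nu, T_cold) + offset_true
S_hot = gain_true * planck(nu, T_hot) + offset_true

# two-point calibration: per-channel gain and offset from the two references
gain = (S_hot - S_cold) / (planck(nu, T_hot) - planck(nu, T_cold))
offset = S_cold - gain * planck(nu, T_cold)

# apply to a scene view (a 305 K blackbody here, so the answer is checkable)
S_scene = gain_true * planck(nu, 305.0) + offset_true
L_scene = (S_scene - offset) / gain
print(np.allclose(L_scene, planck(nu, 305.0)))  # True: radiance recovered
```

Converting the recovered radiance to a brightness temperature spectrum, as done above, is then a per-channel inversion of the Planck function.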
On the interpretation of recharge estimates from steady-state model calibrations.
Anderson, William P; Evans, David G
2007-01-01
Ground water recharge is often estimated through the calibration of ground water flow models. We examine the nature of calibration errors by considering some simple mathematical and numerical calculations. From these calculations, we conclude that calibrating a steady-state ground water flow model to water level extremes yields estimates of recharge that have the same value as the time-varying recharge at the time the water levels are measured. These recharge values, however, are a subdued version of the actual transient recharge signal. In addition, calibrating a steady-state ground water flow model to data collected during periods of rising water levels will produce recharge values that underestimate the actual transient recharge. Similarly, calibrating during periods of falling water levels will overestimate the actual transient recharge. We also demonstrate that average water levels can be used to estimate the actual average recharge rate provided that water level data have been collected for a sufficient amount of time.
STUDY OF CALIBRATION OF SOLAR RADIO SPECTROMETERS AND THE QUIET-SUN RADIO EMISSION
Tan, Chengming; Yan, Yihua; Tan, Baolin; Fu, Qijun; Liu, Yuying [Key Laboratory of Solar Activity, National Astronomical Observatories of Chinese Academy of Sciences, Datun Road A20, Chaoyang District, Beijing 100012 (China); Xu, Guirong [Hubei Key Laboratory for Heavy Rain Monitoring and Warning Research, Institute of Heavy Rain, China Meteorological Administration, Wuhan 430205 (China)
2015-07-20
This work presents a systematic investigation of the influence of weather conditions on calibration errors, using Gaussian fitting, least chi-square linear fitting, and wavelet transforms to analyze the calibration coefficients from observations of the Chinese Solar Broadband Radio Spectrometers (at the frequency bands 1.0–2.0 GHz, 2.6–3.8 GHz, and 5.2–7.6 GHz) during 1997–2007. We found that the calibration coefficients are influenced by the local air temperature. With the temperature correction applied, the calibration error is reduced by about 10%–20% at 2800 MHz. Based on the above investigation and the calibration corrections, we further study the radio emission of the quiet Sun by using an appropriate hybrid model of the quiet-Sun atmosphere. The results indicate that the numerical flux of the hybrid model is much closer to the observed flux than those of the other models.
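The temperature correction described above amounts to removing a fitted air-temperature trend from the daily calibration coefficients. A minimal sketch in Python, with synthetic stand-in data (the slope, noise level, and sample size are hypothetical, not values from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical daily calibration coefficients with a linear air-temperature
# dependence plus noise (synthetic stand-in for a single frequency channel).
temp_c = rng.uniform(-5.0, 35.0, size=200)           # local air temperature [C]
coeff = 1.00 + 0.004 * temp_c + rng.normal(0.0, 0.01, size=200)

# Fit the temperature dependence with ordinary least squares (degree-1 polyfit).
slope, intercept = np.polyfit(temp_c, coeff, deg=1)

# Temperature-corrected coefficients: remove the fitted trend, keep the mean level.
corrected = coeff - slope * (temp_c - temp_c.mean())

ratio = np.std(corrected) / np.std(coeff)  # well below 1: the trend dominated the scatter
```

The residual scatter after correction is essentially the measurement noise; the fraction removed depends on how strongly the coefficients track temperature.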
Study of Calibration of Solar Radio Spectrometers and the quiet-Sun Radio Emission
Tan, Chengming; Tan, Baolin; Fu, Qijun; Liu, Yuying; Xu, Guirong
2015-01-01
This work presents a systematic investigation of the influence of weather conditions on calibration errors, using Gaussian fitting, least chi-square linear fitting and wavelet transforms to analyze the calibration coefficients from observations of the Chinese Solar Broadband Radio Spectrometers (at the frequency bands 1.0-2.0 GHz, 2.6-3.8 GHz, and 5.2-7.6 GHz) during 1997-2007. We found that the calibration coefficients are influenced by the local air temperature. With the temperature correction applied, the calibration error is reduced by about 10%-20% at 2800 MHz. Based on the above investigation and the calibration corrections, we further study the radio emission of the quiet Sun by using an appropriate hybrid model of the quiet-Sun atmosphere. The results indicate that the numerical flux of the hybrid model is much closer to the observed flux than those of the other models.
Nianzeng, Che; Grant, Barbara G.; Flittner, David E.; Slater, Philip N.; Biggar, Stuart F.; Jackson, Ray D.; Moran, M. S.
1991-01-01
The calibration method reported here makes use of the reflectances of several large, uniform areas determined from calibrated and atmospherically corrected SPOT Haute Resolution Visible (HRV) scenes of White Sands, New Mexico. These reflectances were used to predict the radiances in the first two channels of the NOAA-11 Advanced Very High Resolution Radiometer (AVHRR). The digital counts in the AVHRR image corresponding to these known reflectance areas were determined by the use of two image registration techniques. The plots of digital counts versus pixel radiance provided the calibration gains and offsets for the AVHRR. A reduction in the gains of 4 and 13 percent in channels 1 and 2 respectively was found during the period 1988-11-19 to 1990-6-21. An error budget is presented for the method and is extended to the case of cross-calibrating sensors on the same orbital platform in the Earth Observing System (EOS) era.
Robust Radio Interferometric Calibration Using the t-Distribution
Kazemi, S
2013-01-01
A major stage of radio interferometric data processing is calibration or the estimation of systematic errors in the data and the correction for such errors. A stochastic error (noise) model is assumed, and in most cases, this underlying model is assumed to be Gaussian. However, outliers in the data due to interference or due to errors in the sky model would have adverse effects on processing based on a Gaussian noise model. Most of the shortcomings of calibration such as the loss in flux or coherence, and the appearance of spurious sources, could be attributed to the deviations of the underlying noise model. In this paper, we propose to improve the robustness of calibration by using a noise model based on Student's t distribution. Student's t noise is a special case of Gaussian noise when the variance is unknown. Unlike Gaussian noise model based calibration, traditional least squares minimization would not directly extend to a case when we have a Student's t noise model. Therefore, we use a variant of the Ex...
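The paper's solver is a variant of Expectation-Maximization applied to the full interferometric model; as a simpler illustration of the same robustness idea, here is an iteratively reweighted least-squares fit under a Student's t noise model with fixed degrees of freedom, on a toy linear problem (all data synthetic, not radio data):

```python
import numpy as np

def irls_student_t(A, y, nu=4.0, iters=50):
    """Robust linear fit under a Student's t noise model via iteratively
    reweighted least squares; large residuals receive small weights."""
    x = np.linalg.lstsq(A, y, rcond=None)[0]           # Gaussian (OLS) start
    for _ in range(iters):
        r = y - A @ x
        scale = 1.4826 * np.median(np.abs(r)) + 1e-12  # robust sigma via MAD
        w = (nu + 1.0) / (nu + (r / scale) ** 2)       # t-model weights
        Aw = A * w[:, None]
        x = np.linalg.solve(A.T @ Aw, Aw.T @ y)        # weighted normal equations
    return x

rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 100)
A = np.column_stack([t, np.ones_like(t)])
y = 2.0 * t + 1.0 + rng.normal(0.0, 0.05, 100)
y[::10] += 5.0                                         # 10% gross outliers

ols = np.linalg.lstsq(A, y, rcond=None)[0]             # pulled away by the outliers
rob = irls_student_t(A, y)                             # stays near slope 2, intercept 1
```

The t weights shrink the influence of outlying samples, which is exactly the behavior that protects calibration against interference and sky-model errors.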
Cloud-Based Model Calibration Using OpenStudio: Preprint
Hale, E.; Lisell, L.; Goldwasser, D.; Macumber, D.; Dean, J.; Metzger, I.; Parker, A.; Long, N.; Ball, B.; Schott, M.; Weaver, E.; Brackney, L.
2014-03-01
OpenStudio is a free, open source Software Development Kit (SDK) and application suite for performing building energy modeling and analysis. The OpenStudio Parametric Analysis Tool has been extended to allow cloud-based simulation of multiple OpenStudio models parametrically related to a baseline model. This paper describes the new cloud-based simulation functionality and presents a model calibration case study. Calibration is initiated by entering actual monthly utility bill data into the baseline model. Multiple parameters are then varied over multiple iterations to reduce the difference between actual energy consumption and model simulation results, as calculated and visualized by billing period and by fuel type. Simulations are performed in parallel using the Amazon Elastic Cloud service. This paper highlights model parameterizations (measures) used for calibration, but the same multi-nodal computing architecture is available for other purposes, for example, recommending combinations of retrofit energy saving measures using the calibrated model as the new baseline.
A standard stellar library for evolutionary synthesis. III. Metallicity calibration
Westera, P.; Lejeune, T.; Buser, R.; Cuisinier, F.; Bruzual, G.
2002-01-01
We extend the colour calibration of the widely used BaSeL standard stellar library (Lejeune et al. 1997, 1998) to non-solar metallicities, down to [Fe/H] ~ -2.0 dex. Surprisingly, we find that at the present epoch it is virtually impossible to establish a unique calibration of UBVRIJHKL colours in terms of stellar metallicity [Fe/H] which is consistent simultaneously with both colour-temperature relations and colour-absolute magnitude diagrams (CMDs) based on observed globular cluster photometry data and on published, currently popular standard stellar evolutionary tracks and isochrones. The problem appears to be related to the long-standing incompleteness in our understanding of convection in late-type stellar evolution, but is also due to a serious lack of relevant observational calibration data that would help resolve, or at least make significant progress towards resolving, this issue. In view of the most important applications of the BaSeL library, we here propose two different metallicity calibration versions: (1) the "WLBC 99" library, which consistently matches empirical colour-temperature relations and which, therefore, should make an ideal tool for the study of individual stars; and (2) the "PADOVA 2000" library, which provides isochrones from the Padova 2000 grid (Girardi et al. 2000) that successfully reproduce Galactic globular-cluster colour-absolute magnitude diagrams and which thus should prove particularly useful for studies of collective phenomena in stellar populations in clusters and galaxies.
Brax, Philippe; Tamanini, Nicola
2016-05-01
We extend the chameleon models by considering scalar-fluid theories where the coupling between matter and the scalar field can be represented by a quadratic effective potential with density-dependent minimum and mass. In this context, we study the effects of the scalar field on Solar System tests of gravity and show that models passing these stringent constraints can still induce large modifications of Newton's law on galactic scales. On these scales we analyze models which could lead to a percent deviation of Newton's law outside the virial radius. We then model the dark matter halo as a Navarro-Frenk-White profile and explicitly find that the fifth force can give large contributions around the galactic core in a particular model where the scalar field mass is constant and the minimum of its potential varies linearly with the matter density. At cosmological distances, we find that this model does not alter the growth of large scale structures and therefore would be best tested on galactic scales, where interesting signatures might arise in the galaxy rotation curves.
SMAP RADAR Calibration and Validation
West, R. D.; Jaruwatanadilok, S.; Chaubel, M. J.; Spencer, M.; Chan, S. F.; Chen, C. W.; Fore, A.
2015-12-01
The Soil Moisture Active Passive (SMAP) mission launched on Jan 31, 2015. The mission employs L-band radar and radiometer measurements to estimate soil moisture with 4% volumetric accuracy at a resolution of 10 km, and freeze-thaw state at a resolution of 1-3 km. Immediately following launch, there was a three month instrument checkout period, followed by six months of level 1 (L1) calibration and validation. In this presentation, we will discuss the calibration and validation activities and results for the L1 radar data. Early SMAP radar data were used to check commanded timing parameters, and to work out issues in the low- and high-resolution radar processors. From April 3-13 the radar collected receive only mode data to conduct a survey of RFI sources. Analysis of the RFI environment led to a preferred operating frequency. The RFI survey data were also used to validate noise subtraction and scaling operations in the radar processors. Normal radar operations resumed on April 13. All radar data were examined closely for image quality and calibration issues which led to improvements in the radar data products for the beta release at the end of July. Radar data were used to determine and correct for small biases in the reported spacecraft attitude. Geo-location was validated against coastline positions and the known positions of corner reflectors. Residual errors at the time of the beta release are about 350 m. Intra-swath biases in the high-resolution backscatter images are reduced to less than 0.3 dB for all polarizations. Radiometric cross-calibration with Aquarius was performed using areas of the Amazon rain forest. Cross-calibration was also examined using ocean data from the low-resolution processor and comparing with the Aquarius wind model function. Using all a-priori calibration constants provided good results with co-polarized measurements matching to better than 1 dB, and cross-polarized measurements matching to about 1 dB in the beta release. During the
Griego, J R
1995-01-01
Some features of extended loops are considered. In particular, the behaviour under diffeomorphism transformations of the wavefunctions with support on the extended loop space is studied. The basis of a method to obtain analytical expressions of diffeomorphism invariants via extended loops is established. Applications to knot theory and quantum gravity are considered.
Extended Mixed Vector Equilibrium Problems
Mijanur Rahaman
2014-01-01
We study extended mixed vector equilibrium problems, namely, the extended weak mixed vector equilibrium problem and the extended strong mixed vector equilibrium problem, in Hausdorff topological vector spaces. Using the generalized KKM-Fan theorem (Ben-El-Mechaiekh et al., 2005), some existence results for both problems are proved in a noncompact domain.
Calibration of hydrological model with programme PEST
Brilly, Mitja; Vidmar, Andrej; Kryžanowski, Andrej; Bezak, Nejc; Šraj, Mojca
2016-04-01
PEST is a tool based on minimization of an objective function related to the root mean square error between model output and measurements. We use the "singular value decomposition" (SVD) section of the PEST control file and the Tikhonov regularization method to estimate model parameters successfully. PEST can fail when the inverse problem is ill-posed, but SVD ensures that PEST maintains numerical stability. The choice of initial parameter values is an important issue in PEST and requires expert knowledge. The flexible nature of the PEST software and its ability to be applied to whole catchments at once allowed the calibration to perform extremely well across a large number of sub-catchments. The parallel computing version of PEST, called BeoPEST, was used successfully to speed up the calibration process. BeoPEST employs smart slaves and point-to-point communications to transfer data between the master and slave computers. The HBV-light model is a simple multi-tank-type model for simulating precipitation-runoff. It is a conceptual water-balance model of catchment hydrology which simulates discharge using rainfall, temperature and estimates of potential evaporation. The HBV-light-CLI version allows the user to run HBV-light from the command line. Input and result files are in XML form, which makes it easy to connect the model with other applications such as pre- and post-processing utilities and PEST itself. The procedure was applied to a hydrological model of the Savinja catchment (1852 km2), which consists of twenty-one sub-catchments. Data are processed on an hourly basis.
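The role of Tikhonov regularization in stabilizing an ill-posed inverse problem can be sketched outside PEST on a toy linear model (generic Python with hypothetical data; PEST itself applies this to the full nonlinear catchment model):

```python
import numpy as np

def tikhonov(A, y, lam):
    """Tikhonov-regularized least squares: minimizes
    ||A x - y||^2 + lam * ||x||^2 (preferred parameter values of zero)."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)

rng = np.random.default_rng(2)
t = np.linspace(0.0, 1.0, 50)
# Nearly collinear columns make the inverse problem ill-posed.
A = np.column_stack([t, t + 1e-6 * rng.normal(size=50)])
y = A @ np.array([1.0, 1.0]) + 1e-3 * rng.normal(size=50)

x_ls = np.linalg.lstsq(A, y, rcond=None)[0]  # unregularized: can be wildly unstable
x_tik = tikhonov(A, y, lam=1e-3)             # regularized: stable, near [1, 1]
```

With a tiny penalty, the ill-determined parameter combination is pinned down while the well-determined combination is essentially untouched, which is the behavior Tikhonov regularization provides inside PEST.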
Calibration of the solar radio spectrometer
[No author listed]
2009-01-01
This paper shows some improvements and new results of the calibration of the Chinese solar radio spectrometer, obtained by analyzing the daily calibration data recorded in the period 1997-2007. First, the calibration coefficient is fitted for three bands (1.0-2.0 GHz, 2.6-3.8 GHz, 5.2-7.6 GHz) of the spectrometer by using the moving-average method, constrained by the properties of the daily calibration data. With this calibration coefficient, the standard deviation of the calibration result was less than 10 sfu for 95% of the frequencies of the 2.6-3.8 GHz band in 2003. This result is better than that calibrated with a constant coefficient. Second, the calibration coefficient is found to be in good correlation with the local air temperature for most frequencies of the 2.6-3.8 GHz band. Moreover, these results are helpful for research on the quiet-Sun radio emission.
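The moving-average fit of the daily calibration coefficients can be sketched as follows (Python with hypothetical daily data; the window length, drift amplitude, and noise level are illustrative, not the paper's values):

```python
import numpy as np

def moving_average(x, window):
    """Centered moving average; endpoints use a shrinking window."""
    n = len(x)
    half = window // 2
    return np.array([x[max(0, i - half): i + half + 1].mean() for i in range(n)])

rng = np.random.default_rng(3)
days = np.arange(365)
# Hypothetical daily coefficients: slow seasonal drift plus day-to-day noise.
daily = 10.0 + 2.0 * np.sin(2 * np.pi * days / 365) + rng.normal(0.0, 0.8, 365)
smooth = moving_average(daily, window=31)
```

The smoothed curve tracks the slow drift while averaging down the daily scatter, which is why a moving-average coefficient outperforms a single constant coefficient.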
Calibration and Validation of Measurement System
Kofoed, Jens Peter; Riemann, Sven; Knapp, Wilfried
The report deals with the calibration of the measuring equipment on board the Wave Dragon, Nissum Bredning prototype.
Crop physiology calibration in the CLM
I. Bilionis
2015-04-01
scalable and adaptive scheme based on sequential Monte Carlo (SMC). The model showed significant improvement of crop productivity with the newly calibrated parameters. We demonstrate that the calibrated parameters are applicable across alternative years and different sites.
Calibration of the solar radio spectrometer
TAN ChengMing; YAN YiHua; TAN BaoLin; XU GuiRong
2009-01-01
This paper shows some improvements and new results of the calibration of the Chinese solar radio spectrometer, obtained by analyzing the daily calibration data recorded in the period 1997-2007. First, the calibration coefficient is fitted for three bands (1.0-2.0 GHz, 2.6-3.8 GHz, 5.2-7.6 GHz) of the spectrometer by using the moving-average method, constrained by the properties of the daily calibration data. With this calibration coefficient, the standard deviation of the calibration result was less than 10 sfu for 95% of the frequencies of the 2.6-3.8 GHz band in 2003. This result is better than that calibrated with a constant coefficient. Second, the calibration coefficient is found to be in good correlation with the local air temperature for most frequencies of the 2.6-3.8 GHz band. Moreover, these results are helpful for research on the quiet-Sun radio emission.
Averaged Extended Tree Augmented Naive Classifier
Aaron Meehan
2015-07-01
This work presents a new general purpose classifier named Averaged Extended Tree Augmented Naive Bayes (AETAN), which is based on combining the advantageous characteristics of Extended Tree Augmented Naive Bayes (ETAN) and Averaged One-Dependence Estimator (AODE) classifiers. We describe the main properties of the approach and algorithms for learning it, along with an analysis of its computational time complexity. Empirical results with numerous data sets indicate that the new approach is superior to ETAN and AODE in terms of both zero-one classification accuracy and log loss. It also compares favourably against weighted AODE and hidden Naive Bayes. The learning phase of the new approach is slower than that of its competitors, while the time complexity for the testing phase is similar. Such characteristics suggest that the new classifier is ideal in scenarios where online learning is not required.
Numerical Integration with Derivatives
Hu Cheng
2006-01-01
A new formula with derivatives for numerical integration was presented. Based on this formula and the Richardson extrapolation process, a numerical integration method was established. It can converge faster than Romberg's method. With the same accuracy, the computation required by the new numerical integration with derivatives is only half of that of Romberg's numerical integration.
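The abstract does not give the new formula itself; as an illustration of the general idea (endpoint derivative information accelerating a quadrature rule), here is the classical Euler-Maclaurin endpoint correction to the trapezoidal rule, which lifts the error order from O(h^2) to O(h^4). This is a standard result, not the paper's method:

```python
import math

def trapezoid(f, a, b, n):
    """Composite trapezoidal rule with n subintervals."""
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return h * s

def corrected_trapezoid(f, df, a, b, n):
    """Trapezoidal rule plus the Euler-Maclaurin endpoint-derivative
    correction h^2/12 * (f'(a) - f'(b)); error drops to O(h^4)."""
    h = (b - a) / n
    return trapezoid(f, a, b, n) + h * h / 12.0 * (df(a) - df(b))

# Exact integral of sin over [0, pi] is 2.
plain = trapezoid(math.sin, 0.0, math.pi, 8)
better = corrected_trapezoid(math.sin, math.cos, 0.0, math.pi, 8)
```

With only 8 subintervals the corrected rule is already several orders of magnitude more accurate than the plain trapezoid, at the cost of two derivative evaluations.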
Calibrating thermal behavior of electronics
Chainer, Timothy J.; Parida, Pritish R.; Schultz, Mark D.
2016-05-31
A method includes determining a relationship between indirect thermal data for a processor and a measured temperature associated with the processor, during a calibration process, obtaining the indirect thermal data for the processor during actual operation of the processor, and determining an actual significant temperature associated with the processor during the actual operation using the indirect thermal data for the processor during actual operation of the processor and the relationship.
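The patent abstract does not specify the form of the relationship; one simple instance is a linear map from indirect thermal data (e.g. on-chip sensor counts) to a reference temperature, fitted during calibration and applied during operation. A sketch with entirely hypothetical numbers:

```python
import numpy as np

rng = np.random.default_rng(4)

# --- Calibration phase: hypothetical indirect thermal data logged alongside
# a reference temperature measurement (gain/offset values are made up).
counts = rng.uniform(1000.0, 4000.0, 50)
true_gain, true_offset = 0.02, 15.0
measured_temp = true_gain * counts + true_offset + rng.normal(0.0, 0.3, 50)

# Determine the relationship (here assumed linear; least-squares fit).
gain, offset = np.polyfit(counts, measured_temp, deg=1)

# --- Operation phase: only the indirect data is available; apply the map.
operating_counts = 2500.0
estimated_temp = gain * operating_counts + offset
```

During operation the fitted map stands in for the direct temperature measurement, which is the essence of the claimed method.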
Calibrating thermal behavior of electronics
Chainer, Timothy J.; Parida, Pritish R.; Schultz, Mark D.
2017-07-11
A method includes determining a relationship between indirect thermal data for a processor and a measured temperature associated with the processor, during a calibration process, obtaining the indirect thermal data for the processor during actual operation of the processor, and determining an actual significant temperature associated with the processor during the actual operation using the indirect thermal data for the processor during actual operation of the processor and the relationship.
Nonlinear Observers for Gyro Calibration
Thienel, Julie; Sanner, Robert M.
2003-01-01
Nonlinear observers for gyro calibration are presented. The first observer estimates a constant gyro bias. The second observer estimates scale factor errors. The third observer estimates the gyro alignment for three orthogonal gyros. The convergence properties of all three observers are discussed. Additionally, all three observers are coupled with a nonlinear control algorithm. The stability of each of the resulting closed loop systems is analyzed. Simulated test results are presented for each system.
Calibrating thermal behavior of electronics
Chainer, Timothy J.; Parida, Pritish R.; Schultz, Mark D.
2017-01-03
A method includes determining a relationship between indirect thermal data for a processor and a measured temperature associated with the processor, during a calibration process, obtaining the indirect thermal data for the processor during actual operation of the processor, and determining an actual significant temperature associated with the processor during the actual operation using the indirect thermal data for the processor during actual operation of the processor and the relationship.
Calibration of a Parallel Kinematic Machine Tool
HE Xiao-mei; DING Hong-sheng; FU Tie; XIE Dian-huang; XU Jin-zhong; LI Hua-feng; LIU Hui-lin
2006-01-01
A calibration method is presented to enhance the static accuracy of a parallel kinematic machine tool by using a coordinate measuring machine and a laser tracker. From the established calibration model and the calibration experiment, the actual values of the 42 kinematic parameters of the BKX-I parallel kinematic machine tool are obtained. Circular tests comparing the calibrated with the uncalibrated parameters show an 80% improvement in the accuracy of this machine tool.
Optimal Reliability-Based Code Calibration
Sørensen, John Dalsgaard; Kroon, I. B.; Faber, M. H.
1994-01-01
Calibration of partial safety factors is considered in general, including classes of structures where no code exists beforehand. The partial safety factors are determined such that the difference between the reliability for the different structures in the class considered and a target reliability level is minimized. Code calibration on a decision-theoretical basis is also considered, and it is shown how target reliability indices can be calibrated. Results from code calibration for rubble mound breakwater designs are shown.
A Careful Consideration of the Calibration Concept
Phillips, S. D.; Estler, W. T.; Doiron, T.; Eberhardt, K. R.; Levenson, M. S.
2001-01-01
This paper presents a detailed discussion of the technical aspects of the calibration process with emphasis on the definition of the measurand, the conditions under which the calibration results are valid, and the subsequent use of the calibration results in measurement uncertainty statements. The concepts of measurement uncertainty, error, systematic error, and reproducibility are also addressed as they pertain to the calibration process. PMID:27500027
On constraining pilot point calibration with regularization in PEST
Fienen, M.N.; Muffels, C.T.; Hunt, R.J.
2009-01-01
Ground water model calibration has made great advances in recent years with practical tools such as PEST being instrumental for making the latest techniques available to practitioners. As models and calibration tools get more sophisticated, however, the power of these tools can be misapplied, resulting in poor parameter estimates and/or nonoptimally calibrated models that do not suit their intended purpose. Here, we focus on an increasingly common technique for calibrating highly parameterized numerical models - pilot point parameterization with Tikhonov regularization. Pilot points are a popular method for spatially parameterizing complex hydrogeologic systems; however, additional flexibility offered by pilot points can become problematic if not constrained by Tikhonov regularization. The objective of this work is to explain and illustrate the specific roles played by control variables in the PEST software for Tikhonov regularization applied to pilot points. A recent study encountered difficulties implementing this approach, but through examination of that analysis, insight into underlying sources of potential misapplication can be gained and some guidelines for overcoming them developed. © 2009 National Ground Water Association.
GEOMETRIC CALIBRATION OF FULL SPHERICAL PANORAMIC RICOH-THETA CAMERA
S. Aghayari
2017-05-01
A novel calibration process for RICOH-THETA, a full-view fisheye camera, is proposed, which has numerous applications as a low-cost sensor in different disciplines such as photogrammetry, robotics and machine vision. Ricoh developed this camera in 2014; it consists of two lenses and is able to capture the whole surrounding environment in one shot. In this research, each lens is calibrated separately and the interior/relative orientation parameters (IOPs and ROPs) of the camera are determined on the basis of a designed calibration network, using the central and side images captured by the aforementioned lenses. Accordingly, the designed calibration network is considered as a free distortion grid and applied to the measured control points in the image space as correction terms by means of bilinear interpolation. By performing the corresponding corrections, image coordinates are transformed to the unit sphere, an intermediate space between object space and image space, in the form of spherical coordinates. Afterwards, the IOPs and EOPs of each lens are determined separately through a statistical bundle adjustment procedure based on collinearity condition equations. Subsequently, the ROPs of the two lenses are computed from the two sets of EOPs. Our experiments show that by applying a 3*3 free distortion grid, image measurement residuals diminish from 1.5 to 0.25 degrees on the aforementioned unit sphere.
Calibration of piezoelectric RL shunts with explicit residual mode correction
Høgsberg, Jan; Krenk, Steen
2017-01-01
Piezoelectric RL (resistive-inductive) shunts are passive resonant devices used for damping of dominant vibration modes of a flexible structure and their efficiency relies on the precise calibration of the shunt components. In the present paper improved calibration accuracy is attained by an extension of the local piezoelectric transducer displacement by two additional terms, representing the flexibility and inertia contributions from the residual vibration modes not directly addressed by the shunt damping. This results in an augmented dynamic model for the targeted resonant vibration mode, in which the residual contributions, represented by two correction factors, modify both the apparent transducer capacitance and the shunt circuit impedance. Explicit expressions for the correction of the shunt circuit inductance and resistance are presented in a form that is generally applicable to calibration formulae derived on the basis of an assumed single-mode structure, where modal interaction has been neglected. A design procedure is devised and subsequently verified by a numerical example, which demonstrates that effective mitigation can be obtained for an arbitrary vibration mode when the residual mode correction is included in the calibration of the RL shunt.
Geometric Calibration of Full Spherical Panoramic Ricoh-Theta Camera
Aghayari, S.; Saadatseresht, M.; Omidalizarandi, M.; Neumann, I.
2017-05-01
A novel calibration process for RICOH-THETA, a full-view fisheye camera, is proposed, which has numerous applications as a low-cost sensor in different disciplines such as photogrammetry, robotics and machine vision. Ricoh developed this camera in 2014; it consists of two lenses and is able to capture the whole surrounding environment in one shot. In this research, each lens is calibrated separately and the interior/relative orientation parameters (IOPs and ROPs) of the camera are determined on the basis of a designed calibration network, using the central and side images captured by the aforementioned lenses. Accordingly, the designed calibration network is considered as a free distortion grid and applied to the measured control points in the image space as correction terms by means of bilinear interpolation. By performing the corresponding corrections, image coordinates are transformed to the unit sphere, an intermediate space between object space and image space, in the form of spherical coordinates. Afterwards, the IOPs and EOPs of each lens are determined separately through a statistical bundle adjustment procedure based on collinearity condition equations. Subsequently, the ROPs of the two lenses are computed from the two sets of EOPs. Our experiments show that by applying a 3*3 free distortion grid, image measurement residuals diminish from 1.5 to 0.25 degrees on the aforementioned unit sphere.
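The bilinear interpolation of the free distortion grid can be sketched as follows (generic Python; the 3x3 grid of angular corrections is hypothetical, not the calibrated values):

```python
import numpy as np

def bilinear(grid, x, y):
    """Bilinearly interpolate a correction grid at fractional cell
    coordinates (x, y); grid[j, i] holds the correction at node (i, j)."""
    i0, j0 = int(np.floor(x)), int(np.floor(y))
    i1 = min(i0 + 1, grid.shape[1] - 1)
    j1 = min(j0 + 1, grid.shape[0] - 1)
    fx, fy = x - i0, y - j0
    top = (1 - fx) * grid[j0, i0] + fx * grid[j0, i1]
    bot = (1 - fx) * grid[j1, i0] + fx * grid[j1, i1]
    return (1 - fy) * top + fy * bot

# Hypothetical 3x3 grid of angular corrections (degrees) across the image.
grid = np.array([[0.0, 0.1, 0.2],
                 [0.1, 0.2, 0.3],
                 [0.2, 0.3, 0.4]])

center_correction = bilinear(grid, 0.5, 0.5)  # blend of the four nearest nodes
```

Each measured control point receives the correction interpolated from its four surrounding grid nodes before being projected onto the unit sphere.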
Variability among polysulphone calibration curves
Casale, G R [University of Rome ' La Sapienza' , Physics Department, P.le A. Moro 2, I-00185, Rome (Italy); Borra, M [ISPESL - Istituto Superiore per la Prevenzione E la Sicurezza del Lavoro, Occupational Hygiene Department, Via Fontana Candida 1, I-0040 Monteporzio Catone (RM) (Italy); Colosimo, A [University of Rome ' La Sapienza' , Department of Human Physiology and Pharmacology, P.le A. Moro 2, I-00185, Rome (Italy); Colucci, M [ISPESL - Istituto Superiore per la Prevenzione E la Sicurezza del Lavoro, Occupational Hygiene Department, Via Fontana Candida 1, I-0040 Monteporzio Catone (RM) (Italy); Militello, A [ISPESL - Istituto Superiore per la Prevenzione E la Sicurezza del Lavoro, Occupational Hygiene Department, Via Fontana Candida 1, I-0040 Monteporzio Catone (RM) (Italy); Siani, A M [University of Rome ' La Sapienza' , Physics Department, P.le A. Moro 2, I-00185, Rome (Italy); Sisto, R [ISPESL - Istituto Superiore per la Prevenzione E la Sicurezza del Lavoro, Occupational Hygiene Department, Via Fontana Candida 1, I-0040 Monteporzio Catone (RM) (Italy)
2006-09-07
Within an epidemiological study regarding the correlation between skin pathologies and personal ultraviolet (UV) exposure due to solar radiation, 14 field campaigns using polysulphone (PS) dosemeters were carried out at three different Italian sites (urban, semi-rural and rural) in every season of the year. A polysulphone calibration curve for each field experiment was obtained by measuring the ambient UV dose under almost clear sky conditions and the corresponding change in the PS film absorbance, prior and post exposure. Ambient UV doses were measured by well-calibrated broad-band radiometers and by electronic dosemeters. The dose-response relation was represented by the typical best fit to a third-degree polynomial and it was parameterized by a coefficient multiplying a cubic polynomial function. It was observed that the fit curves differed from each other in the coefficient only. It was assessed that the multiplying coefficient was affected by the solar UV spectrum at the Earth's surface whilst the polynomial factor depended on the photoinduced reaction of the polysulphone film. The mismatch between the polysulphone spectral curve and the CIE erythemal action spectrum was responsible for the variability among polysulphone calibration curves. The variability of the coefficient was related to the total ozone amount and the solar zenith angle. A mathematical explanation of such a parameterization was also discussed.
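The parameterization described above, a campaign-specific coefficient multiplying a fixed cubic polynomial shape, makes each campaign's calibration fit linear in that single coefficient. A sketch with hypothetical shape coefficients and synthetic dose data (none of the numbers come from the study):

```python
import numpy as np

def shape(dA):
    """Fixed cubic shape shared by all campaigns (coefficients hypothetical)."""
    return dA + 1.5 * dA**2 + 4.0 * dA**3

rng = np.random.default_rng(5)
dA = np.linspace(0.05, 0.35, 20)      # change in PS film absorbance
k_true = 2.8                          # campaign-specific multiplying coefficient
dose = k_true * shape(dA) * (1.0 + rng.normal(0.0, 0.02, dA.size))  # UV dose (arb.)

# With the cubic shape fixed, the per-campaign fit reduces to one linear
# least-squares coefficient.
k_fit = np.sum(shape(dA) * dose) / np.sum(shape(dA) ** 2)
```

This mirrors the paper's finding: the fit curves differ only in the multiplying coefficient, which carries the dependence on the solar UV spectrum (total ozone and solar zenith angle), while the polynomial shape reflects the film's photoinduced reaction.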
Unassisted 3D camera calibration
Atanassov, Kalin; Ramachandra, Vikas; Nash, James; Goma, Sergio R.
2012-03-01
With the rapid growth of 3D technology, 3D image capture has become a critical part of the 3D feature set on mobile phones. 3D image quality is affected by the scene geometry as well as on-the-device processing. An automatic 3D system usually assumes known camera poses accomplished by factory calibration using a special chart. In real life settings, pose parameters estimated by factory calibration can be negatively impacted by movements of the lens barrel due to shaking, focusing, or camera drop. If any of these factors displaces the optical axes of either or both cameras, vertical disparity might exceed the maximum tolerable margin and the 3D user may experience eye strain or headaches. To make 3D capture more practical, one needs to consider unassisted (on arbitrary scenes) calibration. In this paper, we propose an algorithm that relies on detection and matching of keypoints between left and right images. Frames containing erroneous matches, along with frames with insufficiently rich keypoint constellations, are detected and discarded. Roll, pitch, yaw, and scale differences between left and right frames are then estimated. The algorithm performance is evaluated in terms of the remaining vertical disparity as compared to the maximum tolerable vertical disparity.
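Once keypoints are matched, roll and scale differences between the left and right frames can be estimated with a least-squares similarity fit; the sketch below (an illustration, not the paper's algorithm) uses the complex-number form of the 2D similarity transform on synthetic matches:

```python
import numpy as np

def fit_roll_scale(left_pts, right_pts):
    """Least-squares 2D similarity fit between matched keypoints:
    returns (roll in degrees, scale) mapping left -> right."""
    l = left_pts - left_pts.mean(axis=0)     # centering removes translation
    r = right_pts - right_pts.mean(axis=0)
    lc = l[:, 0] + 1j * l[:, 1]              # points as complex numbers
    rc = r[:, 0] + 1j * r[:, 1]
    g = np.vdot(lc, rc) / np.vdot(lc, lc)    # one complex gain = rotation*scale
    return np.degrees(np.angle(g)), np.abs(g)

rng = np.random.default_rng(6)
left = rng.uniform(0.0, 100.0, (40, 2))
theta = np.radians(1.5)                      # simulated 1.5 degree roll difference
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
right = 1.02 * left @ R.T + np.array([3.0, -2.0])  # roll, scale, and shift

roll, scale = fit_roll_scale(left, right)
```

With clean matches the fit recovers the simulated roll and scale exactly; in practice the preceding outlier rejection step is what keeps erroneous matches from corrupting this estimate.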
Model Calibration in Watershed Hydrology
Yilmaz, Koray K.; Vrugt, Jasper A.; Gupta, Hoshin V.; Sorooshian, Soroosh
2009-01-01
Hydrologic models use relatively simple mathematical equations to conceptualize and aggregate the complex, spatially distributed, and highly interrelated water, energy, and vegetation processes in a watershed. A consequence of process aggregation is that the model parameters often do not represent directly measurable entities and must, therefore, be estimated using measurements of the system inputs and outputs. During this process, known as model calibration, the parameters are adjusted so that the behavior of the model approximates, as closely and consistently as possible, the observed response of the hydrologic system over some historical period of time. This Chapter reviews the current state-of-the-art of model calibration in watershed hydrology with special emphasis on our own contributions in the last few decades. We discuss the historical background that has led to current perspectives, and review different approaches for manual and automatic single- and multi-objective parameter estimation. In particular, we highlight the recent developments in the calibration of distributed hydrologic models using parameter dimensionality reduction sampling, parameter regularization and parallel computing.
PACS photometer calibration block analysis
Moór, A; Kiss, Cs; Balog, Z; Billot, N; Marton, G
2013-01-01
The absolute stability of the PACS bolometer response over the entire mission lifetime without applying any corrections is about 0.5% (standard deviation) or about 8% peak-to-peak. This fantastic stability allows us to calibrate all scientific measurements by a fixed and time-independent response file, without using any information from the PACS internal calibration sources. However, the analysis of calibration block observations revealed clear correlations of the internal source signals with the evaporator temperature and a signal drift during the first half hour after the cooler recycling. These effects are small, but can be seen in repeated measurements of standard stars. From our analysis we established corrections for both effects which push the stability of the PACS bolometer response to about 0.2% (stdev) or 2% in the blue, 3% in the green and 5% in the red channel (peak-to-peak). After both corrections we still see a correlation of the signals with PACS FPU temperatures, possibly caused by parasitic h...
Embodying, calibrating and caring for a local model of obesity
Winther, Jonas; Hillersdal, Line
... highlighted as such a problem. Within research communities disparate explanatory models of obesity exist (Ulijaszek 2008), and some of these models are brought together in the Copenhagen-based interdisciplinary research initiative Governing Obesity (GO), with the aim of addressing the causes... The practices and technologies herein lead to the emergence of what we propose to be local models of obesity. Describing the emergence of local models of obesity, we show how a specific model is being cared for, calibrated and embodied by research staff as well as research subjects, and how interdisciplinary obesity research is an ongoing process of configuring, but also extending beyond, already established models of obesity. We argue that an articulation of such practices of local care, embodiment and calibration is crucial for the appreciation, evaluation and transferability of interdisciplinary obesity research.
HCAL Calibration Status in Summer 2017
CMS Collaboration
2017-01-01
This note presents the status of the HCAL calibration in Summer 2017. In particular, results on the aging of the hadron endcap (HE) detector measured using the laser calibration system and the calibration of the hadron forward (HF) detector using electrons from Z boson decays are discussed.
Net analyte signal calculation for multivariate calibration
Ferre, J.; Faber, N.M.
2003-01-01
A unifying framework for calibration and prediction in multivariate calibration is shown based on the concept of the net analyte signal (NAS). From this perspective, the calibration step can be regarded as the calculation of a net sensitivity vector, whose length is the amount of net signal when the
Code Calibration as a Decision Problem
Sørensen, John Dalsgaard; Kroon, I. B.; Faber, M. H.
1993-01-01
Calibration of partial coefficients for a class of structures where no code exists is considered. The partial coefficients are determined such that the difference between the reliability for the different structures in the class considered and a target reliability level is minimized. Code calibration on a decision-theoretical basis is discussed. Results from code calibration for rubble mound breakwater designs are shown.
Backscatter nephelometer to calibrate scanning lidar
Cyle E. Wold; Vladmir A. Kovalev; Wei Min Hao
2008-01-01
The general concept of an open-path backscatter nephelometer, its design, principles of calibration and the operational use are discussed. The research-grade instrument, which operates at the wavelength 355 nm, will be co-located with a scanning-lidar at measurement sites near wildfires, and used for the lidar calibration. Such a near-end calibration has significant...
14 CFR 33.45 - Calibration tests.
2010-01-01
Title 14, Aeronautics and Space; Airworthiness Standards: Aircraft Engines; Block Tests, Reciprocating Aircraft Engines. § 33.45 Calibration tests. (a) Each engine must be subjected to the calibration tests necessary to establish its power characteristics...
14 CFR 33.85 - Calibration tests.
2010-01-01
Title 14, Aeronautics and Space; Airworthiness Standards: Aircraft Engines; Block Tests, Turbine Aircraft Engines. § 33.85 Calibration tests. (a) Each engine must be subjected to those calibration tests necessary to establish its power characteristics and...
Systems and methods of eye tracking calibration
2014-01-01
Methods and systems to facilitate eye tracking control calibration are provided. One or more objects are displayed on a display of a device, where the one or more objects are associated with a function unrelated to a calculation of one or more calibration parameters. The one or more calibration...
Cosmic reionization on computers. I. Design and calibration of simulations
Gnedin, Nickolay Y., E-mail: gnedin@fnal.gov [Particle Astrophysics Center, Fermi National Accelerator Laboratory, Batavia, IL 60510 (United States)
2014-09-20
Cosmic Reionization On Computers is a long-term program of numerical simulations of cosmic reionization. Its goal is to model fully self-consistently (albeit not necessarily from first principles) all relevant physics, from radiative transfer to gas dynamics and star formation, in simulation volumes of up to 100 comoving Mpc, and with spatial resolution approaching 100 pc in physical units. In this method paper, we describe our numerical method, the design of simulations, and the calibration of numerical parameters. Using several sets (ensembles) of simulations in 20 h^-1 Mpc and 40 h^-1 Mpc boxes with spatial resolution reaching 125 pc at z = 6, we are able to match the observed galaxy UV luminosity functions at all redshifts between 6 and 10, as well as obtain reasonable agreement with the observational measurements of the Gunn-Peterson optical depth at z < 6.
An Example Multi-Model Analysis: Calibration and Ranking
Ahlmann, M.; James, S. C.; Lowry, T. S.
2007-12-01
accurately using several of the twelve analytical solutions, while more numerous calibration data lead to a clearly defined model ranking. Sandia is a multiprogram laboratory operated by Sandia Corporation, a Lockheed Martin Company, for the United States Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
On the absolute calibration of SO2 cameras
J. Zielcke
2012-09-01
respective results are compared with measurements from an IDOAS to verify the calibration curve over the spatial extent of the image. Our results show that calibration cells can lead to an overestimation of the SO2 CD by up to 60% compared with CDs from the DOAS measurements. Besides these calibration errors, radiative transfer effects (e.g. light dilution, multiple scattering) can significantly influence the results of both instrument types. These effects can lead to an even more significant overestimation or, depending on the measurement conditions, an underestimation of the true CD. Previous investigations found that the resulting errors can exceed an order of magnitude. However, the spectral information from the DOAS measurements allows these radiative transfer effects to be corrected for. The measurements presented in this work were taken at Popocatépetl, Mexico, between 1 March 2011 and 4 March 2011. Average SO2 emission rates between 4.00 kg/s and 14.34 kg/s were observed.
VIIRS reflective solar bands on-orbit calibration and performance: a three-year update
Sun, Junqiang; Wang, Menghua
2014-11-01
The on-orbit calibration of the reflective solar bands (RSBs) of VIIRS and results from the analysis of the first three years of mission data are presented. The VIIRS solar diffuser (SD) and lunar calibration methodologies are discussed, and the calibration coefficients, called F-factors, are given for the RSBs for the latest reincarnation. The coefficients derived from the two calibrations are compared and the uncertainties of the calibrations are discussed. Numerous improvements have been made, with the largest gains in the calibration results coming from the improved bidirectional reflectance factor (BRF) of the SD and the vignetting functions of both the SD screen and the sun-view screen. The very clean results, devoid of many previously known noises and artifacts, show that VIIRS has performed well for its three years on orbit since launch, and in particular that the solar diffuser stability monitor (SDSM) is functioning essentially without flaws. The SD degradation, or H-factors, for the most part shows the expected decline, except for a surprising rise on day 830 lasting 75 days that signals a new degradation phenomenon. Nevertheless the SDSM and the calibration methodology have successfully captured the SD degradation for RSB calibration. The overall improvement has the most significant and direct impact on the ocean color products, which demand high accuracy from the RSB observations.
42 CFR 493.1255 - Standard: Calibration and calibration verification procedures.
2010-10-01
Title 42, Public Health; Standards for Nonwaived Testing, Analytic Systems. § 493.1255 Standard: Calibration and calibration verification procedures. ..., if possible, traceable to a reference method or reference material of known value; and (ii) Including...
Spectral calibration for convex grating imaging spectrometer
Zhou, Jiankang; Chen, Xinhua; Ji, Yiqun; Chen, Yuheng; Shen, Weimin
2013-12-01
Spectral calibration of an imaging spectrometer plays an important role in acquiring accurate target spectra. There are essentially two types of spectral calibration: wavelength scanning and characteristic line sampling. In wavelength-scanning methods only the calibrated pixel is used, and the spectral response function (SRF) is constructed from the calibrated pixel itself; the different wavelengths are generated by a monochromator. In characteristic-line-sampling methods the SRF is constructed from the pixels adjacent to the calibrated one; the pixels are illuminated by a narrow spectral line whose center wavelength is exactly known. The result of the scanning method is precise, but it takes much time and produces a large amount of data to process, and the method cannot be used in field or space environments. The characteristic-line-sampling method is simple, but its calibration precision is not easy to confirm. A standard spectroscopic lamp, which supplies a high-resolution and uniform spectral signal, is used to calibrate our manufactured convex grating imaging spectrometer, which has an Offner concentric structure. A Gaussian fitting algorithm is used to determine the center position and the full width at half maximum (FWHM) of each characteristic spectral line. The central wavelengths and FWHMs of the spectral pixels are then calibrated by cubic polynomial fitting. By setting a fitting-error threshold and discarding the maximum-deviation point, an optimized fit is achieved. Integrated calibration equipment was developed to enhance calibration efficiency. The results of the spectral-lamp method are verified against the monochromator wavelength-scanning calibration technique. The verification shows that the spectral calibration uncertainties of the FWHM and the center wavelength are both less than 0.08 nm, or 5.2% of the spectral FWHM.
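The two fitting steps named in the abstract (Gaussian fit for line center and FWHM, then a cubic polynomial for the pixel-to-wavelength mapping) can be sketched as below. This is a generic illustration under assumed function names, not the authors' implementation:

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, a, mu, sigma):
    """Gaussian line profile used as the SRF model."""
    return a * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

def fit_line(wavelengths, signal):
    """Fit one characteristic spectral line; return (center, FWHM)."""
    p0 = [signal.max(), wavelengths[np.argmax(signal)], 1.0]  # crude init
    (a, mu, sigma), _ = curve_fit(gaussian, wavelengths, signal, p0=p0)
    fwhm = 2.0 * np.sqrt(2.0 * np.log(2.0)) * abs(sigma)
    return mu, fwhm

def dispersion_fit(pixels, centers):
    """Cubic polynomial mapping pixel index -> center wavelength."""
    return np.polyfit(pixels, centers, 3)
```

The abstract's outlier handling would wrap `dispersion_fit` in a loop that drops the maximum-deviation point while the fit residual exceeds the chosen threshold.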
Gemini Planet Imager Observational Calibrations II: Detector Performance and Calibration
Ingraham, Patrick; Sadakuni, Naru; Ruffio, Jean-Baptiste; Maire, Jerome; Chilcote, Jeff; Larkin, James; Marchis, Franck; Galicher, Raphael; Weiss, Jason
2014-01-01
The Gemini Planet Imager is a newly commissioned facility instrument designed to measure the near-infrared spectra of young extrasolar planets in the solar neighborhood and obtain imaging polarimetry of circumstellar disks. GPI's science instrument is an integral field spectrograph that utilizes a HAWAII-2RG detector with a SIDECAR ASIC readout system. This paper describes the detector characterization and calibrations performed by the GPI Data Reduction Pipeline to compensate for effects including bad/hot/cold pixels, persistence, non-linearity, vibration induced microphonics and correlated read noise.
Photometric Calibrations for the SIRTF Infrared Spectrograph
Morris, P W; Herter, T L; Armus, L; Houck, J; Sloan, G
2002-01-01
The SIRTF InfraRed Spectrograph (IRS) is faced with many of the same calibration challenges that were experienced in the ISO SWS calibration program, owing to similar wavelength coverage and overlapping spectral resolutions of the two instruments. Although the IRS is up to ~300 times more sensitive and without moving parts, imposing unique calibration challenges on their own, an overlap in photometric sensitivities of the high-resolution modules with the SWS grating sections allows lessons, resources, and certain techniques from the SWS calibration programs to be exploited. We explain where these apply in an overview of the IRS photometric calibration planning.
Extended variational theory of complex rays in heterogeneous Helmholtz problem
Li, Hao; Ladeveze, Pierre; Riou, Hervé
2017-02-01
In past years, a numerical method called the Variational Theory of Complex Rays (VTCR) has been developed for vibration problems in the medium-frequency range. It is a Trefftz discontinuous Galerkin method which uses plane wave functions as shape functions. However, this method has only been well developed for the homogeneous case. In this paper, VTCR is extended to the heterogeneous Helmholtz problem by creating a new basis of shape functions. Numerical examples illustrate the performance of this extension of VTCR.
Potential of modern technologies for improvement of in vivo calibration.
Franck, D; de Carlan, L; Fisher, H; Pierrat, N; Schlagbauer, M; Wahl, W
2007-01-01
In the frame of the IDEA project, a research programme has been carried out to study the potential of reconstructing numerical anthropomorphic phantoms from personal physiological data obtained by computed tomography (CT) and magnetic resonance imaging (MRI) for calibration in in vivo monitoring. As a result, new procedures have been developed that take advantage of recent progress in image-processing codes: after scanning and rapidly reconstructing a realistic voxel phantom, the whole measurement geometry is converted into a computer file to be used on line for MCNP (Monte Carlo N-Particle code) calculations. The present paper overviews the major capabilities of the OEDIPE software studied in the frame of the IDEA project, using the examples of calibration for lung monitoring and whole-body counting of a real patient.
Estimates of Identification Result Disturbances in Parallel Mechanism Calibration
Anonymous
2006-01-01
General QR decomposition of the observation matrix is used to solve identification functions to evaluate identification results of every parameter in parallel mechanism calibrations. A relationship between measured information and identification results is obtained by analyzing numerous matrix transforms and QR decompositions. When distributions of measurement error are determined, random distributions of identification result disturbances (IRDs) can be obtained from this relationship as a function of measurement errors. Then the ranges of the IRDs can be effectively estimated, even if true parameter values are unknown. An optimization index based on IRD estimate is presented to select measurement configurations to achieve smaller IRDs. Two simulation examples were carried out with different modes and calibration methods. The results show that the method is effective and that the optimization index is useful. Some regular parameter identification problems can be explained by the IRD estimates.
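The error-propagation idea in the abstract (map a bounded measurement error through the QR decomposition of the observation matrix to bound the identification-result disturbance) can be sketched generically; the function name and the specific norm bound are illustrative assumptions, not the paper's exact index:

```python
import numpy as np

def ird_bound(A, meas_err_bound):
    """Upper bound on the identification-result disturbance (IRD) for a
    bounded measurement error, via QR decomposition of the observation
    matrix A. For the least-squares identification
        delta_p = R^{-1} Q^T delta_m,
    orthonormality of Q gives
        ||delta_p|| <= ||R^{-1}||_2 * ||delta_m||.
    """
    Q, R = np.linalg.qr(A)
    amplification = np.linalg.norm(np.linalg.inv(R), 2)  # spectral norm
    return amplification * meas_err_bound
```

An optimization index in this spirit would choose measurement configurations (rows of A) that minimize the amplification factor, i.e. keep R well conditioned.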
Numerical 3-D Modelling of Overflows
Larsen, Torben; Nielsen, L.; Jensen, B.;
2008-01-01
The present study uses laboratory experiments to evaluate the reliability of two types of numerical models of sewer systems: a 1-dimensional model based on the extended Saint-Venant equation including the term for curvature of the water surface (the so-called Boussinesq approximation), and 2- and 3...
Min Wang
2017-01-01
PFC2D(3D) is commercial software commonly used to model crack initiation in rock and rock-like materials. For a PFC2D(3D) numerical simulation, a proper set of microparameters needs to be determined before the simulation is run. To obtain such a set of microparameters for a PFC2D(3D) model from the macroparameters measured in physical experiments, a novel technique has been carried out in this paper: an improved simulated annealing algorithm is employed to calibrate the microparameters of the PFC2D(3D) simulation model. A Python script completely controls the calibration process, which terminates automatically on a termination criterion. The calibration is not based on establishing a relationship between microparameters and macroparameters; instead, the microparameters are calibrated directly by the improved simulated annealing algorithm. Using the proposed approach, the microparameters of both the contact-bond model and the parallel-bond model in PFC2D(3D) can be determined. To verify the validity of calibrating the microparameters of PFC2D(3D) via the improved simulated annealing algorithm, examples were selected from the literature and the corresponding numerical simulations were performed; the results indicate that the proposed method is reliable for calibrating the microparameters of a PFC2D(3D) model.
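The calibration loop described above (propose microparameters, evaluate the mismatch to the target macroparameters, accept or reject under an annealing schedule) can be sketched generically. The objective below is a stand-in for a PFC2D(3D) forward run, and all names and schedule constants are assumptions:

```python
import math
import random

def simulated_annealing(objective, x0, bounds,
                        t0=1.0, cooling=0.95, steps=2000, seed=1):
    """Generic SA calibrator. 'objective' measures the mismatch between
    simulated macro-responses and laboratory values; x is the
    microparameter vector, kept inside the given bounds."""
    rng = random.Random(seed)
    x, fx = list(x0), objective(x0)
    best, fbest = list(x), fx
    t = t0
    for _ in range(steps):
        # Gaussian proposal, clipped to the parameter bounds.
        cand = [min(max(xi + rng.gauss(0.0, 0.1 * (hi - lo)), lo), hi)
                for xi, (lo, hi) in zip(x, bounds)]
        fc = objective(cand)
        # Metropolis acceptance: always take improvements, sometimes worse.
        if fc < fx or rng.random() < math.exp((fx - fc) / max(t, 1e-12)):
            x, fx = cand, fc
            if fc < fbest:
                best, fbest = list(cand), fc
        t *= cooling
    return best, fbest
```

In the paper's setting, a termination criterion on the mismatch (rather than a fixed step count) would stop the script automatically, as the abstract notes.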
Muon Energy Calibration of the MINOS Detectors
Miyagawa, Paul S. [Somerville College, Oxford (United Kingdom)
2004-01-01
MINOS is a long-baseline neutrino oscillation experiment designed to search for conclusive evidence of neutrino oscillations and to measure the oscillation parameters precisely. MINOS comprises two iron tracking calorimeters located at Fermilab and Soudan. The Calibration Detector at CERN is a third MINOS detector used as part of the detector response calibration programme. A correct energy calibration between these detectors is crucial for the accurate measurement of oscillation parameters. This thesis presents a calibration developed to produce a uniform response within a detector using cosmic muons. Reconstruction of tracks in cosmic ray data is discussed. This data is utilized to calculate calibration constants for each readout channel of the Calibration Detector. These constants have an average statistical error of 1.8%. The consistency of the constants is demonstrated both within a single run and between runs separated by a few days. Results are presented from applying the calibration to test beam particles measured by the Calibration Detector. The responses are calibrated to within 1.8% systematic error. The potential impact of the calibration on the measurement of oscillation parameters by MINOS is also investigated. Applying the calibration reduces the errors in the measured parameters by ~ 10%, which is equivalent to increasing the amount of data by 20%.
"Calibration-on-the-spot": How to calibrate an EMCCD camera from its images.
Mortensen, Kim I; Flyvbjerg, Henrik
2016-07-06
In order to count photons with a camera, the camera must be calibrated. Photon counting is necessary, e.g., to determine the precision of localization-based super-resolution microscopy. Here we present a protocol that calibrates an EMCCD camera from information contained in isolated, diffraction-limited spots in any image taken by the camera, thus making dedicated calibration procedures redundant by enabling calibration post festum, from images filed without calibration information.
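The paper's protocol is a per-spot maximum-likelihood calibration; as a simpler illustration of the underlying idea that photon statistics encode the camera gain, here is the classic mean-variance (photon-transfer) estimate. Note this is not the authors' method, and a real EMCCD's electron-multiplication excess noise roughly doubles the slope, which this sketch ignores:

```python
import numpy as np

def estimate_gain(frames_by_level):
    """Photon-transfer gain estimate: for shot-noise-limited pixels,
    variance = gain * mean + readout term, so the gain is the slope of
    variance vs. mean across illumination levels."""
    means = np.array([f.mean() for f in frames_by_level])
    varis = np.array([f.var() for f in frames_by_level])
    gain, _readout = np.polyfit(means, varis, 1)
    return gain
```

The appeal of calibration-on-the-spot is precisely that it avoids acquiring such dedicated flat-field series: the same statistical information is extracted from spots already present in the image.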
Calibration of the Cherenkov Telescope Array
Gaug, Markus; Berge, David; Reyes, Raquel de los; Doro, Michele; Foerster, Andreas; Maccarone, Maria Concetta; Parsons, Dan; van Eldik, Christopher
2015-01-01
The construction of the Cherenkov Telescope Array is expected to start soon. We will present the baseline methods, and their extensions currently foreseen, to calibrate the observatory. These must meet the stringent requirements on allowed systematic uncertainties for the reconstructed gamma-ray energy and flux scales, as well as on the pointing resolution and on the overall duty cycle of the observatory. Onsite calibration activities are designed to include a robust and efficient calibration of the telescope cameras, and various methods and instruments to achieve calibration of the overall optical throughput of each telescope, leading to both inter-telescope calibration and an absolute calibration of the entire observatory. One important aspect of the onsite calibration is a correct understanding of the atmosphere above the telescopes, which constitutes the calorimeter of this detection technique. It is planned to be constantly monitored with state-of-the-art instruments to obtain a full molecular and...
Verification of L-band SAR calibration
Larson, R. W.; Jackson, P. L.; Kasischke, E.
1985-01-01
Absolute calibration of a digital L-band SAR system to an accuracy of better than 3 dB has been verified. This was accomplished with a calibration signal generator that produces the phase history of a point target. This signal relates calibration values to various SAR data sets. Values of radar cross-section (RCS) of reference reflectors were obtained using a derived calibration relationship for the L-band channel on the ERIM/CCRS X-C-L SAR system. Calibrated RCS values were compared to known RCS values of each reference reflector for verification and to obtain an error estimate. The calibration was based on the radar response to 21 calibrated reference reflectors.
Radio Interferometric Calibration Using The SAGE Algorithm
Kazemi, S; Zaroubi, S; de Bruyn, A G; Koopmans, L V E; Noordam, J
2010-01-01
The aim of the new generation of radio synthesis arrays such as LOFAR and SKA is to achieve much higher sensitivity, resolution and frequency coverage than what is available now. To accomplish this goal, the accuracy of the calibration techniques used is of considerable importance. Moreover, since these telescopes produce huge amounts of data, speed of convergence of calibration is a major bottleneck. The errors in calibration are due to system noise (sky and instrumental) as well as the estimation errors introduced by the calibration technique itself, which we call "solver noise". We define solver noise as the "distance" between the optimal solution, the true value of the unknowns corrupted by the system noise, and the solution obtained by calibration. We present the Space Alternating Generalized Expectation Maximization (SAGE) calibration technique, which is a modification of the Expectation Maximization algorithm, and compare its performance with the traditional Least Squares calibration based on the level...
A novel calibration method for phase-locked loops
Cassia, Marco; Shah, Peter Jivan; Bruun, Erik
2005-01-01
A novel method to calibrate the frequency response of a phase-locked loop is presented. The method requires just an additional digital counter to measure the natural frequency of the PLL; moreover, it is capable of estimating the static phase offset. The measured value can be used to tune the PLL response to the desired value. The method is demonstrated mathematically on a typical PLL topology and is extended to SigmaDelta fractional-N PLLs. A set of simulations performed with two different simulators is used to verify the applicability of the method.
A calibration method for PLLs based on transient response
Cassia, Marco; Shah, Peter Jivan; Bruun, Erik
2004-01-01
A novel method to calibrate the frequency response of a phase-locked loop is presented. The method requires just an additional digital counter and an auxiliary phase-frequency detector (PFD) to measure the natural frequency of the PLL. The measured value can be used to tune the PLL response to the desired value. The method is demonstrated mathematically on a typical PLL topology and is extended to ΣΔ fractional-N PLLs. A set of simulations performed with two different simulators is used to verify the applicability of the method.
Calibration of the TWIST high-precision drift chambers
Grossheim, A; Olin, A; 10.1016/j.nima.2010.08.105
2010-01-01
A method for the precise measurement of drift times for the high-precision drift chambers used in the TWIST detector is described. It is based on the iterative correction of the space-time relationships by the time residuals of the track fit, resulting in a measurement of the effective drift times. The corrected drift time maps are parametrised individually for each chamber using spline functions. Biases introduced by the reconstruction itself are taken into account as well, making it necessary to apply the procedure to both data and simulation. The described calibration is shown to improve the reconstruction performance and to extend significantly the physics reach of the experiment.
Theoretical and Numerical Investigations on Shallow Tunnelling in Unsaturated Soils
Soranzo, Enrico; Wu, Wei
2013-04-01
Excavation of shallow tunnels with the New Austrian Tunnelling Method (NATM) requires a proper assessment of the tunnel face stability, to enable an open-face excavation, and an estimate of the corresponding surface settlements. Soils in a partially saturated condition exhibit a higher cohesion than in the fully saturated state, which can be taken into account when assessing the stability of the tunnel face. For the assessment of the face support pressure, different methods are used in engineering practice, varying from simple empirical and analytical formulations to advanced finite element analysis. Such procedures can be modified to account for the unsaturated state of soils. In this study a method is presented to incorporate the effect of partial saturation in the numerical analysis. The results are then compared with a simple analytical formulation derived from parametric studies. In the numerical analysis, the variation of cohesion and of Young's modulus with saturation can be considered when the water table lies below the tunnel in a soil exhibiting a certain capillary rise, so that the tunnel is driven in a partially saturated layer. The linear elastic model with the Mohr-Coulomb failure criterion can be extended to partially saturated states and calibrated with triaxial tests on unsaturated soils. In order to model both positive and negative pore water pressure (suction), Bishop's effective stress is incorporated into the Mohr-Coulomb failure criterion. The effective stress parameter in Bishop's formulation is related to the degree of saturation as suggested by Fredlund. If a linear suction distribution is assumed, the degree of saturation can be calculated from the soil water characteristic curve (SWCC). Expressions exist that relate the Young's modulus of unsaturated soils to the net mean stress and the matric suction. The results of the numerical computation can be compared to Vermeer & Ruse's closed-form formula that expresses the limit support pressure of the
Calibration of TOB+ Thermometer's Cards
Banitt, Daniel
2014-01-01
Motivation: under the new upgrade of the CMS detector, the working temperature of the trackers has been reduced to -27 degrees Celsius. Though the thermal sensors themselves (Murata and Fenwal thermistors) are effective at these temperatures, the max1542 PLC (programmable logic controller) cards, which translate the resistance of the thermal sensors into DC counts usable by the DCS (detector control system), are not designed for these temperatures, at which the counts exceed their saturation, and therefore had to be replaced. In my project I was in charge of the installation and calibration of the new PLC cards in the TOB (tracker outer barrel) control system.
AFFTC Standard Airspeed Calibration Procedures
1981-06-01
Results of groundspeed course calibration are normally presented in the following plots: (1) ΔVpc vs. Vic, (2) ΔHpc vs. Vic, (3) ΔMpc vs. Mic. The remaining material covers average position corrections (ΔMpc/ΔHpc, ΔMpc/ΔVpc), instrument error, Mach number from Chart 8.5 in reference 1 (AFTR 6273), and the pacer position error calibration.
Fundamentals of Physics, Extended 7th Edition
Halliday, David; Resnick, Robert; Walker, Jearl
2004-05-01
No other book on the market today can match the 30-year success of Halliday, Resnick and Walker's Fundamentals of Physics! Fundamentals of Physics, 7th Edition and the Extended Version, 7th Edition offer a solid understanding of fundamental physics concepts, helping readers apply this conceptual understanding to quantitative problem solving, in a breezy, easy-to-understand style: a unique combination of authoritative content and stimulating applications. * Numerous improvements in the text, based on feedback from the many users of the sixth edition (both instructors and students) * Several thousand end-of-chapter problems have been rewritten to streamline both the presentations and answers * 'Chapter Puzzlers' open each chapter with an intriguing application or question that is explained or answered in the chapter * Problem-solving tactics are provided to help beginning Physics students solve problems and avoid common errors * The first section in every chapter introduces the subject of the chapter by asking and answering, "What is Physics?" as the question pertains to the chapter * Numerous supplements available to aid teachers and students. The extended edition provides coverage of developments in Physics in the last 100 years, including: Einstein and Relativity, Bohr and others and Quantum Theory, and more recent theoretical developments like String Theory.
Calibration of hydrological models using flow-duration curves
I. K. Westerberg
2011-07-01
The degree of belief we have in predictions from hydrologic models will normally depend on how well they can reproduce observations. Calibrations with traditional performance measures, such as the Nash-Sutcliffe model efficiency, are challenged by problems including: (1) uncertain discharge data, (2) variable sensitivity of different performance measures to different flow magnitudes, (3) the influence of unknown input/output errors, and (4) the inability to evaluate model performance when the observation time periods for discharge and model input data do not overlap. This paper explores a calibration method using flow-duration curves (FDCs) to address these problems. The method focuses on reproducing the observed discharge frequency distribution rather than the exact hydrograph. It consists of applying limits of acceptability for selected evaluation points (EPs) on the observed uncertain FDC in the extended GLUE approach. Two ways of selecting the EPs were tested – based on equal intervals of discharge and of volume of water. The method was tested and compared to a calibration using the traditional model efficiency for the daily four-parameter WASMOD model in the Paso La Ceiba catchment in Honduras and for Dynamic TOPMODEL evaluated at an hourly time scale for the Brue catchment in Great Britain. The volume method of selecting EPs gave the best results in both catchments, with better-calibrated slow flow, recession and evaporation than the other criteria. Observed and simulated time series of uncertain discharges agreed better for this method, both in calibration and prediction, in both catchments. An advantage of the method is that the rejection criterion is based on an estimation of the uncertainty in discharge data and that the EPs of the FDC can be chosen to reflect the aims of the modelling application, e.g. using more/less EPs at high/low flows. While the method appears less sensitive to epistemic input/output errors than previous use of limits of
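The volume-based selection of evaluation points can be sketched as follows. This is a minimal illustration with synthetic discharge data; the actual method additionally attaches observational uncertainty limits at each EP, which are omitted here:

```python
import numpy as np

def flow_duration_curve(q):
    """Discharges sorted descending vs. exceedance probability."""
    q_sorted = np.sort(q)[::-1]
    exceedance = np.arange(1, len(q) + 1) / (len(q) + 1)
    return q_sorted, exceedance

def volume_based_eps(q, n_eps=5):
    """Pick EPs so that each interval between them carries an equal
    share of the total volume of water (sum of discharges)."""
    q_sorted, exceedance = flow_duration_curve(q)
    cum_volume = np.cumsum(q_sorted) / np.sum(q_sorted)
    targets = (np.arange(1, n_eps + 1) - 0.5) / n_eps
    idx = np.searchsorted(cum_volume, targets)
    return exceedance[idx], q_sorted[idx]

rng = np.random.default_rng(0)
q_obs = rng.lognormal(mean=1.0, sigma=1.0, size=3650)  # synthetic daily flows
probs, q_eps = volume_based_eps(q_obs)
print(np.round(probs, 3), np.round(q_eps, 2))
```

Because high flows carry most of the volume, this selection naturally concentrates EPs toward the wet end of the FDC, unlike equal intervals of discharge.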
2016-09-01
New data acquisition equipment and phase measurement were added. Some errors due to simplifications in the acoustics of the coupler are left to future work. The driver software uses the sample clock for the sample rate and the convert clock to time the sequential scans over the active channels within each sample.
Pulse-based internal calibration of polarimetric SAR
Dall, Jørgen; Skou, Niels; Christensen, Erik Lintz
1994-01-01
Internal calibration greatly diminishes the dependence on calibration target deployment compared to external calibration. Therefore the Electromagnetics Institute (EMI) at the Technical University of Denmark (TUD) has equipped its polarimetric SAR, EMISAR, with several calibration loops...
Development and Uncertainty Evaluation of Calibrating System for Digital Energy Setting
Zhao Sha
2016-01-01
The power measurement mode has undergone fundamental changes with the development of the smart substation: traditional analog energy metering devices are being replaced by digital devices according to the requirements of the digitalized information substation. Establishing traceability between digital measuring values and analog measuring values, and developing the corresponding calibration devices, are key issues in smart substation construction. A new device for calibrating digital energy meters using integral calibration technology is described in this paper. Based on the standard comparison method, the device establishes a connection between digital power and traditional analog power through the integral calibration technology, and solves the problem that digital power measurements could not previously be traced. The device is designed with a final measuring voltage level of 220 kV/110 kV, a measuring current range of 5 A to 5000 A, and an expanded uncertainty of less than 0.05%. Furthermore, the process of estimating the uncertainty of the device is discussed in detail.
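The standard comparison method at the core of such a device reduces to integrating power on both channels over the same interval and reporting the relative error of the device under test. A minimal sketch (sampling rate, offset, and values are hypothetical):

```python
def integrate_energy(power_samples_w, dt_s):
    """Accumulate equally spaced power samples into energy (watt-hours)."""
    return sum(power_samples_w) * dt_s / 3600.0

def meter_error_percent(energy_dut, energy_ref):
    """Relative error of the device under test against the reference,
    as reported in a standard-comparison calibration."""
    return 100.0 * (energy_dut - energy_ref) / energy_ref

ref_power = [1000.0] * 3600   # reference (analog) channel, 1 s samples
dut_power = [1000.4] * 3600   # digital channel with a small gain offset
e_ref = integrate_energy(ref_power, 1.0)
e_dut = integrate_energy(dut_power, 1.0)
print(f"error = {meter_error_percent(e_dut, e_ref):.3f} %")  # 0.040 %
```

An expanded uncertainty below 0.05% means the device must resolve errors of this order reliably, which is why the uncertainty budget of the integration itself matters.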
Calibration of robot tool centre point using camera-based system
Gordić Zaviša
2016-01-01
Robot Tool Centre Point (TCP) calibration is of great importance for a number of industrial applications, and it is well known both in theory and in practice. Although various techniques have been proposed for solving this problem, they mostly require tool jogging or long processing times, both of which affect process performance by extending cycle time. This paper presents an innovative way of calibrating the TCP using a set of two cameras. The robot tool is placed in an area where images in two orthogonal planes are acquired by the cameras. Using robust pattern recognition, even a deformed tool can be identified in the images, and information about its current position and orientation is forwarded to the control unit for calibration. Compared to other techniques, test results show a significant reduction in procedure complexity and calibration time. These improvements enable more frequent TCP checking and recalibration during production, thus improving product quality.
Reda, Ibrahim; Andreas, Afshin; Dooraghi, Mike; Sengupta, Manajit; Habte, Aron; Kutchenreiter, Mark
2017-01-01
Shortwave radiometers such as pyranometers, pyrheliometers, and photovoltaic cells are calibrated with traceability to a consensus reference maintained by Absolute Cavity Radiometers (ACRs). The ACR is an open cavity with no window, and it measures the extended broadband spectrum of the terrestrial direct solar beam irradiance, unlike shortwave radiometers, which cover a limited range of the spectrum. The difference between the two spectral ranges may lead to a calibration bias that can exceed 1%. This article describes a method to reduce the calibration bias resulting from using broadband ACRs to calibrate shortwave radiometers, by using an ACR with a Schott glass window to measure the reference broadband shortwave irradiance in the terrestrial direct solar beam from 0.3 um to 3 um.
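The origin of the bias can be illustrated by estimating what fraction of the solar beam falls inside the windowed 0.3-3 um band. The sketch below approximates the direct solar spectrum by a 5778 K blackbody, which is a crude assumption (atmospheric absorption is ignored), but it shows why the two spectral ranges disagree at the percent level:

```python
import math

def planck(wavelength_m, T=5778.0):
    """Spectral radiance of a blackbody at temperature T (Planck's law)."""
    h, c, k = 6.626e-34, 2.998e8, 1.381e-23
    x = h * c / (wavelength_m * k * T)
    if x > 700:  # avoid overflow deep in the Wien tail
        return 0.0
    return (2 * h * c**2 / wavelength_m**5) / math.expm1(x)

def band_fraction(lo_um, hi_um, T=5778.0, steps=20000):
    """Fraction of total blackbody emission within [lo_um, hi_um]."""
    def integral(lo, hi):
        dl = (hi - lo) / steps
        return sum(planck((lo + (i + 0.5) * dl) * 1e-6, T)
                   for i in range(steps)) * dl
    return integral(lo_um, hi_um) / integral(0.01, 100.0)

# Roughly 95% of a 5778 K blackbody's emission lies in 0.3-3 um;
# the remaining few percent are seen by the open ACR but not by a
# windowed radiometer, which is the source of the calibration bias.
print(f"in-band fraction: {band_fraction(0.3, 3.0):.3f}")
```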
Alternative Data Reduction Procedures for UVES: Wavelength Calibration and Spectrum Addition
Thompson, Rodger I; Black, John H; Martins, C J A P
2008-01-01
This paper addresses alternative procedures to the ESO-supplied pipeline procedures for the reduction of UVES spectra of two quasars, to determine the value of the fundamental constant mu = Mp/Me at early times in the universe. The procedures utilize intermediate-product images and spectra produced by the pipeline together with alternative wavelength calibration and spectrum addition methods. Spectroscopic studies that require extreme wavelength precision need customized wavelength calibration procedures beyond those usually supplied by the standard data reduction pipelines. An example of such studies is the measurement of the values of the fundamental constants at early times in the universe. This article describes a wavelength calibration procedure for the UV-Visual Echelle Spectrometer on the Very Large Telescope; however, it can be extended to other spectrometers as well. The procedure described here provides a relative wavelength precision of better than 3E-7 for the long-slit Thorium-Argon calibration lamp ex...
Design of a Calibration System for Heat Flux Meters
Arpino, F.; Dell'Isola, M.; Ficco, G.; Iacomini, L.; Fernicola, V.
2011-12-01
Accurate heat flux measurements are needed to gain a better knowledge of the thermal performance of buildings and to evaluate the heat exchange among various parts of a building envelope. Heat flux meters (HFMs) are commonly used both in laboratory applications and in situ for measuring one-dimensional heat fluxes and, thus, estimating the thermal transmittance of material samples and existing building components. Building applications often require heat flux measurements below 100 W·m-2. However, a standard reference system generating such a low heat flux is available only in a few national metrology institutes (NMIs). In this work, a numerical study aimed at designing an HFM calibration apparatus operating in the heat flux range from 5 W·m-2 to 100 W·m-2 is presented. Predictions of the metrological performance of such a calibration system were obtained by numerical modeling using a commercial FEM code (COMSOL®). On the basis of the modeling results, an engineered design of such an apparatus was developed and discussed in detail. The system was designed for two different purposes: (i) for measuring the thermal conductivity of insulators and (ii) for calibrating an HFM with an absolute method (i.e., by measuring the applied power from the heater and its active cross section) or by a relative method (i.e., by measuring the temperature drop across a reference material of known thickness and thermal conductivity). The numerical investigations show that in order to minimize the uncertainty of the generated heat flux, fine temperature control on the thermal guard is needed. The predicted standard uncertainty is within 2% at 10 W·m-2 and within 0.5% at 100 W·m-2.
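The two calibration modes described above reduce to two ways of computing the same reference flux. A minimal sketch with hypothetical numbers (the heater power, slab properties, and HFM output below are illustrative only):

```python
def absolute_heat_flux(power_w, active_area_m2):
    """Absolute method: generated flux from heater power and its
    active cross-section."""
    return power_w / active_area_m2

def relative_heat_flux(delta_t_k, thickness_m, conductivity_w_mk):
    """Relative method: Fourier's law across a reference slab of known
    thickness and thermal conductivity."""
    return conductivity_w_mk * delta_t_k / thickness_m

def hfm_sensitivity(output_uv, flux_w_m2):
    """Calibration factor of the HFM, in uV per (W/m2)."""
    return output_uv / flux_w_m2

q_abs = absolute_heat_flux(power_w=2.5, active_area_m2=0.25)
q_rel = relative_heat_flux(delta_t_k=0.4, thickness_m=0.04,
                           conductivity_w_mk=1.0)
s = hfm_sensitivity(output_uv=650.0, flux_w_m2=q_abs)
print(f"q_abs = {q_abs:.2f} W/m2, q_rel = {q_rel:.2f} W/m2, S = {s:.1f} uV/(W/m2)")
```

At 10 W·m-2 the temperature drop across a realistic reference slab is only a few tenths of a kelvin, which is why the guard temperature control dominates the uncertainty budget.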
Input calibration for negative originals
Tuijn, Chris
1995-04-01
One of the major challenges in the prepress environment consists of controlling the electronic color reproduction process such that a perfect match of any original can be realized. Whether this goal can be reached depends on many factors such as the dynamic range of the input device (scanner, camera), the color gamut of the output device (dye sublimation printer, ink-jet printer, offset), the color management software, etc. The characterization of the color behavior of the peripheral devices is therefore very important. Photographs and positive transparencies reflect the original scene pretty well; for negative originals, however, there is no obvious link to either the original scene or a particular print of the negative under consideration. In this paper, we establish a method to scan negatives and to convert the scanned data to a calibrated RGB space, which is known colorimetrically. This method is based on the reconstruction of the original exposure conditions (i.e., original scene) which generated the negative. Since the characteristics of negative film are quite diverse, a special calibration is required for each combination of scanner and film type.
Calibration of atmospheric hydrogen measurements
A. Jordan
2011-03-01
Interest in atmospheric hydrogen (H2) has been growing in recent years with the prospect of H2 being a potential alternative to fossil fuels as an energy carrier. This has intensified research toward a quantitative understanding of the atmospheric hydrogen cycle and its total budget, including the expansion of the global atmospheric measurement network. However, inconsistencies in published observational data constitute a major limitation in exploring such data sets. The discrepancies can mainly be attributed to difficulties in the calibration of the measurements. In this study, various factors that may interfere with accurate quantification of atmospheric H2 were investigated, including drifts of standard gases in high-pressure cylinders. As an experimental basis, a procedure to generate precise mixtures of H2 within the atmospheric concentration range was established. Application of this method has enabled a thorough linearity characterization of the commonly used GC-HgO reduction detector. We discovered that the detector response was sensitive to the composition of the matrix gas. Addressing these systematic errors, a new calibration scale has been generated, defined by thirteen standards with dry-air mole fractions ranging from 139 to 1226 nmol mol^{-1}. This new scale has been accepted as the official World Meteorological Organization (WMO) Global Atmosphere Watch (GAW) H2 mole fraction scale.
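Characterizing a nonlinear detector against a set of standards amounts to fitting a response function over the standards and inverting it for unknown samples. The sketch below uses a quadratic response on synthetic data spanning the 139-1226 nmol/mol range; the response coefficients are invented for illustration and are not the GC-HgO detector's actual behavior:

```python
import numpy as np

# Known H2 mole fractions of 13 standards (nmol/mol) and a synthetic,
# slightly nonlinear detector response (arbitrary units).
x_std = np.linspace(139.0, 1226.0, 13)
response = 0.05 * x_std + 4e-6 * x_std**2   # hypothetical response curve

# Fit a quadratic response function to the standards.
coeffs = np.polyfit(x_std, response, deg=2)

def mole_fraction(peak_area):
    """Invert the fitted response polynomial (take the physical root)."""
    a, b, c = coeffs
    roots = np.roots([a, b, c - peak_area])
    real = roots[np.isreal(roots)].real
    return float(real[real > 0].min())

area = 0.05 * 500.0 + 4e-6 * 500.0**2   # simulated sample at 500 nmol/mol
print(round(mole_fraction(area), 1))    # -> 500.0
```

In practice the matrix-gas sensitivity reported above means a separate response curve (or a correction term) is needed per matrix composition.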
Crop physiology calibration in CLM
I. Bilionis
2014-10-01
Farming uses ever more terrestrial ground as population increases and agriculture is increasingly used for non-nutritional purposes such as biofuel production. This agricultural expansion exerts an increasing impact on the terrestrial carbon cycle. In order to understand the impact of such processes, the Community Land Model (CLM) has been augmented with a CLM-Crop extension that simulates the development of three crop types: maize, soybean, and spring wheat. The CLM-Crop model is a complex system that relies on a suite of parametric inputs governing plant growth under a given atmospheric forcing and available resources. CLM-Crop development used measurements of gross primary productivity and net ecosystem exchange from AmeriFlux sites to choose parameter values that optimize crop productivity in the model. In this paper we calibrate these parameters for one crop type, soybean, in order to provide a faithful projection in terms of both plant development and net carbon exchange. Calibration is performed in a Bayesian framework by developing a scalable and adaptive scheme based on sequential Monte Carlo (SMC).
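The SMC idea can be illustrated in miniature: particles over an unknown parameter are reweighted against data through a sequence of tempered likelihoods, resampled, and jittered. This is a toy one-parameter example, not the CLM-Crop setup; the jitter step is a simple stand-in for a proper MCMC move:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "model": productivity is linear in the unknown parameter theta.
def model(theta):
    return 2.0 * theta

obs, sigma = 6.0, 0.5          # synthetic observation (true theta = 3)

def log_like(theta):
    return -0.5 * ((model(theta) - obs) / sigma) ** 2

# SMC with likelihood tempering: move from prior to posterior in steps.
particles = rng.uniform(0.0, 10.0, size=5000)   # samples from the prior
betas = np.linspace(0.0, 1.0, 11)
for b_prev, b_next in zip(betas[:-1], betas[1:]):
    w = np.exp((b_next - b_prev) * log_like(particles))  # incremental weights
    w /= w.sum()
    idx = rng.choice(len(particles), size=len(particles), p=w)
    particles = particles[idx]                           # resample
    particles += rng.normal(0.0, 0.05, size=len(particles))  # jitter move

print(f"posterior mean theta = {particles.mean():.2f}")  # close to 3.0
```

Tempering keeps the incremental weights well-behaved even when the posterior is far from the prior, which is the property that makes the scheme scale to expensive simulators like CLM-Crop.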
Sun, Limin; Chen, Lin
2017-10-01
Residual mode correction is found crucial in calibrating linear resonant absorbers for flexible structures. The classic modal representation augmented with stiffness and inertia correction terms accounting for non-resonant modes improves the calibration accuracy and meanwhile avoids complex modal analysis of the full system. This paper explores the augmented modal representation in calibrating control devices with nonlinearity, by studying a taut cable attached with a general viscous damper and its Equivalent Dynamic Systems (EDSs), i.e. the augmented modal representations connected to the same damper. As nonlinearity is concerned, Frequency Response Functions (FRFs) of the EDSs are investigated in detail for parameter calibration, using the harmonic balance method in combination with numerical continuation. The FRFs of the EDSs and corresponding calibration results are then compared with those of the full system documented in the literature for varied structural modes, damper locations and nonlinearity. General agreement is found and in particular the EDS with both stiffness and inertia corrections (quasi-dynamic correction) performs best among available approximate methods. This indicates that the augmented modal representation although derived from linear cases is applicable to a relatively wide range of damper nonlinearity. Calibration of nonlinear devices by this means still requires numerical analysis while the efficiency is largely improved owing to the system order reduction.
On core stability and extendability
Shellshear, Evan
2007-01-01
This paper investigates conditions under which the core of a TU cooperative game is stable. In particular the author extends the idea of extendability to find new conditions under which the core is stable. It is also shown that these new conditions are not necessary for core stability.
Extended Active Disturbance Rejection Controller
Gao, Zhiqiang (Inventor); Tian, Gang (Inventor)
2016-01-01
Multiple designs, systems, methods and processes for controlling a system or plant using an extended active disturbance rejection control (ADRC) based controller are presented. The extended ADRC controller accepts sensor information from the plant. The sensor information is used in conjunction with an extended state observer in combination with a predictor that estimates and predicts the current state of the plant and a co-joined estimate of the system disturbances and system dynamics. The extended state observer estimates and predictions are used in conjunction with a control law that generates an input to the system based in part on the extended state observer estimates and predictions as well as a desired trajectory for the plant to follow.
14 GHz visible supercontinuum generation: calibration sources for astronomical spectrographs.
Stark, S P; Steinmetz, T; Probst, R A; Hundertmark, H; Wilken, T; Hänsch, T W; Udem, Th; Russell, P St J; Holzwarth, R
2011-08-15
We report the use of a specially designed tapered photonic crystal fiber to produce a broadband optical spectrum covering the visible spectral range. The pump source is a frequency doubled Yb fiber laser operating at a repetition rate of 14 GHz and emitting sub-5 pJ pulses. We experimentally determine the optimum core diameter and achieve a 235 nm broad spectrum. Numerical simulations are used to identify the underlying mechanisms and explain spectral features. The high repetition rate makes this system a promising candidate for precision calibration of astronomical spectrographs.
Multiplexed absorption tomography with calibration-free wavelength modulation spectroscopy
Cai, Weiwei; Kaminski, Clemens F., E-mail: cfk23@cam.ac.uk [Department of Chemical Engineering and Biotechnology, University of Cambridge, Cambridge CB2 3RA (United Kingdom)
2014-04-14
We propose a multiplexed absorption tomography technique, which uses calibration-free wavelength modulation spectroscopy with tunable semiconductor lasers for the simultaneous imaging of temperature and species concentration in harsh combustion environments. Compared with the commonly used direct absorption spectroscopy (DAS) counterpart, the present variant enjoys better signal-to-noise ratios and requires no baseline fitting, a particularly desirable feature for high-pressure applications, where adjacent absorption features overlap and interfere severely. We present proof-of-concept numerical demonstrations of the technique using realistic phantom models of harsh combustion environments and prove that the proposed techniques outperform currently available tomography techniques based on DAS.
Vaccarono, Mattia; Bechini, Renzo; Chandrasekar, Chandra V.; Cremonini, Roberto; Cassardo, Claudio
2016-11-01
The stability of weather radar calibration is a mandatory aspect for quantitative applications, such as rainfall estimation, short-term weather prediction, and the initialization of numerical atmospheric and hydrological models. Over the years, calibration monitoring techniques based on external sources have been developed, specifically calibration using the Sun and calibration based on ground clutter returns. In this paper, these two techniques are integrated and complemented with a self-consistency procedure and an intercalibration technique. The aim of the integrated approach is to implement a robust method for online monitoring, able to detect significant changes in the radar calibration. The physical consistency of polarimetric radar observables is exploited using the self-consistency approach, based on the expected correspondence between dual-polarization power and phase measurements in rain. This technique provides a reference absolute value for the radar calibration, from which any deviations may be detected using the other procedures. In particular, the ground clutter calibration is implemented on both polarization channels (horizontal and vertical) for each radar scan, allowing the polarimetric variables to be monitored and hardware failures to be promptly recognized. The Sun calibration allows monitoring of the calibration and sensitivity of the radar receiver, in addition to the antenna pointing accuracy. It is applied using observations collected during the standard operational scans but requires long integration times (several days) in order to accumulate a sufficient amount of useful data. Finally, an intercalibration technique is developed and performed to compare colocated measurements collected in rain by two radars in overlapping regions. The integrated approach is applied to the C-band weather radar network in northwestern Italy, during July-October 2014. The set of methods considered appears suitable to establish an online tool to
Spectral domain optical coherence tomography with extended depth-of-focus by aperture synthesis
Bo, En; Liu, Linbo
2016-10-01
We developed a spectral domain optical coherence tomography (SD-OCT) system with an extended depth of focus (DOF) obtained by aperture synthesis. For a Gaussian-shaped light source, the lateral resolution is determined by the numerical aperture (NA) of the objective lens and can be approximately maintained over the confocal parameter, defined as twice the Rayleigh range. The DOF, however, is proportional to the square of the lateral resolution. Consequently, there is a trade-off between the DOF and the lateral resolution, and researchers must weigh which is more important for their application. In this study, three distinct optical apertures were obtained by embedding a circular phase spacer in the sample arm. Owing to the optical path difference (OPD) between the three apertures caused by the phase spacer, the three images were aligned with equal spacing along the z-axis. By correcting the OPD and the defocus-induced wavefront curvature, the three images at distinct depths were coherently summed. This system digitally refocuses the sample and yields an image with higher lateral resolution over the confocal parameter when imaging polystyrene calibration beads.
Third COS FUV Lifetime Calibration Program: Flatfield and Flux Calibrations
Debes, J. H.; Becker, G.; Roman-Duval, J.; Ely, J.; Massa, D.; Oliveira, C.; Plesha, R.; Proffitt, C.; Taylor, J.
2016-10-01
As part of the calibration of the third lifetime position (LP3) of the Cosmic Origins Spectrograph (COS) Far-Ultraviolet (FUV) detector, observations of WD 0308-565 were obtained with the G130M, G160M, and G140L gratings and observations of GD 71 were obtained with the G160M grating through the Point Source Aperture (PSA) to derive low-order flatfields (L-flats) and sensitivities at LP3. Observations were executed for all CENWAVEs and all FP-POS with the exception of G130M/1055 and G130M/1096, which remained at LP2. The derivation of the L-flats and sensitivities at LP3 differed from their LP1 and LP2 counterparts in a few key ways, which we describe in this report. Firstly, we quantified a cut-off in spatial frequency that we assigned to the L-flats. Secondly, we derived a new method for simultaneously fitting the L-flats, pixel-to-pixel flats (P-flats), and sensitivities, which we compared to our previous method of separately fitting L-flats and sensitivities. These new methods produce comparable results, but provide us with an external test of the robustness of each approach individually. The results of our work show that with the new profile extraction routines, sensitivities, and L-flats, the relative and absolute flux calibration accuracies (1% and 2% respectively) at LP3 are slightly improved relative to previous locations on the COS FUV detector.
Observing Extended Sources with the \\Herschel SPIRE Fourier Transform Spectrometer
Wu, Ronin; Etxaluze, Mireya; Makiwa, Gibion; Naylor, David A; Salji, Carl; Swinyard, Bruce M; Ferlet, Marc; van der Wiel, Matthijs H D; Smith, Anthony J; Fulton, Trevor; Griffin, Matt J; Baluteau, Jean-Paul; Benielli, Dominique; Glenn, Jason; Hopwood, Rosalind; Imhof, Peter; Lim, Tanya; Lu, Nanyao; Panuzzo, Pasquale; Pearson, Chris; Sidher, Sunil; Valtchanov, Ivan
2013-01-01
The Spectral and Photometric Imaging Receiver (SPIRE) on the European Space Agency's Herschel Space Observatory utilizes a pioneering design for its imaging spectrometer in the form of a Fourier Transform Spectrometer (FTS). The standard FTS data reduction and calibration schemes are aimed at objects with either a spatial extent much larger than the beam size or a source that can be approximated as a point source within the beam. However, when sources are of intermediate spatial extent, neither of these calibration schemes is appropriate; both the spatial response of the instrument and the source's light profile must be taken into account and the coupling between them explicitly derived. To that end, we derive the necessary corrections using an observed spectrum of a fully extended source with the beam profile and the source's light profile taken into account. We apply the derived correction to several observations of planets and compare the corrected spectra with their spectral models to study the beam c...
Cornic, Philippe; Illoul, Cédric; Cheminet, Adam; Le Besnerais, Guy; Champagnat, Frédéric; Le Sant, Yves; Leclaire, Benjamin
2016-09-01
We address calibration and self-calibration of tomographic PIV experiments within a pinhole model of cameras. A complete and explicit pinhole model of a camera equipped with a two-tilt-angle Scheimpflug adapter is presented. It is then used in a calibration procedure based on a freely moving calibration plate. While the resulting calibrations are accurate enough for Tomo-PIV, we confirm, through a simple experiment, that they are not stable in time, and illustrate how the pinhole framework can be used to provide a quantitative evaluation of geometrical drifts in the setup. We propose an original self-calibration method based on global optimization of the extrinsic parameters of the pinhole model. These methods are successfully applied to the tomographic PIV of an air jet experiment. An unexpected by-product of our work is to show that volume self-calibration induces a change in the world frame coordinates. Provided the calibration drift is small, as generally observed in PIV, the bias on the estimated velocity field is negligible, but the absolute location cannot be accurately recovered using standard calibration data.
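At the core of such a calibration is the pinhole projection itself: world points mapped to pixels through intrinsic and extrinsic parameters. A minimal sketch with hypothetical parameter values (the Scheimpflug adapter adds a further rotation of the sensor plane, which is omitted here):

```python
import numpy as np

def project(points_3d, K, R, t):
    """Pinhole projection: world points (N x 3) -> pixel coordinates (N x 2).
    K is the 3x3 intrinsic matrix; (R, t) is the extrinsic camera pose."""
    cam = R @ points_3d.T + t.reshape(3, 1)   # world -> camera frame
    uvw = K @ cam                             # camera -> homogeneous image
    return (uvw[:2] / uvw[2]).T               # perspective division

K = np.array([[1500.0,    0.0, 640.0],        # focal lengths (px) and
              [   0.0, 1500.0, 512.0],        # principal point (px)
              [   0.0,    0.0,   1.0]])
R = np.eye(3)                                 # camera axes aligned with world
t = np.array([0.0, 0.0, 500.0])               # 500 mm standoff

pts = np.array([[0.0, 0.0, 0.0],              # calibration plate marks (mm)
                [10.0, 0.0, 0.0]])
print(project(pts, K, R, t))
```

Calibration fits K, R, t (plus distortion) to observed mark positions; the self-calibration described above then re-optimizes only the extrinsics (R, t) against particle images to track geometric drift.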
Universal Numeric Segmented Display
Azad, Md Abul kalam; Kamruzzaman, S M
2010-01-01
Segmented displays play a vital role in displaying numerals, although matrix displays are also used today because numerals have many curved edges, which matrix displays render better. But since a matrix display is costly and complex to implement, and also needs more memory, segmented displays are generally used for numerals. Because no compact display architecture has yet been proposed to display the numerals of multiple languages at once, this paper proposes a uniform display architecture that can render the digits of multiple languages and general mathematical expressions with higher accuracy and simplicity by using an 18-segment display, an improvement over the 16-segment display.
The Swift-UVOT ultraviolet and visible grism calibration
Kuin, N P M; Breeveld, A A; Page, M J; James, C; Lamoureux, H; Mehdipour, M; Still, M; Yershov, V; Brown, P J; Carter, M; Mason, K O; Kennedy, T; Marshall, F; Roming, P W A; Siegel, M; Oates, S; Smith, P J; De Pasquale, M
2015-01-01
We present the calibration of the Swift UVOT grisms, of which there are two, providing low-resolution field spectroscopy in the ultraviolet and optical bands respectively. The UV grism covers the range 1700-5000 Angstrom with a spectral resolution of 75 at 2600 Angstrom for source magnitudes of u=10-16 mag, while the visible grism covers the range 2850-6600 Angstrom with a spectral resolution of 100 at 4000 Angstrom for source magnitudes of b=12-17 mag. This calibration extends over all detector positions, for all modes used during operations. The wavelength accuracy (1-sigma) is 9 Angstrom in the UV grism clocked mode, 17 Angstrom in the UV grism nominal mode and 22 Angstrom in the visible grism. The range below 2740 Angstrom in the UV grism and 5200 Angstrom in the visible grism never suffers from overlapping by higher spectral orders. The flux calibration of the grisms includes a correction we developed for coincidence loss in the detector. The error in the coincidence loss correction is less than 20%. The...
Calibration of the Nikon 200 for Close Range Photogrammetry
Sheriff, Lassana; /City Coll., N.Y. /SLAC
2010-08-25
The overall objective of this project is to study the stability and reproducibility of the calibration parameters of the Nikon D200 camera with a Nikkor 20 mm lens for close-range photogrammetric surveys. The well known 'central perspective projection' model is used to determine the camera parameters for interior orientation. The Brown model extends it with the introduction of radial distortion and other less critical variables. The calibration process requires a dense network of targets to be photographed at different angles. For faster processing, reflective coded targets are chosen. Two scenarios have been used to check the reproducibility of the parameters. The first one is using a flat 2D wall with 141 coded targets and 12 custom targets that were previously measured with a laser tracker. The second one is a 3D Unistrut structure with a combination of coded targets and 3D reflective spheres. The study has shown that this setup is only stable during a short period of time. In conclusion, this camera is acceptable when calibrated before each use. Future work should include actual field tests and possible mechanical improvements, such as securing the lens to the camera body.
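The Brown model mentioned above extends the central perspective projection with radial (and decentering) distortion terms. A minimal sketch of the radial part only, with hypothetical coefficient values chosen to show barrel distortion (the calibration estimates k1..k3 from the target network; the decentering terms are omitted here):

```python
def brown_radial(x, y, k1=-2.5e-8, k2=0.0, k3=0.0):
    """Brown model, radial terms only: map ideal image coordinates to
    distorted ones.  x, y are in pixels from the principal point;
    the coefficients k1..k3 are hypothetical, for illustration."""
    r2 = x * x + y * y
    factor = 1.0 + k1 * r2 + k2 * r2**2 + k3 * r2**3
    return x * factor, y * factor

# With k1 < 0 (barrel distortion), points are pulled toward the
# image centre, and the effect grows rapidly with radius:
for r in (100.0, 500.0, 1000.0):
    xd, _ = brown_radial(r, 0.0)
    print(f"r = {r:6.1f} px  ->  distorted {xd:8.2f} px")
```

Instability of these coefficients between sessions is exactly what the reproducibility study checks, and why calibration before each use was recommended.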
Calibration Monitor for Dark Energy Experiments
Kaiser, M. E.
2009-11-23
The goal of this program was to design, build, test, and characterize a flight-qualified calibration source and monitor for a Dark Energy related experiment: ACCESS - 'Absolute Color Calibration Experiment for Standard Stars'. This calibration source, the On-board Calibration Monitor (OCM), is a key component of our ACCESS spectrophotometric calibration program. The OCM will be flown as part of the ACCESS sub-orbital rocket payload in addition to monitoring instrument sensitivity on the ground. The objective of the OCM is to minimize systematic errors associated with any potential changes in the ACCESS instrument sensitivity. Importantly, the OCM will be used to monitor instrument sensitivity immediately after astronomical observations while the instrument payload is parachuting to the ground. Through monitoring, we can detect, track, characterize, and thus correct for any changes in instrument sensitivity over the proposed 5-year duration of the assembled and calibrated instrument.
Herschel SPIRE FTS Relative Spectral Response Calibration
Fulton, Trevor; Baluteau, Jean-Paul; Benielli, Dominique; Imhof, Peter; Lim, Tanya; Lu, Nanyao; Marchili, Nicola; Naylor, David; Polehampton, Edward; Swinyard, Bruce; Valtchanov, Ivan
2014-01-01
Herschel/SPIRE Fourier transform spectrometer (FTS) observations contain emission from both the Herschel Telescope and the SPIRE Instrument itself, both of which are typically orders of magnitude greater than the emission from the astronomical source and must be removed in order to recover the source spectrum. The effects of the Herschel Telescope and the SPIRE Instrument are removed during data reduction using relative spectral response calibration curves and emission models. We present the evolution of the methods used to derive the relative spectral response calibration curves for the SPIRE FTS. The relationship between the calibration curves and the ultimate sensitivity of calibrated SPIRE FTS data is discussed, and the results from the derivation methods are compared. These comparisons show that the latest derivation methods result in calibration curves that contribute a factor of 2 to 100 less noise to the overall error budget, which results in calibrated spectra for individual observations whose n...
New method to calibrate a spinner anemometer
Demurtas, Giorgio; Friis Pedersen, Troels
2014-01-01
The spinner anemometer is a wind sensor based on three one-dimensional sonic sensor probes, mounted on the wind turbine spinner, and an algorithm to convert the wind speeds measured by the three sonic sensors to horizontal wind speed, yaw misalignment and flow inclination angle. The conversion...... to be stopped during calibration in order for the rotor induction not to influence the calibration, so that the spinner anemometer measures ”free” wind values in the stopped condition. The calibration of flow angle measurements is made by calibration of the ratio of the two algorithm constants k2=k1 = k......_. The calibration of k_ is made by relating the spinner anemometer yaw misalignment measurements to the yaw position when yawing the wind turbine in and out of the wind several times. The calibration of the constant k1 is made by comparing the spinner anemometer wind speed measurement with a free metmast or lidar...
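The k1 comparison against a free met mast can be sketched as a regression through the origin between reference and spinner-measured speeds. This is a toy illustration, not the authors' exact procedure; the 8% offset, the speed values and the k1 placeholder are all invented.

```python
def slope_through_origin(reference, measured):
    # Least-squares factor F minimizing sum((reference - F*measured)^2):
    # the scaling that best maps the spinner speeds onto the mast speeds.
    return (sum(r * m for r, m in zip(reference, measured))
            / sum(m * m for m in measured))

# Invented comparison campaign: the spinner anemometer reads 8% low
# relative to a free met mast (values are illustrative only).
mast_speed = [4.0, 6.0, 8.0, 10.0, 12.0]
spinner_speed = [v / 1.08 for v in mast_speed]

F = slope_through_origin(mast_speed, spinner_speed)

k1_old = 1.0          # placeholder value for the algorithm constant k1
k1_new = k1_old * F   # rescaled so corrected speeds match the mast
```

In practice the comparison data would be binned by wind speed and filtered for valid sectors before fitting.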
NASA AURA HIRDLS instrument calibration facility
Hepplewhite, Christopher L.; Barnett, John J.; Watkins, Robert E. J.; Row, Frederick; Wolfenden, Roger; Djotni, Karim; Oduleye, Olusoji O.; Whitney, John G.; Walton, Trevor W.; Arter, Philip I.
2003-11-01
A state-of-the-art calibration facility was designed and built for the calibration of the HIRDLS instrument at the University of Oxford, England. This paper describes the main features of the facility, the driving requirements and a summary of the performance that was achieved during the calibration. Specific technical requirements and other constraints determined the design solutions that were adopted and the implementation methodology. The main features of the facility included a high-performance clean room and a vacuum chamber with thermal environmental control, as well as the calibration sources. Particular attention was paid to the maintenance of cleanliness (molecular and particulate), ESD control, mechanical isolation and high reliability. Schedule constraints required that all the calibration sources be integrated into the facility so that the number of re-press and warm-up cycles was minimized and so that all the equipment could be operated at the same time.
Research of Camera Calibration Based on DSP
Zheng Zhang
2013-09-01
To take advantage of the high efficiency and stability of DSP in data processing and of the functions of the OpenCV library, this study puts forward a scheme for camera calibration in a DSP embedded system. A camera calibration algorithm based on OpenCV is designed by analyzing the camera model and lens distortion. The port of EMCV to the DSP is completed, and the camera calibration algorithm is migrated and optimized within the CCS development environment and the DSP/BIOS system. While realizing the calibration function, this algorithm improves the efficiency of program execution and the precision of calibration, and lays the foundation for further research on visual localization based on a DSP embedded system.
GIFTS SM EDU Radiometric and Spectral Calibrations
Tian, J.; Reisse, R. A.; Johnson, D. G.; Gazarik, J. J.
2007-01-01
The Geosynchronous Imaging Fourier Transform Spectrometer (GIFTS) Sensor Module (SM) Engineering Demonstration Unit (EDU) is a high resolution spectral imager designed to measure infrared (IR) radiance using a Fourier transform spectrometer (FTS). The GIFTS instrument gathers measurements across the long-wave IR (LWIR), short/mid-wave IR (SMWIR), and visible spectral bands. The raw interferogram measurements are radiometrically and spectrally calibrated to produce radiance spectra, which are further processed to obtain atmospheric profiles via retrieval algorithms. This paper describes the processing algorithms involved in the calibration. The calibration procedures can be subdivided into three categories: the pre-calibration stage, the calibration stage, and finally, the post-calibration stage. Detailed derivations for each stage are presented in this paper.
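At its core, the radiometric stage of an FTS calibration is a two-point linear mapping between raw counts and radiance fixed by views of two reference blackbodies. Below is a minimal real-valued sketch of that idea, not the GIFTS pipeline itself (which works per spectral channel on complex spectra); the function name and signature are illustrative.

```python
def calibrate_radiance(counts, counts_cold, counts_hot, rad_cold, rad_hot):
    """Two-point radiometric calibration: linearly map a raw count value
    onto the radiance scale defined by a cold and a hot blackbody view.
    'counts' is treated as a scalar here for brevity; in practice the
    same linear map is applied channel by channel."""
    gain = (rad_hot - rad_cold) / (counts_hot - counts_cold)
    return rad_cold + gain * (counts - counts_cold)

# A scene halfway between the two references maps to the midpoint
# of the two reference radiances.
mid = calibrate_radiance(150.0, 100.0, 200.0, 0.0, 50.0)
```

The spectral side of the calibration (wavenumber-scale correction, instrument line shape) is a separate step not shown here.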
Biogeographic calibrations for the molecular clock.
Ho, Simon Y W; Tong, K Jun; Foster, Charles S P; Ritchie, Andrew M; Lo, Nathan; Crisp, Michael D
2015-09-01
Molecular estimates of evolutionary timescales have an important role in a range of biological studies. Such estimates can be made using methods based on molecular clocks, including models that are able to account for rate variation across lineages. All clock models share a dependence on calibrations, which enable estimates to be given in absolute time units. There are many available methods for incorporating fossil calibrations, but geological and climatic data can also provide useful calibrations for molecular clocks. However, a number of strong assumptions need to be made when using these biogeographic calibrations, leading to wide variation in their reliability and precision. In this review, we describe the nature of biogeographic calibrations and the assumptions that they involve. We present an overview of the different geological and climatic events that can provide informative calibrations, and explain how such temporal information can be incorporated into dating analyses.
Pereyra A, P.; Lopez H, M. E. [Pontificia Universidad Catolica del Peru, Av. Universitaria 1801, San Miguel Lima 32 (Peru); Palacios F, D.; Sajo B, L. [Universidad Simon Bolivar, Laboratorio de Fisica Nuclear, Apartado 89000 Caracas (Venezuela, Bolivarian Republic of); Valdivia, P., E-mail: ppereyr@pucp.edu.pe [Universidad Nacional de Ingenieria, Av. Tupac Amaru s/n, Rimac, Lima 25 (Peru)
2016-10-15
A simulated and measured calibration of PADC detectors is presented for cylindrical diffusion chambers employed in environmental radon measurements. The method is based on determining the minimum alpha energy (E{sub min}), the average critical angle (<Θ{sub c}>), and the fraction (f{sub 1}) of {sup 218}Po atoms in the volume of the chamber; results are compared with commercially available devices. The radon concentration for exposed detectors is obtained from induced track densities and the well-established calibration coefficient of the NRPB monitor. The calibration coefficient of a PADC detector in a cylindrical diffusion chamber of any size is determined under the same chemical etching conditions and track analysis methodology. Numerical examples are presented, and experimental calibration coefficients are compared with those from the purpose-made simulation code. Results show that the developed method is applicable when uncertainties of 10% are acceptable. (Author)
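The final step, converting an induced track density to a radon concentration via a calibration coefficient, can be sketched as follows. The coefficient value and unit convention are illustrative, not taken from the paper.

```python
def radon_concentration(track_density, background_density, k, exposure_hours):
    """Average radon activity concentration (Bq/m^3) from a net etched-track
    density (tracks/cm^2), given a calibration coefficient k
    (tracks cm^-2 per Bq m^-3 h) and the exposure time in hours."""
    return (track_density - background_density) / (k * exposure_hours)

# e.g. 1000 tracks/cm^2 with 100 background, k = 0.003, 2000 h exposure
conc = radon_concentration(1000.0, 100.0, 0.003, 2000.0)  # ~150 Bq/m^3
```

The point of the paper's method is precisely to compute k for an arbitrary chamber geometry instead of relying on a device-specific experimental value.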
Effect of photobleaching on calibration model development in biological Raman spectroscopy
Barman, Ishan; Kong, Chae-Ryon; Singh, Gajendra P.; Dasari, Ramachandra R.
2011-01-01
A major challenge in performing quantitative biological studies using Raman spectroscopy lies in overcoming the influence of the dominant sample fluorescence background. Moreover, the prediction accuracy of a calibration model can be severely compromised by the quenching of the endogenous fluorophores due to the introduction of spurious correlations between analyte concentrations and fluorescence levels. Apparently functional models can be obtained from such correlated samples, which cannot be used successfully for prospective prediction. This work investigates the deleterious effects of photobleaching on the prediction accuracy of implicit calibration algorithms, particularly for transcutaneous glucose detection using Raman spectroscopy. Using numerical simulations and experiments on physical tissue models, we show that the prospective prediction error can be substantially larger when the calibration model is developed on a photobleaching-correlated dataset compared to an uncorrelated one. Furthermore, we demonstrate that the application of shifted subtracted Raman spectroscopy (SSRS) reduces the prediction errors obtained with photobleaching-correlated calibration datasets compared to those obtained with uncorrelated ones. PMID:21280891
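The spurious-correlation pitfall can be reproduced with a deliberately tiny toy model: a scalar "signal" in place of a spectrum and ordinary least squares in place of an implicit multivariate calibration. All numbers are invented; the point is only that a model fit on background-correlated data predicts badly once the correlation is broken.

```python
def ols(xs, ys):
    # Ordinary least squares fit y ~ a*x + b.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

# Training set: fluorescence background shrinks as analyte concentration
# rises, mimicking photobleaching correlated with the sample sequence.
conc_train = [1.0, 2.0, 3.0, 4.0, 5.0]
signal_train = [c + (6.0 - 0.5 * c) for c in conc_train]  # signal = 0.5*c + 6

# Calibration model learned on the correlated data: conc ~ a*signal + b.
a, b = ols(signal_train, conc_train)

# Prospective use: background is now constant (signal = c + 2), so the
# learned slope no longer applies and predictions miss by a wide margin.
conc_new = [1.0, 3.0, 5.0]
pred = [a * (c + 2.0) + b for c in conc_new]
errors = [abs(p - c) for p, c in zip(pred, conc_new)]
```

With the invented numbers the fitted slope is exactly 2 (the inverse of the correlated training relation) instead of 1, so every prospective prediction is off by several concentration units.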
Calibration of the γ-Equation Transition Model for High Reynolds Flows at Low Mach
Colonia, S.; Leble, V.; Steijl, R.; Barakos, G.
2016-09-01
The numerical simulation of flows over large-scale wind turbine blades without considering the transition from laminar to fully turbulent flow may result in incorrect estimates of the blade loads and performance. Thanks to its relative simplicity and promising results, the Local-Correlation based Transition Modelling concept represents a valid way to include transitional effects into practical CFD simulations. However, the model involves coefficients that need tuning. In this paper, the γ-equation transition model is assessed and calibrated, for a wide range of Reynolds numbers at low Mach, as needed for wind turbine applications. An aerofoil is used to evaluate and calibrate the original model, while a large-scale wind turbine blade is employed to show that the calibrated model can lead to reliable solutions for complex three-dimensional flows. The calibrated model shows promising results for both two-dimensional and three-dimensional flows, even if cross-flow instabilities are neglected.
Riccardi, A.; Briguglio, R.; Pinna, E.; Agapito, G.; Quiros-Pacheco, F.; Esposito, S.
2012-07-01
ERIS is a new adaptive optics instrument for the Adaptive Optics Facility of the VLT that foresees, in its design phase, a Pyramid Wavefront Sensor Module (PWM) to be used with the VLT Deformable Secondary Mirror (VLT-DSM) as corrector. In contrast to the concave secondary mirrors currently in use (e.g. at LBT), the VLT-DSM is convex, and calibration of the interaction matrix (IM) between the PWM and the DSM is not foreseen on-telescope during daytime. In this paper different options for calibration are evaluated and compared, with particular attention to the synthetic evaluation and on-sky calibration of the IM. A trade-off of the calibration options, the optimization techniques and the related validation with numerical simulations are also provided.
Application of Taguchi Method and Genetic Algorithm for Calibration of Soil Constitutive Models
M. Yazdani
2013-01-01
A special inverse analysis method is established in order to calibrate soil constitutive models. The Taguchi method, a systematic sensitivity analysis, is conducted to determine the real values of the mechanical parameters. This technique was applied to the hardening soil model (an elastoplastic constitutive model), which is calibrated using the results of a pressuremeter test performed on "Le Rheu" clayey sand. Meanwhile, a genetic algorithm (GA), a well-known optimization technique, is used to fit the computed numerical results to the observed data of the soil model. This study indicates that the Taguchi method can reasonably calibrate the soil parameters with a minimum number of numerical analyses, in comparison with the GA, which needs plenty of analyses. In addition, the contribution of each parameter to the mechanical behavior of the soil during the test can be determined through the Taguchi method.
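The GA side of such a calibration can be sketched with a one-parameter stand-in for the soil model. The response curve, population sizes and mutation scale below are all invented; a real calibration would wrap a finite-element run instead of the toy `model` function.

```python
import random

random.seed(42)  # reproducible sketch

def model(p, x):
    # Hypothetical one-parameter "soil response" curve standing in for
    # the real finite-element run: stiffness p maps strain x to stress.
    return p * x / (1.0 + x)

strains = [0.5, 1.0, 2.0, 4.0]
true_p = 30.0
observed = [model(true_p, x) for x in strains]

def misfit(p):
    # Sum of squared differences between computed and observed stresses.
    return sum((model(p, x) - y) ** 2 for x, y in zip(strains, observed))

# Minimal (mu + lambda) genetic algorithm: keep the 5 fittest candidates
# and refill the population with Gaussian mutations of them. No crossover,
# since the genome here is a single number.
pop = [random.uniform(1.0, 100.0) for _ in range(20)]
for _ in range(60):
    pop.sort(key=misfit)
    elite = pop[:5]
    pop = elite + [max(1e-6, p + random.gauss(0.0, 2.0))
                   for p in elite for _ in range(3)]

best = min(pop, key=misfit)   # should land close to true_p
```

Each generation costs 15 new model evaluations, which illustrates the abstract's point: a GA needs many analyses, whereas the Taguchi design fixes the run count in advance.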
Calibration Procedure for 3D Turning Dynamometer
Axinte, Dragos Aurelian; Belluci, Walter
1999-01-01
The aim of the static calibration of the dynamometer is to obtain the matrix for evaluating cutting forces from the output voltage of the piezoelectric cells and charge amplifiers. At the same time, it is worthwhile to evaluate the linearity of the dependencies between applied forces and output...... of the piezoelectric cells; 5. Mounting of the dynamometer; 6. Calibration of the dynamometer; 7. Data analysis; 8. Uncertainty budget of the calibration.
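The matrix relating cell voltages to cutting forces can be illustrated with a toy linear model. The cross-talk matrix `S`, the load cases and the helper `solve3` are invented for the sketch; a real calibration would estimate the matrix by least squares over many load cases and check linearity and uncertainty along the way.

```python
def solve3(a, b):
    # Solve a 3x3 linear system a @ x = b by Gaussian elimination
    # with partial pivoting (augmented-matrix form).
    m = [row[:] + [rhs] for row, rhs in zip(a, b)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, 3):
            f = m[r][col] / m[col][col]
            for c in range(col, 4):
                m[r][c] -= f * m[col][c]
    x = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):
        x[r] = (m[r][3] - sum(m[r][c] * x[c]
                              for c in range(r + 1, 3))) / m[r][r]
    return x

# Invented cross-talk matrix of the three piezoelectric channels:
# volts = S @ force. A real matrix would come from the instrument.
S = [[0.50, 0.02, 0.01],
     [0.03, 0.45, 0.02],
     [0.01, 0.01, 0.55]]

# Apply known single-axis loads and record the (simulated) voltages.
applied = [[100.0, 0.0, 0.0], [0.0, 100.0, 0.0], [0.0, 0.0, 100.0]]
volts = [[sum(S[i][j] * f[j] for j in range(3)) for i in range(3)]
         for f in applied]

# Evaluating forces from voltages amounts to solving S @ F = V.
recovered = [solve3(S, v) for v in volts]
```

With noise-free synthetic data the applied loads are recovered exactly; in practice the residuals of this inversion feed directly into the uncertainty budget of step 8.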