Standard test method for K-R curve determination
American Society for Testing and Materials. Philadelphia
2010-01-01
1.1 This test method covers the determination of the resistance to fracture of metallic materials under Mode I loading at static rates using either of the following notched and precracked specimens: the middle-cracked tension M(T) specimen or the compact tension C(T) specimen. A K-R curve is a continuous record of toughness development (resistance to crack extension) in terms of KR plotted against crack extension in the specimen as a crack is driven under an increasing stress intensity factor, K. 1.2 Materials that can be tested for K-R curve development are not limited by strength, thickness, or toughness, so long as specimens are of sufficient size to remain predominantly elastic to the effective crack extension value of interest. 1.3 Specimens of standard proportions are required, but size is variable, to be adjusted for yield strength and toughness of the materials. 1.4 Only two of the many possible specimen types that could be used to develop K-R curves are covered in this method. 1.5 The test is app...
Hacke, Uwe G; Venturas, Martin D; MacKinnon, Evan D; Jacobsen, Anna L; Sperry, John S; Pratt, R Brandon
2015-01-01
The standard centrifuge method has been frequently used to measure vulnerability to xylem cavitation. This method has recently been questioned. It was hypothesized that open vessels lead to exponential vulnerability curves, which were thought to be indicative of measurement artifact. We tested this hypothesis in stems of olive (Olea europaea) because its long vessels were recently claimed to produce a centrifuge artifact. We evaluated three predictions that followed from the open vessel artifact hypothesis: shorter stems, with more open vessels, would be more vulnerable than longer stems; standard centrifuge-based curves would be more vulnerable than dehydration-based curves; and open vessels would cause an exponential shape of centrifuge-based curves. Experimental evidence did not support these predictions. Centrifuge curves did not vary when the proportion of open vessels was altered. Centrifuge and dehydration curves were similar. At highly negative xylem pressure, centrifuge-based curves slightly overestimated vulnerability compared to the dehydration curve. This divergence was eliminated by centrifuging each stem only once. The standard centrifuge method produced accurate curves of samples containing open vessels, supporting the validity of this technique and confirming its utility in understanding plant hydraulics. Seven recommendations for avoiding artifacts and standardizing vulnerability curve methodology are provided. © 2014 The Authors. New Phytologist © 2014 New Phytologist Trust.
A standard curve based method for relative real time PCR data processing
Directory of Open Access Journals (Sweden)
Krause Andreas
2005-03-01
Background: Currently, real-time PCR is the most precise method by which to measure gene expression. The method generates a large amount of raw numerical data, and processing may notably influence the final results. Data processing is based either on standard curves or on PCR efficiency assessment. At the moment, the PCR efficiency approach is preferred for relative PCR, whilst the standard curve is often used for absolute PCR. However, there are no barriers to employing standard curves for relative PCR. This article provides an implementation of the standard curve method and discusses its advantages and limitations in relative real-time PCR. Results: We designed a procedure for data processing in relative real-time PCR. The procedure completely avoids PCR efficiency assessment, minimizes operator involvement and provides a statistical assessment of intra-assay variation. The procedure includes the following steps. (I) Noise is filtered from raw fluorescence readings by smoothing, baseline subtraction and amplitude normalization. (II) The optimal threshold is selected automatically from regression parameters of the standard curve. (III) Crossing points (CPs) are derived directly from the coordinates of points where the threshold line crosses the fluorescence plots obtained after noise filtering. (IV) The means and their variances are calculated for CPs in PCR replicates. (V) The final results are derived from the CPs' means. The CPs' variances are traced through to the results by the law of error propagation. A detailed description and analysis of this data processing is provided. The limitations associated with the use of parametric statistical methods and amplitude normalization are specifically analyzed and found fit for routine laboratory practice. Different options are discussed for aggregating data obtained from multiple reference genes. Conclusion: A standard curve based procedure for PCR data processing has been compiled and validated. It illustrates that
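The standard-curve workflow described in this record can be sketched in a few lines. The dilution amounts and crossing points below are made-up illustrative values, and the same curve is reused for the reference gene purely for brevity:

```python
import numpy as np

# Hypothetical dilution series: known template amounts (arbitrary units)
# and the crossing points (CPs) observed for each dilution.
amounts = np.array([1e4, 1e3, 1e2, 1e1, 1e0])
cps = np.array([15.1, 18.4, 21.8, 25.1, 28.5])  # roughly 3.3 cycles per decade

# Standard curve: CP = slope * log10(amount) + intercept
slope, intercept = np.polyfit(np.log10(amounts), cps, 1)

# Amplification efficiency implied by the slope: E = 10**(-1/slope) - 1
efficiency = 10 ** (-1.0 / slope) - 1.0

def amount_from_cp(cp):
    """Read an unknown sample's template amount off the standard curve."""
    return 10 ** ((cp - intercept) / slope)

# Relative expression: target normalized by a reference gene, each read off
# its own standard curve (the same curve is reused here for brevity).
relative_expression = amount_from_cp(20.0) / amount_from_cp(22.0)
```

Because the unknowns are interpolated directly from the standard curve, no explicit per-sample PCR efficiency estimate is needed, which is the point the abstract makes.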
Estimation method of the fracture resistance curve
Energy Technology Data Exchange (ETDEWEB)
Cho, Sung Keun; Lee, Kwang Hyeon; Koo, Jae Mean; Seok, Chang Sung [Sungkyunkwan Univ., Suwon (Korea, Republic of); Park, Jae Sil [Samsung Electric Company, Suwon (Korea, Republic of)
2008-07-01
Fracture resistance curves for the materials concerned are required in order to perform elastic-plastic fracture mechanics analysis. A fracture resistance curve is built from J-integral values and crack extension values. The objective of this paper is to propose an estimation method for the fracture resistance curve. The estimation method for the pipe specimen was proposed via the load ratio method, using load-displacement data for the standard specimen.
CRC standard curves and surfaces with Mathematica
von Seggern, David H
2006-01-01
Since the publication of the first edition, Mathematica® has matured considerably and the computing power of desktop computers has increased greatly. This enables the presentation of more complex curves and surfaces as well as the efficient computation of formerly prohibitive graphical plots. Incorporating both of these aspects, CRC Standard Curves and Surfaces with Mathematica®, Second Edition is a virtual encyclopedia of curves and functions that depicts nearly all of the standard mathematical functions rendered using Mathematica. While the easy-to-use format remains unchanged from the previ
Incorporating experience curves in appliance standards analysis
International Nuclear Information System (INIS)
Desroches, Louis-Benoit; Garbesi, Karina; Kantner, Colleen; Van Buskirk, Robert; Yang, Hung-Chia
2013-01-01
There exists considerable evidence that manufacturing costs and consumer prices of residential appliances have decreased in real terms over the last several decades. This phenomenon is generally attributable to manufacturing efficiency gained with cumulative experience producing a certain good, and is modeled by an empirical experience curve. The technical analyses conducted in support of U.S. energy conservation standards for residential appliances and commercial equipment have, until recently, assumed that manufacturing costs and retail prices remain constant during the projected 30-year analysis period. This assumption does not reflect real market price dynamics. Using price data from the Bureau of Labor Statistics, we present U.S. experience curves for room air conditioners, clothes dryers, central air conditioners, furnaces, and refrigerators and freezers. These experience curves were incorporated into recent energy conservation standards analyses for these products. Including experience curves increases the national consumer net present value of potential standard levels. In some cases a potential standard level exhibits a net benefit when considering experience, whereas without experience it exhibits a net cost. These results highlight the importance of modeling more representative market prices. - Highlights: ► Past appliance standards analyses have assumed constant equipment prices. ► There is considerable evidence of consistent real price declines. ► We incorporate experience curves for several large appliances into the analysis. ► The revised analyses demonstrate larger net present values of potential standards. ► The results imply that past standards analyses may have undervalued benefits.
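The experience curve underlying this analysis is a power law in cumulative production, which becomes linear in log-log space. A minimal sketch with a hypothetical price series (the numbers are invented, not BLS data):

```python
import numpy as np

# Hypothetical data: cumulative shipments (millions of units) and the real
# (inflation-adjusted) price index of an appliance; prices drop ~15% per
# doubling of cumulative production in this invented series.
cumulative = np.array([10.0, 20.0, 40.0, 80.0, 160.0])
price = np.array([100.0, 85.0, 72.3, 61.4, 52.2])

# Experience curve: price = p0 * cumulative**(-b), linear in log-log space.
slope, log_p0 = np.polyfit(np.log(cumulative), np.log(price), 1)
b = -slope  # experience exponent

# Learning rate: fractional price decline per doubling of production.
learning_rate = 1.0 - 2.0 ** (-b)
```

Projecting the fitted power law forward in place of constant prices is what increases the net present value of candidate standard levels in the analysis.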
Incorporating Experience Curves in Appliance Standards Analysis
Energy Technology Data Exchange (ETDEWEB)
Garbesi, Karina; Chan, Peter; Greenblatt, Jeffery; Kantner, Colleen; Lekov, Alex; Meyers, Stephen; Rosenquist, Gregory; Buskirk, Robert Van; Yang, Hung-Chia; Desroches, Louis-Benoit
2011-10-31
The technical analyses in support of U.S. energy conservation standards for residential appliances and commercial equipment have typically assumed that manufacturing costs and retail prices remain constant during the projected 30-year analysis period. There is, however, considerable evidence that this assumption does not reflect real market prices. Costs and prices generally fall in relation to cumulative production, a phenomenon known as experience and modeled by a fairly robust empirical experience curve. Using price data from the Bureau of Labor Statistics, and shipment data obtained as part of the standards analysis process, we present U.S. experience curves for room air conditioners, clothes dryers, central air conditioners, furnaces, and refrigerators and freezers. These allow us to develop more representative appliance price projections than the assumption-based approach of constant prices. These experience curves were incorporated into recent energy conservation standards for these products. The impact on the national modeling can be significant, often increasing the net present value of potential standard levels in the analysis. In some cases a previously cost-negative potential standard level demonstrates a benefit when incorporating experience. These results imply that past energy conservation standards analyses may have undervalued the economic benefits of potential standard levels.
International Nuclear Information System (INIS)
Pickles, W.L.; McClure, J.W.; Howell, R.H.
1978-05-01
A sophisticated nonlinear multiparameter fitting program was used to produce a best-fit calibration curve for the response of an x-ray fluorescence analyzer to freeze-dried, 0.2%-accurate, gravimetric uranium nitrate standards. The program is based on the unconstrained minimization subroutine VA02A. The program treats the mass values of the gravimetric standards as parameters to be fitted along with the normal calibration curve parameters. The fitting procedure weights the system errors and the mass errors in a consistent way. The resulting best-fit calibration curve parameters reflect the fact that the masses of the standard samples are measured quantities with a known error. Error estimates for the calibration curve parameters can be obtained from the curvature of the "chi-squared matrix" or from error relaxation techniques. It was shown that nondispersive XRFA of 0.1 to 1 mg of freeze-dried UNO₃ can have an accuracy of 0.2% in 1000 s.
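The idea of fitting the standard masses alongside the calibration parameters, with both error sources weighted consistently, can be sketched with a generic least-squares routine. The linear response model, masses, and uncertainties below are assumptions for illustration, not the VA02A code or the original data:

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical calibration data: gravimetric standard masses (mg) known to
# 0.2%, and analyzer responses with counting-statistics errors. A linear
# response r = a*m + b stands in for the actual calibration curve shape.
m_nominal = np.array([0.1, 0.25, 0.5, 0.75, 1.0])
sigma_m = 0.002 * m_nominal
response = np.array([101.0, 248.0, 497.0, 751.0, 1003.0])
sigma_r = np.sqrt(response)

def residuals(p):
    # p = [a, b, m_1..m_n]: the standard masses are fitted parameters too,
    # and both error sources enter the weighting consistently.
    a, b, m_fit = p[0], p[1], p[2:]
    res_r = (response - (a * m_fit + b)) / sigma_r  # system (counting) error
    res_m = (m_nominal - m_fit) / sigma_m           # mass measurement error
    return np.concatenate([res_r, res_m])

p0 = np.concatenate([[1000.0, 0.0], m_nominal])
fit = least_squares(residuals, p0)
a_fit, b_fit = fit.x[0], fit.x[1]
```

Treating the masses as parameters with their own residuals is what lets the known 0.2% mass error propagate into the calibration-curve uncertainties.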
Method of construction spatial transition curve
Directory of Open Access Journals (Sweden)
S.V. Didanov
2013-04-01
Purpose. The performance of rail transport (rolling stock speed, traffic safety, etc.) depends largely on the quality of the track. A special role is played by the transition curve, which ensures a smooth transition from a straight to a circular section of track. The article deals with modeling a spatial transition curve based on a parabolic distribution of curvature and torsion, continuing the authors' research on the spatial modeling of curved contours. Methodology. The spatial transition curve is constructed by numerical methods for solving nonlinear integral equations, where the initial data are the coordinates of the starting and ending points of the future curve, together with the inclination of the tangent and the deviation of the curve from the tangent plane at these points. The system is solved numerically using the partial derivatives of the equations with respect to the unknown parameters of the law of change of torsion and the length of the transition curve. Findings. The parametric equations of the spatial transition curve are obtained by finding the unknown coefficients of the parabolic distribution of curvature and torsion, as well as the spatial length of the transition curve. Originality. A method for constructing the spatial transition curve is devised, and software based on it geometrically models spatial transition curves of railway track with specified deviations of the curve from the tangent plane. Practical value. The resulting curve can be applied in any sector of the economy where a smooth transition from a straight to a circular section of a spatial curved bypass must be ensured. Examples include transition curves in the construction of railway lines, roads, and pipelines, and in the profiles of working blades of turbines and compressors, ships, planes, cars, etc.
Derivation of standard lactation curves for South African dairy cows ...
African Journals Online (AJOL)
South African cows displayed more variation in yields compared to those of Holstein cows in the Netherlands and Ireland. Season of calving had a pronounced effect on the shape of the Standard Lactation Curve, while the combination of calving age and lactation affected both the shape and level of the curves. Expected ...
Evaluation of Shape Parameter Effect on the J-R Curve of Curved CT Specimen Using Limit Load Method
Energy Technology Data Exchange (ETDEWEB)
Shin, In Hwan; Park, Chi Yong [Korea Hydro Nuclear Power Corporation, Seoul (Korea, Republic of); Seok, Chang Sung; Koo, Jae Mean [SungKyunKwan University, Suwon (Korea, Republic of)
2014-07-15
In this study, the effect of shape parameters on the J-R curves of curved CT specimens was evaluated using the limit load method. Fracture toughness tests considering the shape factors L/W and Rm/t of the specimens were also performed. Thereafter, the J-R curves of the curved CT specimens were compared using the J-integral equation proposed by ASTM (American Society for Testing and Materials) and the limit load solution. The J-R curves of the curved CT specimens were also compared with those of the CWP (curved wide plate), which is regarded as similar to a real pipe, and of standard specimens. Finally, the effectiveness of the J-R curve of each curved CT specimen was evaluated. The results of this study can be used for assessing the applicability of curved CT specimens in the accurate evaluation of the fracture toughness of real pipes.
Comparison of power curve monitoring methods
Directory of Open Access Journals (Sweden)
Cambron Philippe
2017-01-01
Performance monitoring is an important aspect of operating wind farms. This can be done through power curve monitoring (PCM) of wind turbines (WT). In past years, important work has been conducted on PCM. Various methodologies have been proposed, each one with interesting results. However, it is difficult to compare these methods because they have been developed using their respective data sets. The objective of this work is to compare some of the proposed PCM methods using common data sets. The metric used to compare the PCM methods is the time needed to detect a change in the power curve. Two power curve models are covered to establish the effect the model type has on the monitoring outcomes. Each model was tested with two control charts. Other methodologies and metrics proposed in the literature for power curve monitoring, such as areas under the power curve and the use of statistical copulas, have also been covered. Results demonstrate that model-based PCM methods are more reliable at detecting a performance change than other methodologies, and that the effectiveness of the control chart depends on the type of shift observed.
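A model-based PCM scheme of the kind compared here can be sketched as a reference power curve plus a control chart on its residuals. The turbine model, noise levels, and EWMA parameters below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical reference power curve (logistic shape vs wind speed, in kW),
# standing in for a model fitted on a healthy training period.
def power_model(v):
    return 2000.0 / (1.0 + np.exp(-(v - 9.0)))

# Healthy data: estimate the residual scatter around the model.
v_train = rng.uniform(4.0, 14.0, 500)
p_train = power_model(v_train) + rng.normal(0.0, 40.0, v_train.size)
sigma = np.std(p_train - power_model(v_train))

def ewma_alarm(residuals, lam=0.2, L=3.0):
    """EWMA control chart on residuals; returns index of first alarm, or None."""
    limit = L * sigma * np.sqrt(lam / (2.0 - lam))
    z = 0.0
    for i, r in enumerate(residuals):
        z = lam * r + (1.0 - lam) * z
        if abs(z) > limit:
            return i
    return None

# Degraded period: the turbine delivers 5% less than the model predicts;
# the comparison metric in the paper is how long detection takes.
v_mon = rng.uniform(4.0, 14.0, 500)
p_mon = 0.95 * power_model(v_mon) + rng.normal(0.0, 40.0, v_mon.size)
detection_index = ewma_alarm(p_mon - power_model(v_mon))
```

The detection-time metric falls out directly: the index of the first alarm after the shift is introduced.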
Energy Technology Data Exchange (ETDEWEB)
Miura, N.; Soneda, N. [Central Research Inst. of Electric Power Industry (Japan); Hiranuma, N. [Tokyo Electric Power Co. (Japan)
2004-07-01
The Master Curve method to determine fracture toughness in the brittle-to-ductile transition temperature range is provided in ASTM standard E 1921. In this study, the method was applied to two types of typical Japanese reactor pressure vessel steels. As a result, it was confirmed that valid reference temperatures as well as master curves could be determined based on the ASTM standard. The applicability of the statistical size scaling, as well as the propriety of the assumed statistical distribution of fracture toughness, was experimentally validated. The relative position between the master curve and the current K_IC curves was then compared and discussed. (orig.)
Standard gestational birth weight ranges and Curve in Yaounde ...
African Journals Online (AJOL)
The aim of this study was to establish standard ranges and curve of mean gestational birth weights validated by ultrasonography for the Cameroonian population in Yaoundé. This cross sectional study was carried out in the Obstetrics & Gynaecology units of 4 major hospitals in the metropolis between March 5 and ...
An Autocorrelation Term Method for Curve Fitting
Houston, Louis M.
2013-01-01
The least-squares method is the most popular method for fitting a polynomial curve to data. It is based on minimizing the total squared error between a polynomial model and the data. In this paper we develop a different approach that exploits the autocorrelation function. In particular, we use the nonzero lag autocorrelation terms to produce a system of quadratic equations that can be solved together with a linear equation derived from summing the data. There is a maximum of solutions when th...
Modeling error distributions of growth curve models through Bayesian methods.
Zhang, Zhiyong
2016-06-01
Growth curve models are widely used in the social and behavioral sciences. However, typical growth curve models often assume that the errors are normally distributed, although non-normal data may be even more common than normal data. In order to avoid possible statistical inference problems from blindly assuming normality, a general Bayesian framework is proposed to flexibly model normal and non-normal data through the explicit specification of the error distributions. A simulation study shows that when the distribution of the error is correctly specified, one can avoid the loss of efficiency in standard error estimates. A real example on the analysis of mathematical ability growth data from the Early Childhood Longitudinal Study, Kindergarten Class of 1998-99, is used to show the application of the proposed methods. Instructions and code on how to conduct growth curve analysis with both normal and non-normal error distributions using the MCMC procedure of SAS are provided.
International Nuclear Information System (INIS)
Hur, Yong; Park, Hong Sun; Koo, Jae Mean; Seok, Chang Sung; Park, Jae Sil
2009-01-01
The estimation method of the fracture resistance curve for the pipe specimen was proposed using the load ratio method for the standard specimen. For this, a calculation method for the load-CMOD curve of the pipe specimen based on the common format equation (CFE) was proposed using data from the CT specimen. The proposed method agreed well with experimental data. The J-integral value and the crack extension were calculated from the estimated load-CMOD data. The fracture resistance curve was estimated from the calculated J-integral and the crack extension. From these results, it has been shown that the proposed method is reliable for estimating the J-R curve of the pipe specimen.
SCINFI II, a program to calculate the standardization curve in liquid scintillation counting
Energy Technology Data Exchange (ETDEWEB)
Grau Carles, A.; Grau Malonda, A.
1985-07-01
A code, SCINFI II, written in BASIC, has been developed to compute the efficiency-quench standardization curve for any beta radionuclide. The free parameter method has been applied. The program requires the standardization curve for ³H and the polynomial or tabulated relations between counting efficiency and figure of merit for both ³H and the problem radionuclide. The program is applied to the computation of the counting efficiency for different values of quench when the problem radionuclide is ¹⁴C. The results of four different computation methods are compared. (Author) 17 refs.
Comparison of two methods to determine fan performance curves using computational fluid dynamics
Onma, Patinya; Chantrasmi, Tonkid
2018-01-01
This work investigates a systematic numerical approach that employs Computational Fluid Dynamics (CFD) to obtain the performance curves of a backward-curved centrifugal fan. Generating the performance curves requires a number of three-dimensional simulations with varying system loads at a fixed rotational speed. Two methods were used and their results compared to experimental data. The first method incrementally changes the mass flow rate through the inlet boundary condition, while the second method utilizes a series of meshes representing the physical damper blade at various angles. The performance curves generated by both methods are compared with experimental data acquired in accordance with the AMCA fan performance testing standard.
Standard-curve competitive RT-PCR quantification of myogenic regulatory factors in chicken embryos
Directory of Open Access Journals (Sweden)
L.E. Alvares
2003-12-01
The reverse transcription-polymerase chain reaction (RT-PCR) is the most sensitive method used to evaluate gene expression. Although many advances have been made since quantitative RT-PCR was first described, few reports deal with the mathematical bases of this technique. The aim of the present study was to develop and standardize a competitive PCR method using standard curves to quantify transcripts of the myogenic regulatory factors MyoD, Myf-5, Myogenin and MRF4 in chicken embryos. Competitor cDNA molecules were constructed for each gene under study using deletion primers, which were designed to maintain the anchorage sites for the primers used to amplify target cDNAs. Standard curves were prepared by co-amplification of different amounts of target cDNA with a constant amount of competitor. The content of specific mRNAs in embryo cDNAs was determined after PCR with a known amount of competitor and comparison to the standard curves. Transcripts of the housekeeping β-actin gene were measured to normalize the results. As predicted by the model, most of the standard curves showed a slope close to 1, while intercepts varied depending on the relative efficiency of competitor amplification. The sensitivity of the RT-PCR method permitted the detection of as few as 60 MyoD/Myf-5 molecules per reaction, but approximately 600 molecules of MRF4/Myogenin mRNAs were necessary to produce a measurable signal. A coefficient of variation of 6 to 19% was estimated for the different genes analyzed (6 to 9 repetitions). The competitive RT-PCR assay described here is sensitive, precise and allows quantification of up to 9 transcripts from a single cDNA sample.
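The core of the standard-curve calculation (fitting log signal ratio against log target amount and expecting a slope near 1, with the intercept absorbing relative amplification efficiency) can be sketched with made-up amounts and an assumed efficiency factor:

```python
import numpy as np

# Hypothetical co-amplifications: serial known amounts of target cDNA mixed
# with a constant amount of competitor; the measured signal ratio is assumed
# to scale with the molar ratio times a relative efficiency factor (0.8).
target_in = np.array([1e2, 1e3, 1e4, 1e5])   # molecules per reaction
competitor = 1e3                              # molecules, held constant
signal_ratio = 0.8 * target_in / competitor

# Standard curve in log-log space: slope ~ 1, intercept reflects the
# relative amplification efficiency of target vs competitor.
slope, intercept = np.polyfit(np.log10(target_in), np.log10(signal_ratio), 1)

# Unknown sample co-amplified with the same competitor amount:
measured_ratio = 4.0
unknown_amount = 10 ** ((np.log10(measured_ratio) - intercept) / slope)
```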
MATHEMATICAL METHODS TO DETERMINE THE INTERSECTION CURVES OF THE CYLINDERS
Directory of Open Access Journals (Sweden)
POPA Carmen
2010-07-01
The aim of this paper is to establish the intersection curves between cylinders by using the Mathematica program. This is achieved by deriving the equations of the curves and entering them into the Mathematica program. The paper discusses three right cylinders and a fourth one inclined at 45 degrees. The intersection curves can also be obtained by using the classical methods of descriptive geometry.
Studying the method of linearization of exponential calibration curves
International Nuclear Information System (INIS)
Bunzh, Z.A.
1989-01-01
The results of a study of the method for linearization of exponential calibration curves are given. The calibration technique is described, and the proposed method is compared with piecewise-linear approximation and power-series expansion.
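For reference, the basic linearization the record refers to takes logarithms of an exponential calibration curve y = a·exp(b·x) so that ordinary linear least squares recovers the parameters. A noise-free sketch:

```python
import numpy as np

# Noise-free exponential calibration curve y = a * exp(b * x); taking logs
# gives ln(y) = ln(a) + b*x, which ordinary least squares fits exactly.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = 2.5 * np.exp(0.6 * x)

b, log_a = np.polyfit(x, np.log(y), 1)
a = np.exp(log_a)
```

With noisy data the log transform reweights the errors, which is one reason such linearization is compared against piecewise-linear and series-expansion alternatives.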
Maximum likelihood decay curve fits by the simplex method
International Nuclear Information System (INIS)
Gregorich, K.E.
1991-01-01
A multicomponent decay curve analysis technique has been developed and incorporated into the decay curve fitting computer code, MLDS (maximum likelihood decay by the simplex method). The fitting criteria are based on the maximum likelihood technique for decay curves made up of time binned events. The probabilities used in the likelihood functions are based on the Poisson distribution, so decay curves constructed from a small number of events are treated correctly. A simple utility is included which allows the use of discrete event times, rather than time-binned data, to make maximum use of the decay information. The search for the maximum in the multidimensional likelihood surface for multi-component fits is performed by the simplex method, which makes the success of the iterative fits extremely insensitive to the initial values of the fit parameters and eliminates the problems of divergence. The simplex method also avoids the problem of programming the partial derivatives of the decay curves with respect to all the variable parameters, which makes the implementation of new types of decay curves straightforward. Any of the decay curve parameters can be fixed or allowed to vary. Asymmetric error limits for each of the free parameters, which do not consider the covariance of the other free parameters, are determined. A procedure is presented for determining the error limits which contain the associated covariances. The curve fitting procedure in MLDS can easily be adapted for fits to other curves with any functional form. (orig.)
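The combination described here (a Poisson likelihood for time-binned events, minimized by the simplex method) can be sketched with a general-purpose optimizer. The decay model, bin layout, and starting values are invented for illustration and stand in for MLDS itself:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Simulated time-binned decay events: one exponential component plus a flat
# background (the rates, decay constant, and binning are all invented).
edges = np.linspace(0.0, 10.0, 51)
t = 0.5 * (edges[:-1] + edges[1:])   # bin centers
dt = np.diff(edges)
mu_true = (500.0 * np.exp(-0.7 * t) + 5.0) * dt
counts = rng.poisson(mu_true)

def neg_log_likelihood(p):
    rate, lam, bkg = p
    if rate <= 0.0 or lam <= 0.0 or bkg <= 0.0:
        return np.inf
    mu = (rate * np.exp(-lam * t) + bkg) * dt
    # Poisson log-likelihood up to a constant: sum(n*log(mu) - mu)
    return np.sum(mu - counts * np.log(mu))

# Simplex (Nelder-Mead) search: derivative-free, tolerant of rough guesses,
# which is the insensitivity to initial values the abstract emphasizes.
fit = minimize(neg_log_likelihood, x0=[300.0, 0.5, 2.0],
               method="Nelder-Mead", options={"maxiter": 2000, "maxfev": 2000})
rate_fit, lam_fit, bkg_fit = fit.x
```

Because the simplex never needs derivatives, swapping in a different decay-curve shape only requires editing `neg_log_likelihood`, mirroring the extensibility claim in the abstract.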
Qualitative Comparison of Contraction-Based Curve Skeletonization Methods
Sobiecki, André; Yasan, Haluk C.; Jalba, Andrei C.; Telea, Alexandru C.
2013-01-01
In recent years, many new methods have been proposed for extracting curve skeletons of 3D shapes, using a mesh-contraction principle. However, it is still unclear how these methods perform with respect to each other, and with respect to earlier voxel-based skeletonization methods, from the viewpoint
SCINFI, a program to calculate the standardization curve in liquid scintillation counting
International Nuclear Information System (INIS)
Grau Carles, A.; Grau Malonda, A.
1984-01-01
A code, SCINFI, written in BASIC, was developed to compute the efficiency-quench standardization curve for any radionuclide. The program requires the standardization curve for ³H and the polynomial relations between counting efficiency and figure of merit for both ³H and the problem radionuclide (e.g. ¹⁴C). The program is applied to the computation of the efficiency-quench standardization curve for ¹⁴C. Five different liquid scintillation spectrometers and two scintillator solutions have been checked. The computation results are compared with the experimental values obtained with a set of ¹⁴C standardized samples. (author)
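The free parameter method behind SCINFI can be sketched as: invert the ³H efficiency-quench relation to obtain the figure of merit, then evaluate the problem radionuclide's relation at that figure of merit. The two efficiency functions below are made-up stand-ins for the real polynomial or tabulated data:

```python
import numpy as np

# Made-up efficiency vs figure-of-merit relations for 3H and 14C; the real
# program uses polynomial or tabulated data for these two functions.
def eff_3h(m):
    return 0.60 * (1.0 - np.exp(-m / 4.0))

def eff_14c(m):
    return 0.97 * (1.0 - np.exp(-m / 1.2))

def eff_14c_from_3h(e3h):
    """Invert the 3H relation numerically, then evaluate the 14C relation."""
    grid = np.linspace(0.01, 40.0, 4001)
    m = grid[np.argmin(np.abs(eff_3h(grid) - e3h))]
    return eff_14c(m)

# A quenched sample whose 3H standardization curve gives 40% efficiency:
e14c = eff_14c_from_3h(0.40)
```

The figure of merit acts as the quench-independent "free parameter" linking the tracer (³H) curve to the problem radionuclide's curve.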
Herd levels and standard lactation curves for South African Jersey
African Journals Online (AJOL)
[Table 1 fragment, protein row: 34.2, 370.7, 148.1, 37.6, 31.3, 482.8, 191.1, 53.8.] According to the standard deviations in Table 1, much more variation exists for 305-day yields of Holstein cows in comparison with Jersey cows, resulting in upper limits of herd levels ranging from 3487.7 kg to more than 11 219.2 kg for adjusted 305-day milk yield, ...
Construction of molecular potential energy curves by an optimization method
Wang, J.; Blake, A. J.; McCoy, D. G.; Torop, L.
1991-01-01
A technique for determining the potential energy curves of diatomic molecules from measurements of diffuse or continuum spectra is presented. It is based on a numerical procedure that minimizes the difference between the calculated spectra and the experimental measurements, and it can be used in cases where other techniques, such as the conventional RKR method, are not applicable. With the aid of suitable spectral data, the associated dipole electronic transition moments can be obtained simultaneously. The method is illustrated by modeling the "longest band" of molecular oxygen to extract the E ³Σᵤ⁻ and B ³Σᵤ⁻ potential curves in analytical form.
A non-iterative method for fitting decay curves with background
International Nuclear Information System (INIS)
Mukoyama, T.
1982-01-01
A non-iterative method for fitting a decay curve with background is presented. The sum of an exponential function and a constant term is linearized by use of a difference equation, and the parameters are determined by standard linear least-squares fitting. The validity of the present method has been tested against pseudo-experimental data. (orig.)
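A sketch of this non-iterative idea for uniformly sampled data: the model y_i = A·exp(-λ·t_i) + B satisfies the difference equation y_{i+1} = r·y_i + B·(1 - r) with r = exp(-λ·Δt), which is linear in its two unknowns (a noise-free illustration, not the paper's exact formulation):

```python
import numpy as np

# Noise-free decay-plus-background samples on a uniform time grid:
#   y_i = A*exp(-lam*t_i) + B
# The shifted sequence obeys y_{i+1} = r*y_i + B*(1 - r), r = exp(-lam*dt),
# so one linear least-squares fit of y[1:] against y[:-1] gives everything.
dt = 0.5
t = np.arange(20) * dt
y = 1000.0 * np.exp(-0.3 * t) + 50.0

r, c = np.polyfit(y[:-1], y[1:], 1)     # slope r and intercept c = B*(1 - r)
lam = -np.log(r) / dt
B = c / (1.0 - r)
A = np.mean((y - B) * np.exp(lam * t))  # recover the amplitude
```

No iteration or starting guess is needed, which is the advantage over conventional nonlinear exponential fitting.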
Curve fitting methods for solar radiation data modeling
Energy Technology Data Exchange (ETDEWEB)
Karim, Samsul Ariffin Abdul, E-mail: samsul-ariffin@petronas.com.my; Singh, Balbir Singh Mahinder, E-mail: balbir@petronas.com.my [Department of Fundamental and Applied Sciences, Faculty of Sciences and Information Technology, Universiti Teknologi PETRONAS, Bandar Seri Iskandar, 31750 Tronoh, Perak Darul Ridzuan (Malaysia)]
2014-10-24
This paper studies the use of several types of curve fitting methods to smooth global solar radiation data. After the data have been fitted, a mathematical model of global solar radiation is developed. The error was measured using goodness-of-fit statistics such as the root mean square error (RMSE) and the value of R². The best fitting methods will be used as a starting point for the construction of a mathematical model of the solar radiation received at Universiti Teknologi PETRONAS (UTP), Malaysia. Numerical results indicate that Gaussian fitting and sine fitting (both with two terms) give better results than the other fitting methods.
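A minimal illustration of the goodness-of-fit statistics named in the abstract, computed for a bell-shaped irradiance model fitted to hypothetical data (not the UTP measurements). The one-term log-quadratic Gaussian fit is a stand-in for the two-term fits the paper reports as best.

```python
import numpy as np

# Hypothetical daylight irradiance profile with measurement noise
hours = np.linspace(6.0, 18.0, 25)
irr = 900.0 * np.exp(-0.5 * ((hours - 12.0) / 2.5) ** 2)   # W/m^2
irr_noisy = irr + np.random.default_rng(0).normal(0.0, 20.0, irr.size)

# One-term Gaussian fit via least squares on log(y) vs a quadratic in t
# (the log of a Gaussian is a parabola)
mask = irr_noisy > 1.0
coef = np.polyfit(hours[mask], np.log(irr_noisy[mask]), 2)
fit = np.exp(np.polyval(coef, hours))

# Goodness-of-fit statistics used in the paper: RMSE and R^2
rmse = np.sqrt(np.mean((irr_noisy - fit) ** 2))
ss_res = np.sum((irr_noisy - fit) ** 2)
ss_tot = np.sum((irr_noisy - np.mean(irr_noisy)) ** 2)
r2 = 1.0 - ss_res / ss_tot
```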
THE CPA QUALIFICATION METHOD BASED ON THE GAUSSIAN CURVE FITTING
Directory of Open Access Journals (Sweden)
M.T. Adithia
2015-01-01
The Correlation Power Analysis (CPA) attack is an attack on cryptographic devices, especially smart cards. The results of the attack are correlation traces. Based on the correlation traces, an evaluation is done to observe whether significant peaks appear in the traces or not. The evaluation is done manually, by experts. If significant peaks appear, the smart card is not considered secure, since it is assumed that the secret key is revealed. We develop a method that objectively detects peaks and decides which peaks are significant. We conclude that using the Gaussian curve fitting method, the subjective qualification of peak significance can be made objective, so that better decisions can be taken by security experts. We also conclude that the Gaussian curve fitting method is able to show the influence of peak size, especially width and height, on the significance of a particular peak.
Wind turbine performance: Methods and criteria for reliability of measured power curves
Energy Technology Data Exchange (ETDEWEB)
Griffin, D.A. [Advanced Wind Turbines Inc., Seattle, WA (United States)
1996-12-31
In order to evaluate the performance of prototype turbines, and to quantify incremental changes in performance through field testing, Advanced Wind Turbines (AWT) has been developing methods and requirements for power curve measurement. In this paper, field test data is used to illustrate several issues and trends which have resulted from this work. Averaging and binning processes, data hours per wind-speed bin, wind turbulence levels, and anemometry methods are all shown to have significant impacts on the resulting power curves. Criteria are given by which the AWT power curves show a high degree of repeatability, and these criteria are compared and contrasted with current published standards for power curve measurement. 6 refs., 5 figs., 5 tabs.
Historical Cost Curves for Hydrogen Masers and Cesium Beam Frequency and Timing Standards
Remer, D. S.; Moore, R. C.
1985-01-01
Historical cost curves were developed for hydrogen masers and cesium beam standards used for frequency and timing calibration in the Deep Space Network. These curves may be used to calculate the cost of future hydrogen masers or cesium beam standards in either future or current dollars. Relative to the National Aeronautics and Space Administration inflation index, cesium beam standards have been decreasing in cost by about 2.3% per year since 1966, and hydrogen masers by about 0.8% per year since 1978.
Analysis of RIA standard curve by log-logistic and cubic log-logit models
International Nuclear Information System (INIS)
Yamada, Hideo; Kuroda, Akira; Yatabe, Tami; Inaba, Taeko; Chiba, Kazuo
1981-01-01
In order to improve goodness-of-fit in RIA standard curve analysis, programs for computing log-logistic and cubic log-logit models were written in BASIC on a P-6060 personal computer (Olivetti). An iterative least squares method based on a Taylor series was applied for non-linear estimation of the logistic and log-logistic models. Here ''log-logistic'' represents Y = (a − d)/(1 + (log(X)/c)^b) + d. As weights, either 1, 1/var(Y) or 1/σ² were used in the logistic or log-logistic models, and either Y²(1 − Y)², Y²(1 − Y)²/var(Y), or Y²(1 − Y)²/σ² were used in the quadratic or cubic log-logit models. The term var(Y) represents the square of the pure error, and σ² represents the estimated variance calculated using the equation log(σ² + 1) = log(A) + J·log(Y). As indicators of goodness-of-fit, MSL/Sₑ², CMD% and WRV (see text) were used. Better regression was obtained for alpha-fetoprotein by log-logistic than by logistic. The cortisol standard curve was much better fitted with cubic log-logit than with quadratic log-logit. The predicted precision of the AFP standard curve was below 5% with log-logistic analysis instead of 8% with logistic analysis. The predicted precision obtained using cubic log-logit was about five times lower than that with quadratic log-logit. The importance of selecting good models in RIA data processing is stressed in conjunction with the intrinsic precision of the radioimmunoassay system indicated by the predicted precision. (author)
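The four-parameter log-logistic model quoted in the abstract can be fitted by iterative least squares. A sketch with synthetic dose-response data (not the AFP or cortisol assays), using SciPy's bounded trust-region fitter as a stand-in for the Taylor-series iteration:

```python
import numpy as np
from scipy.optimize import curve_fit

# The abstract's model: Y = (a - d) / (1 + (log(X)/c)^b) + d
def log_logistic(x, a, b, c, d):
    return (a - d) / (1.0 + (np.log(x) / c) ** b) + d

x = np.array([1, 2, 5, 10, 20, 50, 100, 200, 500], dtype=float)  # dose
y = log_logistic(x, 1.0, 4.0, 3.0, 0.05)   # synthetic response values

# Iterative (non-linear) least squares; bounds keep b, c positive so the
# power term stays well defined during iteration
popt, _ = curve_fit(log_logistic, x, y, p0=(0.9, 3.0, 2.5, 0.1),
                    bounds=([0.5, 1.0, 1.0, 0.0], [2.0, 10.0, 10.0, 0.5]))
```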
Using learning curves and confidence intervals in a time study for the calculation of standard times
Directory of Open Access Journals (Sweden)
Mitzy Natalia Roncancio Avila
2017-10-01
Introduction: This article explores the use of learning curves and confidence intervals in a time study carried out on a scale assembly line during a laboratory practice at the University of La Salle. Objective: The objective of this research is to show the use of confidence intervals and learning curves for the identification of stable processes and the subsequent standardization of times. Methodology: The methodology consists of two phases: analysis for the time study, and establishment of standard times. The first consists of the calculation of the number of cycles, removal of atypical data, and the use of the curves to determine the processes suitable for standardization; the second phase is the calculation of the standard times. Results: The analysis showed that it is only possible to standardize two of the five processes of the system under study, because of their variability. Conclusions: Given the research, it is possible to conclude that a process should be standardized only if it presents stable behavior with respect to the normal rhythm of work, which is shown in the learning curve; otherwise, the process will obtain only partial standard times.
Experimental Method for Plotting S-N Curve with a Small Number of Specimens
Directory of Open Access Journals (Sweden)
Strzelecki Przemysław
2016-12-01
The study presents two approaches to plotting an S-N curve based on experimental results. The first approach is commonly used by researchers and presented in detail in many studies and standard documents. The model uses a linear regression whose parameters are estimated using the least squares method; a staircase method is used for the unlimited-fatigue-life criterion. The second model combines the S-N curve defined as a straight line with a model of the random occurrence of the fatigue limit, and a maximum likelihood method is used to estimate the S-N curve parameters. Fatigue data for C45+C steel obtained in a torsional bending test were used to compare the estimated S-N curves. For pseudo-random numbers generated using the Mersenne Twister algorithm, the S-N curve estimated from 10 experimental results with the second model predicts fatigue life within a scatter band of factor 3. The result gives a good approximation, especially considering the time required to plot the S-N curve.
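The first approach, a straight-line fit of log fatigue life against log stress by least squares, can be sketched as follows. The data points are illustrative, not the C45+C torsional bending results.

```python
import numpy as np

# Illustrative S-N data: stress amplitude S (MPa) vs cycles to failure N
S = np.array([300.0, 280.0, 260.0, 240.0, 220.0, 200.0])
N = np.array([5e4, 9e4, 1.8e5, 4e5, 9e5, 2.2e6])

# The finite-life branch as a straight line in log-log coordinates:
# log10(N) = a + b * log10(S), with a, b from least squares
b, a = np.polyfit(np.log10(S), np.log10(N), 1)

def predicted_life(stress_mpa):
    """Predicted cycles to failure at a given stress amplitude."""
    return 10.0 ** (a + b * np.log10(stress_mpa))
```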
Directory of Open Access Journals (Sweden)
Danielle L. Morton
2017-01-01
Objective. Infants with intestinal failure or feeding intolerance are nutritionally compromised and are at risk for extrauterine growth restriction. The aim of the study was to evaluate growth velocities of infants with intestinal failure and feeding intolerance for the first three months of age and to determine growth percentiles at birth and at 40-week postmenstrual age (PMA). Methods. A chart review of infants followed by the Texas Children’s Hospital Intestinal Rehabilitation Team was conducted from April 2012 to October 2014. Weekly weight, length, and head circumference growth velocities were calculated. Growth data were compared to Olsen growth curves to determine exact percentiles. Results. Data from infants (n=164) revealed that average growth velocities of 3-month-old infants (weight gain, 19.97 g/d; length, 0.81 cm/week; head circumference, 0.52 cm/week) fluctuated and all were below expected norms. At discharge or death, average growth velocities had further decreased (length, 0.69 cm/week; head circumference, 0.45 cm/week) except for weight, which showed a slight increase (weight, 20.56 g/d). Weight, length, and head circumference percentiles significantly decreased from birth to 40-week PMA (P<0.001). Conclusions. Growth of infants with intestinal failure or feeding intolerance did not follow standard growth curves.
Standardization of 57Co using different methods of LNMRI
International Nuclear Information System (INIS)
Rezende, E.A.; Lopes, R.T.; Silva, C.J. da; Poledna, R.; Silva, R.L. da; Tauhata, L.
2015-01-01
The activity of a 57Co solution was determined using four different LNMRI measurement methods. The solution was standardized by the live-timed anti-coincidence method and the sum-peak method; the efficiency curve and standard-sample comparison methods were also used in this comparison. The results and their measurement uncertainties demonstrate the equivalence of these methods. As an additional contribution, the gamma emission probabilities of 57Co were also determined. (author)
A volume-based method for denoising on curved surfaces
Biddle, Harry
2013-09-01
We demonstrate a method for removing noise from images or other data on curved surfaces. Our approach relies on in-surface diffusion: we formulate both the Gaussian diffusion and Perona-Malik edge-preserving diffusion equations in a surface-intrinsic way. Using the Closest Point Method, a recent technique for solving partial differential equations (PDEs) on general surfaces, we obtain a very simple algorithm where we merely alternate a time step of the usual Gaussian diffusion (and similarly Perona-Malik) in a small 3D volume containing the surface with an interpolation step. The method uses a closest point function to represent the underlying surface and can treat very general surfaces. Experimental results include image filtering on smooth surfaces, open surfaces, and general triangulated surfaces. © 2013 IEEE.
Estimating wildlife activity curves: comparison of methods and sample size.
Lashley, Marcus A; Cove, Michael V; Chitwood, M Colter; Penido, Gabriel; Gardner, Beth; DePerno, Chris S; Moorman, Chris E
2018-03-08
Camera traps and radiotags commonly are used to estimate animal activity curves. However, little empirical evidence has been provided to validate whether they produce similar results. We compared activity curves from two common camera trapping techniques to those from radiotags with four species that varied substantially in size (~1 kg-~50 kg), diet (herbivore, omnivore, carnivore), and mode of activity (diurnal and crepuscular). Also, we sub-sampled photographs of each species with each camera trapping technique to determine the minimum sample size needed to maintain accuracy and precision of estimates. Camera trapping estimated greater activity during feeding times than radiotags in all but the carnivore, likely reflective of the close proximity of foods readily consumed by all species except the carnivore (i.e., corn bait or acorns). However, additional analyses still indicated both camera trapping methods produced relatively high overlap and correlation to radiotags. Regardless of species or camera trapping method, mean overlap increased and overlap error decreased rapidly as sample sizes increased until an asymptote near 100 detections which we therefore recommend as a minimum sample size. Researchers should acknowledge that camera traps and radiotags may estimate the same mode of activity but differ in their estimation of magnitude in activity peaks.
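Activity-curve comparisons of this kind are commonly summarized by an overlap coefficient, the integral of the pointwise minimum of the two activity densities. A histogram-based sketch with synthetic detection times; the study's own estimator may differ (e.g. kernel-based):

```python
import numpy as np

# Synthetic detection times (hours of day) for one species, two methods
rng = np.random.default_rng(1)
camera = rng.normal(6.0, 1.5, 300) % 24      # crepuscular peak near dawn
radiotag = rng.normal(6.5, 1.5, 300) % 24    # slightly shifted peak

# Histogram density estimates of the two activity curves
bins = np.linspace(0.0, 24.0, 49)
f, _ = np.histogram(camera, bins=bins, density=True)
g, _ = np.histogram(radiotag, bins=bins, density=True)

# Overlap coefficient: integral of the pointwise minimum
# (1.0 = identical activity curves, 0.0 = no shared activity)
overlap = np.sum(np.minimum(f, g)) * (bins[1] - bins[0])
```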
International Nuclear Information System (INIS)
Ha-Kawa, Sang Kil; Suga, Yutaka; Kouda, Katsuyasu; Ikeda, Koshi; Tanaka, Yoshimasa
1997-01-01
We investigated a curve-fitting method for the rate of blood retention of 99mTc-galactosyl serum albumin (GSA) as a substitute for the blood sampling method. Seven healthy volunteers and 27 patients with liver disease underwent 99mTc-GSA scanning. After normalization of the y-intercept as 100 percent, a biexponential regression curve for the precordial time-activity curve provided the percent injected dose (%ID) of 99mTc-GSA in the blood without blood sampling. The discrepancy between the %ID obtained by the curve-fitting method and that by multiple blood samples was minimal in normal volunteers: 3.1±2.1% (mean±standard deviation, n=77 samplings). A slightly greater discrepancy was observed in patients with liver disease (7.5±6.1%, n=135 samplings). The %ID at 15 min after injection obtained from the fitted curve was significantly greater in patients with liver cirrhosis than in the controls (53.2±11.6%, n=13; vs. 31.9±2.8%, n=7, p 99mTc-GSA and the plasma retention rate for indocyanine green (r=-0.869, p 99mTc-GSA and could be a substitute for the blood sampling method. (author)
Semiclassical methods in curved spacetime and black hole thermodynamics
International Nuclear Information System (INIS)
Camblong, Horacio E.; Ordonez, Carlos R.
2005-01-01
Improved semiclassical techniques are developed and applied to a treatment of a real scalar field in a D-dimensional gravitational background. This analysis, leading to a derivation of the thermodynamics of black holes, is based on the simultaneous use of (i) a near-horizon description of the scalar field in terms of conformal quantum mechanics; (ii) a novel generalized WKB framework; and (iii) curved-spacetime phase-space methods. In addition, this improved semiclassical approach is shown to be asymptotically exact in the presence of hierarchical expansions of a near-horizon type. Most importantly, this analysis further supports the claim that the thermodynamics of black holes is induced by their near-horizon conformal invariance
Schipper, H.R.; Janssen, B.
2011-01-01
Free form architecture with complex geometry brings along new challenges for manufacturers of building components. This paper describes the application of structural mechanics to predict the behaviour of an elastic mould surface, used as formwork for the manufacturing of double curved panels in
A Method of Timbre-Shape Synthesis Based On Summation of Spherical Curves
DEFF Research Database (Denmark)
Putnam, Lance Jonathan
2014-01-01
for simultaneous production of sonic tones and graphical curves based on additive synthesis of spherical curves. The spherical curves are generated from a sequence of elemental 3D rotations, similar to a Euler rotation. We show that this method can produce many important two- and three-dimensional curves directly...
Hong Shen
2011-01-01
The concepts of curve profile, curve intercept, curve intercept density, curve profile area density, intersection density in containing intersection (or intersection density relative to an intersection reference), curve profile intersection density in surface (or curve intercept intersection density relative to the intersection of the containing curve), and curve profile area density in surface (AS) were defined. AS expresses the amount of curve profile area of the Y phase per unit containing surface area, S...
Spencer, Phoebe R; Sanders, Katherine A; Judge, Debra S
2018-02-01
Population-specific growth references are important in understanding local growth variation, especially in developing countries where child growth is poor and the need for effective health interventions is high. In this article, we use mixed longitudinal data to calculate the first growth curves for rural East Timorese children to identify where, during development, deviation from the international standards occurs. Over an eight-year period, 1,245 children from two ecologically distinct rural areas of Timor-Leste were measured a total of 4,904 times. We compared growth to the World Health Organization (WHO) standards using z-scores, and modeled height and weight velocity using the SuperImposition by Translation And Rotation (SITAR) method. Using the Generalized Additive Model for Location, Scale and Shape (GAMLSS) method, we created the first growth curves for rural Timorese children for height, weight and body mass index (BMI). Relative to the WHO standards, children show early-life growth faltering, and stunting throughout childhood and adolescence. The median height and weight for this population tracks below the WHO fifth centile. Males have poorer growth than females in both z-BMI (p = .001) and z-height-for-age (p = .018) and, unlike females, continue to grow into adulthood. This is the most comprehensive investigation to date of rural Timorese children's growth, and the growth curves created may potentially be used to identify future secular trends in growth as the country develops. We show significant deviation from the international standard that becomes most pronounced at adolescence, similar to the growth of other Asian populations. Males and females show different growth responses to challenging conditions in this population. © 2017 Wiley Periodicals, Inc.
International Nuclear Information System (INIS)
Ros, F C; Sidek, L M; Desa, M N; Arifin, K; Tosaka, H
2013-01-01
The purposes of stage-discharge curves vary, from water quality studies to flood modelling studies to the projection of climate change scenarios. As the bed of the river often changes due to the annual monsoon seasons, and sometimes due to massive floods, the capacity of the river changes, causing shifting control to occur. This study proposes to use historical flood event data from 1960 to 2009 to calculate the stage-discharge curve of Guillemard Bridge, located on Sg. Kelantan. Regression analysis was done to check the quality of the data and examine the correlation between the two variables, Q and H. The mean values of the two variables were then adopted to find 'a', the difference between zero gauge height and the level of zero flow, together with K and 'n', to fit the rating curve equation and finally plot the stage-discharge rating curve. Regression analysis of the historical flood data indicates that 91 percent of the original uncertainty is explained by the analysis, with a standard error of 0.085.
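A sketch of the fitting procedure described above, assuming the common rating-curve form Q = K(H − a)^n: for a trial offset 'a', log Q versus log(H − a) is linear, so K and n follow from least squares, and 'a' is chosen by a simple grid search maximizing the log-log linearity. The data are synthetic, not the Guillemard Bridge record.

```python
import numpy as np

# Synthetic stage H (m) and discharge Q (m^3/s) following Q = K*(H - a)^n
H = np.array([2.0, 2.5, 3.0, 4.0, 5.0, 6.5, 8.0])
Q = 4.0 * (H - 1.2) ** 1.6

best = None
for a_try in np.linspace(0.0, H.min() - 0.1, 50):
    x, y = np.log(H - a_try), np.log(Q)
    r = np.corrcoef(x, y)[0, 1]            # log-log linearity for this 'a'
    if best is None or r > best[0]:
        n, logK = np.polyfit(x, y, 1)      # slope n, intercept log(K)
        best = (r, a_try, np.exp(logK), n)

r, a, K, n = best                          # fitted rating-curve parameters
```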
Directory of Open Access Journals (Sweden)
María Ballester
BACKGROUND: Real-time quantitative PCR (qPCR) is still the gold-standard technique for gene-expression quantification. Recent technological advances of this method allow for high-throughput gene-expression analysis, without the limitations of sample space and reagent use. However, non-commercial and user-friendly software for the management and analysis of these data is not available. RESULTS: The recently developed commercial microarrays allow for the drawing of standard curves of multiple assays using the same n-fold diluted samples. Data Analysis Gene (DAG) Expression software has been developed to perform high-throughput gene-expression data analysis using standard curves for relative quantification and one or multiple reference genes for sample normalization. We discuss the application of DAG Expression in the analysis of data from an experiment performed with Fluidigm technology, in which 48 genes and 115 samples were measured. Furthermore, the quality of our analysis was tested and compared with other available methods. CONCLUSIONS: DAG Expression is a freely available software that permits the automated analysis and visualization of high-throughput qPCR. A detailed manual and a demo-experiment are provided within the DAG Expression software at http://www.dagexpression.com/dage.zip.
International Nuclear Information System (INIS)
Nakagawa, Yasuaki
1996-01-01
The methods for testing permanent magnets stipulated in the usual industrial standards are so-called closed magnetic circuit methods, which employ a loop tracer using an iron-core electromagnet. If the coercivity exceeds the highest magnetic field generated by the electromagnet, full hysteresis curves cannot be obtained. In the present work, magnetic fields up to 15 T were generated by a high-power water-cooled magnet, and the magnetization was measured by an induction method with an open magnetic circuit, in which the effect of the demagnetizing field must be taken into account. Various rare earth magnet materials such as sintered or bonded Sm-Co and Nd-Fe-B were provided by a number of manufacturers. Hysteresis curves were measured for cylindrical samples 10 mm in diameter and 2 mm, 3.5 mm, 5 mm, 14 mm or 28 mm in length. Correction for the demagnetizing field is rather difficult because of its non-uniformity. Roughly speaking, a mean demagnetizing factor for soft magnetic materials can be used for the correction, although the application of this factor to hard magnetic materials is hardly justified. Thus the dimensions of the sample should be specified when data obtained by the open magnetic circuit method are used as industrial standards. (author)
Radiation reaction in curved space-time:. local method
Gal'Tsov, Dmitri; Spirin, Pavel; Staub, Simona
Although consensus seems to exist about the validity of equations accounting for radiation reaction in curved space-time, their previous derivations were criticized recently as not fully satisfactory: some ambiguities were noticed in the procedure of integration of the field momentum over the tube surrounding the world-line. To avoid these problems we suggest a purely local derivation dealing with the field quantities defined only on the world-line. We consider point particle interacting with scalar, vector (electromagnetic) and linearized gravitational fields in the (generally non-vacuum) curved space-time. To properly renormalize the self-action in the gravitational case, we use a manifestly reparameterization-invariant formulation of the theory. Scalar and vector divergences are shown to cancel for a certain ratio of the corresponding charges. We also report on a modest progress in extending the results for the gravitational radiation reaction to the case of non-vacuum background.
Directory of Open Access Journals (Sweden)
Sylvie Troncale
MOTIVATION: Reverse phase protein array (RPPA) is a powerful dot-blot technology that allows studying protein expression levels as well as post-translational modifications in a large number of samples simultaneously. Yet, correct interpretation of RPPA data has remained a major challenge for its broad-scale application and its translation into clinical research. Satisfying quantification tools are available to assess a relative protein expression level from a serial dilution curve. However, appropriate tools allowing the normalization of the data for external sources of variation are currently missing. RESULTS: Here we propose a new method, called NormaCurve, that allows simultaneous quantification and normalization of RPPA data. For this, we modified the quantification method SuperCurve in order to include normalization for (i) background fluorescence, (ii) variation in the total amount of spotted protein and (iii) spatial bias on the arrays. Using a spike-in design with a purified protein, we test the capacity of different models to properly estimate normalized relative expression levels. The best performing model, NormaCurve, takes into account a negative control array without primary antibody, an array stained with a total protein stain and spatial covariates. We show that this normalization is reproducible and we discuss the number of serial dilutions and the number of replicates that are required to obtain robust data. We thus provide a ready-to-use method for reliable and reproducible normalization of RPPA data, which should facilitate the interpretation and the development of this promising technology. AVAILABILITY: The raw data, the scripts and the normacurve package are available at the following web site: http://microarrays.curie.fr.
Directory of Open Access Journals (Sweden)
Wenting Luo
2016-04-01
Pavement horizontal curves are designed to serve as transitions between straight segments, and their presence may cause a series of driving-related safety issues for motorists. Since traditional methods for curve geometry investigation are recognized to be time consuming, labor intensive, and inaccurate, this study attempts to develop a method that can automatically conduct horizontal curve identification and measurement at the network level. The digital highway data vehicle (DHDV) was utilized for data collection, in which the three Euler angles, driving speed, and acceleration of the survey vehicle were measured with an inertial measurement unit (IMU). The 3D profiling data used for cross slope calibration were obtained with PaveVision3D Ultra technology at 1 mm resolution. In this study, curve identification was based on the variation of the heading angle, and the curve radius was calculated with a kinematic method, a geometry method, and a lateral acceleration method. In order to verify the accuracy of the three methods, an analysis of variance (ANOVA) test was applied using the control variable of curve radius measured by field test. Based on the measured curve radius, a curve safety analysis model was used to predict crash rates and safe driving speeds at horizontal curves. Finally, a case study on a 4.35 km road segment demonstrated that the proposed method could efficiently conduct network-level analysis.
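Two of the radius estimates named above reduce to simple kinematic and geometric formulas. The function names and inputs below are assumptions for illustration, not the DHDV processing code:

```python
# Radius from the lateral-acceleration method: R = v^2 / a_lat
def radius_lateral_acceleration(speed_mps, a_lat_mps2):
    return speed_mps ** 2 / a_lat_mps2

# Radius from heading-angle variation: R = arc length / heading change
def radius_heading_angle(arc_length_m, heading_change_rad):
    return arc_length_m / heading_change_rad

# A vehicle at 20 m/s experiencing 1.0 m/s^2 lateral acceleration ...
r_kinematic = radius_lateral_acceleration(20.0, 1.0)
# ... traversing the same curve over 100 m with 0.25 rad heading change
r_geometric = radius_heading_angle(100.0, 0.25)
```

With consistent measurements, the two estimates agree, which is the kind of cross-check the ANOVA comparison in the study formalizes.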
Log-cubic method for generation of soil particle size distribution curve.
Shang, Songhao
2013-01-01
Particle size distribution (PSD) is a fundamental physical property of soils. Traditionally, the PSD curve was generated by hand from limited data of particle size analysis, which is subjective and may lead to significant uncertainty in the freehand PSD curve and graphically estimated cumulative particle percentages. To overcome these problems, a log-cubic method was proposed for the generation of PSD curve based on a monotone piecewise cubic interpolation method. The log-cubic method and commonly used log-linear and log-spline methods were evaluated by the leave-one-out cross-validation method for 394 soil samples extracted from UNSODA database. Mean error and root mean square error of the cross-validation show that the log-cubic method outperforms two other methods. What is more important, PSD curve generated by the log-cubic method meets essential requirements of a PSD curve, that is, passing through all measured data and being both smooth and monotone. The proposed log-cubic method provides an objective and reliable way to generate a PSD curve from limited soil particle analysis data. This method and the generated PSD curve can be used in the conversion of different soil texture schemes, assessment of grading pattern, and estimation of soil hydraulic parameters and erodibility factor.
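A sketch of the log-cubic idea using SciPy's monotone piecewise-cubic (PCHIP) interpolator on log-transformed particle size: the resulting curve passes through every measured point and stays smooth and monotone. The sample data are illustrative, not drawn from the UNSODA database.

```python
import numpy as np
from scipy.interpolate import PchipInterpolator

# Measured points: particle size (mm) vs cumulative percentage finer
size_mm = np.array([0.002, 0.02, 0.05, 0.25, 0.5, 2.0])
cum_pct = np.array([12.0, 30.0, 45.0, 70.0, 85.0, 100.0])

# Monotone piecewise cubic in log(size): exact at the data, smooth,
# and monotone between points, as a PSD curve must be
psd_curve = PchipInterpolator(np.log10(size_mm), cum_pct)

def percent_finer(d_mm):
    """Cumulative percentage of particles finer than d_mm."""
    return float(psd_curve(np.log10(d_mm)))
```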
International Nuclear Information System (INIS)
Lott, B.; Escande, L.; Larsson, S.; Ballet, J.
2012-01-01
Here, we present a method enabling the creation of constant-uncertainty/constant-significance light curves from the data of the Fermi Large Area Telescope (LAT). The adaptive-binning method enables more information to be encapsulated within the light curve than the fixed-binning method. Although primarily developed for blazar studies, it can be applied to any source. Furthermore, this method allows the starting and ending times of each interval to be calculated in a simple and quick way during a first step. The mean flux and spectral index (assuming the spectrum is a power-law distribution) in each interval are calculated via the standard LAT analysis during a second step. The absence of major caveats associated with this method has been established with Monte-Carlo simulations. We present the performance of this method in determining duty cycles as well as power-density spectra relative to the traditional fixed-binning method.
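A toy sketch of the first step, constant-significance binning: grow each time bin until it holds enough counts for a target relative Poisson uncertainty. The simple count threshold below is an assumed simplification of the actual LAT procedure:

```python
import numpy as np

def adaptive_bins(event_times, rel_err=0.1):
    """Close each bin once it holds N counts with sqrt(N)/N <= rel_err."""
    n_needed = int(np.ceil(1.0 / rel_err ** 2))
    t = np.sort(np.asarray(event_times))
    edges, count = [t[0]], 0
    for ti in t:
        count += 1
        if count >= n_needed:
            edges.append(ti)
            count = 0
    return np.array(edges)

rng = np.random.default_rng(2)
# A flaring source: sparse background plus a dense burst at t = 40-45
times = np.concatenate([rng.uniform(0, 100, 200), rng.uniform(40, 45, 400)])
edges = adaptive_bins(times, rel_err=0.1)
widths = np.diff(edges)   # narrow bins during the flare, wide ones outside
```

Each bin then carries roughly the same statistical weight, which is why flares are resolved finely while quiescent periods are merged.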
High cycle fatigue test and regression methods of S-N curve
International Nuclear Information System (INIS)
Kim, D. W.; Park, J. Y.; Kim, W. G.; Yoon, J. H.
2011-11-01
The fatigue design curves in the ASME Boiler and Pressure Vessel Code Section III are based on the assumption that fatigue life is infinite after 10^6 cycles. This is because standard fatigue testing equipment in past decades was limited in speed to less than 200 cycles per second. Traditional servo-hydraulic machines work at frequencies of 50 Hz. Servo-hydraulic machines working at 1000 Hz have been developed since 1997; these machines allow high frequency, with guaranteed displacement of up to ±0.1 mm and dynamic load of ±20 kN. The frequency of resonant fatigue test machines is 50-250 Hz. Various forced-vibration-based systems work at 500 Hz or 1.8 kHz. Rotating bending machines allow testing frequencies of 0.1-200 Hz. The main advantage of ultrasonic fatigue testing at 20 kHz is the short testing time. Although the S-N curve is determined by experiment, the fatigue strength corresponding to a given fatigue life should be determined by a statistical method considering the scatter of fatigue properties. In this report, statistical methods for the evaluation of fatigue test data are investigated.
Development and Analysis of Train Brake Curve Calculation Methods with Complex Simulation
Directory of Open Access Journals (Sweden)
Bela Vincze
2006-01-01
This paper describes an efficient method using simulation for developing and analyzing train brake curve calculation methods for the on-board computer of the ETCS system. An application example with actual measurements is also presented.
Methods for simulating solute breakthrough curves in pumping groundwater wells
Starn, J. Jeffrey; Bagtzoglou, Amvrossios C.; Robbins, Gary A.
2012-01-01
In modeling there is always a trade-off between execution time and accuracy. For gradient-based parameter estimation methods, where a simulation model is run repeatedly to populate a Jacobian (sensitivity) matrix, there exists a need for rapid simulation methods of known accuracy that can decrease execution time, and thus make the model more useful without sacrificing accuracy. Convolution-based methods can be executed rapidly for any desired input function once the residence-time distribution is known. The residence-time distribution can be calculated efficiently using particle tracking, but particle tracking can be ambiguous near a pumping well if the grid is too coarse. We present several embedded analytical expressions for improving particle tracking near a pumping well and compare them with a finely gridded finite-difference solution in terms of accuracy and CPU usage. Even though the embedded analytical approach can improve particle tracking near a well, particle methods reduce, but do not eliminate, reliance on a grid because velocity fields typically are calculated on a grid, and additional error is incurred using linear interpolation of velocity. A dilution rate can be calculated for a given grid and pumping well to determine if the grid is sufficiently refined. Embedded analytical expressions increase accuracy but add significantly to CPU usage. Structural error introduced by the numerical solution method may affect parameter estimates.
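The convolution step described above can be sketched as follows; the gamma-shaped residence-time distribution and the step input are invented for illustration, not taken from the study:

```python
import numpy as np

# Hypothetical residence-time distribution (RTD) from particle tracking,
# here modeled as a gamma density (shape k, scale theta), in 1/day.
dt = 0.5                      # time step, days
t = np.arange(0, 200, dt)
k, theta = 4.0, 5.0
rtd = t**(k - 1) * np.exp(-t / theta)
rtd /= rtd.sum() * dt         # normalize so the RTD integrates to 1

# Step input: solute concentration at the source turned on at t = 0.
c_in = np.ones_like(t)

# Convolution of the input history with the RTD gives the breakthrough
# curve at the pumping well; it rises from 0 toward the input level.
btc = np.convolve(c_in, rtd)[:t.size] * dt
```

Once the RTD is known, any other input history can be propagated through the same convolution without rerunning the transport model.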
Energy Technology Data Exchange (ETDEWEB)
Ha-Kawa, Sang Kil; Suga, Yutaka; Kouda, Katsuyasu; Ikeda, Koshi; Tanaka, Yoshimasa [Kansai Medical Univ., Moriguchi, Osaka (Japan)
1997-02-01
We investigated a curve-fitting method for the rate of blood retention of ⁹⁹ᵐTc-galactosyl serum albumin (GSA) as a substitute for the blood sampling method. Seven healthy volunteers and 27 patients with liver disease underwent ⁹⁹ᵐTc-GSA scanning. After normalization of the y-intercept to 100 percent, a biexponential regression curve for the precordial time-activity curve provided the percent injected dose (%ID) of ⁹⁹ᵐTc-GSA in the blood without blood sampling. The discrepancy between %ID obtained by the curve-fitting method and that by multiple blood samples was minimal in normal volunteers: 3.1±2.1% (mean±standard deviation, n=77 samples). A slightly greater discrepancy was observed in patients with liver disease (7.5±6.1%, n=135 samples). The %ID at 15 min after injection obtained from the fitted curve was significantly greater in patients with liver cirrhosis than in the controls (53.2±11.6%, n=13 vs. 31.9±2.8%, n=7; p<0.0001). There was a highly linear correlation between the %ID of ⁹⁹ᵐTc-GSA and the plasma retention rate for indocyanine green (r=-0.869, p<0.0001, n=27). These results indicate that the curve-fitting method provides an accurate %ID of ⁹⁹ᵐTc-GSA and could be a substitute for the blood sampling method. (author)
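A minimal sketch of such a biexponential fit on synthetic data; the function name `biexp`, the rate constants, and the noise level are invented assumptions, not the study's measurements:

```python
import numpy as np
from scipy.optimize import curve_fit

def biexp(t, a, l1, l2):
    # Biexponential with the y-intercept normalized to 100 %:
    # a*exp(-l1*t) + (100 - a)*exp(-l2*t)
    return a * np.exp(-l1 * t) + (100.0 - a) * np.exp(-l2 * t)

rng = np.random.default_rng(0)
t = np.linspace(0, 30, 61)                        # minutes
true = (70.0, 0.15, 0.01)                         # illustrative parameters
y = biexp(t, *true) + rng.normal(0, 0.5, t.size)  # synthetic precordial curve

popt, _ = curve_fit(biexp, t, y, p0=(50, 0.1, 0.005))
pct_id_15 = biexp(15.0, *popt)                    # %ID remaining at 15 min
```

Reading the fitted curve at 15 min replaces the blood-sample measurement of %ID at that time point.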
Learning curve patterns generated by a training method for laparoscopic small bowel anastomosis.
Manuel-Palazuelos, Jose Carlos; Riaño-Molleda, María; Ruiz-Gómez, José Luis; Martín-Parra, Jose Ignacio; Redondo-Figuero, Carlos; Maestre, José María
2016-01-01
The identification of developmental curve patterns generated by a simulation-based educational method and the variables that can accelerate the learning process will result in cost-effective training. This study describes the learning curves of a simulation-based instructional design (ID) that uses ex vivo animal models to teach laparoscopic latero-lateral small bowel anastomosis. Twenty general surgery residents were evaluated on their performance of laparoscopic latero-lateral jejuno-jejunal anastomoses (JJA) and gastro-jejunal anastomoses (GJA), using swine small bowel and stomach on an endotrainer. The ID included the following steps: (1) provision of references and videos demonstrating the surgical technique, (2) creation of an engaging context for learning, (3) critical review of the literature and video on the procedures, (4) demonstration of the critical steps, (5) hands-on practice, (6) in-action instructor's feedback, (7) quality assessment, (8) debriefing at the end of the session, and (9) deliberate and repetitive practice. Time was recorded from the beginning to the completion of the procedure, along with the presence or absence of anastomotic leaks. The participants needed to perform 23.8 ± 6.96 GJA (12-35) and 24.2 ± 6.96 JJA (9-43) to attain proficiency. The starting point of the learning curve was higher for the GJA than for the JJA, although the slope and plateau were parallel. Further, four types of learning curves were identified: (1) exponential, (2) rapid, (3) slow, and (4) no tendency. The type of pattern could be predicted after procedure number 8. These findings may help to identify the learning curve of a trainee early in the developmental process, estimate the number of sessions required to reach a performance goal, determine a trainee's readiness to practice the procedure on patients, and identify the subjects who lack the innate technical abilities. It may help motivated individuals to become reflective and self
Solving eigenvalue problems on curved surfaces using the Closest Point Method
Macdonald, Colin B.
2011-06-01
Eigenvalue problems are fundamental to mathematics and science. We present a simple algorithm for determining eigenvalues and eigenfunctions of the Laplace-Beltrami operator on rather general curved surfaces. Our algorithm, which is based on the Closest Point Method, relies on an embedding of the surface in a higher-dimensional space, where standard Cartesian finite difference and interpolation schemes can be easily applied. We show that there is a one-to-one correspondence between a problem defined in the embedding space and the original surface problem. For open surfaces, we present a simple way to impose Dirichlet and Neumann boundary conditions while maintaining second-order accuracy. Convergence studies and a series of examples demonstrate the effectiveness and generality of our approach. © 2011 Elsevier Inc.
Inversion method applied to the rotation curves of galaxies
Márquez-Caicedo, L. A.; Lora-Clavijo, F. D.; Sanabria-Gómez, J. D.
2017-07-01
We used simulated annealing, Monte Carlo and genetic algorithm methods for matching numerical data of both density and velocity profiles in some low surface brightness galaxies with the theoretical models of Boehmer-Harko, Navarro-Frenk-White and Pseudo Isothermal Profiles for galaxies with dark matter halos. We found that the Navarro-Frenk-White model does not fit at all, in contrast with the other two models, which fit very well. Inversion methods have been widely used in various branches of science, including astrophysics (Charbonneau 1995, ApJS, 101, 309). In this work we have used three different parametric inversion methods (Monte Carlo, Genetic Algorithm and Simulated Annealing) in order to determine the best fit of the observed data of the density and velocity profiles of a set of low surface brightness galaxies (de Blok et al. 2001, AJ, 122, 2396) with three models of galaxies containing dark matter. The parameters adjusted by the inversion methods were the central density and a characteristic distance in the Boehmer-Harko (BH) (Boehmer & Harko 2007, JCAP, 6, 25), Navarro-Frenk-White (NFW) (Navarro et al. 1997, ApJ, 490, 493) and Pseudo Isothermal Profile (PI) (Robles & Matos 2012, MNRAS, 422, 282) models. The results obtained showed that the BH and PI profile dark matter galaxies fit very well both the density and the velocity profiles; in contrast, the NFW model did not produce good adjustments to the profiles in any analyzed galaxy.
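As a sketch of one such parametric inversion, a simulated-annealing fit of the pseudo-isothermal (PI) rotation-curve formula to synthetic data might look like this; the data, bounds, and noise level are illustrative assumptions, not the paper's galaxies:

```python
import numpy as np
from scipy.optimize import dual_annealing

def v_pi(r, v_inf, rc):
    # Rotation speed of a pseudo-isothermal halo:
    # v(r)^2 = v_inf^2 * (1 - (rc/r) * arctan(r/rc)),
    # with v_inf^2 = 4*pi*G*rho0*rc^2 absorbed into one parameter.
    return v_inf * np.sqrt(1.0 - (rc / r) * np.arctan(r / rc))

r = np.linspace(0.5, 15.0, 30)                             # kpc
rng = np.random.default_rng(1)
v_obs = v_pi(r, 120.0, 2.0) + rng.normal(0, 2.0, r.size)   # km/s

def misfit(p):
    # Chi-square-like misfit minimized by the annealing search.
    return np.sum((v_obs - v_pi(r, *p))**2)

res = dual_annealing(misfit, bounds=[(50, 300), (0.1, 10)], seed=2)
v_fit, rc_fit = res.x
```

The same misfit function could be handed to a genetic algorithm or plain Monte Carlo search; only the optimizer changes.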
TTF HOM Data Analysis with Curve Fitting Method
Energy Technology Data Exchange (ETDEWEB)
Pei, S.; Adolphsen, C.; Li, Z.; Bane, K.; Smith, J.; /SLAC
2009-07-14
To investigate the possibility of using HOM signals induced in SC cavities as beam and cavity diagnostics, narrow-band (20 MHz) data were recorded around the strong TE111-6 (6π/9-like) dipole modes (1.7 GHz) in the 40 L-band (1.3 GHz) cavities at the DESY TTF facility. The analyses of these data have so far focused on using a Singular Value Decomposition (SVD) technique to correlate the signals with each other and with data from conventional BPMs, to show that the dipole signals provide an alternate means of measuring the beam trajectory. However, these analyses do not extract the modal information (i.e., frequencies and Q's of the nearly degenerate horizontal and vertical modes). In this paper, we describe a method to fit the signal frequency spectrum to obtain this information, and then use the resulting mode amplitudes and phases together with conventional BPM data to determine the mode polarizations and relative centers and tilts. Compared with the SVD analysis, this method is more physical, and can also be used to obtain the beam position and trajectory angle.
Directory of Open Access Journals (Sweden)
Imre Silvia
2013-02-01
Introduction: This study proposes the simultaneous determination of atorvastatin and amlodipine in industrial tablets by a quantitative spectrophotometric method, named the apparent content curve method, as the test method, and by an HPLC method with UV detection as the reference method.
A Novel Method for Detecting and Computing Univolatility Curves in Ternary Mixtures
DEFF Research Database (Denmark)
Shcherbakov, Nataliya; Rodriguez-Donis, Ivonne; Abildskov, Jens
2017-01-01
Residue curve maps (RCMs) and univolatility curves are crucial tools for analysis and design of distillation processes. Even in the case of ternary mixtures, the topology of these maps is highly non-trivial. We propose a novel method allowing detection and computation of univolatility curves in homogeneous ternary mixtures independently of the presence of azeotropes, which is particularly important in the case of zeotropic mixtures. The method is based on the analysis of the geometry of the boiling temperature surface constrained by the univolatility condition. The introduced concepts of the generalized univolatility and unidistribution curves in the three dimensional composition – temperature state space lead to a simple and efficient algorithm of computation of the univolatility curves. Two peculiar ternary systems, namely diethylamine – chloroform – methanol and hexane – benzene …
Multimodal determination of Rayleigh dispersion and attenuation curves using the circle fit method
Verachtert, R.; Lombaert, G.; Degrande, G.
2018-03-01
This paper introduces the circle fit method for the determination of multi-modal Rayleigh dispersion and attenuation curves as part of a Multichannel Analysis of Surface Waves (MASW) experiment. The wave field is transformed to the frequency-wavenumber (fk) domain using a discretized Hankel transform. In a Nyquist plot of the fk-spectrum, displaying the imaginary part against the real part, the Rayleigh wave modes correspond to circles. The experimental Rayleigh dispersion and attenuation curves are derived from the angular sweep of the central angle of these circles. The method can also be applied to the analytical fk-spectrum of the Green's function of a layered half-space in order to compute dispersion and attenuation curves, as an alternative to solving an eigenvalue problem. A MASW experiment is subsequently simulated for a site with a regular velocity profile and a site with a soft layer trapped between two stiffer layers. The performance of the circle fit method to determine the dispersion and attenuation curves is compared with the peak picking method and the half-power bandwidth method. The circle fit method is found to be the most accurate and robust method for the determination of the dispersion curves. When determining attenuation curves, the circle fit method and half-power bandwidth method are accurate if the mode exhibits a sharp peak in the fk-spectrum. Furthermore, simulated and theoretical attenuation curves determined with the circle fit method agree very well. A similar correspondence is not obtained when using the half-power bandwidth method. Finally, the circle fit method is applied to measurement data obtained for a MASW experiment at a site in Heverlee, Belgium. In order to validate the soil profile obtained from the inversion procedure, force-velocity transfer functions were computed and found in good correspondence with the experimental transfer functions, especially in the frequency range between 5 and 80 Hz.
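The circle-fitting step itself can be illustrated with a simple algebraic (Kasa) fit, one common way to fit a circle to Nyquist-plane samples; the paper does not specify this particular estimator, so treat it as an assumption:

```python
import numpy as np

def kasa_circle_fit(x, y):
    # Algebraic (Kasa) circle fit: rewrite (x-a)^2 + (y-b)^2 = R^2 as the
    # linear model x^2 + y^2 = 2a*x + 2b*y + d and solve by least squares;
    # center is (a, b), radius is sqrt(d + a^2 + b^2).
    A = np.column_stack([x, y, np.ones_like(x)])
    rhs = x**2 + y**2
    (ta, tb, d), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    a, b = ta / 2.0, tb / 2.0
    return a, b, np.sqrt(d + a**2 + b**2)

# Synthetic arc of a resonance circle in the Nyquist plane (illustrative).
theta = np.linspace(0.2, 2.8, 50)
a0, b0, R0 = 1.5, -0.7, 2.0
x = a0 + R0 * np.cos(theta) + 0.001 * np.sin(7 * theta)  # tiny perturbation
y = b0 + R0 * np.sin(theta)

a, b, R = kasa_circle_fit(x, y)
```

In the MASW setting, the angular sweep of points around the fitted center (not the fit itself) is what yields the dispersion and attenuation estimates.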
Application of numerical methods in spectroscopy : fitting of the curve of thermoluminescence
International Nuclear Information System (INIS)
RANDRIAMANALINA, S.
1999-01-01
The method of non-linear least squares is one of the mathematical tools widely employed in spectroscopy; it is used for the determination of the parameters of a model. On the other hand, the spline function is among the fitting functions that introduce the smallest error; it is used for the calculation of the area under the curve. We present an application of these methods, with the details of the corresponding algorithms, to the fitting of the thermoluminescence curve.
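A small sketch of the spline-based area calculation, with an illustrative Gaussian-shaped glow peak standing in for a measured thermoluminescence curve (not a kinetic model):

```python
import numpy as np
from scipy.interpolate import InterpolatedUnivariateSpline

# Synthetic thermoluminescence glow peak, sampled coarsely in temperature.
T = np.linspace(300.0, 500.0, 41)          # K
I = 100.0 * np.exp(-((T - 400.0) / 25.0)**2)

# Cubic spline through the sampled curve; its exact integral approximates
# the area under the glow curve with very small interpolation error.
spl = InterpolatedUnivariateSpline(T, I, k=3)
area = spl.integral(300.0, 500.0)

# Analytic reference for this peak: 100 * 25 * sqrt(pi) ≈ 4431
```

The same spline object can also supply smoothed derivatives, which is convenient when the glow curve is later fed to a non-linear least-squares model fit.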
Methods for the Precise Locating and Forming of Arrays of Curved Features into a Workpiece
Gill, David Dennis; Keeler, Gordon A.; Serkland, Darwin K.; Mukherjee, Sayan D.
2008-10-14
Methods for manufacturing high-precision arrays of curved features (e.g. lenses) in the surface of a workpiece are described, utilizing orthogonal sets of inter-fitting locating grooves to mate a workpiece to a workpiece holder mounted to the spindle face of a rotating machine tool. The matching inter-fitting groove sets in the workpiece and the chuck allow precise, non-kinematic indexing of the workpiece to locations defined in two orthogonal directions perpendicular to the turning axis of the machine tool. At each location on the workpiece a curved feature can then be on-center machined, creating arrays of curved features on the workpiece. The averaging effect of the corresponding sets of inter-fitting grooves provides for precise repeatability in determining the relative locations of the centers of each of the curved features in an array of curved features.
Surface charge method for molecular surfaces with curved areal elements I. Spherical triangles
Yu, Yi-Kuo
2018-03-01
Parametrizing a curved surface with flat triangles in electrostatics problems creates a diverging electric field. One way to avoid this is to use curved areal elements. However, charge density integration over curved patches appears difficult. This paper, dealing with spherical triangles, is the first in a series aiming to solve this problem. Here, we lay the groundwork for employing curved patches in applying the surface charge method to electrostatics. We show analytically how one may control the accuracy by expanding in powers of the arc length (multiplied by the curvature). To accommodate curved areal elements that are not extremely small, we provide enough detail to include the higher-order corrections needed for better accuracy when slightly larger surface elements are used.
Reactor Section standard analytical methods. Part 1
Energy Technology Data Exchange (ETDEWEB)
Sowden, D.
1954-07-01
The Standard Analytical Methods manual was prepared for the purpose of consolidating and standardizing all current analytical methods and procedures used in the Reactor Section for routine chemical analyses. All procedures are established in accordance with accepted practice and the general analytical methods specified by the Engineering Department. These procedures are specifically adapted to the requirements of the water treatment process and related operations. The methods included in this manual are organized alphabetically within the following five sections, which correspond to the various phases of the analytical control program in which these analyses are to be used: water analyses, essential material analyses, cotton plug analyses, boiler water analyses, and miscellaneous control analyses.
Venn, Bernard J; Wallace, Alison J; Monro, John A; Perry, Tracy; Brown, Rachel; Frampton, Chris; Green, Tim J
2006-05-01
Glycemic load (GL) is calculated indirectly as glycemic index (GI) times the weight of available carbohydrate. Alternatively, GL may be measured directly using a standard glucose curve. The purpose of this study was to test the agreement between GL values obtained using direct and indirect methods of measurement in 20 healthy volunteers. A standard curve in which glucose dose was plotted against blood glucose incremental area under the curve (iAUC) was generated using beverages containing 0, 12.5, 25, 50, and 75 g glucose. The GI and available carbohydrate content of 5 foods were measured. The foods (white bread, fruit bread, granola bar, instant potato, and chickpeas) were consumed in 3 portion sizes, yielding 15 food/portion size combinations. GL was determined directly by relating the iAUC of a test food to the glucose standard curve. For 12 of 15 food/portion size combinations, GL determined using GI x available carbohydrate did not differ from GL measured from the standard curve (P > 0.05). For 3 of the test products (100 g white bread, and 100- and 150-g granola bars), GI x available carbohydrate was higher than the direct measure. Benefits of the direct measure are that the method does not require testing for available carbohydrate and it allows portion sizes to be tested. For practical purposes, GI x available carbohydrate provided a good estimate of GL, at least under circumstances in which available carbohydrate was measured, and GI and GL were tested in the same group of people.
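The direct GL measurement amounts to inverting the standard curve: given a test food's iAUC, read off the glucose dose that would produce the same response. A sketch with invented standard-curve numbers (not the study's data):

```python
import numpy as np

# Hypothetical standard-curve data: glucose dose (g) vs. mean incremental
# area under the blood glucose curve (iAUC, mmol*min/L). Values invented.
dose = np.array([0.0, 12.5, 25.0, 50.0, 75.0])
iauc = np.array([0.0, 55.0, 100.0, 170.0, 220.0])   # monotonic in dose

def glycemic_load(food_iauc):
    # Direct GL: the glucose dose whose iAUC matches the test food's iAUC,
    # read off the standard curve by piecewise-linear interpolation.
    return float(np.interp(food_iauc, iauc, dose))

gl_example = glycemic_load(130.0)   # hypothetical test-food iAUC
```

The indirect estimate, GI × available carbohydrate, can then be compared against this directly measured value, as the study does per portion size.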
Moore, Sarah; Kailasapathy, Kasipathy; Phillips, Michael; Jones, Mark R
2015-07-01
Microencapsulation is proposed to protect probiotic strains from food processing procedures and to maintain probiotic viability. Little research has described the in situ viability of microencapsulated probiotics. This study successfully developed a real-time viability standard curve for microencapsulated bacteria using confocal microscopy, fluorescent dyes and image analysis software. Copyright © 2015 Elsevier B.V. All rights reserved.
Light Curve Periodic Variability of Cyg X-1 using Jurkevich Method ...
Indian Academy of Sciences (India)
Abstract. The Jurkevich method is a useful method to explore periodicity in unevenly sampled observational data. In this work, we applied the method to the light curve of Cyg X-1 from 1996 to 2012, and found that there is an interesting period of 370 days, which appears in both the low/hard and high/soft states.
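A minimal sketch of the Jurkevich statistic: fold the light curve on trial periods, bin the phases, and sum the within-bin squared deviations; the deepest minimum marks a candidate period. The synthetic light curve below is illustrative:

```python
import numpy as np

def jurkevich(t, x, periods, m=10):
    # Jurkevich statistic V^2 for each trial period: fold the data,
    # split phases into m bins, and sum within-bin squared deviations.
    v2 = np.empty(periods.size)
    for i, p in enumerate(periods):
        phase = (t / p) % 1.0
        bins = np.minimum((phase * m).astype(int), m - 1)
        s = 0.0
        for b in range(m):
            xb = x[bins == b]
            if xb.size > 1:
                s += np.sum((xb - xb.mean())**2)
        v2[i] = s
    return v2

# Unevenly sampled synthetic light curve with a 37-day period.
rng = np.random.default_rng(3)
t = np.sort(rng.uniform(0, 1000, 400))
x = np.sin(2 * np.pi * t / 37.0) + rng.normal(0, 0.1, t.size)

periods = np.linspace(20, 60, 801)
best = periods[np.argmin(jurkevich(t, x, periods))]
```

Because the folding needs no interpolation, the statistic is well suited to the uneven sampling typical of long X-ray monitoring campaigns.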
International Nuclear Information System (INIS)
Purohit, D.N.; Goswami, A.K.; Chauhan, R.S.; Ressalan, S.
1999-01-01
A spectrophotometric method for the determination of stability constants making use of Job's curves has been developed. Using this method, the stability constants of Zn(II), Cd(II), Mo(VI) and V(V) complexes of hydroxytriazenes have been determined. For the sake of comparison, the values of the stability constants were also determined using Harvey and Manning's method. The values obtained by the two methods agree well. This new method has been named Purohit's method. (author)
Chamberlain, D. M.; Elliot, J. L.
1997-01-01
We present a method for speeding up numerical calculations of a light curve for a stellar occultation by a planetary atmosphere with an arbitrary atmospheric model that has spherical symmetry. This improved speed makes least-squares fitting for model parameters practical. Our method takes as input several sets of values for the first two radial derivatives of the refractivity at different values of model parameters, and interpolates to obtain the light curve at intermediate values of one or more model parameters. It was developed for small occulting bodies such as Pluto and Triton, but is applicable to planets of all sizes. We also present the results of a series of tests showing that our method calculates light curves that are correct to an accuracy of 10⁻⁴ of the unocculted stellar flux. The test benchmarks are (i) an atmosphere with a 1/r dependence of temperature, which yields an analytic solution for the light curve, (ii) an atmosphere that produces an exponential refraction angle, and (iii) a small-planet isothermal model. With our method, least-squares fits to noiseless data also converge to values of parameters with fractional errors of no more than 10⁻⁴, with the largest errors occurring in small planets. These errors are well below the precision of the best stellar occultation data available. Fits to noisy data had formal errors consistent with the level of synthetic noise added to the light curve. We conclude: (i) one should interpolate refractivity derivatives and then form light curves from the interpolated values, rather than interpolating the light curves themselves; (ii) for the most accuracy, one must specify the atmospheric model for radii many scale heights above half light; and (iii) for atmospheres with smoothly varying refractivity with altitude, light curves can be sampled as coarsely as two points per scale height.
Standard Test Method for Sandwich Corrosion Test
American Society for Testing and Materials. Philadelphia
2009-01-01
1.1 This test method defines the procedure for evaluating the corrosivity of aircraft maintenance chemicals, when present between faying surfaces (sandwich) of aluminum alloys commonly used for aircraft structures. This test method is intended to be used in the qualification and approval of compounds employed in aircraft maintenance operations. 1.2 The values stated in SI units are to be regarded as the standard. The values given in parentheses are for information. 1.3 This standard may involve hazardous materials, operations, and equipment. This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibility of the user of this standard to establish appropriate safety and health practices and determine the applicability of regulatory limitations prior to use. Specific hazard statements appear in Section 9.
A new method for measuring coronary artery diameters with CT spatial profile curves
International Nuclear Information System (INIS)
Shimamoto, Ryoichi; Suzuki, Jun-ichi; Yamazaki, Tadashi; Tsuji, Taeko; Ohmoto, Yuki; Morita, Toshihiro; Yamashita, Hiroshi; Honye, Junko; Nagai, Ryozo; Akahane, Masaaki; Ohtomo, Kuni
2007-01-01
Purpose: Coronary artery vascular edge recognition on computed tomography (CT) angiograms is influenced by window parameters. A noninvasive method for vascular edge recognition independent of window setting, using multi-detector row CT, was devised, and its feasibility and accuracy were estimated by intravascular ultrasound (IVUS). Methods: Multi-detector row CT was performed to obtain 29 CT spatial profile curves by setting a line cursor across short-axis coronary angiograms processed by multi-planar reconstruction. IVUS was also performed to determine the reference coronary diameter. The IVUS diameter was fitted horizontally between two points on the upward and downward slopes of the profile curves, and the Hounsfield number was measured at the fitted level to test seven candidate indexes for the definition of intravascular coronary diameter. The best index should show the best agreement with the IVUS diameter. Results: Of the seven candidates, agreement was best (16 ± 11%) when the two ratios of the Hounsfield number at the level of the IVUS diameter over that at the peak of the profile curves were used, with water and with fat as the background tissue. These edge definitions were achieved by cutting the horizontal distance by the curves at the level defined by the ratio of 0.41 for water background and 0.57 for fat background. Conclusions: Vascular edge recognition of the coronary artery with CT spatial profile curves was feasible, and the devised method could define the coronary diameter with reasonable agreement.
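The edge definition described in the Results can be sketched as a threshold crossing on the profile curve: cut the curve at a fixed ratio of its peak and measure the horizontal distance between the two crossings. The Gaussian profile below is illustrative, not patient data:

```python
import numpy as np

def profile_diameter(x, hu, ratio):
    # Cut the CT spatial profile at ratio * peak value and measure the
    # horizontal distance between the up-slope and down-slope crossings,
    # locating each crossing by linear interpolation between samples.
    level = ratio * hu.max()
    above = hu >= level
    i0 = np.argmax(above)                        # first sample at/above level
    i1 = len(hu) - 1 - np.argmax(above[::-1])    # last sample at/above level
    xl = np.interp(level, [hu[i0 - 1], hu[i0]], [x[i0 - 1], x[i0]])
    xr = np.interp(level, [hu[i1 + 1], hu[i1]], [x[i1 + 1], x[i1]])
    return xr - xl

# Synthetic Gaussian-like profile across a vessel (position in mm vs. HU).
x = np.linspace(-5, 5, 201)
hu = 400.0 * np.exp(-(x / 1.5)**2)
d = profile_diameter(x, hu, 0.41)   # 0.41 = water-background ratio from the study
```

Because the cut level is a ratio of the profile's own peak, the measurement is independent of the display window settings, which is the point of the method.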
A Method to Generate Freeform Curves from a Hand-drawn Sketch
Directory of Open Access Journals (Sweden)
Tetsuzo Kuragano
2007-04-01
When designers begin a product design, they create their ideas and expand them. This process is performed on paper, and the designers' hand-drawn lines are called sketches. If the designers' hand-drawn sketches can be realized as real curves, it would be effective in shortening the design period. We have developed a method to extract fifth-degree Bézier curves from a hand-drawn sketch. The basic techniques to detect curves in a hand-drawn sketch are described. First, light intensity transformation, binarization of the hand-drawn sketch, and feature-based erosion and dilation to smooth the edges of the binary sketch image are described. Then, line segment determination using the detected edges is described. Generation of a fifth-degree Bézier curve from the determined line segments is described next, followed by a curve shape modification algorithm to reconstruct the fifth-degree Bézier curve. Examples of fair-curvature fifth-degree Bézier curves based on a sketch are given.
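Evaluating the extracted fifth-degree Bézier curve is straightforward with de Casteljau's algorithm (repeated linear interpolation of the control polygon); the control points below are a hypothetical stroke, not taken from the paper:

```python
import numpy as np

def de_casteljau(ctrl, t):
    # Evaluate a Bezier curve of any degree (here used with six control
    # points, i.e. degree five) by repeated linear interpolation.
    pts = np.asarray(ctrl, dtype=float)
    while len(pts) > 1:
        pts = (1.0 - t) * pts[:-1] + t * pts[1:]
    return pts[0]

# Six control points of a hypothetical stroke extracted from a sketch.
ctrl = [(0, 0), (1, 2), (3, 3), (5, 3), (7, 1), (8, 0)]
curve = np.array([de_casteljau(ctrl, t) for t in np.linspace(0, 1, 50)])
```

De Casteljau is numerically stabler than evaluating the Bernstein polynomial directly, which matters when the shape-modification step perturbs control points repeatedly.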
Energy Technology Data Exchange (ETDEWEB)
Ying Chen; Shao-Jing Dong; Terrence Draper; Ivan Horvath; Keh-Fei Liu; Nilmani Mathur; Sonali Tamhankar; Cidambi Srinivasan; Frank X. Lee; Jianbo Zhang
2004-05-01
We introduce the "Sequential Empirical Bayes Method", an adaptive constrained-curve fitting procedure for extracting reliable priors. These are then used in standard augmented-χ² fits on separate data. This better stabilizes fits to lattice QCD overlap-fermion data at very low quark mass, where a priori values are not otherwise known. Lessons learned (including caveats limiting the scope of the method) from studying artificial data are presented. As an illustration, from local-local two-point correlation functions, we obtain masses and spectral weights for the ground and first-excited states of the pion, give preliminary fits for the a₀ where ghost states (a quenched artifact) must be dealt with, and elaborate on the details of fits of the Roper resonance and S₁₁(N^{1/2-}) previously presented elsewhere. The data are from overlap fermions on a quenched 16³ × 28 lattice with spatial size La = 3.2 fm and pion mass as low as ≈180 MeV.
The "curved lead pathway" method to enable a single lead to reach any two intracranial targets.
Ding, Chen-Yu; Yu, Liang-Hong; Lin, Yuan-Xiang; Chen, Fan; Lin, Zhang-Ya; Kang, De-Zhi
2017-01-11
Deep brain stimulation is an effective way to treat movement disorders, and a powerful research tool for exploring brain functions. This report proposes a "curved lead pathway" method for lead implantation, such that a single lead can reach in sequence to any two intracranial targets. A new type of stereotaxic system for implanting a curved lead to the brain of human/primates was designed, the auxiliary device needed for this method to be used in rat/mouse was fabricated and verified in rat, and the Excel algorithm used for automatically calculating the necessary parameters was implemented. This "curved lead pathway" method of lead implantation may complement the current method, make lead implantation for multiple targets more convenient, and expand the experimental techniques of brain function research.
Standardization of C-14 by tracing method
Energy Technology Data Exchange (ETDEWEB)
Koskinas, Marina F.; Kuznetsova, Maria; Yamazaki, Ione; Brancaccio, Franco; Dias, Mauro S., E-mail: koskinas@ipen.br, E-mail: marysmith@usp.br, E-mail: yamazaki@ipen.br, E-mail: fbrancac@ipen.br, E-mail: msdias@ipen.br [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil)
2015-07-01
The standardization of a ¹⁴C radioactive solution by means of the efficiency tracing technique is described. ¹⁴C is a pure beta emitter with endpoint energy of 156 keV, decaying to the ground state of ¹⁴N. The activity measurement was performed in a 4πβ-γ coincidence system, measuring the pure beta emitter mixed with a beta-gamma emitter, which provides the beta detection efficiency. The radionuclide ⁶⁰Co, which decays by beta particle emission followed by two gamma rays, was used as tracer, and the efficiency was obtained by selecting the 1173 keV plus 1332 keV total energy absorption peak in the gamma channel. Known aliquots of the tracer, previously standardized by 4πβ(PC)-γ coincidence, were mixed with known aliquots of ¹⁴C. The ¹⁴C + ⁶⁰Co sources were prepared by dropping known aliquots from each radioactive solution. The events were registered by a Software Coincidence System (SCS). The activity of the solution was determined by using the extrapolation technique, changing the beta efficiency by pulse-height discrimination. In order to determine the final activity, a Monte Carlo simulation was used to calculate the extrapolation curve. All the uncertainties involved were treated rigorously, by means of the covariance analysis methodology. Measurements using HIDEX, a commercial liquid scintillation system, were carried out and the results were compared with the tracing technique, showing good agreement. (author)
Tsunami Simulation using CIP Method with Characteristic Curve Equations and TVD-MacCormack Method
Fukazawa, Souki; Tosaka, Hiroyuki
2015-04-01
After entering the 21st century, we have already had two big tsunami disasters associated with Mw 9 earthquakes, in Sumatra and Japan. To mitigate the damage of tsunamis, numerical simulation technology combined with information technologies can provide reliable predictions for planning countermeasures to prevent damage to the social system, making safety maps, and submitting early evacuation information to the residents. Shallow water equations are still solved not only for global-scale simulation of ocean tsunami propagation but also for local-scale simulation of overland inundation in many tsunami simulators, though three-dimensional models are starting to be used due to improvement of CPUs. The one-dimensional shallow water equations are $\partial \mathbf{Q}/\partial t + \partial \mathbf{E}/\partial x = \mathbf{S}$, in which $\mathbf{Q} = (D,\ M)^{T}$, $\mathbf{E} = (M,\ M^{2}/D + gD^{2}/2)^{T}$, $\mathbf{S} = (0,\ -gD\,\partial z/\partial x - g n^{2} M|M|/D^{7/3})^{T}$, where D [m] is the total water depth, M [m²/s] is the water flux, z [m] is the topography, g [m/s²] is the gravitational acceleration, and n [s/m^{1/3}] is Manning's roughness coefficient. To solve these, the staggered leapfrog scheme is used in many wide-area tsunami simulators. But this scheme has the problem that a lagging phase error occurs when the Courant number is small; in some practical simulations, a kind of diffusion term is added. In this study, we developed two wide-area tsunami simulators with different schemes and compared the usual scheme with the other schemes in practicability and validity. One is a total variation diminishing modification of the MacCormack method (TVD-MacCormack method), which is famous for the simulation of compressible fluids. The other is the Cubic Interpolated Profile (CIP) method with characteristic curve equations transformed from the shallow water equations. The characteristic curve equations derived from the shallow water equations are $\partial R_{x\pm}/\partial t + C_{x\pm}\,\partial R_{x\pm}/\partial x = \mp (g/2)\,\partial z/\partial x$, in which $R_{x\pm} = \sqrt{gD} \pm u/2$, $C_{x\pm} = u \pm \sqrt{gD}$, where u is the flow velocity.
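A plain (non-TVD) MacCormack step for the homogeneous form of these shallow-water equations (flat bottom, no friction, periodic boundaries) can be sketched as follows; the grid, time step, and initial hump are illustrative assumptions:

```python
import numpy as np

g = 9.81

def flux(D, M):
    # E = (M, M^2/D + g*D^2/2) for the 1-D shallow-water system.
    return M, M**2 / D + 0.5 * g * D**2

def maccormack_step(D, M, dt, dx):
    # Plain MacCormack predictor (forward difference) / corrector
    # (backward difference), with periodic boundaries via np.roll.
    E1, E2 = flux(D, M)
    Dp = D - dt / dx * (np.roll(E1, -1) - E1)
    Mp = M - dt / dx * (np.roll(E2, -1) - E2)
    E1p, E2p = flux(Dp, Mp)
    Dn = 0.5 * (D + Dp - dt / dx * (E1p - np.roll(E1p, 1)))
    Mn = 0.5 * (M + Mp - dt / dx * (E2p - np.roll(E2p, 1)))
    return Dn, Mn

# Small smooth hump on a 1 m deep channel; Courant number ~0.25.
x = np.linspace(0, 10, 200, endpoint=False)
D = 1.0 + 0.01 * np.exp(-((x - 5.0) / 0.5)**2)
M = np.zeros_like(x)
mass0 = D.sum()
for _ in range(100):
    D, M = maccormack_step(D, M, dt=0.004, dx=x[1] - x[0])
```

The TVD modification studied in the paper adds a flux limiter to this scheme to suppress the dispersive oscillations that plain MacCormack produces near steep wave fronts.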
A method to enhance the curve negotiation performance of HTS Maglev
Che, T.; Gou, Y. F.; Deng, Z. G.; Zheng, J.; Zheng, B. T.; Chen, P.
2015-09-01
High temperature superconducting (HTS) Maglev has attracted more and more attention due to its special self-stable characteristic, and much work has been done to achieve its actual application, but the research about the curve negotiation is not systematic and comprehensive. In this paper, we focused on the change of the lateral displacements of the Maglev vehicle when going through curves under different velocities, and studied the change of the electromagnetic forces through experimental methods. Experimental results show that setting an appropriate initial eccentric distance (ED), which is the distance between the center of the bulk unit and the center of the permanent magnet guideway (PMG), when cooling the bulks is favorable for the Maglev system’s curve negotiation. This work will provide some available suggestions for improving the curve negotiation performance of the HTS Maglev system.
Directory of Open Access Journals (Sweden)
Makita A.
2010-06-01
Full Text Available Instrumented Charpy tests were conducted on small-sized specimens of 2 1/4 Cr-1Mo steel. In the tests, the single-specimen key curve method was applied to determine the value of fracture toughness for the initiation of crack extension in the hydrogen-free condition, KIC, and for hydrogen embrittlement cracking, KIH. The tearing modulus, as a parameter for resistance to crack extension, was also determined. The role of these parameters is discussed at an upper-shelf temperature and at a transition temperature. The key curve method combined with the instrumented Charpy test was thus proven usable for evaluating not only temper embrittlement but also hydrogen embrittlement.
International Nuclear Information System (INIS)
Bykova, L.N.; Chesnokova, O.Ya.; Orlova, M.V.
1995-01-01
The method of linearizing potentiometric precipitation titration curves is studied for its application to the determination of halide ions (Cl⁻, Br⁻, I⁻) in dimethylacetamide and dimethylformamide, in which titration is complicated by additional equilibrium processes. It is found that the linearization method permits the titrant volume at the end point of titration to be determined with high accuracy in the case of titration curves without a potential jump in the proximity of the equivalence point (5 × 10⁻⁵ M). 3 refs., 2 figs., 3 tabs
Standard methods for analysis of phosphorus-32
International Nuclear Information System (INIS)
Anon.
1975-01-01
Methods are described for the determination of the radiochemical purity and the absolute disintegration rate of ³²P radioisotope preparations. The ³²P activity is determined by β counting, and other low-energy β-radioactive contaminants are determined from aluminum-absorption curve data. Any γ-radioactive contaminants are determined by γ counting. Routine chemical testing is used to establish the chemical characteristics. The presence or absence of heavy metals is established by spot tests; free acid is determined by use of a pH meter; total solids are determined gravimetrically by evaporation and ignition at a temperature sufficient to evaporate the mineral acids, HCl and HNO₃; and nonvolatile matter, defined as that material which does not evaporate or ignite at a temperature sufficient to convert C to CO or CO₂, is determined gravimetrically after such ignition
Way, Rupert; Lafond, François; Farmer, J. Doyne; Lillo, Fabrizio; Panchenko, Valentyn
2017-01-01
This paper considers how to optimally allocate investments in a portfolio of competing technologies. We introduce a simple model representing the underlying trade-off - between investing enough effort in any one project to spur rapid progress, and diversifying effort over many projects simultaneously to hedge against failure. We use stochastic experience curves to model the idea that investing more in a technology reduces its unit costs, and we use a mean-variance objective function to unders...
Rabesiaka, Mihasina; Porte, Catherine; Bonnin-Paris, Johanne; Havet, Jean-Louis
2011-10-01
An essential tool in the study of crystallization is the saturation curve and the metastable zone width, since the shape of the solubility curve defines the crystallization mode and the supersaturation conditions, which are the driving force of crystallization. The purpose of this work was to determine the saturation and supersaturation curves of lysine monohydrochloride by an automatic method based on the turbidity of the crystallization medium; because the lysine solution is colored, the value of turbidimetry is demonstrated. An automated installation and a procedure to determine several points on the saturation curve and the metastable zone width were set up in the laboratory. On-line monitoring of the solution turbidity and temperature enabled the dissolution and nucleation temperatures of the crystals to be determined by measuring the attenuation of the light beam by suspended particles. The thermal regulation system was programmed so that the heating rate took into account the system inertia, i.e. the duration related to the dissolution rate of the compound. Using this automatic method, the saturation curve and the metastable zone width of lysine monohydrochloride were plotted.
Directory of Open Access Journals (Sweden)
Esteban Pérez-López
2014-11-01
Full Text Available Because of the importance of quantitative chemical analysis in research, quality control, sales of services, and other areas of interest, and because some instrumental analysis methods are limited to quantification with a linear calibration curve (sometimes owing to the short linear dynamic range of the analyte, sometimes owing to the technique itself), there is a need to investigate the suitability of quadratic curves for analytical quantification, seeking to demonstrate that they are a valid calculation model for instrumental chemical analysis. Atomic absorption spectroscopy was taken as the analysis technique, specifically the determination of magnesium in a drinking-water sample from the Tacares sector of northern Grecia, using a nonlinear calibration curve with specifically quadratic behavior, and the results were compared with those obtained for the same analysis with a linear calibration curve. The results show that the methodology is valid for the determination in question, with full confidence, since the concentrations obtained are very similar and, by the hypothesis tests used, can be considered equal.
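The quadratic-calibration idea above can be sketched in a few lines: fit y = ax² + bx + c to the standards by least squares, then invert the quadratic to read a concentration back from a measured signal. This is a generic pure-Python sketch, not the study's procedure, and any calibration numbers used with it are invented for illustration.

```python
def polyfit2(xs, ys):
    """Least-squares quadratic y = a*x^2 + b*x + c via the 3x3 normal equations."""
    S = [sum(x ** k for x in xs) for k in range(5)]
    T = [sum(y * x ** k for x, y in zip(xs, ys)) for k in range(3)]
    A = [[S[4], S[3], S[2], T[2]],
         [S[3], S[2], S[1], T[1]],
         [S[2], S[1], S[0], T[0]]]
    # Gaussian elimination with partial pivoting
    for i in range(3):
        p = max(range(i, 3), key=lambda r: abs(A[r][i]))
        A[i], A[p] = A[p], A[i]
        for r in range(i + 1, 3):
            f = A[r][i] / A[i][i]
            for c in range(i, 4):
                A[r][c] -= f * A[i][c]
    coef = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):   # back-substitution
        coef[i] = (A[i][3] - sum(A[i][c] * coef[c] for c in range(i + 1, 3))) / A[i][i]
    return coef  # [a, b, c]

def conc_from_abs(a_meas, coef, lo=0.0, hi=1.0):
    """Invert a*x^2 + b*x + (c - a_meas) = 0, keeping the root in the calibrated range."""
    a, b, c = coef
    disc = b * b - 4.0 * a * (c - a_meas)
    for r in ((-b + disc ** 0.5) / (2 * a), (-b - disc ** 0.5) / (2 * a)):
        if lo - 1e-9 <= r <= hi + 1e-9:
            return r
    raise ValueError("measured signal outside calibrated range")
```

Restricting the inversion to the calibrated range resolves the two-root ambiguity of the quadratic, which is the main practical difference from a linear calibration curve.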
International Nuclear Information System (INIS)
Gelido, G; Angiletta, S; Pujalte, A; Quiroga, P; Cornes, P; Craiem, D
2007-01-01
Measurement of peripheral arterial pressure using the oscillometric method is commonly performed by professionals as well as by patients in their homes. This non-invasive automatic method is fast and efficient, and the required equipment is affordable. The measurement consists of extracting parameters from a calibrated decreasing cuff-pressure curve that is modulated by the heart beats which appear when arterial pressure reaches the cuff pressure. Diastolic, mean, and systolic pressures are obtained by locating particular instants on the envelope curve of the heart beats. In this article we analyze the envelope of this amplified curve to find out whether its morphology is related to arterial stiffness in patients. We found, in 33 volunteers, that the envelope waveform width correlates with systolic pressure (r=0.4, p<0.05), with pulse pressure (r=0.6, p<0.05), and with pulse pressure normalized to systolic pressure (r=0.6, p<0.05). We believe that the morphology of the heart-beat envelope curve obtained with the oscillometric method for peripheral pressure measurement depends on arterial stiffness and can be used to enhance pressure measurements
A method and apparatus for forming a double-curved panel from a flat panel
Rietbergen, D.; Vollers, K.J.
2008-01-01
A method for forming a double-curved panel from a flat panel, which comprises processing a plastically deformable flat panel or rendering the flat panel plastically deformable to enable it to mould itself to a predetermined shape, wherein the shape is obtained by a primary supporting construction
Application of macro-polarization curve method to corrosion analysis of heat exchanger
Energy Technology Data Exchange (ETDEWEB)
Aoki, S. [Dept. of Computational Science and Engineering, Toyo Univ., Kawagoe, Saitama (Japan); Amaya, K. [Dept. of Mechanical and Environmental Informatics, Tokyo Inst. of Tech., Tokyo (Japan); Miyuki, H. [Iron and Steel Research Labs., Sumitomo Steel Co. Ltd., Fuyocho, Amagasaki (Japan)
2003-07-01
A boundary element corrosion analysis was performed for a heat exchanger to predict the effect of zinc sacrificial anodes. Since a heat exchanger has thousands of stainless steel tubes held with two naval brass tube-holder plates, and hence the conventional BEM does not work, the equivalent macro-polarization curve method was applied. At first the part of the tube-holder plate surfaces which consist of a great number of stainless steel tube edges and brass tube-holder plate was assumed to be made of a homogeneous virtual material. Then, its equivalent macro-polarization curve was determined by analyzing a tube unit, which consists of a stainless steel tube and a part of naval brass tube-holder plate. By using the equivalent macro-polarization curve thus obtained, the heat exchanger was effectively analyzed with a small number of elements. (orig.)
The strategy curve. A method for representing and interpreting generator bidding strategies
International Nuclear Information System (INIS)
Lucas, N.; Taylor, P.
1995-01-01
The pool is the novel trading arrangement at the heart of the privatized electricity market in England and Wales. This central role in the new system makes it crucial that it is seen to function efficiently. Unfortunately, it is governed by a set of complex rules, which leads to a lack of transparency, and this makes monitoring of its operation difficult. This paper seeks to provide a method for illuminating one aspect of the pool, that of generator bidding behaviour. We introduce the concept of a strategy curve, which is a concise device for representing generator bidding strategies. This curve has the appealing characteristic of directly revealing any deviation in the bid price of a genset from the costs of generating electricity. After a brief discussion about what constitutes price and cost in this context we present a number of strategy curves for different days and provide some interpretation of their form, based in part on our earlier work with game theory. (author)
Energy Technology Data Exchange (ETDEWEB)
Grau Carles, A.; Grau Malonda, A.
1984-07-01
A code, SCINFI, written in BASIC, was developed to compute the efficiency-quench standardization curve for any radionuclide. The program requires the standardization curve for ³H and the polynomial relations between counting efficiency and figure of merit for both ³H and the problem nuclide (e.g. ¹⁴C). The program is applied to the computation of the efficiency-quench standardization curve for ¹⁴C. Five different liquid scintillation spectrometers and two scintillator solutions have been checked. The computation results are compared with the experimental values obtained with a set of ¹⁴C standardized samples. (Author)
Prediction Method for the Complete Characteristic Curves of a Francis Pump-Turbine
Directory of Open Access Journals (Sweden)
Wei Huang
2018-02-01
Full Text Available Complete characteristic curves of a pump-turbine are essential for simulating the hydraulic transients and designing pumped storage power plants but are often unavailable in the preliminary design stage. To solve this issue, a prediction method for the complete characteristics of a Francis pump-turbine was proposed. First, based on Euler equations and the velocity triangles at the runners, a mathematical model describing the complete characteristics of a Francis pump-turbine was derived. According to multiple sets of measured complete characteristic curves, explicit expressions for the characteristic parameters of characteristic operating point sets (COPs, as functions of a specific speed and guide vane opening, were then developed to determine the undetermined coefficients in the mathematical model. Ultimately, by combining the mathematical model with the regression analysis of COPs, the complete characteristic curves for an arbitrary specific speed were predicted. Moreover, a case study shows that the predicted characteristic curves are in good agreement with the measured data. The results obtained by 1D numerical simulation of the hydraulic transient process using the predicted characteristics deviate little from the measured characteristics. This method is effective and sufficient for a priori simulations before obtaining the measured characteristics and provides important support for the preliminary design of pumped storage power plants.
Hongyang, Yu; Zhengang, Lu; Xi, Yang
2017-05-01
The Modular Multilevel Converter (MMC) is more and more widely used in high-voltage DC transmission systems and high-power motor drive systems, and is a major topological structure for high-power AC-DC converters. Owing to the large module number, the complex control algorithm, and the high-power application background, the MMC model used for simulation should be as accurate as possible, so as to reproduce the details of how the MMC works for dynamic testing of the MMC controller. So far, however, there is no simple simulation MMC model that can reproduce the switching dynamic process. In this paper, a curve-embedded full-bridge MMC modeling method with detailed representation of IGBT characteristics is proposed. The method is based on switching-curve lookup and simple circuit calculation, and it is simple to implement. Simulation comparison tests under Matlab/Simulink show the proposed method to be correct.
About the method of approximation of a simple closed plane curve with a sharp edge
Directory of Open Access Journals (Sweden)
Zelenyy A.S.
2017-02-01
Full Text Available As noted in the article, the problem of interpolating a simple plane curve initially arose in the simulation of subsonic flow around a body, with subsequent calculation of the velocity potential using the vortex panel method. As it turned out, however, the practical importance of this method is much wider. The algorithm can be successfully applied in any task that requires a discrete set of points describing an arbitrary curve: the potential function method, flow around an airfoil with a sharp trailing edge (airfoil, liquid drop, etc.), analytic expressions that are very difficult to obtain, creation of fonts and logos, and some tasks in architecture and the garment industry.
Determination of the saturation curve of a primary standard for low energy X-ray beams
International Nuclear Information System (INIS)
Cardoso, Ricardo de Souza; Poledna, Roberto; Peixoto, Jose Guilherme P.
2003-01-01
The free-air chamber is well recognized as the primary standard for the measurement of air kerma, owing to its ability to perform absolute measurements of that quantity according to its definition. The Institute for Radioprotection and Dosimetry (IRD), Brazil, therefore used a free-air cylindrical ionization chamber for its implantation. Initially, a mechanical characterization was performed to verify the chamber as a primary standard. This paper gives a full, detailed description of the determination of the saturation curve, the operating point of 2000 V found for that chamber, and its saturation coefficient
The Objective Borderline Method: A Probabilistic Method for Standard Setting
Shulruf, Boaz; Poole, Phillippa; Jones, Philip; Wilkinson, Tim
2015-01-01
A new probability-based standard setting technique, the Objective Borderline Method (OBM), was introduced recently. This was based on a mathematical model of how test scores relate to student ability. The present study refined the model and tested it using 2500 simulated data-sets. The OBM was feasible to use. On average, the OBM performed well…
S-curve networks and an approximate method for estimating degree distributions of complex networks
Guo, Jin-Li
2010-12-01
In the study of complex networks almost all theoretical models have the property of infinite growth, but the size of actual networks is finite. Based on statistics for China Internet IPv4 (Internet Protocol version 4) addresses, this paper proposes a forecasting model using an S curve (logistic curve) and forecasts the growing trend of IPv4 addresses in China. The forecast provides reference values for optimizing the distribution of IPv4 address resources and for the development of IPv6. Based on the observed laws of IPv4 growth, namely bulk growth and a finite growth limit, a finite network model with bulk growth is proposed; the model is called an S-curve network. Analysis demonstrates that the analytic method based on uniform distributions (i.e., the Barabási-Albert method) is not suitable for this network. An approximate method is therefore developed to predict the growth dynamics of individual nodes and used to calculate analytically the degree distribution and the scaling exponents. The analytical result agrees well with the simulation, obeying an approximately power-law form. This method overcomes a shortcoming of the Barabási-Albert method commonly used in current network research.
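The S-curve (logistic) forecasting step can be sketched as follows: with the saturation level K assumed known, the logistic model N(t) = K / (1 + exp(-r(t - t0))) linearizes under a logit transform, and the remaining parameters follow from ordinary least squares. This is a minimal sketch; in practice K must itself be estimated from the data.

```python
import math

def fit_logistic(ts, ys, K):
    """Fit N(t) = K / (1 + exp(-r*(t - t0))) with K given, via the logit
    transform ln(K/N - 1) = -r*t + r*t0, which is linear in t."""
    zs = [math.log(K / y - 1.0) for y in ys]
    n = len(ts)
    mt = sum(ts) / n
    mz = sum(zs) / n
    slope = (sum((t - mt) * (z - mz) for t, z in zip(ts, zs))
             / sum((t - mt) ** 2 for t in ts))
    r = -slope
    t0 = mz / r + mt          # from the fitted intercept r*t0
    return r, t0
```

Once r and t0 are recovered, evaluating the model at future t gives the forecast, and K is the finite growth limit the abstract refers to.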
S-curve networks and an approximate method for estimating degree distributions of complex networks
International Nuclear Information System (INIS)
Guo Jin-Li
2010-01-01
In the study of complex networks almost all theoretical models have the property of infinite growth, but the size of actual networks is finite. Based on statistics for China Internet IPv4 (Internet Protocol version 4) addresses, this paper proposes a forecasting model using an S curve (logistic curve) and forecasts the growing trend of IPv4 addresses in China. The forecast provides reference values for optimizing the distribution of IPv4 address resources and for the development of IPv6. Based on the observed laws of IPv4 growth, namely bulk growth and a finite growth limit, a finite network model with bulk growth is proposed; the model is called an S-curve network. Analysis demonstrates that the analytic method based on uniform distributions (i.e., the Barabási-Albert method) is not suitable for this network. An approximate method is therefore developed to predict the growth dynamics of individual nodes and used to calculate analytically the degree distribution and the scaling exponents. The analytical result agrees well with the simulation, obeying an approximately power-law form. This method overcomes a shortcoming of the Barabási-Albert method commonly used in current network research. (general)
Greer, Tyler; Lietz, Christopher B.; Xiang, Feng; Li, Lingjun
2015-01-01
Absolute quantification of protein targets using liquid chromatography-mass spectrometry (LC-MS) is a key component of candidate biomarker validation. One popular method combines multiple reaction monitoring (MRM) using a triple quadrupole instrument with stable isotope-labeled standards (SIS) for absolute quantification (AQUA). LC-MRM AQUA assays are sensitive and specific, but they are also expensive because of the cost of synthesizing stable isotope peptide standards. While the chemical modification approach using mass differential tags for relative and absolute quantification (mTRAQ) represents a more economical approach when quantifying large numbers of peptides, these reagents are costly and still suffer from lower throughput because only two concentration values per peptide can be obtained in a single LC-MS run. Here, we have developed and applied a set of five novel mass difference reagents, isotopic N, N-dimethyl leucine (iDiLeu). These labels contain an amine reactive group, triazine ester, are cost effective because of their synthetic simplicity, and have increased throughput compared with previous LC-MS quantification methods by allowing construction of a four-point standard curve in one run. iDiLeu-labeled peptides show remarkably similar retention time shifts, slightly lower energy thresholds for higher-energy collisional dissociation (HCD) fragmentation, and high quantification accuracy for trypsin-digested protein samples (median errors <15%). By spiking in an iDiLeu-labeled neuropeptide, allatostatin, into mouse urine matrix, two quantification methods are validated. The first uses one labeled peptide as an internal standard to normalize labeled peptide peak areas across runs (<19% error), whereas the second enables standard curve creation and analyte quantification in one run (<8% error).
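The four-point standard curve workflow described above reduces, in its simplest form, to an ordinary least-squares line through the labeled-standard channels followed by inversion for the unknown. This is a generic sketch with invented amounts and peak areas, not the iDiLeu data or processing pipeline.

```python
def linear_fit(xs, ys):
    """Ordinary least squares fit y = m*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    m = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return m, my - m * mx

def quantify(area, m, b):
    """Convert a measured peak area back to an amount via the standard curve."""
    return (area - b) / m

# hypothetical four-point curve: spiked amount (fmol) vs. measured peak area
amounts = [10.0, 50.0, 100.0, 500.0]
areas = [2.1e4, 9.8e4, 2.02e5, 9.95e5]
m, b = linear_fit(amounts, areas)
```

The point of the four-channel design is that all four (amount, area) pairs come from a single LC-MS run, so the line and the unknown are measured under identical conditions.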
Wave characterization of cylindrical and curved panels using a finite element method.
Manconi, Elisabetta; Mace, Brian R
2009-01-01
This paper describes a wave finite element method for the numerical prediction of wave characteristics of cylindrical and curved panels. The method combines conventional finite elements and the theory of wave propagation in periodic structures. The mass and stiffness matrices of a small segment of the structure, which is typically modeled using either a single shell element or, especially for laminated structures, a stack of solid elements meshed through the cross-section, are postprocessed using periodicity conditions. The matrices are typically found using a commercial FE package. The solutions of the resulting eigenproblem provide the frequency evolution of the wavenumber and the wave modes. For cylindrical geometries, the circumferential order of the wave can be specified in order to define the phase change that a wave experiences as it propagates across the element in the circumferential direction. The method is described and illustrated by application to cylinders and curved panels of different constructions. These include isotropic, orthotropic, and laminated sandwich constructions. The application of the method is seen to be straightforward even in the complicated case of laminated sandwich panels. Accurate predictions of the dispersion curves are found at negligible computational cost.
Directory of Open Access Journals (Sweden)
Cuo Guan
2017-01-01
Full Text Available This paper provides a method for evaluating the development status of old oilfields. The method mainly uses the abundant coring-well data of the oilfield: after the displacement efficiencies of the statistical wells in the study area over a similar period are ordered from small to large, the cumulative distribution curve of displacement efficiency is obtained. Based on this curve, combined with the reservoir ineffective-circulation limit, data such as the cumulative water-absorption ratio of the reservoirs are used to study the reservoir producing degree, calculate the degree of oil recovery, evaluate the proportion of remaining movable oil after water flooding, and calculate the reservoir ineffective-circulation thickness and ineffective-circulation water volume.
International Nuclear Information System (INIS)
Petkov, P.; Yavahchova, M.; Tonev, D.; Goutev, N.; Dewald, A.
2013-01-01
A new version of the differential decay curve method is proposed for the analysis of Doppler-shift attenuation lifetime measurements. The lifetime is derived directly from the line shapes of the depopulating and feeding transitions on the basis of the Blaugrund approximation without including any assumptions or fitting of the time dependence of the population of the corresponding levels. For specific simulated cases, the method shows promise for its applicability. In the future, we intend to generalize the method for the case of an arbitrary multi-detector setup
Makita A.; Shindo Y.; Ohtsuka N.
2010-01-01
Instrumented Charpy tests were conducted on small-sized specimens of 2 1/4 Cr-1Mo steel. In the tests, the single-specimen key curve method was applied to determine the value of fracture toughness for the initiation of crack extension in the hydrogen-free condition, KIC, and for hydrogen embrittlement cracking, KIH. The tearing modulus, as a parameter for resistance to crack extension, was also determined. The role of these parameters is discussed at an upper-shelf temperature and at a transition temperature.
Feasibility of the correlation curves method in calorimeters of different types
Grushevskaya, E. A.; Lebedev, I. A.; Fedosimova, A. I.
2014-01-01
The development of cascade processes in calorimeters of different types was simulated in order to implement energy measurement by the correlation curves method. A heterogeneous calorimeter has significant transient effects, associated with the difference in critical energy between the absorber and the detector. The best option is a mixed calorimeter, which has a target block, leading to rapid development of the cascade, and a homogeneous measuring unit. Uncertainties of e...
Ensemble Learning Method for Outlier Detection and its Application to Astronomical Light Curves
Nun, Isadora; Protopapas, Pavlos; Sim, Brandon; Chen, Wesley
2016-09-01
Outlier detection is necessary for automated data analysis, with specific applications spanning almost every domain from financial markets to epidemiology to fraud detection. We introduce a novel mixture-of-experts outlier detection model, which uses a dynamically trained, weighted network of five distinct outlier detection methods. After dimensionality reduction, individual outlier detection methods score each data point for "outlierness" in this new feature space. Our model then uses dynamically trained parameters to weigh the scores of each method, yielding a final outlier score. We find that the mixture-of-experts model performs, on average, better than any single expert model in identifying both artificially and manually picked outliers. This mixture model is applied to a data set of astronomical light curves, after dimensionality reduction via time series feature extraction. Our model was tested using three fields from the MACHO catalog and generated a list of anomalous candidates. We confirm that the outliers detected using this method belong to rare classes, like Novae, He-burning, and red giant stars; other outlier light curves identified have no available information associated with them. To elucidate their nature, we created a website containing the light-curve data and information about these objects. Users can attempt to classify the light curves, give conjectures about their identities, and sign up for follow-up messages about the progress made on identifying these objects. These user-submitted data can be used to further train our mixture-of-experts model. Our code is publicly available to all who are interested.
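The weighted combination at the heart of a mixture-of-experts outlier model can be sketched generically: each detector scores every point, scores are min-max normalized to a common scale, and a weighted average gives the final outlier score. The two toy detectors and fixed weights below are placeholders for the five dynamically trained experts of the paper.

```python
def normalize(scores):
    """Min-max normalize a score list to [0, 1]."""
    lo, hi = min(scores), max(scores)
    return [(s - lo) / (hi - lo) if hi > lo else 0.0 for s in scores]

def zscore_detector(data):
    """Score by distance from the mean in standard deviations."""
    n = len(data)
    mean = sum(data) / n
    sd = (sum((x - mean) ** 2 for x in data) / n) ** 0.5
    return [abs(x - mean) / sd for x in data]

def median_detector(data):
    """Score by absolute distance from the median."""
    med = sorted(data)[len(data) // 2]
    return [abs(x - med) for x in data]

def ensemble_scores(data, detectors, weights):
    """Weighted average of the normalized scores from each detector."""
    per = [normalize(d(data)) for d in detectors]
    w = sum(weights)
    return [sum(wt * p[i] for wt, p in zip(weights, per)) / w
            for i in range(len(data))]
```

In the paper the weights are themselves learned from labeled outliers; here they are simply supplied by the caller.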
A neural network driving curve generation method for the heavy-haul train
Directory of Open Access Journals (Sweden)
Youneng Huang
2016-05-01
Full Text Available The heavy-haul train has a series of characteristics, such as the locomotive traction properties, the longer length of train, and the nonlinear train pipe pressure during train braking. When the train is running on a continuous long and steep downgrade railway line, the safety of the train is ensured by cycle braking, which puts high demands on the driving skills of the driver. In this article, a driving curve generation method for the heavy-haul train based on a neural network is proposed. First, in order to describe the nonlinear characteristics of train braking, the neural network model is constructed and trained by practical driving data. In the neural network model, various nonlinear neurons are interconnected to work for information processing and transmission. The target value of train braking pressure reduction and release time is achieved by modeling the braking process. The equation of train motion is computed to obtain the driving curve. Finally, in four typical operation scenarios, comparing the curve data generated by the method with corresponding practical data of the Shuohuang heavy-haul railway line, the results show that the method is effective.
Serôdio, João; Ezequiel, João; Frommlet, Jörg; Laviale, Martin; Lavaud, Johann
2013-11-01
Light-response curves (LCs) of chlorophyll fluorescence are widely used in plant physiology. Most commonly, LCs are generated sequentially, exposing the same sample to a sequence of distinct actinic light intensities. These measurements are not independent, as the response to each new light level is affected by the light exposure history experienced during previous steps of the LC, an issue particularly relevant in the case of the popular rapid light curves. In this work, we demonstrate the proof of concept of a new method for the rapid generation of LCs from nonsequential, temporally independent fluorescence measurements. The method is based on the combined use of sample illumination with digitally controlled, spatially separated beams of actinic light and a fluorescence imaging system. It allows the generation of a whole LC, including a large number of actinic light steps and adequate replication, within the time required for a single measurement (and therefore named "single-pulse light curve"). This method is illustrated for the generation of LCs of photosystem II quantum yield, relative electron transport rate, and nonphotochemical quenching on intact plant leaves exhibiting distinct light responses. This approach makes it also possible to easily characterize the integrated dynamic light response of a sample by combining the measurement of LCs (actinic light intensity is varied while measuring time is fixed) with induction/relaxation kinetics (actinic light intensity is fixed and the response is followed over time), describing both how the response to light varies with time and how the response kinetics varies with light intensity.
Yang, Qian; Lew, Hwee Yeong; Peh, Raymond Hock Huat; Metz, Michael Patrick; Loh, Tze Ping
2016-10-01
Reference intervals are the most commonly used decision support tool when interpreting quantitative laboratory results. They may require partitioning to better describe subpopulations that display significantly different reference values. Partitioning by age is particularly important for the paediatric population since there are marked physiological changes associated with growth and maturation. However, most partitioning methods are either technically complex or require prior knowledge of the underlying physiology/biological variation of the population. There is growing interest in the use of continuous centile curves, which provides seamless laboratory reference values as a child grows, as an alternative to rigidly described fixed reference intervals. However, the mathematical functions that describe these curves can be complex and may not be easily implemented in laboratory information systems. Hence, the use of fixed reference intervals is expected to continue for a foreseeable time. We developed a method that objectively proposes optimised age partitions and reference intervals for quantitative laboratory data (http://research.sph.nus.edu.sg/pp/ppResult.aspx), based on the sum of gradient that best describes the underlying distribution of the continuous centile curves. It is hoped that this method may improve the selection of age intervals for partitioning, which is receiving increasing attention in paediatric laboratory medicine. Copyright © 2016 Royal College of Pathologists of Australasia. Published by Elsevier B.V. All rights reserved.
DEFF Research Database (Denmark)
Tatu, Aditya Jayant
tracking interfaces, active contour based segmentation methods and others. It can also be used to study shape spaces, as deforming a shape can be thought of as evolving its boundary curve. During curve evolution a curve traces out a path in the infinite dimensional space of curves. Due to application...... defined subspace, the N-links bicycle chain space, i.e. the space of curves with equidistant neighboring landmark points. This in itself is a useful shape space for medical image analysis applications. The Histogram of Gradient orientation based features are many in number and are widely used...
Solving the non-isothermal kinetic curves of gas-solid reactions by non-linear fitting method
International Nuclear Information System (INIS)
Ge Qingren
1987-01-01
Solving the non-isothermal kinetic curves of gas-solid reactions remains an open question. This paper presents a non-linear fitting method developed on the basis of a review of previous methods. Computer programs have been compiled; solutions may be obtained within five minutes on a 441B-III computer, and graphs of the experimental and fitted curves can be printed simultaneously. Satisfactory results have been obtained in solving twenty-eight typical theoretically calculated kinetic curves. The applicability and limitations of the method are discussed
Comparison of Optimization and Two-point Methods in Estimation of Soil Water Retention Curve
Ghanbarian-Alavijeh, B.; Liaghat, A. M.; Huang, G.
2009-04-01
The soil water retention curve (SWRC) is one of the soil hydraulic properties whose direct measurement is time-consuming and expensive. Since its measurement is unavoidable in environmental studies, e.g. investigations of unsaturated hydraulic conductivity and solute transport, this study attempts to predict the soil water retention curve from two measured points. Using the Cresswell and Paydar (1996) method (two-point method) and an optimization method developed in this study on the basis of two points of the SWRC, the parameters of the Tyler and Wheatcraft (1990) model (fractal dimension and air-entry value) were estimated, and water contents at different matric potentials were then estimated and compared with their measured values (n=180). For each method, we used both 3 and 1500 kPa (case 1) and 33 and 1500 kPa (case 2) as the two points of the SWRC. The calculated RMSE values showed no significant difference between case 1 and case 2 for the Cresswell and Paydar (1996) method, although the RMSE value in case 2 (2.35) was slightly less than in case 1 (2.37). The results also showed that the optimization method developed in this study had significantly lower RMSE values for cases 1 (1.63) and 2 (1.33) than the Cresswell and Paydar (1996) method.
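The two-point idea has a convenient closed form for the Tyler and Wheatcraft (1990) fractal model, theta(h) = theta_s (h_a/h)^(3-D): the ratio of the two points fixes the fractal dimension D, and either point then fixes the air-entry value h_a. The sketch below assumes a known saturated water content theta_s and uses made-up data points; it is an illustration of the model algebra, not the paper's optimization method.

```python
import math

def fit_tyler_wheatcraft(h1, th1, h2, th2, theta_s):
    """Closed-form two-point fit of the Tyler & Wheatcraft (1990)
    fractal retention model theta(h) = theta_s * (h_a / h)**(3 - D).
    Assumes theta_s is known; h is suction in kPa, theta is
    volumetric water content."""
    exponent = math.log(th1 / th2) / math.log(h2 / h1)   # equals 3 - D
    D = 3.0 - exponent
    h_a = h1 * (th1 / theta_s) ** (1.0 / exponent)       # air-entry value
    return D, h_a

def theta(h, D, h_a, theta_s):
    """Water content predicted by the fitted fractal model."""
    return theta_s if h <= h_a else theta_s * (h_a / h) ** (3.0 - D)

# Hypothetical points at 33 and 1500 kPa (the abstract's case 2)
D, h_a = fit_tyler_wheatcraft(33.0, 0.30, 1500.0, 0.15, theta_s=0.45)
print(D, h_a)
```

By construction the fitted curve passes through both input points, so the remaining error of the method comes entirely from how well the fractal model describes the rest of the measured SWRC.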
Computer Drawing Method for Operating Characteristic Curve of PV Power Plant Array Unit
Tan, Jianbin
2018-02-01
For the engineering design of large-scale grid-connected photovoltaic power stations and the development of the many associated simulation and analysis systems, it is necessary to draw the operating characteristic curves of photovoltaic array units by computer, for which a segmented non-linear interpolation algorithm is proposed. With the module performance parameters as the main design basis, the computer can derive the performance curves of the PV module; combined with the series and parallel connections of the PV array, computer drawing of the performance curve of the PV array unit can then be realized. The resulting data can also be fed into PV development software, improving its use in practical applications.
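The scaling step described above (module curve first, then series/parallel combination) can be illustrated with a simplified single-diode model. This is a generic textbook model with invented parameter values, not the paper's interpolation algorithm: the series count multiplies voltage and the parallel count multiplies current.

```python
import math

def iv_curve(v_oc=38.0, i_sc=9.0, n_points=50, a=1.5, cells=60, temp=298.15):
    """Simplified single-diode PV module characteristic,
    I = Isc - I0*(exp(V/(a*Ns*Vt)) - 1), with I0 chosen so that
    I(Voc) = 0. All parameter values are illustrative."""
    k, q = 1.380649e-23, 1.602176634e-19
    vt_mod = a * cells * k * temp / q            # modified thermal voltage
    i0 = i_sc / (math.exp(v_oc / vt_mod) - 1.0)  # forces I(Voc) = 0
    return [(v_oc * i / n_points,
             i_sc - i0 * (math.exp(v_oc * i / n_points / vt_mod) - 1.0))
            for i in range(n_points + 1)]

curve = iv_curve()
# An array unit scales the module curve: modules in series multiply V,
# parallel strings multiply I (hypothetical 10-series x 5-parallel unit).
array_curve = [(v * 10, i * 5) for v, i in curve]
print(curve[0], curve[-1])
```

The curve runs from the short-circuit point (0, Isc) to the open-circuit point (Voc, 0) and is strictly decreasing, as expected of a PV characteristic.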
International Nuclear Information System (INIS)
Claisse, A.; Després, B.; Labourasse, E.; Ledoux, F.
2012-01-01
The aim of this paper is the numerical simulation of compressible hydrodynamic strong implosions, which take place, for instance, in Inertial Confinement Fusion. It focuses in particular on two crucial issues for such calculations: the very low CFL number at the implosion center and the approximation error on the initial geometry. We therefore propose an exceptional-points method, which is translation invariant and suitable for curved meshes. This method is designed for cell-centered Lagrangian schemes (GLACE, EUCCLHYD). Several numerical examples on significant test cases are presented to show the relevance of our approach.
Assessment of p-y Curves from Numerical Methods for a non-Slender Monopile in Cohesionless Soil
DEFF Research Database (Denmark)
Wolf, Torben K.; Rasmussen, Kristian L.; Hansen, Mette
In current design, the stiff large-diameter monopile is a widely used solution as the foundation of offshore wind turbines. Winds and waves subject the monopile to considerable lateral loads. Current design guidance applies the p-y curve method with formulations for the curves based on slender piles… However, the behaviour of stiff monopiles during lateral loading is not fully understood. In this paper, a case study from Barrow Offshore Wind Farm is used in a 3D finite element model. The analysis forms a basis for the extraction of p-y curves, which are used in an evaluation of the traditional curves… Different extraction methods are described and discussed…
Assessment of p-y Curves from Numerical Methods for a non-Slender Monopile in Cohesionless Soil
DEFF Research Database (Denmark)
Ibsen, Lars Bo; Roesen, Hanne Ravn; Wolf, Torben K.
2013-01-01
In current design, the stiff large-diameter monopile is a widely used solution as the foundation of offshore wind turbines. Winds and waves subject the monopile to considerable lateral loads. Current design guidance applies the p-y curve method with formulations for the curves based on slender piles… However, the behaviour of stiff monopiles during lateral loading is not fully understood. In this paper, a case study from Barrow Offshore Wind Farm is used in a 3D finite element model. The analysis forms a basis for the extraction of p-y curves, which are used in an evaluation of the traditional curves… Different extraction methods are described and discussed…
Standard-Setting Methods as Measurement Processes
Nichols, Paul; Twing, Jon; Mueller, Canda D.; O'Malley, Kimberly
2010-01-01
Some writers in the measurement literature have been skeptical of the meaningfulness of achievement standards and described the standard-setting process as blatantly arbitrary. We argue that standard setting is more appropriately conceived of as a measurement process similar to student assessment. The construct being measured is the panelists'…
A bottom-up method to develop pollution abatement cost curves for coal-fired utility boilers
International Nuclear Information System (INIS)
Vijay, Samudra; DeCarolis, Joseph F.; Srivastava, Ravi K.
2010-01-01
This paper illustrates a new method to create supply curves for pollution abatement using boiler-level data that explicitly accounts for technology cost and performance. The Coal Utility Environmental Cost (CUECost) model is used to estimate retrofit costs for five different NOx control configurations on a large subset of the existing coal-fired, utility-owned boilers in the US. The resultant data are used to create technology-specific marginal abatement cost curves (MACCs) and also serve as input to an integer linear program, which minimizes system-wide control costs by finding the optimal distribution of NOx controls across the modeled boilers under an emission constraint. The result is a single optimized MACC that accounts for detailed, boiler-specific information related to NOx retrofits. Because the resultant MACCs do not take into account regional differences in air-quality standards or pre-existing NOx controls, the results should not be interpreted as a policy prescription. The general method as well as the NOx-specific results presented here should be of significant value to modelers and policy analysts who must estimate the costs of pollution reduction.
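The basic shape of a bottom-up MACC can be sketched in a few lines: rank retrofit options by cost per ton removed and accumulate tonnage. This greedy ranking is a simplification of the paper's integer linear program (which enforces at most one retrofit per boiler); the boilers, controls, and costs below are hypothetical.

```python
def marginal_abatement_curve(options):
    """Build a marginal abatement cost curve (MACC) by sorting control
    options by cost per ton removed and accumulating tonnage.
    A greedy sketch, not the paper's optimization; data are made up."""
    ranked = sorted(options, key=lambda o: o["cost"] / o["tons"])
    curve, cum_tons = [], 0.0
    for o in ranked:
        cum_tons += o["tons"]
        curve.append((cum_tons, o["cost"] / o["tons"]))
    return curve

options = [
    {"boiler": "A", "control": "LNB",  "cost": 2.0e6, "tons": 4000},
    {"boiler": "B", "control": "SCR",  "cost": 9.0e6, "tons": 9000},
    {"boiler": "C", "control": "SNCR", "cost": 1.5e6, "tons": 1000},
]
for cum, mc in marginal_abatement_curve(options):
    print(f"{cum:8.0f} tons at ${mc:,.0f}/ton")
```

Reading the curve at a required total reduction gives the marginal cost of the last (most expensive) option that must be adopted, which is the quantity policy analysts typically want.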
Lima, Lyana Rodrigues Pinto; Silva, Amanda Perse da; Schmidt-Chanasit, Jonas; Paula, Vanessa Salete de
2017-03-01
The use of quantitative real-time polymerase chain reaction (qPCR) for herpesvirus detection has improved the sensitivity and specificity of diagnosis, as it is able to detect shedding episodes in the absence of clinical lesions and to diagnose clinical specimens that have low viral loads. With the aim of improving the detection and quantification of herpesvirus by qPCR, synthetic standard curves for human herpesvirus 1 and 2 (HHV-1 and HHV-2), targeting regions gD and gG, respectively, were designed and evaluated. The results show that synthetic curves can replace DNA standard curves in diagnostic herpes qPCR.
Cleanup standards and pathways analysis methods
International Nuclear Information System (INIS)
Devgun, J.S.
1993-01-01
Remediation of a radioactively contaminated site requires that certain regulatory criteria be met before the site can be released for unrestricted future use. Since the ultimate objective of remediation is to protect public health and safety, residual radioactivity levels remaining at a site after cleanup must be below certain preset limits or meet acceptable dose or risk criteria. Release of a decontaminated site requires proof that the radiological data obtained from the site meet the regulatory criteria for such a release. Typically, release criteria consist of a composite of acceptance limits that depend on the radionuclides, the media in which they are present, and federal and local regulations. In recent years, the US Department of Energy (DOE) has developed a pathways analysis model to determine site-specific soil activity concentration guidelines for radionuclides that do not have established generic acceptance limits. The DOE pathways analysis computer code (developed by Argonne National Laboratory for the DOE) is called RESRAD (Gilbert et al. 1989). Similar efforts have been initiated by the US Nuclear Regulatory Commission (NRC) to develop and use dose-related criteria based on generic pathways analyses rather than simplistic numerical limits on residual radioactivity. The focus of this paper is radionuclide-contaminated soil. Cleanup standards are reviewed, pathways analysis methods are described, and an example is presented in which RESRAD was used to derive cleanup guidelines.
Directory of Open Access Journals (Sweden)
Alaauldin Ibrahim
2017-01-01
Full Text Available Information in patients’ medical histories is subject to various security and privacy concerns. Meanwhile, any modification or error in a patient’s medical data may cause serious or even fatal harm. To protect and transfer this valuable and sensitive information in a secure manner, radio-frequency identification (RFID) technology has been widely adopted in healthcare systems and is being deployed in many hospitals. In this paper, we propose a mutual authentication protocol for RFID tags based on elliptic curve cryptography and the advanced encryption standard. Unlike existing authentication protocols, which only send the tag ID securely, the proposed protocol can also send the valuable data stored in the tag in an encrypted pattern. The proposed protocol is not simply a theoretical construct; it has been coded and tested on an experimental RFID tag. The proposed scheme achieves mutual authentication in just two steps and satisfies all the essential security requirements of RFID-based healthcare systems.
Energy Technology Data Exchange (ETDEWEB)
Schulze-Hagen, Maximilian Franz, E-mail: mschulze@ukaachen.de; Pfeffer, Jochen; Zimmermann, Markus; Liebl, Martin [University Hospital RWTH Aachen, Department of Diagnostic and Interventional Radiology (Germany); Stillfried, Saskia Freifrau von [University Hospital RWTH Aachen, Department of Pathology (Germany); Kuhl, Christiane; Bruners, Philipp; Isfort, Peter [University Hospital RWTH Aachen, Department of Diagnostic and Interventional Radiology (Germany)
2017-06-15
Purpose: To evaluate the feasibility of a novel curved CT-guided biopsy needle prototype with shape memory to access otherwise inaccessible biopsy targets. Methods and Materials: A biopsy needle curved by 90° with a specific radius was designed. It was manufactured from nitinol to acquire shape memory and encased in a straight guiding trocar from which it is driven out to access otherwise inaccessible targets. Fifty CT-guided punctures were conducted in a biopsy phantom and 10 CT-guided punctures in a swine corpse. Biopsies from porcine liver and muscle tissue were obtained separately using the biopsy device, and histological examination was performed subsequently. Results: Mean time for placement of the trocar and deployment of the inner biopsy needle was ~205 ± 69 and ~93 ± 58 s, respectively, with a mean of ~4.5 ± 1.3 steps to reach an adequate biopsy position. Mean distance from the tip of the needle to the target was ~0.7 ± 0.8 mm. CT-guided punctures in the swine corpse took longer and required more biopsy steps (~574 ± 107 and ~380 ± 148 s, 8 ± 2.6 steps). Histology demonstrated appropriate tissue samples in nine out of ten cases (90%). Conclusions: Targets that were otherwise inaccessible via standard straight needle trajectories could be successfully reached with the curved biopsy needle prototype. Shape memory and the preformed shape with a specific radius of the curved needle simplify target accessibility with a low risk of injuring adjacent structures.
Directory of Open Access Journals (Sweden)
Bazhenov V.А.
2011-11-01
Full Text Available The realization procedure and the numerical research results for two-mass vibroimpact systems with two degrees of freedom under periodic external loading are examined in this article. The numerical investigations are carried out by the parameter continuation method. The solutions of the equations of motion are obtained as functions of the external loading amplitude, and the loading curves and contact force graphs are constructed. The impact is simulated by a nonlinear contact interaction force described by Hertz's law. The reliability of the obtained results is verified.
A neural network driving curve generation method for the heavy-haul train
Youneng Huang; Litian Tan; Lei Chen; Tao Tang
2016-01-01
The heavy-haul train has a series of characteristics, such as its locomotive traction properties, its greater length, and the nonlinear train-pipe pressure during braking. When the train is running on a long, continuous, and steep downgrade railway line, the safety of the train is ensured by cycle braking, which puts high demands on the driving skills of the driver. In this article, a driving curve generation method for the heavy-haul train based on a neural network is proposed. F...
International Nuclear Information System (INIS)
Visbal, Jorge H. Wilches; Costa, Alessandro M.
2016-01-01
The percentage depth dose of electron beams represents an important item of data in radiation therapy, since it describes their dosimetric properties. Accurate transport theory, as well as the Monte Carlo method, has revealed obvious differences between the dose distribution of the electron beams of a clinical accelerator in a water phantom and the dose distribution of monoenergetic electrons of the accelerator's nominal energy in water. In radiotherapy, the electron spectrum should be considered to improve the accuracy of dose calculation, since the shape of the percentage depth dose curve depends on how the radiation particles deposit their energy in the patient/phantom, that is, on the spectrum. Three principal approaches exist to obtain electron energy spectra from the central-axis percentage depth dose: the Monte Carlo method, direct measurement, and inverse reconstruction. In this work, the simulated annealing method is presented as a practical, reliable, and simple approach to inverse reconstruction, and an optimal alternative to the other options. (author)
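The inverse-reconstruction engine named above is generic simulated annealing: random perturbations of the unknown spectrum weights, with uphill moves accepted at a temperature-dependent probability. The sketch below shows the bare algorithm on a toy two-weight objective; the objective, step size, and cooling schedule are all illustrative choices, not the paper's settings.

```python
import math, random

def simulated_annealing(objective, x0, step=0.1, t0=1.0, cooling=0.995,
                        iters=5000, seed=42):
    """Generic simulated-annealing minimiser. Illustrative sketch of
    the kind of inverse-reconstruction engine the abstract describes."""
    rng = random.Random(seed)
    x, fx, t = list(x0), objective(x0), t0
    for _ in range(iters):
        cand = [xi + rng.uniform(-step, step) for xi in x]
        fc = objective(cand)
        # Accept downhill moves always; uphill with Boltzmann probability
        if fc < fx or rng.random() < math.exp(-(fc - fx) / t):
            x, fx = cand, fc
        t *= cooling
    return x, fx

# Toy inverse problem: recover two spectral weights from a "measured" mix
target = [0.7, 0.3]
obj = lambda w: sum((wi - ti) ** 2 for wi, ti in zip(w, target))
best, err = simulated_annealing(obj, [0.5, 0.5])
print(best, err)
```

In a real spectrum reconstruction, the objective would instead compare a forward-modelled depth dose (spectrum weights times monoenergetic depth-dose kernels) against the measured curve.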
Zhang, S. Y.; Wang, G. F.; Wu, Y. T.; Baldwin, K. M. (Principal Investigator)
1993-01-01
On a partition chromatographic column in which the support is Kieselguhr and the stationary phase is sulfuric acid solution (2 mol/L), three components of a compound theophylline tablet were simultaneously eluted by chloroform and three other components were simultaneously eluted by ammonia-saturated chloroform. The two mixtures were determined separately by a computer-aided convolution curve method. The corresponding average recoveries and relative standard deviations of the six components were: 101.6% and 1.46% for caffeine; 99.7% and 0.10% for phenacetin; 100.9% and 1.31% for phenobarbitone; 100.2% and 0.81% for theophylline; 99.9% and 0.81% for theobromine; and 100.8% and 0.48% for aminopyrine.
Directory of Open Access Journals (Sweden)
William Senkondo
2017-12-01
Full Text Available Information on aquifer processes and characteristics across scales has long been a cornerstone for understanding water resources. However, point measurements are often limited in extent and representativeness. Techniques that increase the support scale (footprint) of measurements or leverage existing observations in novel ways can thus be useful. In this study, we used a recession-curve-displacement method to estimate regional-scale aquifer transmissivity (T) from streamflow records across the Kilombero Valley of Tanzania. We compare these estimates to local-scale estimates made from pumping tests across the Kilombero Valley. The median T from the pumping tests was 0.18 m2/min. This was quite similar to the median T estimated from the recession-curve-displacement method applied during the wet season for the entire basin (0.14 m2/min) and for one of the two sub-basins tested (0.16 m2/min). On the basis of our findings, there appears to be reasonable potential to inform water resource management and hydrologic model development through streamflow-derived transmissivity estimates, which is promising for data-limited environments facing rapid development, such as the Kilombero Valley.
SiFTO: An Empirical Method for Fitting SN Ia Light Curves
Conley, A.; Sullivan, M.; Hsiao, E. Y.; Guy, J.; Astier, P.; Balam, D.; Balland, C.; Basa, S.; Carlberg, R. G.; Fouchez, D.; Hardin, D.; Howell, D. A.; Hook, I. M.; Pain, R.; Perrett, K.; Pritchet, C. J.; Regnault, N.
2008-07-01
We present SiFTO, a new empirical method for modeling Type Ia supernova (SN Ia) light curves by manipulating a spectral template. We make use of high-redshift SN data when training the model, allowing us to extend it bluer than rest-frame U. This increases the utility of our high-redshift SN observations by allowing us to use more of the available data. We find that when the shape of the light curve is described using a stretch prescription, applying the same stretch at all wavelengths is not an adequate description. SiFTO therefore uses a generalization of stretch which applies different stretch factors as a function of both the wavelength of the observed filter and the stretch in the rest-frame B band. We compare SiFTO to other published light-curve models by applying them to the same set of SN photometry, and demonstrate that SiFTO and SALT2 perform better than the alternatives when judged by the scatter around the best-fit luminosity distance relationship. We further demonstrate that when SiFTO and SALT2 are trained on the same data set the cosmological results agree. Based on observations obtained with MegaPrime/MegaCam, a joint project of CFHT and CEA/DAPNIA, at the Canada-France-Hawaii Telescope (CFHT) which is operated by the National Research Council (NRC) of Canada, the Institut National des Sciences de l'Univers of the Centre National de la Recherche Scientifique (CNRS) of France, and the University of Hawaii. This work is based in part on data products produced at the Canadian Astronomy Data Centre as part of the Canada-France-Hawaii Telescope Legacy Survey, a collaborative project of NRC and CNRS.
Directory of Open Access Journals (Sweden)
Balgaisha Mukanova
2017-01-01
Full Text Available The problem of electrical sounding of a medium with ground surface relief is modelled using the integral equations method. This numerical method is based on the triangulation of the computational domain, which is adapted to the shape of the relief and the measuring line. The numerical algorithm is tested by comparing the results with the known solution for horizontally layered media with two layers. Calculations are also performed to verify the fulfilment of the “reciprocity principle” for the 4-electrode installations in our numerical model. Simulations are then performed for a two-layered medium with a surface relief. The quantitative influences of the relief, the resistivity ratios of the contacting media, and the depth of the second layer on the apparent resistivity curves are established.
Efficient method for finding square roots for elliptic curves over OEF
CSIR Research Space (South Africa)
Abu-Mahfouz, Adnan M
2009-01-01
Full Text Available Elliptic curve cryptosystems, like other public-key encryption schemes, require computing square roots modulo a prime number. The arithmetic operations in elliptic curve schemes over Optimal Extension Fields (OEF) can be efficiently computed...
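For a prime field, the square-root step the abstract refers to is classically handled by the Tonelli-Shanks algorithm (with a fast exponentiation shortcut when p ≡ 3 mod 4). The sketch below shows that standard routine, not the paper's OEF-specific optimisation:

```python
def sqrt_mod(a, p):
    """Tonelli-Shanks: a square root of a modulo an odd prime p, or
    None if a is a quadratic non-residue. This is the textbook
    algorithm, e.g. as used in elliptic-curve point decompression;
    the paper's OEF optimisation is not reproduced here."""
    a %= p
    if a == 0:
        return 0
    if pow(a, (p - 1) // 2, p) != 1:    # Euler's criterion: non-residue
        return None
    if p % 4 == 3:                      # fast path
        return pow(a, (p + 1) // 4, p)
    q, s = p - 1, 0                     # write p - 1 = q * 2^s, q odd
    while q % 2 == 0:
        q //= 2
        s += 1
    z = 2                               # find any quadratic non-residue
    while pow(z, (p - 1) // 2, p) != p - 1:
        z += 1
    m, c, t, r = s, pow(z, q, p), pow(a, q, p), pow(a, (q + 1) // 2, p)
    while t != 1:
        i, t2 = 0, t
        while t2 != 1:                  # least i with t^(2^i) == 1
            t2 = t2 * t2 % p
            i += 1
        b = pow(c, 1 << (m - i - 1), p)
        m, c, t, r = i, b * b % p, t * b * b % p, r * b % p
    return r

print(sqrt_mod(10, 13))  # prints 7; 7*7 = 49 ≡ 10 (mod 13)
```

The other root is always p minus the returned value; point decompression picks between the two using the encoded sign bit.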
Analysis and Extension of the Percentile Method, Estimating a Noise Curve from a Single Image
Directory of Open Access Journals (Sweden)
Miguel Colom
2013-12-01
Full Text Available Given a white Gaussian noise signal on a sampling grid, its variance can be estimated from a small block sample. However, in natural images we observe the combination of the geometry of the scene being photographed and the added noise. In this case, estimating the standard deviation of the noise directly from block samples is not reliable, since the measured standard deviation is explained not just by the noise but also by the geometry of the image. The Percentile method tries to estimate the standard deviation of the noise from blocks of a high-passed version of the image and a small p-percentile of these standard deviations. The idea behind it is that edges and textures in a block of the image increase the observed standard deviation but never make it decrease. Therefore, a small percentile (0.5%, for example) in the list of standard deviations of the blocks is less likely to be affected by the edges and textures than a higher percentile (50%, for example). The 0.5%-percentile is empirically proven to be adequate for most natural, medical and microscopy images. The Percentile method is adapted to signal-dependent noise, which is realistic with the Poisson noise model obtained by a CCD device in a digital camera.
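The pipeline (high-pass, per-block standard deviations, low percentile) can be sketched in a few lines of NumPy. The high-pass filter and block size below are illustrative choices, and the published method additionally applies bias-correction factors (omitted here), so this sketch slightly underestimates sigma:

```python
import numpy as np

def percentile_noise_estimate(img, block=8, p=0.5):
    """Sketch of the Percentile method: standard deviations of small
    blocks of a high-passed image, then a low percentile of those
    stds. The paper's bias-correction factors are omitted."""
    # Simple high-pass: subtract the average of the four neighbours
    hp = img - 0.25 * (np.roll(img, 1, 0) + np.roll(img, -1, 0)
                       + np.roll(img, 1, 1) + np.roll(img, -1, 1))
    h, w = hp.shape
    stds = [hp[i:i + block, j:j + block].std()
            for i in range(0, h - block + 1, block)
            for j in range(0, w - block + 1, block)]
    return np.percentile(stds, p)   # p = 0.5 means the 0.5th percentile

rng = np.random.default_rng(0)
noisy = rng.normal(0.0, 10.0, (256, 256))  # pure noise, sigma = 10
print(percentile_noise_estimate(noisy))
```

On an image with edges and textures, the median of the block stds would be inflated while the 0.5th percentile stays close to the pure-noise value, which is exactly the robustness the method exploits.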
A Method for Formulizing Disaster Evacuation Demand Curves Based on SI Model
Directory of Open Access Journals (Sweden)
Yulei Song
2016-10-01
Full Text Available The prediction of evacuation demand curves is a crucial step in disaster evacuation planning, which directly affects the performance of the evacuation. In this paper, we discuss the factors influencing individual evacuation decision making (whether and when to leave) and summarize them into four kinds: individual characteristics, social influence, geographic location, and warning degree. Viewing decision making as a social contagion, a method based on the Susceptible-Infective (SI) model is proposed to formulize the disaster evacuation demand curves, addressing both social influence and the effects of the other factors. The disaster event of the “Tianjin Explosions” is used as a case study to illustrate the modeling results influenced by the four factors and to perform sensitivity analyses of the key parameters of the model. Some interesting phenomena are found and discussed, which is meaningful for authorities making specific evacuation plans. For example, due to the lower social influence in isolated communities, extra actions might be taken to accelerate the evacuation process in those communities.
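In an SI framing, "infection" is the decision to evacuate, so cumulative demand follows a logistic S-curve. The sketch below integrates dI/dt = beta·I·(N−I)/N with forward Euler; the population size, transmission rate, and time horizon are illustrative values, with beta standing in for the combined effect of warning degree, location, and individual factors:

```python
def evacuation_demand_curve(n_pop=10000, beta=0.8, i0=10, hours=48, dt=0.1):
    """SI-model sketch of a cumulative evacuation demand curve:
    dI/dt = beta * I * (N - I) / N, integrated by forward Euler.
    All parameter values are illustrative."""
    steps = int(hours / dt)
    evacuated = float(i0)
    curve = [(0.0, evacuated)]
    for k in range(1, steps + 1):
        evacuated += dt * beta * evacuated * (n_pop - evacuated) / n_pop
        curve.append((k * dt, evacuated))
    return curve

curve = evacuation_demand_curve()
# Demand is S-shaped: slow start, rapid middle, saturation near N
print(curve[0][1], curve[len(curve) // 2][1], curve[-1][1])
```

Lowering beta (weaker social influence, as in the isolated-community example) stretches the S-curve to the right, delaying the bulk of the demand.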
A novel curve fitting method for AV optimisation of biventricular pacemakers.
Dehbi, Hakim-Moulay; Jones, Siana; Sohaib, S M Afzal; Finegold, Judith A; Siggers, Jennifer H; Stegemann, Berthold; Whinnett, Zachary I; Francis, Darrel P
2015-09-01
In this study, we designed and tested a new algorithm, which we call the 'restricted parabola', to identify the optimum atrioventricular (AV) delay in patients with biventricular pacemakers. This algorithm automatically restricts the hemodynamic data used for curve fitting to the parabolic zone in order to avoid inadvertently selecting an AV optimum that is too long. We used R, a programming language and software environment for statistical computing, to create an algorithm which applies multiple different cut-offs to partition curve fitting of a dataset into a parabolic and a plateau region and then selects the best cut-off using a least squares method. In 82 patients, AV delay was adjusted and beat-to-beat systolic blood pressure (SBP) was measured non-invasively using our multiple-repetition protocol. The novel algorithm was compared to fitting a parabola across the whole dataset to identify how many patients had a plateau region, and whether a higher hemodynamic response was achieved with one method. In 9/82 patients, the restricted parabola algorithm detected that the pattern was not parabolic at longer AV delays. For these patients, the optimal AV delay predicted by the restricted parabola algorithm increased SBP by 1.36 mmHg above that predicted by the conventional parabolic algorithm (95% confidence interval: 0.65 to 2.07 mmHg, p-value = 0.002). AV optima selected using our novel restricted parabola algorithm give a greater improvement in acute hemodynamics than fitting a parabola across all tested AV delays. Such an algorithm may assist the development of automated methods for biventricular pacemaker optimisation.
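The cut-off search described above translates directly into code: for each candidate cut-off, fit a parabola below it and a flat plateau above it, keep the cut-off with the least total squared error, and read the AV optimum off the parabolic part only. The authors implemented this in R; the Python sketch below uses invented hemodynamic data and a simple mean as the plateau model, so it illustrates the idea rather than reproducing their algorithm:

```python
import numpy as np

def restricted_parabola(av, sbp):
    """Sketch of the 'restricted parabola' idea: partition the data at
    each candidate cut-off into a parabolic region and a plateau
    (modelled as the mean), choose the least-squares best cut-off,
    and return the vertex of the fitted parabola."""
    best = None
    for cut in range(3, len(av) + 1):          # need >= 3 points to fit
        coeff = np.polyfit(av[:cut], sbp[:cut], 2)
        resid_p = sbp[:cut] - np.polyval(coeff, av[:cut])
        plateau = sbp[cut:].mean() if cut < len(av) else 0.0
        resid_f = sbp[cut:] - plateau
        sse = (resid_p ** 2).sum() + (resid_f ** 2).sum()
        if best is None or sse < best[0]:
            best = (sse, coeff)
    a, b, _ = best[1]
    return -b / (2 * a)                        # vertex of the parabola

# Hypothetical data: parabolic up to 160 ms, flat plateau beyond
av = np.array([40, 60, 80, 100, 120, 140, 160, 200, 240, 280], float)
sbp = np.where(av <= 160, 120 - 0.002 * (av - 120) ** 2, 119.8)
print(restricted_parabola(av, sbp))
```

On this synthetic dataset a single parabola fitted across all delays would be dragged toward the plateau and overestimate the optimum, which is precisely the failure mode the restriction avoids.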
Standardization of methods of maxillofacial roentgenology
International Nuclear Information System (INIS)
Rabukhina, N.A.; Arzhantsev, A.P.; Chikirdin, Eh.G.; Tombak, M.I.; Stavitskij, R.V.; Vasil'ev, Yu.D.
1989-01-01
Typical errors in dental roentgenography, reproduced in experiment, indicate that considerable disproportionate distortions of the images of anatomical structures that are decisive for radiodiagnosis may occur in these cases. Standardization of intraoral roentgenography is based on strict positioning of the patient's head and on the angle of inclination and alignment of the tube. Specialized R3-1 film should be used
Brewick, Patrick T.; Smyth, Andrew W.
2016-12-01
The authors have previously shown that many traditional approaches to operational modal analysis (OMA) struggle to properly identify the modal damping ratios for bridges under traffic loading due to the interference caused by the driving frequencies of the traffic loads. This paper presents a novel methodology for modal parameter estimation in OMA that overcomes the problems presented by driving frequencies and significantly improves the damping estimates. This methodology is based on finding the power spectral density (PSD) of a given modal coordinate, and then dividing the modal PSD into separate regions, left- and right-side spectra. The modal coordinates were found using a blind source separation (BSS) algorithm and a curve-fitting technique was developed that uses optimization to find the modal parameters that best fit each side spectra of the PSD. Specifically, a pattern-search optimization method was combined with a clustering analysis algorithm and together they were employed in a series of stages in order to improve the estimates of the modal damping ratios. This method was used to estimate the damping ratios from a simulated bridge model subjected to moving traffic loads. The results of this method were compared to other established OMA methods, such as Frequency Domain Decomposition (FDD) and BSS methods, and they were found to be more accurate and more reliable, even for modes that had their PSDs distorted or altered by driving frequencies.
A boosting method for maximizing the partial area under the ROC curve
Directory of Open Access Journals (Sweden)
Eguchi Shinto
2010-06-01
Full Text Available Abstract. Background: The receiver operating characteristic (ROC) curve is a fundamental tool to assess the discriminant performance of not only a single marker but also a score function combining multiple markers. The area under the ROC curve (AUC) for a score function measures the intrinsic ability of the score function to discriminate between the controls and cases. Recently, the partial AUC (pAUC) has received more attention than the AUC, because a suitable range of the false positive rate can be focused on according to various clinical situations. However, existing pAUC-based methods only handle a few markers and do not take nonlinear combinations of markers into consideration. Results: We have developed a new statistical method that focuses on the pAUC based on a boosting technique. The markers are combined componentwise for maximizing the pAUC in the boosting algorithm using natural cubic splines or decision stumps (single-level decision trees), according to the values of the markers (continuous or discrete). We show that the resulting score plots are useful for understanding how each marker is associated with the outcome variable. We compare the performance of the proposed boosting method with those of other existing methods, and demonstrate its utility using real data sets. As a result, we obtain much better discrimination performance in the sense of the pAUC in both simulation studies and real data analysis. Conclusions: The proposed method addresses how to combine the markers after a pAUC-based filtering procedure in a high dimensional setting. Hence, it provides a consistent way of analyzing data based on the pAUC from marker selection to marker combination for discrimination problems. The method can capture not only linear but also nonlinear associations between the outcome variable and the markers, for which nonlinearity is known to be necessary in general for the maximization of the pAUC. The method also puts importance on the accuracy of
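The quantity being maximised, the partial AUC over a clinically relevant false-positive range, is easy to compute from an empirical ROC curve. The sketch below is a small utility in the spirit of the pAUC criterion (not the boosting method itself); the FPR cut-off and simulated scores are illustrative:

```python
import numpy as np

def partial_auc(y_true, scores, fpr_max=0.2):
    """Partial AUC over the FPR range [0, fpr_max], computed by
    walking the empirical ROC curve and accumulating trapezoids,
    clipping the segment that crosses fpr_max."""
    order = np.argsort(-scores)                # descending score
    y = np.asarray(y_true)[order]
    pos, neg = (y == 1).sum(), (y == 0).sum()
    tpr = np.concatenate(([0.0], np.cumsum(y == 1) / pos))
    fpr = np.concatenate(([0.0], np.cumsum(y == 0) / neg))
    area = 0.0
    for k in range(1, len(fpr)):
        f0, f1, t0, t1 = fpr[k - 1], fpr[k], tpr[k - 1], tpr[k]
        if f1 <= fpr_max:
            area += (f1 - f0) * (t0 + t1) / 2.0
        elif f0 < fpr_max:                     # clip the crossing segment
            t_cut = t0 + (t1 - t0) * (fpr_max - f0) / (f1 - f0)
            area += (fpr_max - f0) * (t0 + t_cut) / 2.0
    return area

rng = np.random.default_rng(1)
y = np.array([1] * 100 + [0] * 100)
s = np.concatenate([rng.normal(1, 1, 100), rng.normal(0, 1, 100)])
print(partial_auc(y, s))
```

A useless classifier scores fpr_max²/2 (here 0.02) and a perfect one scores fpr_max (here 0.2), so pAUC values are usually rescaled to that range before being reported.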
Yang, Chuan-Xiao; Sun, Xiang-Ying; Liu, Bin
2009-06-01
From the digital images of the red complex formed by the interaction of nitrite with N-(1-naphthyl)ethylenediamine dihydrochloride and p-aminobenzene sulfonic acid, it could be seen that the solution color deepened markedly with increasing concentration of nitrite ion. The JPEG format of the digital images was transformed into gray-scale format with Origin 7.0 software, and the gray values were measured with Scion Image software. The gray values of the digital image likewise increased markedly with increasing concentration of nitrite ion. Thus a novel digital imaging colorimetric (DIC) method to determine nitrogen oxides (NO(x)) contents in air was developed. Based on the red, green and blue (RGB) tricolor theory, the principle of the digital imaging colorimetric method and the factors influencing digital imaging were discussed. The present method was successfully applied to the determination of the daily variation curve of nitrogen oxides in the atmosphere and of NO2- in synthetic samples, with recoveries of 97.3%-104.0% and a relative standard deviation (RSD) of less than 5.0%. The results of the determination were consistent with those obtained by the spectrophotometric method.
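The quantitative core of such a method is an ordinary calibration line, gray value against standard concentration, inverted to read off an unknown. The standards and gray values below are made-up numbers for illustration:

```python
def linear_fit(x, y):
    """Ordinary least squares fit y = a*x + b, the calibration step of
    a digital-imaging colorimetric method (gray value vs. standard
    concentration). Data below are hypothetical."""
    n = len(x)
    sx, sy = sum(x), sum(y)
    sxx = sum(xi * xi for xi in x)
    sxy = sum(xi * yi for xi, yi in zip(x, y))
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

conc = [0.0, 0.5, 1.0, 1.5, 2.0]            # standards, e.g. ug/mL
gray = [12.0, 55.0, 98.0, 141.0, 184.0]     # hypothetical mean gray values
a, b = linear_fit(conc, gray)
unknown_gray = 120.0
print((unknown_gray - b) / a)               # ≈ 1.256, sample concentration
```

In practice the recovery and RSD figures quoted in the abstract would be obtained by running spiked samples through this same calibration.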
Photonic devices on planar and curved substrates and methods for fabrication thereof
Energy Technology Data Exchange (ETDEWEB)
Bartl, Michael H.; Barhoum, Moussa; Riassetto, David
2016-08-02
A versatile and rapid sol-gel technique for the fabrication of high-quality one-dimensional photonic bandgap materials. For example, silica/titania multi-layer materials may be fabricated by a sol-gel chemistry route combined with dip-coating onto planar or curved substrates. A shock-cooling step immediately following the thin-film heat-treatment process is introduced. This step proved important in preventing film crack formation, especially in silica/titania alternating stack materials with a high number of layers. The versatility of this sol-gel method is demonstrated by the fabrication of various Bragg stack-type materials with optical properties fine-tuned by tailoring the number and sequence of alternating layers, the film thickness, and the effective refractive index of the deposited thin films. Measured optical properties show good agreement with theoretical simulations, confirming the high quality of these sol-gel fabricated optical materials.
Method and Excel VBA Algorithm for Modeling Master Recession Curve Using Trigonometry Approach.
Posavec, Kristijan; Giacopetti, Marco; Materazzi, Marco; Birk, Steffen
2017-11-01
A new method was developed and implemented in an Excel Visual Basic for Applications (VBA) algorithm that uses trigonometry laws in an innovative way to overlap recession segments of time series and create master recession curves (MRCs). Based on a trigonometry approach, the algorithm horizontally translates succeeding recession segments of a time series, placing their vertex, that is, the highest recorded value of each recession segment, directly onto the appropriate connection line defined by measurement points of a preceding recession segment. The new method and algorithm continue the development of methods and algorithms for the generation of MRCs, the first published method having been based on a multiple linear/nonlinear regression model approach (Posavec et al. 2006). The newly developed trigonometry-based method was tested on real case-study examples and compared with the previously published multiple linear/nonlinear regression model-based method. The results show that in some cases, that is, for some time series, the trigonometry-based method creates narrower overlaps of the recession segments, resulting in higher coefficients of determination R², while in other cases the multiple linear/nonlinear regression model-based method remains superior. The Excel VBA algorithm for modeling MRCs using the trigonometry approach is implemented in a spreadsheet tool (MRCTools v3.0, written by and available from Kristijan Posavec, Zagreb, Croatia) containing the previously published VBA algorithms for MRC generation and separation. All algorithms within MRCTools v3.0 are open access and available free of charge, supporting the idea of running science on available, open, and free-of-charge software. © 2017, National Ground Water Association.
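The translation step described above, shifting each succeeding segment in time until its vertex lies on the line through the preceding segment's points, can be sketched as follows. This is a minimal Python illustration of the idea, not the MRCTools VBA code, and the segment data are hypothetical.

```python
import numpy as np

def build_mrc(segments):
    """Overlap recession segments into a master recession curve by
    horizontal translation: each succeeding segment is shifted in time so
    that its vertex (highest value) lies on the curve interpolated through
    the preceding segment's points."""
    # sort segments so the one with the highest vertex comes first
    segments = sorted(segments, key=lambda s: max(s[1]), reverse=True)
    prev_t = np.asarray(segments[0][0], float)
    prev_q = np.asarray(segments[0][1], float)
    out_t, out_q = [prev_t], [prev_q]
    for t, q in segments[1:]:
        t, q = np.asarray(t, float), np.asarray(q, float)
        # time at which the preceding segment reaches the new vertex q[0];
        # prev_q decreases with time, so reverse both arrays for np.interp
        t_hit = np.interp(q[0], prev_q[::-1], prev_t[::-1])
        shift = t_hit - t[0]
        out_t.append(t + shift)
        out_q.append(q)
        prev_t, prev_q = t + shift, q
    T, Q = np.concatenate(out_t), np.concatenate(out_q)
    order = np.argsort(T)
    return T[order], Q[order]

# two hypothetical recession segments (time, discharge)
T, Q = build_mrc([([0, 1, 2, 3], [10, 8, 6, 4]), ([0, 1, 2], [7, 5, 3])])
```

Here the second segment's vertex (7) is placed on the first segment's connection line, so the merged curve decreases monotonically.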
Directory of Open Access Journals (Sweden)
A. F. Sabitov
2017-01-01
The effectiveness of correcting the dynamic characteristics of gas temperature sensors in automatic control systems of aircraft gas turbine engines depends on how accurately the sensors' time constants are determined as functions of heat exchange conditions. The aim of this work was to develop a new method for determining the characteristic curves of the thermal inertia of gas temperature sensors. The new technique does not require finding the time constants of gas temperature sensors from experimental transient characteristics. The characteristic curve for each time constant is defined as a hyperbolic dependence on the heat transfer coefficient between the sensing element of the gas temperature sensor and the gas flow. The parameters of the hyperbolic dependencies are established by two-dimensional regression analysis. For this purpose, special software was developed in Mathcad 14 and Mathcad 15. The software allows inputting the original data from the transient characteristics into the corresponding vectors, or from tables in Excel format. It is shown that the transient characteristics in the three-dimensional coordinates "time - heat transfer coefficient - value of the transient characteristic" form a surface whose parameters are those of the desired hyperbolic dependencies. For a specific application of the technique, regression functions are given for gas temperature sensor dynamics of first and second order. Analysis of the characteristic dependencies suggests that the proposed method establishes the dependence of the dynamic characteristics of aircraft gas temperature sensors on heat exchange conditions more accurately. It is shown that the two-dimensional regression algorithm finds more accurate values of the parameters of the characteristic dependencies, which best approximate the surface of the transient characteristics.
Antibody reactions methods in safety standards
International Nuclear Information System (INIS)
Shubik, V.M.; Sirasdinov, V.G.; Zasedatelev, A.A.; Kal'nitskij, S.A.; Livshits, R.E.
1978-01-01
Results are presented of determinations of autoantibodies in white rats chronically administered the radionuclides 137Cs, 226Ra and 90Sr, which show different distribution patterns in the body. Autoantibody production is found to increase when the absorbed doses are close to, or exceed seven- to ten-fold, the maximum permissible values. The results obtained point to the desirability of autoantibody determination in studies aimed at setting hygienic standards for the absorption of radioactive substances
Chrismianto, Deddy; Zakki, Ahmad Fauzan; Arswendo, Berlian; Kim, Dong Joon
2015-12-01
Optimization analysis and computational fluid dynamics (CFD) have been applied simultaneously, in which a parametric model plays an important role in finding the optimal solution. However, it is difficult to create a parametric model for a complex shape with irregular curves, such as a submarine hull form. In this study, the cubic Bezier curve and curve-plane intersection method are used to generate a solid model of a parametric submarine hull form taking three input parameters into account: nose radius, tail radius, and length-height hull ratio (L/H). Application program interface (API) scripting is also used to write code in the ANSYS DesignModeler. The results show that the submarine shape can be generated with some variation of the input parameters. An example is given that shows how the proposed method can be applied successfully to a hull resistance optimization case. The parametric design of the middle submarine type was chosen to be modified. First, the original submarine model was analyzed using CFD. Then, using the response surface graph, some candidate optimal designs with a minimum hull resistance coefficient were obtained. Further, the optimization method in goal-driven optimization (GDO) was implemented to find the submarine hull form with the minimum hull resistance coefficient (Ct). The calculated difference in Ct between the initial and the optimum submarine is around 0.26%, the Ct values of the initial and the optimum submarine being 0.00150826 and 0.00150429, respectively. The results show that the optimum submarine hull form has a larger nose radius (rn) and higher L/H than the initial shape, while its tail radius (rt) is smaller than that of the initial shape.
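The cubic Bezier curve at the heart of the hull parameterization is evaluated directly from four control points. The sketch below (Python, with hypothetical 2D control points rather than the actual ANSYS geometry) shows the standard Bernstein form:

```python
import numpy as np

def cubic_bezier(p0, p1, p2, p3, t):
    """Evaluate a cubic Bezier curve
    B(t) = (1-t)^3 P0 + 3(1-t)^2 t P1 + 3(1-t) t^2 P2 + t^3 P3,  t in [0, 1]."""
    t = np.asarray(t, float)[:, None]
    return ((1 - t) ** 3 * p0 + 3 * (1 - t) ** 2 * t * p1
            + 3 * (1 - t) * t ** 2 * p2 + t ** 3 * p3)

# Hypothetical control points sketching a hull nose profile (x, y in meters)
p0, p1, p2, p3 = map(np.array, ([0.0, 0.0], [0.0, 1.0], [2.0, 1.5], [5.0, 1.5]))
profile = cubic_bezier(p0, p1, p2, p3, np.linspace(0, 1, 20))
```

Varying the control points (e.g., through a nose-radius parameter) regenerates the profile, which is the mechanism that makes the hull model parametric.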
Standard Testing Methods for Satellite Communication Systems
Stoner, Jerry
2005-01-01
University space programs continue to push the envelope of small satellite technology. Because budgets are often limited, and equipment costs can often be prohibitive to even well-established space programs, it becomes necessary to maximize the benefit/cost ratio of testing methods. Expensive testing is often not an option, nor is it realistic. Traditional methods such as anechoic chambers or antenna test ranges are not options, and testing the craft on the ground is not practical. Because of...
Miranda Guedes, Rui
2018-02-01
Long-term creep of viscoelastic materials is experimentally inferred through accelerating techniques based on the time-temperature superposition principle (TTSP) or on the time-stress superposition principle (TSSP). According to these principles, a given property measured for short times at a higher temperature or higher stress level remains the same as that obtained for longer times at a lower temperature or lower stress level, except that the curves are shifted parallel to the horizontal axis, matching a master curve. These procedures enable the construction of creep master curves from short-term experimental tests. The Stepped Isostress Method (SSM) is an evolution of the classical TSSP method. The SSM technique greatly reduces the number of test specimens required to obtain the master curve, since only one specimen is necessary. The classical approach, using creep tests, demands at least one specimen per stress level to produce the set of creep curves to which TSSP is applied to obtain the master curve. This work proposes an analytical method to process the SSM raw data. The method is validated using numerical simulations that reproduce the SSM tests based on two different viscoelastic models. One model represents the viscoelastic behavior of a graphite/epoxy laminate and the other represents an adhesive based on epoxy resin.
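The superposition idea behind TSSP, sliding a short-term higher-stress curve along the log-time axis until it overlays the reference curve, can be sketched as follows. This is an illustration of the principle on synthetic power-law creep, not the analytical SSM data-reduction method proposed in the paper.

```python
import numpy as np

def horizontal_shift(t_ref, d_ref, t_seg, d_seg):
    """Estimate the log10-time shift factor that superposes a higher-stress
    creep segment onto a reference creep curve (TSSP-style)."""
    # for each compliance level of the segment, the log10(time) at which
    # the reference curve reaches that same level
    log_t_match = np.interp(d_seg, d_ref, np.log10(t_ref))
    return float(np.mean(log_t_match - np.log10(t_seg)))

# Synthetic power-law creep compliance D(t) = D0 * t**n; raising the stress
# is modeled as accelerating time by a factor a (hypothetical values).
D0, n, a = 1.0, 0.2, 100.0
t_ref = np.logspace(0, 3, 20)
d_ref = D0 * t_ref ** n
t_seg = np.logspace(-2, 1, 10)        # short-term test at the higher stress
d_seg = D0 * (a * t_seg) ** n
shift = horizontal_shift(t_ref, d_ref, t_seg, d_seg)   # ~ log10(a) = 2
```

Applying the recovered shift to the segment's time axis places it on the master curve, which is exactly the graphical operation the superposition principles describe.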
Method for Constructing Standardized Simulated Root Canals.
Schulz-Bongert, Udo; Weine, Franklin S.
1990-01-01
The construction of visual and manipulative aids, clear resin blocks with root-canal-like spaces, for simulation of root canals is explained. Time, materials, and techniques are discussed. The method allows for comparison of canals, creation of any configuration of canals, and easy presentation during instruction. (MSE)
Guimarães, L B de M; Anzanello, M J; Renner, J S
2012-05-01
This paper presents a method for implementing multifunctional work teams in a footwear company that had followed the Taylor/Ford system for decades. The suggested framework first applies Learning Curve (LC) modeling to assess whether rotation between tasks of different complexities affects workers' learning rate and performance. Next, the Macroergonomic Work Analysis (MA) method (Guimarães, 1999, 2009) introduces multifunctional principles in work teams aimed at workers' training and resource improvement. When applied to a pilot line of 100 workers, the intervention reduced work-related accidents by 80% and absenteeism by 45.65%, and eliminated work-related musculoskeletal disorders (WMSD), medical consultations, and turnover. Further, the output rate of the multifunctional team increased by an average of 3% over the production rate of the regular lines following the Taylor/Ford system (with the same shoe model being manufactured), while the rework and spoilage rates were reduced by 85% and 69%, respectively. Copyright © 2011 Elsevier Ltd and The Ergonomics Society. All rights reserved.
Directory of Open Access Journals (Sweden)
Burns Malcolm J
2005-12-01
Abstract Background As real-time quantitative PCR (RT-QPCR) is increasingly being relied upon for the enforcement of legislation and regulations dependent upon the trace detection of DNA, focus has increased on the quality issues related to the technique. Recent work has focused on the identification of factors that contribute towards significant measurement uncertainty in the real-time quantitative PCR technique, through investigation of the experimental design and operating procedure. However, measurement uncertainty contributions made during the data analysis procedure have not been studied in detail. This paper presents two additional approaches for standardising data analysis through the novel application of statistical methods to RT-QPCR, in order to minimise potential uncertainty in results. Results Experimental data were generated in order to develop the two aspects of data handling and analysis that can contribute towards measurement uncertainty in results. This paper describes preliminary aspects of standardising data through the application of statistical techniques to RT-QPCR. The first aspect concerns the statistical identification and subsequent handling of outlying values arising from RT-QPCR, and discusses the implementation of ISO guidelines in relation to acceptance or rejection of outlying values. The second aspect relates to the development of an objective statistical test for the comparison of calibration curves. Conclusion The preliminary statistical tests for outlying values and comparisons between calibration curves can be applied using basic functions found in standard spreadsheet software. These two aspects emphasise that the comparability of results arising from RT-QPCR needs further refinement and development at the data-handling phase. The implementation of standardised approaches to data analysis should further help minimise variation due to subjective judgements. The aspects described in this paper will
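A minimal sketch of the first aspect, screening replicate quantification-cycle values for a single outlier with Grubbs' test, is shown below. The Cq replicates are hypothetical, and the paper's actual procedure follows ISO guidance implemented in spreadsheet functions.

```python
import math

def grubbs_statistic(values):
    """Grubbs' test statistic G = max|x - mean| / s for a single suspected
    outlier among replicate measurements."""
    n = len(values)
    mean = sum(values) / n
    s = math.sqrt(sum((x - mean) ** 2 for x in values) / (n - 1))
    return max(abs(x - mean) for x in values) / s

cq = [24.1, 24.3, 24.2, 26.8]   # hypothetical replicate Cq values
G = grubbs_statistic(cq)
# G is then compared against the tabulated Grubbs critical value for
# the chosen significance level and n = 4 before rejecting the replicate.
```

The same statistic is easily reproduced with AVERAGE/STDEV in a spreadsheet, in line with the paper's emphasis on standard software.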
The “curved lead pathway” method to enable a single lead to reach any two intracranial targets
Ding, Chen-Yu; Yu, Liang-Hong; Lin, Yuan-Xiang; Chen, Fan; Lin, Zhang-Ya; Kang, De-Zhi
2017-01-01
Deep brain stimulation is an effective way to treat movement disorders and a powerful research tool for exploring brain functions. This report proposes a “curved lead pathway” method of lead implantation in which a single lead can reach any two intracranial targets in sequence. A new type of stereotaxic system for implanting a curved lead into the human/primate brain was designed, the auxiliary device needed to apply this method in rats/mice was fabricated and verified in rats, and the Excel algorithm that automatically calculates the necessary parameters was implemented. This “curved lead pathway” method of lead implantation may complement the current method, make lead implantation for multiple targets more convenient, and expand the experimental techniques of brain function research.
Recrystallization curve study of zircaloy-4 with DRX line width method
International Nuclear Information System (INIS)
Juarez, G; Buioli, C; Samper, R; Vizcaino, P
2012-01-01
X-ray diffraction peak broadening analysis is a method that allows the plastic deformation of metals to be characterized. The technique complements transmission electron microscopy (TEM) in determining dislocation densities, so together the two techniques cover a wide range of metal deformation analysis. The study of zirconium alloys is of continuing interest in the nuclear industry, since these materials present the best combination of good mechanical properties, corrosion behavior and low neutron cross section. Two factors must be taken into account in applying the method developed for this purpose: the characteristic anisotropy of hexagonal metals and the strong texture these alloys acquire during manufacturing. To assess the recrystallization curve of Zircaloy-4, a powder of this alloy was produced by filing. Fractions of the powder were then subjected to thermal treatments at different temperatures for the same time. Since the powder has a random crystallographic orientation, the texture effect practically disappears; for this reason the Williamson-Hall method can be readily applied, producing good fits and reliable values of diffraction domain size and accumulated deformation. The temperatures selected for the thermal treatments were 1000, 700, 600, 500, 420, 300 and 200 °C for 2 h. As a result of these annealings, powders in different recovery stages were obtained (completely recrystallized, partially recrystallized and non-recrystallized structures with different levels of stress relief). The values obtained were also compared with those of the non-annealed powder. The microstructural evolution through the annealings was followed by optical microscopy (author)
Miscellaneous standard methods for Apis mellifera research
DEFF Research Database (Denmark)
Human, Hannelie; Brodschneider, Robert; Dietemann, Vincent
2013-01-01
and storing as well as determining individual weight of bees. The precise timing of developmental stages is also an important aspect of sampling individuals for experiments. In order to investigate and manipulate functional processes in honey bees, e.g., memory formation and retrieval and gene expression..., microinjection is often used. A method used by both researchers and beekeepers is the marking of queens, which serves not only to help locate her during her life but also enables the dating of queens. Creating multiple-queen colonies allows the beekeeper to maintain spare queens, increase brood... production or ask questions related to reproduction. At the colony level, very useful techniques are the measurement of intra-hive mortality using dead bee traps, weighing of full hives, collecting pollen and nectar, and digital monitoring of brood development via location recognition. At the population level...
Burger, Jessica L.
2015-07-16
© This article not subject to U.S. Copyright. Published 2015 by the American Chemical Society. Incremental but fundamental changes are currently being made to fuel composition and combustion strategies to diversify energy feedstocks, decrease pollution, and increase engine efficiency. The increase in parameter space (by having many variables in play simultaneously) makes it difficult at best to propose strategic changes to engine and fuel design by use of conventional build-and-test methodology. To make changes in the most time- and cost-effective manner, it is imperative that new computational tools and surrogate fuels are developed. Currently, sets of fuels are being characterized by industry groups, such as the Coordinating Research Council (CRC) and other entities, so that researchers in different laboratories have access to fuels with consistent properties. In this work, six gasolines (FACE A, C, F, G, I, and J) are characterized by the advanced distillation curve (ADC) method to determine the composition and enthalpy of combustion in various distillate volume fractions. Tracking the composition and enthalpy of distillate fractions provides valuable information for determining structure property relationships, and moreover, it provides the basis for the development of equations of state that can describe the thermodynamic properties of these complex mixtures and lead to development of surrogate fuels composed of major hydrocarbon classes found in target fuels.
Analyses of growth curves of Nellore cattle by Bayesian method via Gibbs sampling
Directory of Open Access Journals (Sweden)
Nobre P.R.C.
2003-01-01
Growth curves of Nellore cattle were analyzed using body weights measured at ages ranging from 1 day (birth weight) to 733 days. Traits considered were birth weight, 10 to 110 days weight, 102 to 202 days weight, 193 to 293 days weight, 283 to 383 days weight, 376 to 476 days weight, 551 to 651 days weight, and 633 to 733 days weight. Two data samples were created: one with 79,849 records from herds that had missing traits and another with 74,601 records from herds with no missing traits. Records preadjusted to a fixed age were analyzed by a multiple trait model (MTM), which included the effects of contemporary group, age of dam class, additive direct, additive maternal, and maternal permanent environment. Analyses were carried out by a Bayesian method for all nine traits. The random regression model (RRM) included the effects of age of animal, contemporary group, age of dam class, additive direct, permanent environment, additive maternal, and maternal permanent environment. Legendre cubic polynomials were used to describe random effects. MTM estimated covariance components and genetic parameters for birth weight and sequential weights, and RRM for all ages. Because covariance components based on RRM were inflated for herds with missing traits, MTM should be used and converted to covariance functions.
Cho, Seong-Ho; Han, Ji-Deuk; Kim, Jung-Han; Lee, Shi-Hyun; Jo, Ji-Bong; Kim, Chul-Hoon; Kim, Bok-Joo
2017-06-01
Sialolithiasis, the most common salivary gland pathology, is caused by calculi in the gland itself and its duct. While patients with small sialoliths can undergo conservative treatment, those with standard-size or larger sialoliths require sialolithotomy. In the present case study, we removed two sialoliths located beneath the mucosa in the posterior and anterior regions of Wharton's duct, respectively. For the posterior calculus, we performed sialolithotomy via an intra-oral approach; thereafter, the small anterior calculus near the duct orifice was removed by hydraulic power. This method has not previously been reported. There were no complications either during the operation or postoperatively, and the salivary function of the gland remained normal.
Assessment of p-y curves from numerical methods for a non-slender monopile in cohesionless soil
Energy Technology Data Exchange (ETDEWEB)
Ibsen, L.B.; Ravn Roesen, H. [Aalborg Univ.. Dept. of Civil Engineering, Aalborg (Denmark); Hansen, Mette; Kirk Wolf, T. [COWI, Kgs. Lyngby (Denmark); Lange Rasmussen, K. [Niras, Aalborg (Denmark)
2013-06-15
In current design practice the monopile is a widely used foundation for offshore wind turbines. Wind and waves subject the monopile to considerable lateral loads. The behaviour of monopiles under lateral loading is not fully understood, and the current design guidelines apply the p-y curve method in a Winkler model approach. The p-y curve method was originally developed for the jacket piles used in the oil and gas industry, which are much more slender than the monopile foundation. In recent years 3D finite element analysis (FEA) has become a tool for investigating complex geotechnical situations, such as the laterally loaded monopile. In this paper a 3D FEA is conducted as the basis for extracting p-y curves and evaluating the traditional curves. Two different methods are applied to create the data points for the p-y curves. First, a force producing a response similar to that seen in the ULS situation is applied stepwise, creating the most realistic soil response; this method, however, does not generate sufficient data points around the rotation point of the pile. Therefore, a forced horizontal displacement of the entire pile is also applied, whereby displacements are created over its entire length. The response is extracted from the interface and from the nearby soil elements, respectively, to investigate the influence this has on the computed curves. p-y curves are obtained near the rotation point by evaluating the soil response during a prescribed displacement, but this response is not in clear agreement with the response during an applied load. Two different material models are applied, and it is found that they have a significant influence on the stiffness of the evaluated p-y curves. The p-y curves evaluated by means of FEA are compared to the conventional p-y curve formulation, which provides a much stiffer response. It is found that the best response is computed by implementing the Hardening Soil model and
Identification of replication origins in archaeal genomes based on the Z-curve method
Directory of Open Access Journals (Sweden)
Ren Zhang
2005-01-01
The Z-curve is a three-dimensional curve that constitutes a unique representation of a DNA sequence, i.e., each can be uniquely reconstructed from the other. We employed Z-curve analysis to identify one replication origin in the Methanocaldococcus jannaschii genome, two replication origins in the Halobacterium species NRC-1 genome and one replication origin in the Methanosarcina mazei genome. One of the predicted replication origins of Halobacterium species NRC-1 is the same as a replication origin later identified by in vivo experiments. The Z-curve analysis of the Sulfolobus solfataricus P2 genome suggested the existence of three replication origins, which is also consistent with later experimental results. This review aims to summarize applications of the Z-curve in identifying replication origins of archaeal genomes, and to provide clues about the locations of as yet unidentified replication origins of the Aeropyrum pernix K1, Methanococcus maripaludis S2, Picrophilus torridus DSM 9790 and Pyrobaculum aerophilum str. IM2 genomes.
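The three cumulative components of the Z-curve can be computed in a few lines. This is a minimal sketch that assumes an ACGT-only sequence; origin-prediction studies typically inspect extrema of these disparity curves along the genome.

```python
def z_curve(seq):
    """Cumulative Z-curve components of a DNA sequence:
    x: purine vs pyrimidine   (A+G) - (C+T)
    y: amino vs keto          (A+C) - (G+T)
    z: weak vs strong H-bond  (A+T) - (G+C)"""
    x = y = z = 0
    xs, ys, zs = [], [], []
    for base in seq.upper():
        x += 1 if base in "AG" else -1
        y += 1 if base in "AC" else -1
        z += 1 if base in "AT" else -1
        xs.append(x)
        ys.append(y)
        zs.append(z)
    return xs, ys, zs

xs, ys, zs = z_curve("ACGT")
```

Because the running sums are reversible, the original sequence can be recovered from the three component curves, which is the uniqueness property the abstract refers to.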
Non-standard photography methods in audiovisual journalism
Géla, František
2015-01-01
The aim of the diploma thesis "Non-standard photography methods in audiovisual journalism" is to present image and production methods used in television news and journalism, particularly those that depart from standard methods. Given technological development and the endeavour to make the neutral visual space of television journalism more attractive, these methods are appearing more often. The first chapter of the thesis presents television news, its history, characteristics, elements and typology of...
Directory of Open Access Journals (Sweden)
Yin Liu
2014-09-01
Background: Molecular genetic alterations with prognostic significance have been described in childhood acute myeloid leukemia (AML). The aim of this study was to establish cost-effective techniques to detect mutations of FMS-like tyrosine kinase 3 (FLT3), Nucleophosmin 1 (NPM1), and a partial tandem duplication within the mixed lineage leukemia (MLL-PTD) genes in childhood AML. Procedure: Ninety-nine children with newly diagnosed AML were included in this study. We developed a fluorescent SYTO-82 dye-based high-resolution melting curve (HRM) analysis to detect FLT3 internal tandem duplication (FLT3-ITD), FLT3 tyrosine kinase domain (FLT3-TKD) and NPM1 mutations. MLL-PTD was screened by real-time quantitative PCR. Results: The HRM methodology correlated well with gold-standard Sanger sequencing at lower cost. Among the 99 patients studied, the FLT3-ITD mutation was associated with significantly worse event-free survival (EFS). Patients with the NPM1 mutation had significantly better EFS and overall survival. However, HRM was not sensitive enough for minimal residual disease monitoring. Conclusions: HRM was a rapid and efficient method for screening FLT3 and NPM1 gene mutations. It was both affordable and accurate, especially in resource-underprivileged regions. Our results indicate that HRM could be a useful clinical tool for rapid and cost-effective screening of the FLT3 and NPM1 mutations in AML patients.
An external standard method for quantification of human cytomegalovirus by PCR
International Nuclear Information System (INIS)
Rongsen, Shen; Liren, Ma; Fengqi, Zhou; Qingliang, Luo
1997-01-01
An external standard method for PCR quantification of HCMV is reported. [α-32P]dATP was used as a tracer. The 32P-labelled specific amplification product was separated by agarose gel electrophoresis. A gel piece containing the specific product band was excised and counted in a plastic scintillation counter. The distribution of [α-32P]dATP in the electrophoretic gel plate and the effectiveness of separating the 32P-labelled specific product from free [α-32P]dATP were examined. A standard curve for quantification of HCMV by PCR was established, and detection results for quality-control templates are presented. The external standard method and the electrophoretic separation are appraised. The results showed that the method can be used for relative quantification of HCMV. (author)
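External-standard quantification of this kind reduces to reading unknown samples off a line fitted to counts versus the logarithm of template amount. A sketch with hypothetical cpm values (the paper reports no numbers usable here):

```python
import numpy as np

# Hypothetical external-standard data: scintillation counts (cpm) of the
# specific 32P-labelled band for known HCMV template copy numbers.
copies = np.array([1e3, 1e4, 1e5, 1e6])
cpm = np.array([210.0, 480.0, 750.0, 1020.0])   # illustrative values

slope, intercept = np.polyfit(np.log10(copies), cpm, 1)

def copies_from_cpm(c):
    """Read an unknown sample off the external standard curve."""
    return 10 ** ((c - intercept) / slope)
```

Because the standards span several orders of magnitude, fitting against log10(copies) keeps the calibration linear over the whole working range.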
[Preparation of sub-standard samples and XRF analytical method of powder non-metallic minerals].
Kong, Qin; Chen, Lei; Wang, Ling
2012-05-01
In order to solve the problem that standard samples of non-metallic minerals are unsatisfactory in practical X-ray fluorescence (XRF) analysis with pressed powder pellets, a method was studied for preparing sub-standard samples from standard samples of non-metallic minerals and determining how well they adapt to the analysis of mineral powder samples, taking the K-feldspar ore in Ebian-Wudu, Sichuan as an example. Based on characterization of the K-feldspar ore and the standard samples by X-ray diffraction (XRD) and chemical methods, combined with the principle that sub-standard samples and unknown samples should be the same or similar, the experiment developed the method of preparing sub-standard samples: the two kinds of samples should contain the same minerals and similar chemical components, undergo the same mineral processing, and facilitate construction of the working curve. Under the optimum experimental conditions, a method for determining SiO2, Al2O3, Fe2O3, TiO2, CaO, MgO, K2O and Na2O in K-feldspar ore by XRF was established. The determination results are in good agreement with classical chemical methods, which indicates that this method is accurate.
International Nuclear Information System (INIS)
Rijssel, Jos van; Kuipers, Bonny W.M.; Erné, Ben H.
2014-01-01
A numerical inversion method known from the analysis of light scattering by colloidal dispersions is now applied to magnetization curves of ferrofluids. The distribution of magnetic particle sizes or dipole moments is determined without assuming that the distribution is unimodal or of a particular shape. The inversion method enforces positive number densities via a non-negative least squares procedure. It is tested successfully on experimental and simulated data for ferrofluid samples with known multimodal size distributions. The created computer program MINORIM is made available on the web. - Highlights: • A method from light scattering is applied to analyze ferrofluid magnetization curves. • A magnetic size distribution is obtained without prior assumption of its shape. • The method is tested successfully on ferrofluids with a known size distribution. • The practical limits of the method are explored with simulated data including noise. • This method is implemented in the program MINORIM, freely available online
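The inversion can be sketched as a non-negative least squares fit of the magnetization curve to a kernel of Langevin functions on a grid of candidate dipole moments. This is illustrative Python in dimensionless units, not the MINORIM program itself.

```python
import numpy as np
from scipy.optimize import nnls

def langevin(x):
    """Langevin function L(x) = coth(x) - 1/x, with the small-x limit x/3."""
    x = np.asarray(x, float)
    small = np.abs(x) < 1e-4
    xs = np.where(small, 1.0, x)          # avoid divide-by-zero
    return np.where(small, x / 3.0, 1.0 / np.tanh(xs) - 1.0 / xs)

h = np.linspace(0.01, 5, 50)              # dimensionless field
m_grid = np.array([0.5, 1.0, 2.0, 4.0])   # candidate dipole strengths
A = langevin(np.outer(h, m_grid))         # kernel matrix, one column per moment
b = langevin(2.0 * h)                     # "measured" curve: one species, m = 2

w, _ = nnls(A, b)                         # non-negative number densities
```

The non-negativity constraint is what lets the method return a physically meaningful distribution without assuming unimodality or any particular shape.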
Vozinaki, Anthi Eirini K.; Karatzas, George P.; Sibetheros, Ioannis A.; Varouchakis, Emmanouil A.
2014-05-01
Damage curves are the most significant component of the flood loss estimation models. Their development is quite complex. Two types of damage curves exist, historical and synthetic curves. Historical curves are developed from historical loss data from actual flood events. However, due to the scarcity of historical data, synthetic damage curves can be alternatively developed. Synthetic curves rely on the analysis of expected damage under certain hypothetical flooding conditions. A synthetic approach was developed and presented in this work for the development of damage curves, which are subsequently used as the basic input to a flood loss estimation model. A questionnaire-based survey took place among practicing and research agronomists, in order to generate rural loss data based on the responders' loss estimates, for several flood condition scenarios. In addition, a similar questionnaire-based survey took place among building experts, i.e. civil engineers and architects, in order to generate loss data for the urban sector. By answering the questionnaire, the experts were in essence expressing their opinion on how damage to various crop types or building types is related to a range of values of flood inundation parameters, such as floodwater depth and velocity. However, the loss data compiled from the completed questionnaires were not sufficient for the construction of workable damage curves; to overcome this problem, a Weighted Monte Carlo method was implemented, in order to generate extra synthetic datasets with statistical properties identical to those of the questionnaire-based data. The data generated by the Weighted Monte Carlo method were processed via Logistic Regression techniques in order to develop accurate logistic damage curves for the rural and the urban sectors. A Python-based code was developed, which combines the Weighted Monte Carlo method and the Logistic Regression analysis into a single code (WMCLR Python code). Each WMCLR code execution
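The two-stage procedure, weighted Monte Carlo enlargement of the questionnaire data followed by a logistic fit, can be sketched as follows. The expert responses are hypothetical, and the fit is done by linear regression in logit space for brevity, which is only one of several ways to estimate a logistic damage curve and is not the WMCLR code itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical expert responses: flood depth (m) vs estimated damage fraction,
# weighted e.g. by the number of responders giving each estimate.
depth = np.array([0.2, 0.5, 1.0, 1.5, 2.0, 3.0])
damage = np.array([0.05, 0.15, 0.40, 0.60, 0.80, 0.95])
weight = np.array([1.0, 2.0, 2.0, 3.0, 2.0, 1.0])

# Weighted Monte Carlo: resample responses with probability ~ weight,
# jittered slightly, to enlarge the questionnaire dataset.
idx = rng.choice(len(depth), size=500, p=weight / weight.sum())
d_syn = depth[idx] + rng.normal(0.0, 0.05, 500)
y_syn = np.clip(damage[idx] + rng.normal(0.0, 0.02, 500), 0.01, 0.99)

# Logistic damage curve via a linear fit in logit space.
logit = np.log(y_syn / (1.0 - y_syn))
b, a = np.polyfit(d_syn, logit, 1)

def damage_curve(x):
    """Fitted damage fraction as a function of flood depth."""
    return 1.0 / (1.0 + np.exp(-(a + b * x)))
```

The synthetic dataset inherits the statistical weight of the original answers, which is the point of the weighted resampling step before the regression.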
International Nuclear Information System (INIS)
Perez-Lopez, Esteban
2014-01-01
Quantitative chemical analysis is of importance in research, as well as in areas such as quality control and the sale of services. Some instrumental analysis methods for quantification with a linear calibration curve have limitations, because of the short linear dynamic range of the analyte or, sometimes, of the technique itself. There was therefore a need to investigate the suitability of quadratic calibration curves for analytical quantification, with the aim of demonstrating that they constitute a valid calculation model for chemical analysis instruments. The analysis method is based on atomic absorption spectroscopy, specifically the determination of magnesium in a drinking water sample from the Tacares sector north of Grecia. A nonlinear calibration curve, specifically one with quadratic behavior, was used, and the results were compared with those obtained for the same analysis with a linear calibration curve. The results showed that the methodology is valid for the determination in question, since the concentrations were very similar and, according to the hypothesis tests used, can be considered equal. (author) [es]
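Quadratic calibration amounts to fitting A = aC² + bC + c and inverting the physically meaningful root. A sketch with hypothetical magnesium standards showing curvature at the high end of the working range:

```python
import numpy as np

# Hypothetical calibration standards: Mg concentration (mg/L) vs absorbance.
conc = np.array([0.0, 0.2, 0.4, 0.6, 0.8, 1.0])
absb = np.array([0.002, 0.110, 0.210, 0.300, 0.380, 0.450])

a, b, c = np.polyfit(conc, absb, 2)       # A = a*C**2 + b*C + c

def conc_from_abs(A):
    """Invert the quadratic calibration curve, taking the root that lies
    inside the working range (the '+' root when curvature a < 0)."""
    disc = b ** 2 - 4 * a * (c - A)
    return (-b + np.sqrt(disc)) / (2 * a)
```

Only one of the two quadratic roots falls inside the calibration range, so the inversion remains single-valued over the working interval.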
The development of a curved beam element model applied to finite elements method
International Nuclear Information System (INIS)
Bento Filho, A.
1980-01-01
A procedure for evaluating the stiffness matrix of a thick curved beam element is developed by means of the minimum potential energy principle applied to finite elements. The displacement field is prescribed through polynomial expansions, and the interpolation model is determined by comparing results obtained with a sample of different expansions. As a limiting case of the curved beam, three cases of straight beams with different dimensional ratios are analysed using the proposed approach. Finally, an interpolation model is proposed and applied to a curved beam with great curvature. Displacements and internal stresses are determined and the results are compared with those found in the literature. (Author) [pt
International Nuclear Information System (INIS)
Harvey, John A.; Rodrigues, Miesher L.; Kearfott, Kimberlee J.
2011-01-01
A computerized glow curve analysis (GCA) program for handling of thermoluminescence data originating from WinREMS is presented. The MATLAB program fits the glow peaks using the first-order kinetics model. Tested materials are LiF:Mg,Ti, CaF2:Dy, CaF2:Tm, CaF2:Mn, LiF:Mg,Cu,P, and CaSO4:Dy, with most having an average figure of merit (FOM) of 1.3% or less, and CaSO4:Dy 2.2% or less. Output is a list of fit parameters, peak areas, and graphs for each fit, evaluating each glow curve in 1.5 s or less. - Highlights: → Robust algorithm for performing thermoluminescent dosimeter glow curve analysis. → Written in MATLAB so readily implemented on a variety of computers. → Usage of figure of merit demonstrated for six different materials.
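The figure of merit quoted above is conventionally computed as FOM (%) = 100 · Σ|yᵢ − fᵢ| / Σfᵢ over the glow-curve channels, comparing measured counts y with the fitted curve f. A minimal sketch with illustrative data (not values from the paper):

```python
def figure_of_merit(measured, fitted):
    """FOM (%) = 100 * sum(|measured - fitted|) / sum(fitted); values of a
    few percent or less usually indicate a good glow-curve fit."""
    return 100.0 * sum(abs(m - f) for m, f in zip(measured, fitted)) / sum(fitted)

# Illustrative glow-curve channel counts and a near-matching fit.
measured = [10.0, 40.0, 90.0, 60.0, 20.0]
fitted   = [11.0, 39.0, 88.0, 61.0, 21.0]
fom = figure_of_merit(measured, fitted)  # small FOM = good fit
```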
Standard test method for Young's modulus, tangent modulus, and chord modulus
American Society for Testing and Materials. Philadelphia
2004-01-01
1.1 This test method covers the determination of Young's modulus, tangent modulus, and chord modulus of structural materials. This test method is limited to materials in which and to temperatures and stresses at which creep is negligible compared to the strain produced immediately upon loading and to elastic behavior. 1.2 Because of experimental problems associated with the establishment of the origin of the stress-strain curve described in 8.1, the determination of the initial tangent modulus (that is, the slope of the stress-strain curve at the origin) and the secant modulus are outside the scope of this test method. 1.3 The values stated in SI units are to be regarded as standard. No other units of measurement are included in this standard. 1.4 This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibility of the user of this standard to establish appropriate safety and health practices and determine the applicability of regulatory require...
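For data already restricted to the elastic region, the chord-modulus and Young's-modulus determinations reduce to simple slope calculations. The sketch below is a simplified stand-in for the standard's procedure, using hypothetical stress-strain data:

```python
def interp_stress(strain_pts, stress_pts, e):
    """Linearly interpolate stress at strain e from recorded data points."""
    for i in range(len(strain_pts) - 1):
        if strain_pts[i] <= e <= strain_pts[i + 1]:
            t = (e - strain_pts[i]) / (strain_pts[i + 1] - strain_pts[i])
            return stress_pts[i] + t * (stress_pts[i + 1] - stress_pts[i])
    raise ValueError("strain outside recorded range")

def chord_modulus(strain_pts, stress_pts, e1, e2):
    """Slope of the chord between two chosen strains on the curve."""
    return (interp_stress(strain_pts, stress_pts, e2)
            - interp_stress(strain_pts, stress_pts, e1)) / (e2 - e1)

def youngs_modulus(strain_pts, stress_pts):
    """Least-squares slope through the recorded elastic-region points."""
    n = len(strain_pts)
    mx = sum(strain_pts) / n
    my = sum(stress_pts) / n
    num = sum((x - mx) * (y - my) for x, y in zip(strain_pts, stress_pts))
    den = sum((x - mx) ** 2 for x in strain_pts)
    return num / den

# Hypothetical elastic-region data for a steel-like material (strain, MPa).
strain = [0.0, 0.0005, 0.0010, 0.0015, 0.0020]
stress = [0.0, 100.0, 200.0, 300.0, 400.0]
E = youngs_modulus(strain, stress)   # in MPa, since stress is in MPa
```

For a tangent modulus one would instead differentiate a local fit at the strain of interest; the origin-sensitive initial tangent modulus is, as the scope notes, excluded.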
Standard methods for sampling freshwater fishes: Opportunities for international collaboration
Bonar, Scott A.; Mercado-Silva, Norman; Hubert, Wayne A.; Beard, Douglas; Dave, Göran; Kubečka, Jan; Graeb, Brian D. S.; Lester, Nigel P.; Porath, Mark T.; Winfield, Ian J.
2017-01-01
With publication of Standard Methods for Sampling North American Freshwater Fishes in 2009, the American Fisheries Society (AFS) recommended standard procedures for North America. To explore interest in standardizing at intercontinental scales, a symposium attended by international specialists in freshwater fish sampling was convened at the 145th Annual AFS Meeting in Portland, Oregon, in August 2015. Participants represented all continents except Australia and Antarctica and were employed by state and federal agencies, universities, nongovernmental organizations, and consulting businesses. Currently, standardization is practiced mostly in North America and Europe. Participants described how standardization has been important for management of long-term data sets, promoting fundamental scientific understanding, and assessing efficacy of large spatial scale management strategies. Academics indicated that standardization has been useful in fisheries education because time previously used to teach how sampling methods are developed is now more devoted to diagnosis and treatment of problem fish communities. Researchers reported that standardization allowed increased sample size for method validation and calibration. Group consensus was to retain continental standards where they currently exist but to further explore international and intercontinental standardization, specifically identifying where synergies and bridges exist, and identify means to collaborate with scientists where standardization is limited but interest and need occur.
Test of nonexponential deviations from decay curve of 52V using continuous kinetic function method
International Nuclear Information System (INIS)
Tran Dai Nghiep; Vu Hoang Lam; Vo Tuong Hanh; Do Nguyet Minh; Nguyen Ngoc Son
1993-01-01
The present work aims to formulate an experimental approach to the proposed descriptions of nonexponential decay and to test them in the case of 52V. Some theoretical descriptions of decay processes are formulated in clarified form. The continuous kinetic function (CKF) method is used for the analysis of experimental data, and the CKF for the purely exponential case is taken as the standard for comparison between theoretical and experimental data. The degree of agreement is defined by a factor of goodness. Typical oscillatory deviations of the 52V decay were observed over a wide range of time. The proposed deviation, related to interaction between the decay products and the environment, is investigated. A complex type of decay is discussed. (author). 10 refs, 2 tabs, 5 figs
A bottom-up method to develop pollution abatement cost curves for coal-fired utility boilers
This paper illustrates a new method to create supply curves for pollution abatement using boiler-level data that explicitly accounts for technology costs and performance. The Coal Utility Environmental Cost (CUECost) model is used to estimate retrofit costs for five different NO...
International Nuclear Information System (INIS)
Silva, A.F.; Welz, B.; Loos-Vollebregt, M.T.C. de
2008-01-01
Pyrolysis curves in electrothermal atomic absorption spectrometry (ET AAS) and electrothermal vaporization inductively coupled plasma mass spectrometry (ETV-ICP-MS) have been compared for As, Se and Pb in lobster hepatopancreas certified reference material using Pd/Mg as the modifier. The ET AAS pyrolysis curves confirm that the analytes are not lost from the graphite furnace up to a pyrolysis temperature of 800 °C. Nevertheless, a downward slope of the pyrolysis curve was observed for these elements in the biological material using ETV-ICP-MS. This could be related to a gain of sensitivity at low pyrolysis temperatures due to the matrix, which can act as carrier and/or promote changes in the plasma ionization equilibrium. Experiments with the addition of ascorbic acid to the aqueous standards confirmed that the higher intensities obtained in ETV-ICP-MS are related to the presence of organic compounds in the slurry. Pyrolysis curves for As, Se and Pb in coal and coal fly ash were also investigated using the same Pd/Mg modifier. Carbon intensities were measured in all samples using different pyrolysis temperatures. It was observed that pyrolysis curves for the three analytes in all slurry samples were similar to the corresponding graphs that show the carbon intensity for the same slurries for pyrolysis temperatures from 200 °C up to 1000 °C
Schipper, H.R.
2015-01-01
The production of precast concrete elements with complex, double-curved geometry is expensive due to the high cost of the necessary moulds and the limited possibilities for mould reuse. Currently, CNC-milled foam moulds are the solution applied mostly in projects, offering good aesthetic
Standard test methods for rockwell hardness of metallic materials
American Society for Testing and Materials. Philadelphia
2008-01-01
1.1 These test methods cover the determination of the Rockwell hardness and the Rockwell superficial hardness of metallic materials by the Rockwell indentation hardness principle. This standard provides the requirements for Rockwell hardness machines and the procedures for performing Rockwell hardness tests. 1.2 This standard includes additional requirements in annexes: Verification of Rockwell Hardness Testing Machines Annex A1 Rockwell Hardness Standardizing Machines Annex A2 Standardization of Rockwell Indenters Annex A3 Standardization of Rockwell Hardness Test Blocks Annex A4 Guidelines for Determining the Minimum Thickness of a Test Piece Annex A5 Hardness Value Corrections When Testing on Convex Cylindrical Surfaces Annex A6 1.3 This standard includes nonmandatory information in appendixes which relates to the Rockwell hardness test. List of ASTM Standards Giving Hardness Values Corresponding to Tensile Strength Appendix X1 Examples of Procedures for Determining Rockwell Hardness Uncertainty Appendix X...
Standard test methods for rockwell hardness of metallic materials
American Society for Testing and Materials. Philadelphia
2011-01-01
1.1 These test methods cover the determination of the Rockwell hardness and the Rockwell superficial hardness of metallic materials by the Rockwell indentation hardness principle. This standard provides the requirements for Rockwell hardness machines and the procedures for performing Rockwell hardness tests. 1.2 This standard includes additional requirements in annexes: Verification of Rockwell Hardness Testing Machines Annex A1 Rockwell Hardness Standardizing Machines Annex A2 Standardization of Rockwell Indenters Annex A3 Standardization of Rockwell Hardness Test Blocks Annex A4 Guidelines for Determining the Minimum Thickness of a Test Piece Annex A5 Hardness Value Corrections When Testing on Convex Cylindrical Surfaces Annex A6 1.3 This standard includes nonmandatory information in appendixes which relates to the Rockwell hardness test. List of ASTM Standards Giving Hardness Values Corresponding to Tensile Strength Appendix X1 Examples of Procedures for Determining Rockwell Hardness Uncertainty Appendix X...
2002-01-01
The Atlas of Stress-Strain Curves, Second Edition is substantially bigger in page dimensions, number of pages, and total number of curves than the previous edition. It contains over 1,400 curves, almost three times as many as in the 1987 edition. The curves are normalized in appearance to aid making comparisons among materials. All diagrams include metric (SI) units, and many also include U.S. customary units. All curves are captioned in a consistent format with valuable information including (as available) standard designation, the primary source of the curve, mechanical properties (including hardening exponent and strength coefficient), condition of sample, strain rate, test temperature, and alloy composition. Curve types include monotonic and cyclic stress-strain, isochronous stress-strain, and tangent modulus. Curves are logically arranged and indexed for fast retrieval of information. The book also includes an introduction that provides background information on methods of stress-strain determination, on...
Bauer, James M.; Grav, Tommy; Buratti, Bonnie J.; Hicks, Michael D.
2006-09-01
During its 2005 January opposition, the Saturnian system could be viewed at an unusually low phase angle. We surveyed a subset of Saturn's irregular satellites to obtain their true opposition magnitudes, or nearly so, down to phase angle values of 0.01°. Combining our data taken at the Palomar 200-inch and Cerro Tololo Inter-American Observatory's 4-m Blanco telescope with those in the literature, we present the first phase curves for nearly half the irregular satellites originally reported by Gladman et al. [2001. Nature 412, 163-166], including Paaliaq (SXX), Siarnaq (SXXIX), Tarvos (SXXI), Ijiraq (SXXII), and Albiorix (SXVI), and additionally Phoebe's narrowest-angle brightness measured to date. We find centaur-like steepness in the phase curves, or opposition surges, in most cases, with the notable exception of three satellites: Albiorix and Tarvos, which are suspected to be of similar origin based on dynamical arguments, and Siarnaq.
An analytical method for the thermoluminescence growth curve and its validity
International Nuclear Information System (INIS)
Sunta, C.M.; Yoshimura, E.M.; Okuno, E.
1994-01-01
The radiative recombination probability P of a carrier released from an active TL trap is computed for the case when the TL traps are interactive with the deep traps. It is shown that the value of P remains constant throughout the TL readout heating when the capture coefficients of the recombination centre and the deep trap are equal. When these coefficients are not equal, P undergoes change but its variation with temperature is still insignificant under a wide variety of conditions. A net value P applicable to the area of the glow curve is calculated and compared with the initial (when heating is begun) value of P, which is used in the analytical expression for the area of the glow curve. (Author)
Standard Test Method for Environmental Resistance of Aerospace Transparencies
American Society for Testing and Materials. Philadelphia
2010-01-01
1.1 This test method covers determination of the effects of exposure to thermal shock, condensing humidity, and simulated weather on aerospace transparent enclosures. 1.2 This test method is not recommended for quality control nor is it intended to provide a correlation to actual service life. 1.3 The values stated in SI units are to be regarded as standard. No other units of measurement are included in this standard. 1.3.1 Exceptions—Certain inch-pound units are furnished in parentheses (not mandatory) and certain temperatures in Fahrenheit associated with other standards are also furnished. 1.4 This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibility of the user of this standard to establish appropriate safety and health practices and determine the applicability of regulatory limitations prior to use.
Deep-learnt classification of light curves
DEFF Research Database (Denmark)
Mahabal, Ashish; Gieseke, Fabian; Pai, Akshay Sadananda Uppinakudru
2017-01-01
Astronomy light curves are sparse, gappy, and heteroscedastic. As a result, standard time series methods regularly used for financial and similar datasets are of little help, and astronomers are usually left to their own instruments and techniques to classify light curves. A common approach is to d...
Marginal abatement cost curves for policy recommendation – A method for energy system analysis
International Nuclear Information System (INIS)
Tomaschek, Jan
2015-01-01
The transport sector is seen as one of the key factors driving future energy consumption and greenhouse gas (GHG) emissions. In order to rank possible measures, marginal abatement cost curves have become a tool to represent graphically the relationship between abatement cost and emission reduction. This paper demonstrates how to derive marginal abatement cost curves for well-to-wheel GHG emissions of the transport sector, considering the full energy provision chain and the interlinkages and interdependencies within the energy system. The presented marginal abatement cost curves visualize substitution effects between measures at different marginal mitigation costs. The analysis makes use of an application of the energy system model generator TIMES for South Africa (TIMES-GEECO). For the example of Gauteng province, the study shows that the transport sector is not the first sector to address for cost-efficient reduction of GHG emissions. However, the analysis also demonstrates that several options are available to mitigate transport-related GHG emissions at comparably low marginal abatement costs. This methodology can be transferred to other economic sectors as well as to other regions in the world to derive cost-efficient GHG reduction strategies.
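Mechanically, a marginal abatement cost curve is assembled by sorting measures by marginal cost and accumulating their abatement potential. The sketch below shows only that bookkeeping, with hypothetical measures; it deliberately ignores the system interactions and substitution effects that an energy system model such as TIMES captures, which is the paper's point:

```python
def macc(measures):
    """Build a marginal abatement cost curve: sort measures by marginal
    cost, then accumulate abatement potential, yielding the step curve
    (name, cumulative_abatement, marginal_cost) usually plotted as a MACC."""
    curve, total = [], 0.0
    for name, abatement, cost in sorted(measures, key=lambda m: m[2]):
        total += abatement
        curve.append((name, total, cost))
    return curve

# Hypothetical transport measures: (name, MtCO2e/yr abated, EUR/tCO2e).
measures = [
    ("modal shift", 0.8, 15.0),
    ("bus fleet electrification", 0.5, 40.0),
    ("efficient vehicles", 1.2, -10.0),   # negative cost = net saving
]
curve = macc(measures)
```

Plotting cumulative abatement on the x axis against marginal cost on the y axis gives the familiar ascending staircase, with net-saving measures below the zero-cost line.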
Standard test methods of tension testing of metallic foil
American Society for Testing and Materials. Philadelphia
1993-01-01
1.1 These test methods cover the tension testing of metallic foil at room temperature in thicknesses less than 0.006 in. (0.150 mm). Note 1—Exception to these methods may be necessary in individual specifications or test methods for a particular material. 1.2 Units—The values stated in inch-pound units are to be regarded as standard. The values given in parentheses are mathematical conversions to SI units that are provided for information only and are not considered standard. 1.3 This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibility of the user of this standard to establish appropriate safety and health practices and determine the applicability of regulatory limitations prior to use.
American Society for Testing and Materials. Philadelphia
2010-01-01
1.1 This is a quantitative test method applicable to determining the mass percent of uranium isotopes in uranium hexafluoride (UF6) samples with 235U concentrations between 0.1 and 5.0 mass %. 1.2 This test method may be applicable for the entire range of 235U concentrations for which adequate standards are available. 1.3 This test method is for analysis by a gas magnetic sector mass spectrometer with a single collector using interpolation to determine the isotopic concentration of an unknown sample between two characterized UF6 standards. 1.4 This test method is to replace the existing test method currently published in Test Methods C761 and is used in the nuclear fuel cycle for UF6 isotopic analyses. 1.5 The values stated in SI units are to be regarded as standard. No other units of measurement are included in this standard. 1.6 This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibility of the user of this standard to establish appro...
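The bracketing-standards approach in 1.3 amounts to placing the unknown's measured ratio on a line between two characterized standards. The sketch below is a simple linear interpolation with hypothetical ratio values; the test method's actual algorithm and corrections may differ:

```python
def interpolated_abundance(r_unknown, std_low, std_high):
    """Interpolate a sample's 235U mass % between two bracketing UF6
    standards, each given as (measured_ratio, certified_mass_percent).
    Illustrative only -- not the standard's certified procedure."""
    r1, c1 = std_low
    r2, c2 = std_high
    return c1 + (r_unknown - r1) * (c2 - c1) / (r2 - r1)

# Hypothetical measured 235U/238U ion-current ratios for two standards
# certified at 0.711 and 4.95 mass % 235U.
low, high = (0.00727, 0.711), (0.0526, 4.95)
sample_ratio = 0.0311
u235 = interpolated_abundance(sample_ratio, low, high)
```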
Use of Monte Carlo Methods for determination of isodose curves in brachytherapy
International Nuclear Information System (INIS)
Vieira, Jose Wilson
2001-08-01
Brachytherapy is a special form of cancer treatment in which the radioactive source is placed very close to or inside the tumor, with the objective of causing necrosis of the cancerous tissue. The intensity of the cell response to radiation varies according to tissue type and degree of differentiation. Since malignant cells are less differentiated than normal ones, they are more sensitive to radiation; this is the basis of radiotherapy techniques. Institutes that work with high-dose-rate applications use sophisticated computer programs to calculate the dose necessary to achieve necrosis of the tumor while minimizing the irradiation of neighboring tissues and organs. With knowledge of the characteristics of the source and the tumor, it is possible to trace isodose curves with the information necessary for planning brachytherapy in patients. The objective of this work is to use Monte Carlo techniques to develop a computer program - ISODOSE - which allows the determination of isodose curves around linear radioactive sources used in brachytherapy. The development of ISODOSE is important because the available commercial programs are, in general, very expensive and practically inaccessible to small clinics. The use of Monte Carlo techniques is viable because they avoid problems inherent in analytic solutions, such as the integration of functions with singularities in their domain. The results of ISODOSE were compared with similar data found in the literature and also with those obtained at the radiotherapy institutes of the 'Hospital do Cancer do Recife' and the 'Hospital Portugues do Recife'. ISODOSE presented good performance, mainly due to the Monte Carlo techniques, which allowed a quite detailed drawing of the isodose curves around linear sources. (author)
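The core Monte Carlo idea can be sketched very simply: sample emission points uniformly along the linear source and tally inverse-square contributions at grid points; points with equal tallied dose trace out the isodose curves. This is a deliberately simplified illustration (no attenuation, scattering, or anisotropy), not the ISODOSE code:

```python
import random

def dose_grid(source_len, grid, n_samples=20000, seed=1):
    """Monte Carlo estimate of relative dose around a linear source lying
    on the x axis, centred at the origin: emission points are sampled
    uniformly along the source and each contributes ~1/r**2 at a grid
    point. Simplified sketch -- no attenuation or scattering modelled."""
    rng = random.Random(seed)
    xs = [rng.uniform(-source_len / 2, source_len / 2) for _ in range(n_samples)]
    doses = {}
    for (px, py) in grid:
        total = 0.0
        for x in xs:
            r2 = (px - x) ** 2 + py ** 2
            total += 1.0 / r2
        doses[(px, py)] = total / n_samples
    return doses

# Relative dose at three perpendicular distances from a 2 cm source;
# contouring such a grid at fixed dose levels yields isodose curves.
grid = [(0.0, 0.5), (0.0, 1.0), (0.0, 2.0)]
d = dose_grid(source_len=2.0, grid=grid)
```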
Internal Standard Method for the Determination of Au and some ...
African Journals Online (AJOL)
Abstract. A method is described for the determination of Au, Pt, Pd, Ru and Rh in a converter matte sample, using inductively coupled plasma optical emission spectrometry (ICP-OES), with Y or Sc as internal standard. The results obtained by this method are discussed and compared with values obtained by an independent ...
International Nuclear Information System (INIS)
McCabe, D.E.; Zerbst, U.; Heerens, J.
1993-01-01
This report covers the resolution of several issues that are relevant to the ductile-to-brittle transition range of structural steels. One of these issues was to compare a statistical, weakest-link method to constraint data adjustment methods for modeling specimen size effects on fracture toughness. Another was to explore the concept of a universal transition temperature curve shape (Master Curve). Data from a Materials Properties Council round robin activity were used to test the proposals empirically. The findings of this study are included in an activity for the development of a draft standard test procedure ''Test Practice for Fracture Toughness in the Transition Range''. (orig.) [de
Energy Technology Data Exchange (ETDEWEB)
Spinler, E.A.; Baldwin, B.A. [Phillips Petroleum Co., Bartlesville, OK (United States)
1997-08-01
A method is being developed for direct experimental determination of capillary pressure curves from saturation distributions produced during centrifuging fluids in a rock plug. A free water level is positioned along the length of the plugs to enable simultaneous determination of both positive and negative capillary pressures. Octadecane as the oil phase is solidified by temperature reduction while centrifuging to prevent fluid redistribution upon removal from the centrifuge. The water saturation is then measured via magnetic resonance imaging. The saturation profile within the plug and the calculation of pressures for each point of the saturation profile allows for a complete capillary pressure curve to be determined from one experiment. Centrifuging under oil with a free water level into a 100 percent water saturated plug results in the development of a primary drainage capillary pressure curve. Centrifuging similarly at an initial water saturation in the plug results in the development of an imbibition capillary pressure curve. Examples of these measurements are presented for Berea sandstone and chalk rocks.
Standard test method for conducting potentiodynamic polarization resistance measurements
American Society for Testing and Materials. Philadelphia
1997-01-01
1.1 This test method covers an experimental procedure for polarization resistance measurements which can be used for the calibration of equipment and verification of experimental technique. The test method can provide reproducible corrosion potentials and potentiodynamic polarization resistance measurements. 1.2 The values stated in SI units are to be regarded as standard. No other units of measurement are included in this standard. 1.3 This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibility of the user of this standard to establish appropriate safety and health practices and determine the applicability of regulatory limitations prior to use.
Statistical methods for evaluating the attainment of cleanup standards
Energy Technology Data Exchange (ETDEWEB)
Gilbert, R.O.; Simpson, J.C.
1992-12-01
This document is the third volume in a series of volumes sponsored by the US Environmental Protection Agency (EPA), Statistical Policy Branch, that provide statistical methods for evaluating the attainment of cleanup standards at Superfund sites. Volume 1 (USEPA 1989a) provides sampling designs and tests for evaluating attainment of risk-based standards for soils and solid media. Volume 2 (USEPA 1992) provides designs and tests for evaluating attainment of risk-based standards for groundwater. The purpose of this third volume is to provide statistical procedures for designing sampling programs and conducting statistical tests to determine whether pollution parameters in remediated soils and solid media at Superfund sites attain site-specific reference-based standards. This document is written for individuals who may not have extensive training or experience with statistical methods. The intended audience includes EPA regional remedial project managers, Superfund-site potentially responsible parties, state environmental protection agencies, and contractors for these groups.
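Attainment decisions of this kind often reduce to comparing an upper confidence limit on the mean concentration with the standard. A simplified one-sample sketch (the data and the t critical value are illustrative, not taken from the EPA volumes):

```python
import math

def attainment_test(samples, standard, t_crit):
    """One-sided test of whether the true mean concentration is below a
    cleanup standard: attainment is concluded when the upper confidence
    limit (UCL) on the mean falls below the standard. t_crit is the
    Student-t critical value for the chosen confidence level and n-1
    degrees of freedom, looked up externally."""
    n = len(samples)
    mean = sum(samples) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in samples) / (n - 1))
    ucl = mean + t_crit * sd / math.sqrt(n)
    return ucl < standard, ucl

# Hypothetical soil concentrations (mg/kg) against a 10 mg/kg standard,
# with t_crit = 1.833 (95 % one-sided, n = 10, 9 degrees of freedom).
samples = [6.1, 7.0, 5.8, 6.5, 7.2, 6.9, 5.5, 6.0, 6.8, 6.3]
attains, ucl = attainment_test(samples, standard=10.0, t_crit=1.833)
```

Framing the decision through the UCL (rather than the sample mean alone) is what makes the test conservative in the face of sampling variability.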
Standardization of Laboratory Methods for the PERCH Study
Karron, Ruth A.; Morpeth, Susan C.; Bhat, Niranjan; Levine, Orin S.; Baggett, Henry C.; Brooks, W. Abdullah; Feikin, Daniel R.; Hammitt, Laura L.; Howie, Stephen R. C.; Knoll, Maria Deloria; Kotloff, Karen L.; Madhi, Shabir A.; Scott, J. Anthony G.; Thea, Donald M.; Adrian, Peter V.; Ahmed, Dilruba; Alam, Muntasir; Anderson, Trevor P.; Antonio, Martin; Baillie, Vicky L.; Dione, Michel; Endtz, Hubert P.; Gitahi, Caroline; Karani, Angela; Kwenda, Geoffrey; Maiga, Abdoul Aziz; McClellan, Jessica; Mitchell, Joanne L.; Morailane, Palesa; Mugo, Daisy; Mwaba, John; Mwansa, James; Mwarumba, Salim; Nyongesa, Sammy; Panchalingam, Sandra; Rahman, Mustafizur; Sawatwong, Pongpun; Tamboura, Boubou; Toure, Aliou; Whistler, Toni; O’Brien, Katherine L.; Murdoch, David R.
2017-01-01
Abstract The Pneumonia Etiology Research for Child Health study was conducted across 7 diverse research sites and relied on standardized clinical and laboratory methods for the accurate and meaningful interpretation of pneumonia etiology data. Blood, respiratory specimens, and urine were collected from children aged 1–59 months hospitalized with severe or very severe pneumonia and community controls of the same age without severe pneumonia and were tested with an extensive array of laboratory diagnostic tests. A standardized testing algorithm and standard operating procedures were applied across all study sites. Site laboratories received uniform training, equipment, and reagents for core testing methods. Standardization was further assured by routine teleconferences, in-person meetings, site monitoring visits, and internal and external quality assurance testing. Targeted confirmatory testing and testing by specialized assays were done at a central reference laboratory. PMID:28575358
Standard test method for instrumented impact testing of metallic materials
American Society for Testing and Materials. Philadelphia
2009-01-01
1.1 This standard establishes the requirements for performing instrumented Charpy V-Notch (CVN) and instrumented Miniaturized Charpy V-Notch (MCVN) impact tests on metallic materials. This method, which is based on experience developed testing steels, provides further information (in addition to the total absorbed energy) on the fracture behavior of the tested materials. Minimum requirements are given for measurement and recording equipment such that similar sensitivity and comparable total absorbed energy measurements to those obtained in Test Methods E 23 and E 2248 are achieved. 1.2 The values stated in SI units are to be regarded as the standard. 1.3 This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibility of the user of this standard to establish appropriate safety and health practices and determine the applicability of regulatory limitations prior to use.
Toward a standard method for determination of waterborne radon
International Nuclear Information System (INIS)
Vitz, E.
1990-01-01
When the USEPA specifies the maximum contaminant level (MCL) for any contaminant, a standard method for analysis must be simultaneously stipulated. Promulgation of the proposed MCL and standard method for radon in drinking water is expected by early next year, but a six-month comment period and revision will precede final enactment. The standard method for radon in drinking water will probably specify that either the Lucas cell technique or liquid scintillation spectrometry be used. This paper reports results which support a standard method with the following features: samples should be collected by an explicitly stated technique to control degassing, in glass vials with or without scintillation cocktail, and possibly in duplicate; samples should be measured by liquid scintillation spectroscopy in a specified energy window, in a glass vial with particular types of cocktails; radium standards should be prepared with controlled quench levels and specified levels of carriers, but radium-free controls prepared by a specified method should be used in interlaboratory comparison studies
Schipper, H.R.; Grünewald, S.; Eigenraam, P.; Raghunath, P.; Kok, M.A.D.
2014-01-01
Free-form buildings tend to be expensive. By optimizing the production process, economical and well-performing precast concrete structures can be manufactured. In this paper, a method is presented that allows producing highly accurate double-curved elements without the need for milling two expensive mould surfaces per single element. The flexible mould is fully reusable and the benefits of applying self-compacting concrete are utilised. The flexible mould process works as follows: Thin concret...
Standardized methods for photography in procedural dermatology using simple equipment.
Hexsel, Doris; Hexsel, Camile L; Dal'Forno, Taciana; Schilling de Souza, Juliana; Silva, Aline F; Siega, Carolina
2017-04-01
Photography is an important tool in dermatology. Reproducing the settings of "before" photos after interventions allows more accurate evaluation of treatment outcomes. In this article, we describe standardized methods and tips for obtaining photographs, both for clinical practice and for research in procedural dermatology, using common equipment. Standards for the studio, cameras, photographer, patients, and framing are presented in this article. © 2017 The International Society of Dermatology.
Standard Test Method for Abrasive Wear Resistance of Cemented
American Society for Testing and Materials. Philadelphia
2005-01-01
1.1 This test method covers the determination of abrasive wear resistance of cemented carbides. 1.2 The values stated in inch-pound units are to be regarded as the standard. The SI equivalents of inch-pound units are in parentheses and may be approximate. 1.3 This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibility of the user of this standard to establish appropriate safety and health practices and determine the applicability of regulatory limitations prior to use.
Standard Test Method for Measured Speed of Oil Diffusion Pumps
American Society for Testing and Materials. Philadelphia
1982-01-01
1.1 This test method covers the determination of the measured speed (volumetric flow rate) of oil diffusion pumps. 1.2 The values stated in inch-pound units are to be regarded as the standard. The metric equivalents of inch-pound units may be approximate. 1.3 This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibility of the user of this standard to establish appropriate safety and health practices and determine the applicability of regulatory limitations prior to use.
Standard Test Method for Determining Poisson's Ratio of Honeycomb Cores
American Society for Testing and Materials. Philadelphia
2002-01-01
1.1 This test method covers the determination of the honeycomb Poisson's ratio from the anticlastic curvature radii. 1.2 The values stated in SI units are to be regarded as the standard. The inch-pound units given may be approximate. This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibility of the user of this standard to establish appropriate safety and health practices and determine the applicability of regulatory limitations prior to use.
Standard Test Method for Shear Fatigue of Sandwich Core Materials
American Society for Testing and Materials. Philadelphia
2000-01-01
1.1 This test method covers determination of the effect of repeated shear loads on sandwich core materials. 1.2 The values stated in SI units are to be regarded as the standard. The inch-pound units given may be approximate. 1.3 This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibility of the user of this standard to establish appropriate safety and health practices and determine the applicability of regulatory limitations prior to use.
Standard Test Method for Dimensional Stability of Sandwich Core Materials
American Society for Testing and Materials. Philadelphia
2002-01-01
1.1 This test method covers the determination of the sandwich core dimensional stability in the two plan dimensions. 1.2 The values stated in SI units are to be regarded as the standard. The inch-pound units given may be approximate. This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibility of the user of this standard to establish appropriate safety and health practices and determine the applicability of regulatory limitations prior to use.
International Nuclear Information System (INIS)
Okano, Yasushi; Yamano, Hidemasa
2016-01-01
A method to obtain a hazard curve of a forest fire was developed. The method has four steps: logic tree formulation, response surface evaluation, Monte Carlo simulation, and annual exceedance frequency calculation. The logic tree consists of domains for 'forest fire breakout and spread conditions', 'weather conditions', 'vegetation conditions', and 'forest fire simulation conditions'. Condition parameters of the logic boxes are treated as static if they are stable during a forest fire or are not sensitive to the fire intensity; non-static parameters are variables whose frequency/probability is given based on existing databases or evaluations. Response surfaces of reaction intensity and fireline intensity were prepared by interpolating outputs from a number of forest fire propagation simulations by the fire area simulator (FARSITE). In the Monte Carlo simulation, each sample represented one set of the variable parameters of the logic boxes, and the corresponding intensity was evaluated from the response surface. The hazard curve, i.e. the annual exceedance frequency of the intensity, was then calculated from the histogram of the Monte Carlo simulation outputs. The new method was applied to evaluate hazard curves of reaction intensity and fireline intensity for a typical location around a sodium-cooled fast reactor in Japan. (author)
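The final two steps above (Monte Carlo sampling over the variable parameters, then turning the distribution of intensities into an annual exceedance frequency) can be sketched as follows. The sampler, response surface, and annual breakout frequency here are placeholder assumptions, not the paper's actual FARSITE-based models:

```python
import random

def exceedance_curve(sample_params, response_surface, intensities, annual_frequency):
    """Annual exceedance frequency curve from Monte Carlo samples.

    sample_params:     draws one random set of the variable (non-static) parameters
    response_surface:  maps a parameter draw to a fire intensity (stand-in for
                       the interpolated FARSITE response surface)
    intensities:       intensity levels at which the hazard curve is evaluated
    annual_frequency:  assumed annual frequency of forest fire breakout
    """
    n = 100_000
    outcomes = [response_surface(sample_params()) for _ in range(n)]
    curve = []
    for level in intensities:
        p_exceed = sum(1 for x in outcomes if x > level) / n  # P(intensity > level)
        curve.append((level, annual_frequency * p_exceed))
    return curve

# Toy example: one uniformly distributed parameter, identity response surface.
random.seed(0)
hazard = exceedance_curve(lambda: random.random(), lambda p: p, [0.0, 0.5, 0.9], 0.1)
```

Each point of `hazard` pairs an intensity level with its annual exceedance frequency; the histogram step in the paper is equivalent to evaluating this empirical survival function.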
Liu, K; Mitchell, K J; Chapman, W W; Savova, G K; Sioutos, N; Rubin, D L; Crowley, R S
2013-01-01
Developing a two-step method for formative evaluation of statistical Ontology Learning (OL) algorithms that leverages existing biomedical ontologies as reference standards. In the first step, optimum parameters are established. A 'gap list' of entities is generated by finding the set of entities present in a later version of the ontology that are not present in an earlier version of the ontology. A named entity recognition system is used to identify entities in a corpus of biomedical documents that are present in the 'gap list', generating a reference standard. The output of the algorithm (new entity candidates), produced by statistical methods, is subsequently compared against this reference standard. An OL method that performs perfectly will be able to learn all of the terms in this reference standard. Using evaluation metrics and precision-recall curves for different thresholds and parameters, we compute the optimum parameters for each method. In the second step, human judges with expertise in ontology development evaluate each candidate suggested by the algorithm configured with the optimum parameters previously established. These judgments are used to compute two performance metrics developed from our previous work: Entity Suggestion Rate (ESR) and Entity Acceptance Rate (EAR). Using this method, we evaluated two statistical OL methods in two medical domains. For the pathology domain, we obtained 49% ESR, 28% EAR with the Lin method and 52% ESR, 39% EAR with the Church method. For the radiology domain, we obtained 87% ESR, 9% EAR using the Lin method and 96% ESR, 16% EAR using the Church method. This method is general and flexible enough to permit comparison of any OL method for a specific corpus and ontology of interest.
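The gap-list construction and the comparison of candidates against that reference standard can be sketched as below; the entity names and the flat-set representation of an ontology version are illustrative assumptions:

```python
def gap_list(earlier_version, later_version):
    """Entities present in the later ontology version but absent from the
    earlier one; this set serves as the reference standard."""
    return set(later_version) - set(earlier_version)

def precision_recall(candidates, reference):
    """Compare an OL algorithm's new-entity candidates against the reference
    standard; a perfect learner recalls every gap-list term."""
    tp = len(set(candidates) & set(reference))
    precision = tp / len(candidates) if candidates else 0.0
    recall = tp / len(reference) if reference else 0.0
    return precision, recall

# Hypothetical entity names for illustration.
reference = gap_list({'neoplasm', 'carcinoma'},
                     {'neoplasm', 'carcinoma', 'adenoma', 'sarcoma'})
```

Sweeping the algorithm's threshold and recomputing these metrics yields the precision-recall curves used to pick optimum parameters in the first step.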
A new post-frac evaluation method for shale gas wells based on fracturing curves
Directory of Open Access Journals (Sweden)
Xiaobing Bian
2016-03-01
Post-fracturing evaluation using limited data is of great significance to the continuous improvement of fracturing programs. In this paper, a fracturing curve is divided into two stages (i.e., prepad fluid injection and main fracturing) so as to further understand the parameters of reservoirs and artificial fractures. The brittleness and plasticity of formations were qualitatively identified using statistics of formation fracture frequency and the average pressure dropping range and rate during prepad fluid injection. The composite brittleness index was quantitatively calculated using the energy zones in the process of fracturing. Large-scale true triaxial physical simulation results show that the complexity of fractures is reflected by the pressure fluctuation frequency and amplitude in the main fracturing curve; combined with the brittleness and plasticity of formations, the fracture morphology far away from the well can be diagnosed. Well P, a shale gas well in SE Chongqing, was taken as an example for post-fracturing evaluation. It is shown that the shale beds have stronger heterogeneity along the extension directions of horizontal wells, and with GR 260 API as the dividing line between brittleness and plasticity in this area, complex fracture systems tend to form in brittleness-prone formations. In Well P, half of the fractures are single fractures, so it is necessary to carry out fine subsection and turnaround fracturing to improve development effects. This paper provides a theoretical basis for improving fracturing well design and increasing the effective stimulated volume in this area.
Statistical benchmarking in utility regulation: Role, standards and methods
International Nuclear Information System (INIS)
Newton Lowry, Mark; Getachew, Lullit
2009-01-01
Statistical benchmarking is being used with increasing frequency around the world in utility rate regulation. We discuss how and where benchmarking is in use for this purpose and the pros and cons of regulatory benchmarking. We then discuss alternative performance standards and benchmarking methods in regulatory applications. We use these to propose guidelines for the appropriate use of benchmarking in the rate setting process. The standards, which we term the competitive market and frontier paradigms, have a bearing on method selection. These, along with regulatory experience, suggest that benchmarking can be used either for prudence review in regulation or to establish rates or rate setting mechanisms directly.
An Empirical Fitting Method for Type Ia Supernova Light Curves: A Case Study of SN 2011fe
Energy Technology Data Exchange (ETDEWEB)
Zheng, WeiKang; Filippenko, Alexei V., E-mail: zwk@astro.berkeley.edu [Department of Astronomy, University of California, Berkeley, CA 94720-3411 (United States)
2017-03-20
We present a new empirical fitting method for the optical light curves of Type Ia supernovae (SNe Ia). We find that a variant broken-power-law function provides a good fit, with the simple assumption that the optical emission is approximately the blackbody emission of the expanding fireball. This function is mathematically analytic and is derived directly from the photospheric velocity evolution. When deriving the function, we assume that both the blackbody temperature and photospheric velocity are constant, but the final function is able to accommodate these changes during the fitting procedure. Applying it to the case study of SN 2011fe gives a surprisingly good fit that can describe the light curves from the first-light time to a few weeks after peak brightness, as well as over a large range of fluxes (∼5 mag, and even ∼7 mag in the g band). Since SNe Ia share similar light-curve shapes, this fitting method has the potential to fit most other SNe Ia and characterize their properties in large statistical samples such as those already gathered and in the near future as new facilities become available.
Standard Test Method for Thermal Oxidative Resistance of Carbon Fibers
American Society for Testing and Materials. Philadelphia
1982-01-01
1.1 This test method covers the apparatus and procedure for the determination of the weight loss of carbon fibers, exposed to ambient hot air, as a means of characterizing their oxidative resistance. 1.2 The values stated in SI units are to be regarded as standard. The values given in parentheses are mathematical conversions to inch-pound units which are provided for information only and are not considered standard. 1.3 This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibility of the user of this standard to establish appropriate safety and health practices and determine the applicability of regulatory limitations prior to use. For specific hazard information, see Section 8.
International Nuclear Information System (INIS)
Jesenik, M.; Gorican, V.; Trlep, M.; Hamler, A.; Stumberger, B.
2006-01-01
A lot of magnetic materials are anisotropic. In the 3D finite element method calculation, anisotropy of the material is taken into account. Anisotropic magnetic material is described with magnetization curves for different magnetization directions. The 3D transient calculation of the rotational magnetic field in the sample of the round rotational single sheet tester with circular sample considering eddy currents is made and compared with the measurement to verify the correctness of the method and to analyze the magnetic field in the sample
International Nuclear Information System (INIS)
Yu, Shiwei; Zhang, Junjie; Zheng, Shuhong; Sun, Han
2015-01-01
This study aims to estimate carbon intensity abatement potential in China at the regional level by proposing a particle swarm optimization–genetic algorithm (PSO–GA) multivariate environmental learning curve estimation method. The model uses two independent variables, namely, per capita gross domestic product (GDP) and the proportion of the tertiary industry in GDP, to construct carbon intensity learning curves (CILCs), i.e., CO2 emissions per unit of GDP, for 30 provinces in China. Instead of the traditional ordinary least squares (OLS) method, a PSO–GA intelligent optimization algorithm is used to optimize the coefficients of a learning curve. The carbon intensity abatement potentials of the 30 Chinese provinces are estimated via PSO–GA under the business-as-usual scenario. The estimation reveals the following results. (1) For most provinces, the abatement potentials from improving a unit of the proportion of the tertiary industry in GDP are higher than the potentials from raising a unit of per capita GDP. (2) The average potential of the 30 provinces in 2020 will be 37.6% relative to the 2005 emission level. The potentials of Jiangsu, Tianjin, Shandong, Beijing, and Heilongjiang are over 60%. Ningxia is the only province without intensity abatement potential. (3) The total carbon intensity in China weighted by the GDP shares of the 30 provinces will decline by 39.4% in 2020 compared with that in 2005. This cannot achieve the 40%–45% carbon intensity reduction target set by the Chinese government. Additional mitigation policies should be developed to uncover the potentials of Ningxia and Inner Mongolia. In addition, the simulation accuracy of the CILCs optimized by PSO–GA is higher than that of the CILCs optimized by the traditional OLS method. - Highlights: • A PSO–GA-optimized multi-factor environmental learning curve method is proposed. • The carbon intensity abatement potentials of the 30 Chinese provinces are estimated by PSO–GA.
Robust steganographic method utilizing properties of MJPEG compression standard
Directory of Open Access Journals (Sweden)
Jakub Oravec
2015-06-01
This article presents the design of a steganographic method that uses a video container as cover data. The video track was recorded by a webcam and further encoded with the MJPEG compression standard. The proposed method also takes into account the effects of lossy compression. The embedding process is realized by switching places of transform coefficients, which are computed by the Discrete Cosine Transform. The article describes the possibilities, the techniques used, and the advantages and drawbacks of the chosen solution. The results are presented at the end of the article.
Report on the Standardization Project "Formal Methods in Conformance Testing"
Baumgarten, B.; Hogrefe, D.; Heymer, S.; Burkhardt, H.-J.; Giessler, A.; Tretmans, G.J.
1996-01-01
This paper presents the latest developments in the "Formal Methods in Conformance Testing" (FMCT) project of ISO and ITU–T. The project has been initiated to study the role of formal description techniques in the conformance testing process. The goal is to develop a standard that defines the
Standard methods for rearing and selection of Apis mellifera queens
DEFF Research Database (Denmark)
Büchler, Ralph; Andonov, Sreten; Bienefeld, Kaspar
2013-01-01
Here we cover a wide range of methods currently in use and recommended in modern queen rearing, selection and breeding. The recommendations are meant to equally serve as standards for both scientific and practical beekeeping purposes. The basic conditions and different management techniques for q...
Evaluating the Capacity of Standard Investment Appraisal Methods
M.M. Akalu
2002-01-01
The survey findings indicate the existence of a gap between the theory and practice of capital budgeting. Standard appraisal methods have shown a wider project value discrepancy, which is beyond and above the contingency limit. In addition, the research has found the growing trend in the use
Internal standard method for the determination of Gold and the ...
African Journals Online (AJOL)
Olof Vorster
There is demand the world over for gold (Au) and the platinum group metals (PGMs), consisting of Pt, Pd, Ru, Rh, Ir and Os. Inductively coupled plasma optical emission spectrometry (ICP-OES) is widely used for the quantitative analysis of the PGMs using the internal standardization method [2, 9]. Although interferences are less of a
Standard test method for dynamic tear testing of metallic materials
American Society for Testing and Materials. Philadelphia
1983-01-01
1.1 This test method covers the dynamic tear (DT) test using specimens that are 3/16 in. to 5/8 in. (5 mm to 16 mm) inclusive in thickness. 1.2 This test method is applicable to materials with a minimum thickness of 3/16 in. (5 mm). 1.3 The pressed-knife procedure described for sharpening the notch tip generally limits this test method to materials with a hardness level less than 36 HRC. Note 1—The designation 36 HRC is a Rockwell hardness number of 36 on Rockwell C scale as defined in Test Methods E 18. 1.4 The values stated in inch-pound units are to be regarded as standard. The values given in parentheses are mathematical conversions to SI units that are provided for information only and are not considered standard. 1.5 This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibility of the user of this standard to establish appropriate safety and health practices and determine the applicability of regulatory limitations prior to use.
Shimauchi, Akiko; Abe, Hiroyuki; Schacht, David V; Yulei, Jian; Pineda, Federico D; Jansen, Sanaz A; Ganesh, Rajiv; Newstead, Gillian M
2015-08-01
To quantify kinetic heterogeneity of breast masses that were initially detected with dynamic contrast-enhanced MRI, using whole-lesion kinetic distribution data obtained from computer-aided evaluation (CAE), and to compare that with standard kinetic curve analysis. Clinical MR images from 2006 to 2011 with breast masses initially detected with MRI were evaluated with CAE. The relative frequencies of six kinetic patterns (medium-persistent, medium-plateau, medium-washout, rapid-persistent, rapid-plateau, rapid-washout) within the entire lesion were used to calculate kinetic entropy (KE), a quantitative measure of enhancement pattern heterogeneity. Initial uptake (IU) and signal enhancement ratio (SER) were obtained from the most-suspicious kinetic curve. Mann-Whitney U test and ROC analysis were conducted for differentiation of malignant and benign masses. Forty benign and 37 malignant masses comprised the case set. IU and SER were not significantly different between malignant and benign masses (p = 0.748 and p = 0.083, respectively), whereas KE was significantly greater for malignant than benign masses. Quantifying kinetic heterogeneity of whole-lesion time-curve data with KE has the potential to improve differentiation of malignant from benign breast masses on breast MRI. • Kinetic heterogeneity can be quantified by computer-aided evaluation of breast MRI • Kinetic entropy was greater in malignant masses than benign masses • Kinetic entropy has the potential to improve differentiation of breast masses.
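One plausible reading of kinetic entropy is a Shannon entropy over the relative frequencies of the six kinetic patterns within the lesion; the abstract does not give the exact normalization, so this is a sketch under that assumption:

```python
import math

def kinetic_entropy(pattern_counts):
    """Shannon entropy of the relative frequencies of the six kinetic
    patterns within a lesion: 0 for a perfectly homogeneous lesion,
    log(6) when all six patterns are equally represented."""
    total = sum(pattern_counts)
    probs = [c / total for c in pattern_counts if c > 0]
    return -sum(p * math.log(p) for p in probs)
```

A lesion dominated by a single pattern scores near zero, while a heterogeneous mix of washout, plateau, and persistent voxels scores high, which matches the reported tendency of malignant masses toward greater KE.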
Hanafiah, Hazlenah; Jemain, Abdul Aziz
2013-11-01
In recent years, the study of fertility has attracted considerable research attention abroad, following fears of fertility deterioration driven by rapid economic development. Hence, this study examines the feasibility of developing fertility forecasts based on age structure. The Lee-Carter model (1992) is applied in this study as it is an established and widely used model for analysing demographic aspects. A singular value decomposition approach is incorporated with an ARIMA model to estimate age-specific fertility rates in Peninsular Malaysia over the period 1958-2007. Residual plots are used to measure the goodness of fit of the model. A fertility index forecast using a random walk with drift is then utilised to predict future age-specific fertility. Results indicate that the proposed model provides a relatively good and reasonable data fit. In addition, there is an apparent and continuous decline in age-specific fertility curves in the next 10 years, particularly among mothers in their early 20s and 40s. The study of fertility is vital in order to maintain a balance between population growth and the provision of facilities and related resources.
Huang, Shijie; Liu, Zanzan; Wen, Huixin; Li, Li; Li, Qingge; Huang, Jianwei
2015-02-01
To develop a high-throughput rapid method for Vibrio (V.) cholerae molecular typing based on Melting Curve-based Multilocus Melt Typing (McMLMT). Seven housekeeping genes of V. cholerae were screened out; for each gene, specific primers were designed, as well as 4 probes covering the polymorphism loci of the sequences. After optimizing all parameters, a method of melting-curve analysis following asymmetric PCR was established with dual fluorescent reporters in two reaction tubes for each gene. A set of 28 Tm-values was obtained for each strain and then translated into a set of allelic gene codes, standing for the strain's McMLMT type (MT). Meanwhile, sequences of the 7-locus polymorphisms were typed according to the MLST method. To evaluate the efficiency and reliability of McMLMT, the data were compared with those of sequence typing and PFGE using BioNumerics software. The McMLMT method was established and refined for rapid typing of V. cholerae; a dozen strains can be tested in a 3-hour PCR run using 96-well plates. 108 strains were analyzed; the 28 Tm-values could be grouped and encoded according to the 7 housekeeping genes to obtain the allelic gene code sets, and the strains were classified into 18 types (D = 0.7233). Sequences of the 7 genes' polymorphism areas were directly clustered into the same 18 types with reference to the MLST method. 46 of the strains, each representing a different PFGE type, could be classified into 13 types (D = 0.6145) with the McMLMT method and into A-K groups at 85% similarity (D = 0.8589) with the PFGE method. The McMLMT method is a rapid, high-throughput molecular typing method for batches of strains, with a resolution equal to the MLST method and comparable to PFGE grouping.
Effectiveness of permethrin standard and modified methods in scabies treatment
Directory of Open Access Journals (Sweden)
Saleha Sungkar
2014-06-01
Background: Permethrin is the drug of choice for scabies, with side effects such as erythema, pain, itching and a prickling sensation. Whole-body (standard) topical application of permethrin causes discomfort; thus, a modified application of permethrin to the lesions only, followed by baths twice daily using soap, was proposed. The objective of the study was to determine the effectiveness of standard versus lesion-only application of permethrin in scabies treatment. Methods: An experimental study was conducted in a pesantren (Islamic boarding school) in East Jakarta, and data were collected in May-July 2012. Diagnosis of scabies was made through anamnesis and skin examination. Subjects positive for scabies were divided into three groups: one standard-method group (whole-body topical application) and two modified groups (lesion-only application followed by the use of regular soap, and lesion-only application followed by the use of antiseptic soap). The three groups were evaluated weekly for three consecutive weeks. Data were processed using SPSS 20 and analyzed by the Kruskal-Wallis test. Results: A total of 94 subjects were scabies positive (prevalence 50%), but only 69 subjects were randomly picked for analysis. The cure rate at the end of week III was 95.7% in the standard-method group, 91.3% with modified treatment followed by the use of regular soap, and 78.3% with modified treatment followed by the use of antiseptic soap (p = 0.163). The recurrence rate was 8.7% with standard treatment, 13% with modified treatment followed by the use of regular soap, and 26.1% with modified treatment followed by the use of antiseptic soap (p = 0.250). Conclusion: The standard scabies treatment was as effective as the modified scabies treatment.
Radioactive standards and calibration methods for contamination monitoring instruments
Energy Technology Data Exchange (ETDEWEB)
Yoshida, Makoto [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment
1997-06-01
Contamination monitoring in facilities handling unsealed radioactive materials is one of the most important procedures for radiation protection, as is radiation dose monitoring. For proper contamination monitoring, radiation measuring instruments should not only be suitable for the purpose of monitoring but also be well calibrated for the quantities to be measured. In the calibration of contamination monitoring instruments, reference activities of suitable quality need to be used. They are supplied in different forms, such as extended sources, radioactive solutions or radioactive gases. These reference activities must be traceable to national standards or equivalent standards. On the other hand, appropriate calibration methods must be applied for each type of contamination monitoring instrument. This paper describes the concepts of calibration for contamination monitoring instruments, reference sources, determination methods of reference quantities, and practical calibration methods, including the procedures carried out at the Japan Atomic Energy Research Institute and some relevant experimental data. (G.K.)
Pneumatic gouge versus standard method for iliac crest harvesting.
Duncan, R W; McGuire, R A; Meydrech, E F
1994-08-01
Fifty consecutive patients undergoing posterior lumbar fusion by a single surgeon were prospectively randomized in a study designed to evaluate the efficacy of using a pneumatic oscillating gouge to obtain posterior outer table iliac crest bone graft versus the standard method of using osteotomes and gouges. Variables analyzed included graft harvesting time, blood loss, weight of graft obtained, and graft site morbidity. Mean graft harvesting time with the pneumatic gouge was 1 minute 44 seconds (range, 1 min 5 sec to 3 min 15 sec) compared with the standard method time of 4 minutes 4 seconds (range, 2 min 15 sec to 8 min 56 sec) (P = 0.0001). Blood loss was also less, with a mean of 25.4 cc for the pneumatic gouge compared with 65.2 cc using the standard method (P = 0.0001). There were no complications with the graft site in either group. We conclude that the pneumatic gouge is a viable alternative to standard bone graft harvesting techniques. Benefits include shorter operative time and decreased blood loss without an increased morbidity.
Directory of Open Access Journals (Sweden)
Jiang Lin
2016-01-01
The overall efficiency of PV arrays is affected by hot spots, which should be detected and diagnosed by applying suitable monitoring techniques. The use of IR thermal images to detect hot spots has been studied as a direct, noncontact, nondestructive technique. However, IR thermal images suffer from relatively high stochastic noise and non-uniform clutter, so conventional image processing methods are not effective. This paper proposes a method to detect hot spots based on curve fitting of the gray histogram. MATLAB simulation results show that the proposed method effectively detects hot spots while suppressing the noise generated during image acquisition.
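A minimal stand-in for the histogram-based detection step, assuming the background dominates the gray-level histogram and hot spots appear as outlying bright pixels (the paper's actual curve-fitting model is not given in the abstract):

```python
def hotspot_threshold(histogram, k=3.0):
    """Estimate a hot-spot gray-level threshold from an image histogram.

    The dominant (background) distribution is summarized by its mean and
    standard deviation computed directly from the histogram; gray levels
    more than k sigma above the mean are flagged as hot-spot candidates.
    A simplified substitute for the curve-fitting step described above.
    """
    total = sum(histogram)
    mean = sum(g * c for g, c in enumerate(histogram)) / total
    var = sum(c * (g - mean) ** 2 for g, c in enumerate(histogram)) / total
    return mean + k * var ** 0.5
```

Working from the histogram rather than per-pixel values is what gives this family of methods its robustness to stochastic noise: the fit averages over all pixels before any thresholding decision is made.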
American Society for Testing and Materials. Philadelphia
2011-01-01
1.1 This test method is applicable to the isotopic analysis of uranium hexafluoride (UF6) with 235U concentrations less than or equal to 5 % and 234U, 236U concentrations of 0.0002 to 0.1 %. 1.2 This test method may be applicable to the analysis of the entire range of 235U isotopic compositions providing that adequate Certified Reference Materials (CRMs or traceable standards) are available. 1.3 This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibility of the user of this standard to establish appropriate safety and health practices and determine the applicability of regulatory limitations prior to use.
Maximum Energy Output of a DFIG Wind Turbine Using an Improved MPPT-Curve Method
Directory of Open Access Journals (Sweden)
Dinh-Chung Phan
2015-10-01
A new method is proposed for obtaining the maximum power output of a doubly-fed induction generator (DFIG) wind turbine to control the rotor- and grid-side converters. The efficiency of maximum power point tracking obtained by the proposed method is theoretically guaranteed under assumptions that represent physical conditions. Several control parameters may be adjusted to ensure the quality of control performance. In particular, a DFIG state-space model and a control technique based on the Lyapunov function are adopted to derive the control method. The effectiveness of the proposed method is verified via numerical simulations of a 1.5-MW DFIG wind turbine using MATLAB/Simulink. The simulation results show that when the proposed method is used, the wind turbine is capable of properly tracking the optimal operation point; furthermore, the generator's available energy output is higher when the proposed method is used than when the conventional method is used.
Standard Test Method for Laboratory Aging of Sandwich Constructions
American Society for Testing and Materials. Philadelphia
1999-01-01
1.1 This test method covers the determination of the resistance of sandwich panels to severe exposure conditions as measured by the change in selected properties of the material after exposure. The exposure cycle to which the specimen is subjected is an arbitrary test having no correlation with natural weathering conditions. 1.2 The values stated in SI units are to be regarded as the standard. The inch-pound units given may be approximate. 1.3 This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibility of the user of this standard to establish appropriate safety and health practices and determine the applicability of regulatory limitations prior to use.
Standard test method for macroetching metals and alloys
American Society for Testing and Materials. Philadelphia
2000-01-01
1.1 These test procedures describe the methods of macroetching metals and alloys to reveal their macrostructure. 1.2 The values stated in inch-pound units are to be regarded as the standard. The SI equivalents of inch-pound units may be approximate. 1.3 This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibility of the user of this standard to establish appropriate safety and health practices and determine the applicability of regulatory limitations prior to use. For specific warning statements, see 6.2, 7.1, 8.1.3, 8.2.1, 8.8.3, 8.10.1.1, and 8.13.2.
Positron emission mammography (PEM): reviewing standardized semiquantitative method.
Yamamoto, Yayoi; Tasaki, Youichiro; Kuwada, Yukiko; Ozawa, Yukihiko; Katayama, Atsushi; Kanemaki, Yoshihide; Enokido, Katsutoshi; Nakamura, Seigo; Kubouchi, Kouichi; Morita, Satoshi; Noritake, Mutsumi; Nakajima, Yasuo; Inoue, Tomio
2013-11-01
To validate semiquantitative analysis of positron emission mammography (PEM). Fifty women with histologically confirmed breast lesions were retrospectively enrolled. Semiquantitative uptake values (4 methods), the maximum PEM uptake value (PUVmax), and the lesion-to-background (LTB) value (3 methods) were measured. LTB is the ratio of the lesion's PUVmax to the mean background; LTB1, LTB2, and LTB3, which were calculated on different backgrounds, were used to designate the three values measured. Interobserver reliability between two readers for PUVmax and the LTBs was tested using the interobserver correlation coefficient (ICC). The likelihood ratio test was used to evaluate the relationship between ICCs. Receiver operating characteristic (ROC) curves were calculated for all methods. Diagnostic accuracy in differentiating benign from malignant tissue was compared between PUVmax and LTB1. The ICC was 0.971 [95 % confidence interval (CI) 0.943-0.986] for PUVmax, 0.873 (95 % CI 0.758-0.935) for LTB1, 0.965 (95 % CI 0.925-0.983) for LTB2, and 0.895 (95 % CI 0.799-0.946) for LTB3. However, there were some technical difficulties in the practical use of LTB2 and LTB3. The likelihood ratio test between PUVmax and LTB1 was statistically significant. These results support the use of PEM in semiquantitative analysis.
Standard test method for liquid impingement erosion using rotating apparatus
American Society for Testing and Materials. Philadelphia
2010-01-01
1.1 This test method covers tests in which solid specimens are eroded or otherwise damaged by repeated discrete impacts of liquid drops or jets. Among the collateral forms of damage considered are degradation of optical properties of window materials, and penetration, separation, or destruction of coatings. The objective of the tests may be to determine the resistance to erosion or other damage of the materials or coatings under test, or to investigate the damage mechanisms and the effect of test variables. Because of the specialized nature of these tests and the desire in many cases to simulate to some degree the expected service environment, the specification of a standard apparatus is not deemed practicable. This test method gives guidance in setting up a test, and specifies test and analysis procedures and reporting requirements that can be followed even with quite widely differing materials, test facilities, and test conditions. It also provides a standardized scale of erosion resistance numbers applicab...
Quantitative data standardization of X-ray based densitometry methods
Sergunova, K. A.; Petraikin, A. V.; Petrjajkin, F. A.; Akhmad, K. S.; Semenov, D. S.; Potrakhov, N. N.
2018-02-01
In the present work, the design of a special liquid phantom for assessing the accuracy of quantitative densitometric data is proposed. The dependencies between the measured bone mineral density (BMD) values and the nominal values are also presented for different X-ray based densitometry techniques. The resulting linear plots make it possible to introduce correction factors that increase the accuracy of BMD measurement by the QCT, DXA and DECT methods, and to use them for standardization and comparison of measurements.
International Nuclear Information System (INIS)
Watanabe, Yoshirou; Sakai, Akira; Inada, Mitsuo; Shiraishi, Tomokuni; Kobayashi, Akitoshi
1982-01-01
The S2-gated (second heart sound) method was designed by the authors. In 6 normal subjects and 16 patients (old myocardial infarction, 12 cases; hypertension, 2 cases; aortic regurgitation, 2 cases), radioisotope (RI) angiography using the S2-gated equilibrium method was performed. In RI angiography, 99mTc-human serum albumin (HSA) 555 MBq (15 mCi) as tracer, a PDP11/34 minicomputer, and a PCG/ECG synchronizer (Metro Inst.) were used. Left ventricular (LV) volume curves were then obtained by both the S2-gated and the electrocardiogram (ECG) R wave-gated methods. From the LV volume curve, left ventricular ejection fraction (EF), mean ejection rate (mER, s^-1), mean filling rate (mFR, s^-1) and rapid filling fraction (RFF) were calculated. mFR denotes the mean filling rate during the rapid filling phase. RFF was defined as the fraction of stroke volume filled during the rapid filling phase. The S2-gated method was more reliable than the ECG-gated method in evaluating the early diastolic phase. RFF differed between the normal group and the myocardial infarction (MI) group (p < 0.005). RFF in the 2 groups was correlated with EF (r = 0.82, p < 0.01). RFF was useful in evaluating MI cases with normal EF values. Comparison of mER by the ECG-gated method with mFR by the S2-gated method was useful in evaluating MI cases with normal mER values. mFR was markedly lower than mER in the MI group, but approximately equal to mER in the normal group. In conclusion, evaluation using RFF and mFR by the S2-gated method was useful in MI cases with normal systolic-phase indices. (author)
Gosselin-Théberge, Maxime; Taboada, Eduardo; Guy, Rebecca A
2016-10-01
Campylobacter is a major public health and economic burden in developed and developing countries. This study evaluated published real-time PCR (qPCR) assays for detection of Campylobacter to enable selection of the best assays for quantification of Campylobacter spp. and C. jejuni in environmental water samples. A total of 9 assays were compared: three for thermotolerant Campylobacter spp. targeting the 16S rRNA gene and six for C. jejuni targeting different genes. These assays were tested in the wet-lab for specificity and sensitivity against a collection of 60 genetically diverse Campylobacter isolates from environmental water. All three qPCR assays targeting Campylobacter spp. were positive when tested against the 60 isolates, whereas assays targeting C. jejuni differed among each other in terms of specificity and sensitivity. Three C. jejuni-specific assays that demonstrated good specificity and sensitivity in the wet-lab showed results concordant with in silico-predicted results obtained against a set of 211 C. jejuni and C. coli genome sequences. Two of the assays, targeting Campylobacter spp. and C. jejuni, were selected to compare DNA concentration estimation using spectrophotometry and digital PCR (dPCR), in order to calibrate standard curves (SC) for greater accuracy of qPCR-based quantification. Average differences of 0.56±0.12 and 0.51±0.11 log fold copies were observed between the spectrophotometry-based and dPCR-based SC preparations for Campylobacter spp. and C. jejuni, respectively, demonstrating an over-estimation of Campylobacter concentration when spectrophotometry was used to calibrate the DNA SCs. Our work showed differences in quantification of aquatic environmental isolates of Campylobacter between qPCR assays, and method-specific bias in SC preparation. This study provides an objective analysis of published qPCR assays targeting Campylobacter and a framework for evaluating novel assays. Crown Copyright © 2016. Published by Elsevier B.V. All rights reserved.
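Standard-curve based qPCR quantification of the kind discussed above can be sketched in a few lines: fit Cq against log10 copy number for a dilution series, derive the amplification efficiency from the slope, and invert the line for unknowns. The dilution and Cq values below are hypothetical, for illustration only:

```python
def fit_standard_curve(log10_copies, cq_values):
    """Least-squares line Cq = slope * log10(copies) + intercept."""
    n = len(log10_copies)
    mx = sum(log10_copies) / n
    my = sum(cq_values) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(log10_copies, cq_values))
    sxx = sum((x - mx) ** 2 for x in log10_copies)
    slope = sxy / sxx
    intercept = my - slope * mx
    # amplification efficiency: 1.0 corresponds to perfect doubling per cycle
    efficiency = 10 ** (-1.0 / slope) - 1.0
    return slope, intercept, efficiency

def copies_from_cq(cq, slope, intercept):
    """Invert the calibration line to estimate copy number in an unknown."""
    return 10 ** ((cq - intercept) / slope)

# hypothetical 10-fold dilution series, 10^2 .. 10^6 copies per reaction
dilutions = [2, 3, 4, 5, 6]                 # log10(copies)
cqs = [33.2, 29.9, 26.6, 23.3, 20.0]        # ideal slope is about -3.32
slope, intercept, eff = fit_standard_curve(dilutions, cqs)
```

The dPCR-calibrated curves in the study differ only in how the copy numbers of the calibrants were assigned; the fitting and inversion steps are the same.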
Reflector construction by sound path curves - A method of manual reflector evaluation in the field
International Nuclear Information System (INIS)
Siciliano, F.; Heumuller, R.
1985-01-01
In order to describe the time-of-flight behavior of various reflectors we have set up models and derived from them analytical and graphic approaches to reflector reconstruction. In the course of this work, the maximum achievable accuracy and possible simplifications were investigated. The aim of the time-of-flight reconstruction method is to determine the points of a reflector on the basis of a sound path function (sound path as a function of the probe index position). This method can only be used on materials which are isotropic in terms of sound velocity, since the method relies on time of flight being converted into sound path. This paper deals only with two-dimensional reconstruction; in other words, all statements relate to the plane of incidence. The method is based on the fact that the geometrical locus of the points equidistant from a given probe index position is a circle. If circles with radii equal to the associated sound paths are drawn for various search-unit positions, the points of intersection of the circles are the desired reflector points.
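The circle-intersection step described above can be sketched as follows, for an idealized flat surface with two probe index positions on the line y = 0 and sound paths already converted from time of flight (all numbers hypothetical):

```python
import math

def reflector_point(x1, r1, x2, r2):
    """Intersect two circles centred on the surface (y = 0) at x1 and x2,
    with radii r1 and r2 equal to the measured sound paths, and return the
    intersection point below the surface (y <= 0), i.e. inside the material."""
    d = abs(x2 - x1)
    if d == 0 or d > r1 + r2 or d < abs(r1 - r2):
        raise ValueError("circles do not intersect")
    # distance from centre 1 to the chord joining the two intersections
    a = (r1 ** 2 - r2 ** 2 + d ** 2) / (2 * d)
    h = math.sqrt(r1 ** 2 - a ** 2)
    xm = x1 + a * (x2 - x1) / d
    return (xm, -h)          # keep only the solution inside the material

# hypothetical reflector: sound path 50 mm from probes at x = 0 and x = 60 mm
pt = reflector_point(0.0, 50.0, 60.0, 50.0)
```

In practice more than two probe positions are available and each pair yields a candidate point, so the scatter of the intersections indicates the reconstruction accuracy.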
DEFF Research Database (Denmark)
Gardner, Ian A.; Greiner, Matthias
2006-01-01
Receiver-operating characteristic (ROC) curves provide a cutoff-independent method for the evaluation of continuous or ordinal tests used in clinical pathology laboratories. The area under the curve is a useful overall measure of test accuracy and can be used to compare different tests (or differ...
Comparison of Standard and Fast Charging Methods for Electric Vehicles
Directory of Open Access Journals (Sweden)
Petr Chlebis
2014-01-01
Full Text Available This paper describes a comparison of standard and fast charging methods used in the field of electric vehicles, together with a comparison of their efficiency in terms of electrical energy consumption. The comparison was performed on a three-phase buck converter designed for an EV fast-charging station. The results were obtained by both mathematical and simulation methods. The laboratory model of the entire physical application, which will later be used to verify the simulation results, is currently under construction.
Study on method of data standardization in interferometric testing
Chen, Wei
2010-10-01
As a rule, interferometers are used to test the surface figure of optical components during the polishing phase, providing guidance for subsequent manufacturing steps. Because phase-shift interferometry is sensitive to environmental vibration, the whole wave-front interferogram often cannot be acquired, so exact interference data for the optical surface cannot be obtained. A calculation method is provided that assigns to each spatial point on the tested optical component the equal-accuracy arithmetic average of repeated measurements. This paper describes test results for an optical component of size Φ1200 mm, demonstrating that the method can effectively eliminate the vibration effects and yield standardized data.
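The equal-accuracy arithmetic-averaging idea can be illustrated with a minimal sketch; here the vibration-induced error is modelled as zero-mean random noise, which is a simplifying assumption (real vibration errors are correlated between frames):

```python
import numpy as np

def average_wavefront(maps):
    """Equal-accuracy arithmetic average of N repeated wave-front maps.
    Random measurement errors shrink roughly as 1/sqrt(N)."""
    stack = np.array(maps)
    mean_map = stack.mean(axis=0)
    # pixel-wise standard error of the mean, a rough stability estimate
    sem = stack.std(axis=0, ddof=1) / np.sqrt(len(maps))
    return mean_map, sem

rng = np.random.default_rng(1)
true_surface = np.zeros((32, 32))           # ideal flat wave front
# 16 repeated measurements of the same surface, corrupted by "vibration"
maps = [true_surface + rng.normal(0, 0.05, true_surface.shape)
        for _ in range(16)]
mean_map, sem = average_wavefront(maps)
```

With 16 equal-accuracy frames the residual error of the averaged map is about a quarter of the single-frame error.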
American Society for Testing and Materials. Philadelphia
2006-01-01
1.1 This standard covers the determination of the resistance to stable crack extension in metallic materials in terms of the critical crack-tip-opening angle (CTOAc), ψc, and/or the crack-opening displacement (COD), δ5, resistance curve (1). This method applies specifically to fatigue pre-cracked specimens that exhibit low constraint (crack-length-to-thickness and un-cracked ligament-to-thickness ratios greater than or equal to 4) and that are tested under slowly increasing remote applied displacement. The recommended specimens are the compact-tension, C(T), and middle-crack-tension, M(T), specimens. The fracture resistance determined in accordance with this standard is measured as ψc (critical CTOA value) and/or δ5 (critical COD resistance curve) as a function of crack extension. Both fracture resistance parameters are characterized using either a single-specimen or a multiple-specimen procedure. These fracture quantities are determined under the opening mode (Mode I) of loading. Influences of environment a...
Directory of Open Access Journals (Sweden)
Bo Sun
2015-10-01
Full Text Available The article presents information on birth population and policy change in China, along with data on neonatal and perinatal morbidity and mortality, covering four decades. Care standards and cost-effectiveness are also analyzed, highlighting the measures that significantly improved general and specialized maternal and infant care and established a modern perinatal care system. Moreover, results from multicenter studies, conducted through nation-wide or province-wide collaborative NICU networks for respiratory diseases, are reported. The development of neonatal-perinatal care in China is representative in its transition, over more than three decades, from a poor condition into a modernized one. Public health-care policy and a professionally integrated service mode played pivotal roles, whereas socioeconomic and cultural factors played either synergistic or detrimental roles in this transition. Progress in Chinese neonatal-perinatal care has also been influenced by international collaboration and exchange, and in a sense has closely followed the footprints of international pioneers and their colleagues. In the foreseeable future, many Chinese perinatal and neonatal centers will actively participate in international collaborations aiming at improving not only domestic but developing-country neonatal-perinatal care as a whole. Proceedings of the 11th International Workshop on Neonatology and Satellite Meetings · Cagliari (Italy) · October 26th-31st, 2015 · From the womb to the adult. Guest Editors: Vassilios Fanos (Cagliari, Italy), Michele Mussap (Genoa, Italy), Antonio Del Vecchio (Bari, Italy), Bo Sun (Shanghai, China), Dorret I. Boomsma (Amsterdam, the Netherlands), Gavino Faa (Cagliari, Italy), Antonio Giordano (Philadelphia, USA)
Cutibacterium acnes molecular typing: time to standardize the method.
Dagnelie, M-A; Khammari, A; Dréno, B; Corvec, S
2018-03-12
The Gram-positive, anaerobic-aerotolerant bacterium Cutibacterium acnes is a commensal of healthy human skin, subdivided into six main phylogenetic groups or phylotypes: IA1, IA2, IB, IC, II and III. To decipher how far specific C. acnes subgroups are involved in disease physiopathology, different molecular typing methods have been developed to identify these subgroups (i.e. phylotypes, clonal complexes, SLST types). However, since several molecular typing methods have been developed over the last decade, comparing the results from one article to another has become a difficult task. Based on the scientific literature, the aim of this narrative review is to propose a standardized method for C. acnes molecular typing, according to the degree of resolution needed (phylotypes, clonal complexes, or SLST types). We discuss the existing typing methods from a critical point of view, weighing their advantages and drawbacks, and identify those most frequently used. Consequently, we propose a consensus algorithm according to the phylogenetic resolution level needed: multiplex PCR for phylotype identification, MLST9 for clonal complex determination, and SLST for phylogenetic investigations including numerous isolates. There is an obvious need for consensus about C. acnes molecular typing methods. This standardization will facilitate the comparison of results from one article to another, as well as the interpretation of clinical data. Copyright © 2018. Published by Elsevier Ltd.
Melchior, A.-L.; Ansari, R.; Aubourg, E.; Baillon, P.; Bareyre, P.; Bauer, F.; Beaulieu, J.-Ph.; Bouquet, A.; Brehin, S.; Cavalier, F.; Char, S.; Couchot, F.; Coutures, C.; Ferlet, R.; Fernandez, J.; Gaucherel, C.; Giraud-Heraud, Y.; Glicenstein, J.-F.; Goldman, B.; Gondolo, P.; Gros, M.; Guibert, J.; Gry, C.; Hardin, D.; Kaplan, J.; de Kat, J.; Lachieze-Rey, M.; Laurent, B.; Lesquoy, E.; Magneville, Ch.; Mansoux, B.; Marquette, J.-B.; Maurice, E.; Milsztajn, A.; Moniez, M.; Moreau, O.; Moscoso, L.; Palanque-Delabrouille, N.; Perdereau, O.; Prevot, L.; Renault, C.; Queinnec, F.; Rich, J.; Spiro, M.; Vigroux, L.; Zylberajch, S.; Vidal-Madjar, A.; Magneville, Ch.
1999-01-01
The presence and abundance of MAssive Compact Halo Objects (MACHOs) towards the Large Magellanic Cloud (LMC) can be studied with microlensing searches. The 10 events detected by the EROS and MACHO groups suggest that objects of about 0.5 solar masses could fill 50% of the dark halo. This preferred mass is quite surprising, and increasing the presently small statistics is a crucial issue. Additional microlensing of stars too dim to be resolved in crowded fields should be detectable using the Pixel Method. We present here an application of this method to the EROS 91-92 data (one tenth of the whole existing data set). We emphasize the data treatment required for monitoring pixel fluxes. Geometric and photometric alignments are performed on each image, and seeing correction and error estimates are discussed. The 3.6" x 3.6" super-pixel light curves thus produced are very stable over the 120-day time span. Fluctuations at a level of 1.8% of the flux in blue and 1.3% in red are measured on the pixel light curves. This level of stabil...
Gondim, Carina de Souza; Junqueira, Roberto Gonçalves; de Souza, Scheilla Vitorino Carvalho; Callao, M Pilar; Ruisánchez, Itziar
2017-06-01
A strategy for determining performance parameters of two-class multivariate qualitative methods was proposed. As a case study, multivariate classification methods based on mid-infrared (MIR) spectroscopy coupled with the soft independent modelling of class analogy (SIMCA) technique were developed for detection of hydrogen peroxide and formaldehyde in milk. From the outputs (positive/negative/inconclusive) of the samples, which were unadulterated or adulterated at the target value, the main performance parameters were obtained. Sensitivity and specificity values for the unadulterated and adulterated classes were satisfactory. Inconclusive ratios of 12% and 21% were obtained for hydrogen peroxide and formaldehyde, respectively. To evaluate the performance parameters related to concentration, Probability of Detection (POD) curves were established, estimating the decision limit, the detection capability and the unreliability region. When inconclusive outputs were obtained, two additional concentration limits were defined: the decision limit with inconclusive outputs and the detection capability with inconclusive outputs. The POD curves showed that for concentrations below 3.7 g L^-1 of hydrogen peroxide and close to zero of formaldehyde, the chance of giving a positive output (adulterated sample) was lower than 5%. For concentrations at or above 11.3 g L^-1 of hydrogen peroxide and 10 mg L^-1 of formaldehyde, the probability of giving a negative output was also lower than 5%. Copyright © 2017 Elsevier B.V. All rights reserved.
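A POD curve of the kind described above can be sketched by fitting a model to the fraction of positive outputs at each spiked concentration; the logistic form and the data below are assumptions for illustration, not the authors' exact model or results:

```python
import math

def pod_logistic(c, c50, s):
    # assumed logistic POD model: rises from 0 to 1 around concentration c50
    return 1.0 / (1.0 + math.exp(-(c - c50) / s))

def fit_pod(concs, pods):
    """Coarse grid search for (c50, s) minimising the squared error."""
    c50_grid = [i * 0.1 for i in range(0, 201)]
    s_grid = [i * 0.1 for i in range(1, 51)]
    best = None
    for c50 in c50_grid:
        for s in s_grid:
            err = sum((pod_logistic(c, c50, s) - p) ** 2
                      for c, p in zip(concs, pods))
            if best is None or err < best[0]:
                best = (err, c50, s)
    return best[1], best[2]

def detection_capability(c50, s, beta=0.05):
    # concentration where the false-negative rate drops to beta (POD = 1 - beta)
    return c50 + s * math.log((1 - beta) / beta)

# hypothetical fractions of positive outputs at spiked concentrations (g/L)
concs = [0.0, 2.0, 4.0, 6.0, 8.0, 10.0, 12.0]
pods = [0.0, 0.05, 0.2, 0.5, 0.8, 0.95, 1.0]
c50, s = fit_pod(concs, pods)
cd = detection_capability(c50, s)
```

The decision limit (POD = 5% for an unadulterated sample) follows from the same inversion with beta replaced by 1 - beta.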
Liu, Boshi; Huang, Renliang; Yu, Yanjun; Su, Rongxin; Qi, Wei; He, Zhimin
2018-01-01
Ochratoxin A (OTA) is a type of mycotoxin generated by the metabolism of Aspergillus and Penicillium, and is extremely toxic to humans, livestock, and poultry. However, traditional assays for the detection of OTA are expensive and complicated. Besides the OTA aptamer, OTA itself at high concentration can also adsorb on the surface of gold nanoparticles (AuNPs) and further inhibit AuNP salt-induced aggregation. We herein report a new OTA assay that exploits the localized surface plasmon resonance effect of AuNPs and their aggregates. Because a result obtained from a single linear calibration curve is not reliable, we developed a "double calibration curve" method to address this issue and widen the OTA detection range. A number of other analytes were also examined, and the structural properties of analytes that bind to the AuNPs were further discussed. We found that various considerations must be taken into account when detecting these analytes with AuNP aggregation-based methods, owing to their different binding strengths.
Analysis and Extension of the PCA Method, Estimating a Noise Curve from a Single Image
Directory of Open Access Journals (Sweden)
Miguel Colom
2016-12-01
Full Text Available In the article 'Image Noise Level Estimation by Principal Component Analysis', S. Pyatykh, J. Hesser, and L. Zheng propose a new method to estimate the variance of the noise in an image from the eigenvalues of the covariance matrix of the overlapping blocks of the noisy image. Instead of using all the patches of the noisy image, the authors propose an iterative strategy to adaptively choose the optimal set containing the patches with lowest variance. Although the method measures uniform Gaussian noise, it can be easily adapted to deal with signal-dependent noise, which is realistic with the Poisson noise model obtained by a CMOS or CCD device in a digital camera.
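A stripped-down sketch of the eigenvalue idea (without the authors' iterative optimal patch selection) might look like this: the image content occupies only a few principal directions of the patch covariance, so the remaining eigenvalues estimate the noise variance. The median heuristic below is an assumption made for brevity, not the selection rule of Pyatykh et al.:

```python
import numpy as np

def estimate_noise_sigma(img, block=5):
    """Rough PCA-based noise estimate: gather all overlapping blocks,
    take the eigenvalues of their covariance matrix, and read the noise
    variance from the eigenvalues not dominated by image content."""
    h, w = img.shape
    patches = np.array([img[i:i + block, j:j + block].ravel()
                        for i in range(h - block + 1)
                        for j in range(w - block + 1)])
    cov = np.cov(patches, rowvar=False)         # block^2 x block^2 matrix
    eigvals = np.linalg.eigvalsh(cov)           # ascending order
    # median eigenvalue: robust to the few signal-dominated directions
    return float(np.sqrt(np.median(eigvals)))

# smooth synthetic image plus Gaussian noise of known standard deviation
rng = np.random.default_rng(0)
x = np.linspace(0, 50, 64)
clean = x[None, :] + x[:, None]                 # smooth ramp, low-rank patches
sigma_true = 10.0
noisy = clean + rng.normal(0, sigma_true, clean.shape)
sigma_est = estimate_noise_sigma(noisy)
```

For signal-dependent (Poissonian) noise the same computation would be repeated per intensity bin, as the article's extension suggests.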
PIV Measurement of Pulsatile Flows in 3D Curved Tubes Using Refractive Index Matching Method
International Nuclear Information System (INIS)
Hong, Hyeon Ji; Ji, Ho Seong; Kim, Kyung Chun
2016-01-01
Three-dimensional models of stenosed blood vessels were prepared using a 3D printer. The models included a straight pipe with axisymmetric stenosis and a pipe bent 10° from the center of the stenosis. A refractive index matching method was utilized to measure accurate velocity fields inside the 3D tubes. Three different pulsatile flows were generated and controlled by changing the rotational speed of the peristaltic pump. Unsteady velocity fields were measured by a time-resolved particle image velocimetry method. Periodic vortex shedding occurred, and the vortex motion depended on the location of the maximum-velocity region. The sizes, positions, and symmetry of the vortices are influenced by the mean Reynolds number and the tube geometry. In the case of the bent pipe, a recirculation zone observed downstream of the stenosis could explain, from a hemodynamic point of view, the possibility of blood clot formation and adhesion.
Obuchowski, Nancy A.; Bullen, Jennifer A.
2018-04-01
Receiver operating characteristic (ROC) analysis is a tool used to describe the discrimination accuracy of a diagnostic test or prediction model. While sensitivity and specificity are the basic metrics of accuracy, they have many limitations when characterizing test accuracy, particularly when comparing the accuracies of competing tests. In this article we review the basic study design features of ROC studies, illustrate sample size calculations, present statistical methods for measuring and comparing accuracy, and highlight commonly used ROC software. We include descriptions of multi-reader ROC study design and analysis, address frequently seen problems of verification and location bias, discuss clustered data, and provide strategies for testing endpoints in ROC studies. The methods are illustrated with a study of transmission ultrasound for diagnosing breast lesions.
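The nonparametric AUC and its Hanley-McNeil standard error, two of the basic quantities in such an analysis, can be computed directly from the score lists (the scores below are hypothetical, not from the transmission ultrasound study):

```python
def auc_mann_whitney(neg, pos):
    """Empirical AUC: the probability that a random diseased (pos) score
    exceeds a random non-diseased (neg) score, with ties counted as half."""
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos) * len(neg))

def auc_se_hanley_mcneil(auc, n_neg, n_pos):
    """Hanley-McNeil approximation for the standard error of the AUC."""
    q1 = auc / (2 - auc)
    q2 = 2 * auc ** 2 / (1 + auc)
    var = (auc * (1 - auc)
           + (n_pos - 1) * (q1 - auc ** 2)
           + (n_neg - 1) * (q2 - auc ** 2)) / (n_neg * n_pos)
    return var ** 0.5

# hypothetical test scores for benign (neg) and malignant (pos) lesions
neg = [1.1, 1.4, 2.0, 2.3, 2.9]
pos = [1.9, 2.8, 3.1, 3.6, 4.0]
auc = auc_mann_whitney(neg, pos)
se = auc_se_hanley_mcneil(auc, len(neg), len(pos))
```

Note that this simple standard error does not account for the clustering or multi-reader correlation structures the review discusses; those require the dedicated methods it cites.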
Miura, Tsutomu; Chiba, Koichi; Kuroiwa, Takayoshi; Narukawa, Tomohiro; Hioki, Akiharu; Matsue, Hideaki
2010-09-15
Neutron activation analysis (NAA) coupled with an internal standard method was applied for the determination of As in certified reference materials (CRMs) of arsenobetaine (AB) standard solutions to verify their certified values. Gold was used as an internal standard to compensate for differences in neutron exposure within an irradiation capsule and to improve sample-to-sample repeatability. Application of the internal standard method also significantly improved the linearity of the calibration curve up to 1 μg of As. The analytical reliability of the proposed method was evaluated by k(0)-standardization NAA. The analytical results for As in the AB standard solutions BCR-626 and NMIJ CRM 7901-a were (499 ± 55) mg kg^-1 (k = 2) and (10.16 ± 0.15) mg kg^-1 (k = 2), respectively. These values were found to be 15-20% higher than the certified values. The between-bottle variation of BCR-626 was much larger than the expanded uncertainty of the certified value, whereas that of NMIJ CRM 7901-a was almost negligible. Copyright (c) 2010 Elsevier B.V. All rights reserved.
Energy Technology Data Exchange (ETDEWEB)
Milligan, M R
1996-04-01
As an intermittent resource, capturing the temporal variation in windpower is an important issue in the context of utility production cost modeling. Many production cost models use a method that creates a cumulative probability distribution outside the time domain. The purpose of this report is to examine two production cost models that represent the two major model types: chronological and load duration curve models. This report is part of the ongoing research undertaken by the Wind Technology Division of the National Renewable Energy Laboratory in utility modeling and wind system integration.
An endogenous standard, radioisotopic ratio method in NAA
International Nuclear Information System (INIS)
Byrne, A.R.; Dermelj, M.
1997-01-01
A derivative form of NAA is proposed which is based on the use of an endogenous internal standard of already known concentration in the sample. If a comparator with a known ratio of the determinand to the endogenous standard is co-irradiated with the sample, the determinand concentration is derived from the endogenous standard concentration and the activity ratios of the two induced nuclides in the sample and comparator. As well as eliminating the sample mass and greatly reducing errors caused by pulse pile-up and geometrical differences, it was shown that in the radiochemical mode, if the endogenous standard is chosen so that the induced activity is radioisotopic with that from the determinand, the radiochemical yield is also eliminated and the risk of non-achievement of isotopic exchange is greatly reduced. The method is demonstrated with good results on reference materials for the determination of I, Mn and Ni. The advantages and disadvantages of this approach are discussed. It is suggested that it may be of use in quality control and in extending the range of certified elements in reference materials. (author)
Leem, Dohyun; Kim, Jin-Hwan; Barlat, Frédéric; Song, Jung Han; Lee, Myoung-Gyu
2018-03-01
An inverse approach based on the virtual fields method (VFM) is presented to identify material hardening parameters under dynamic deformation. This dynamic VFM (D-VFM) does not require load information for the parameter identification; instead, it utilizes acceleration fields in the specimen's gage region. To investigate the feasibility of the proposed inverse approach for dynamic deformation, virtual experiments using dynamic finite element simulations were conducted. The simulations provided all the data necessary for the identification, such as displacement, strain, and acceleration fields. The accuracy of the identification was evaluated by varying several parameters, such as specimen geometry, velocity, and traction boundary conditions. The analysis clearly shows that the D-VFM, which utilizes acceleration fields, can be a good alternative to the conventional identification procedure that uses load information. It was also found that proper deformation conditions are required to generate sufficient acceleration fields during dynamic deformation and thereby enhance the identification accuracy of the D-VFM.
Standard Test Method for Normal Spectral Emittance at Elevated Temperatures
American Society for Testing and Materials. Philadelphia
1972-01-01
1.1 This test method describes a highly accurate technique for measuring the normal spectral emittance of electrically conducting materials or materials with electrically conducting substrates, in the temperature range from 600 to 1400 K, and at wavelengths from 1 to 35 μm. 1.2 The test method requires expensive equipment and rather elaborate precautions, but produces data that are accurate to within a few percent. It is suitable for research laboratories where the highest precision and accuracy are desired, but is not recommended for routine production or acceptance testing. However, because of its high accuracy this test method can be used as a referee method to be applied to production and acceptance testing in cases of dispute. 1.3 The values stated in SI units are to be regarded as the standard. The values in parentheses are for information only. 1.4 This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibility of the user of this stan...
Directory of Open Access Journals (Sweden)
Marek Krynke
2013-02-01
Full Text Available In slewing bearings, a great number of contact pairs are present on the contact surfaces between the rolling elements and raceways of the bearing. Computations to determine the load of the individual rolling elements, taking into account the flexibility of the bearing ring, are most often carried out using the finite element method. Construction of a full FEM model of the bearing, taking into account the shape of the rolling elements and the determination of the contact problem for every rolling element, leads to a singularity of the stiffness matrix, which in turn makes the problem impossible to solve. In FEM models, the rolling elements are therefore replaced by one-dimensional finite elements (linear elements) to simplify the computation procedure and to obtain an optimal computation time. FEM models in which the rolling elements are replaced by truss elements with a non-linear material characteristic, located between the raceway curvature centres in their axial section, are presented in the paper.
49V Standardization by the CIEMAT/NIST LSC method
International Nuclear Information System (INIS)
Rodriguez Barquero, L.; Los Arcos, J.M.; Jimenez, A.; Ortiz, F.
1998-01-01
The sample preparation procedure for LSC standardization of a 49V chloride solution is described, and the time stability of the samples is analyzed in four commercial scintillators: HiSafe II, HiSafe III, Ultima-Gold and Insta-Gel Plus. Acceptable stability was obtained in HiSafe III and Ultima-Gold. A self-consistent procedure was developed and successfully applied to the determination of the activity concentration of 49V. The samples were standardized by the CIEMAT/NIST method to a combined uncertainty of 3.4% in the figure-of-merit interval 1.2-2.5 (3H-equivalent efficiency 40%-20%).
Improving healthcare middleware standards with semantic methods and technologies.
Román, Isabel; Calvillo, Jorge; Roa, Laura M; Madinabeitia, Germán
2008-01-01
A critical issue in healthcare informatics is to facilitate the integration and interoperability of applications. This goal can be achieved through an open architecture based on a middleware independent of specific applications, useful for working with existing systems as well as for the integration of new systems. Several standards organizations are making efforts toward this target. This work is based on EN 12967-1,2,3, developed by CEN, which follows the ODP (Open Distributed Processing) methodology, providing a specification of distributed systems based on the definition of five viewpoints. However, only the three upper viewpoints are used to produce EN 12967; the two lower viewpoints should be considered in the implementation context. We use Semantic Grid for the lower views and Semantic Web and Web Services for the definition of the upper views. We analyze the benefits of using these methods and technologies and present a methodology for the development of this semantic healthcare middleware in compliance with European standards.
THE STANDARDIZED CANDLE METHOD FOR TYPE II PLATEAU SUPERNOVAE
International Nuclear Information System (INIS)
Olivares E, Felipe; Hamuy, Mario; Pignata, Giuliano; Maza, Jose; Bersten, Melina; Phillips, Mark M.; Morrel, Nidia I.; Suntzeff, Nicholas B.; Filippenko, Alexei V.; Kirshner, Robert P.; Matheson, Thomas
2010-01-01
In this paper, we study the 'standardized candle method' using a sample of 37 nearby (redshift z < …) Type II plateau supernovae. The correlation between plateau luminosity and expansion velocity previously reported in the literature is recovered. Using this relation and assuming a standard reddening law (R_V = 3.1), we obtain Hubble diagrams (HDs) in the BVI bands with dispersions of ∼0.4 mag. Allowing R_V to vary and minimizing the spread in the HDs, we obtain a dispersion range of 0.25-0.30 mag, which implies that these objects can deliver relative distances with precisions of 12%-14%. The resulting best-fit value of R_V is 1.4 ± 0.1.
National Research Council Canada - National Science Library
2001-01-01
The Standard CMMI Appraisal Method for Process Improvement (SCAMPI℠) is designed to provide benchmark-quality ratings relative to Capability Maturity Model® Integration (CMMI®) models...
Kentel, E.; Dogulu, N.
2016-12-01
Understanding catchment hydrology is a fundamental concern for hydrologists and water resources planners. In this context, given the increasing demand for streamflow information at sparsely gauged or ungauged catchments, there has been great interest in estimating the flow duration curve (FDC) due to its many practical applications. Statistical methods have been widely used for modelling FDCs at ungauged sites. These methods usually rely on estimating flow quantiles, or quantitative characteristics of the FDC representing its shape, such as the slope or the parameters of a statistical distribution, often in a regionalization context. However, there are few studies using machine learning methods, and the potential of various machine learning approaches for estimating FDCs is yet to be explored, although these methods have been successfully and extensively applied to many other water resources management and hydrological problems. This study addresses this gap by presenting a comparative performance evaluation of four methods: i) Multiple Linear Regression (MLR), ii) Regression Tree (RT), iii) Artificial Neural Network (ANN), and iv) Adaptive Neuro-Fuzzy Inference System (ANFIS). These methods are compared on FDCs of the Western Black Sea catchment in Turkey, modelled by relating flow quantiles to a number of variables representing catchment and climate characteristics. The accuracy of the predicted FDCs is assessed by three different measures: the Root Mean Squared Error (RMSE), the Nash-Sutcliffe Efficiency (NSE) and the Percent Bias (PBIAS).
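The three accuracy measures are simple to state in code; note that the sign convention of PBIAS varies between authors, so the one below (positive = underestimation) is an assumption:

```python
def rmse(obs, sim):
    """Root mean squared error between observed and simulated quantiles."""
    n = len(obs)
    return (sum((o - s) ** 2 for o, s in zip(obs, sim)) / n) ** 0.5

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit; values <= 0 mean the
    model is no better than predicting the observed mean."""
    mean_obs = sum(obs) / len(obs)
    num = sum((o - s) ** 2 for o, s in zip(obs, sim))
    den = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - num / den

def pbias(obs, sim):
    """Percent bias (one common sign convention: positive = underestimation)."""
    return 100.0 * sum(o - s for o, s in zip(obs, sim)) / sum(obs)

# hypothetical observed and predicted flow quantiles (m^3/s)
obs = [120.0, 60.0, 35.0, 20.0, 8.0]
sim = [110.0, 65.0, 33.0, 22.0, 9.0]
```

Because NSE normalizes by the observed variance, it rewards capturing the steep upper end of an FDC, while PBIAS exposes systematic volume errors that RMSE alone can hide.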
DEFF Research Database (Denmark)
Bernstein, Daniel J.; Birkner, Peter; Lange, Tanja
2013-01-01
This paper introduces EECM-MPFQ, a fast implementation of the elliptic-curve method of factoring integers. EECM-MPFQ uses fewer modular multiplications than the well-known GMP-ECM software, takes less time than GMP-ECM, and finds more primes than GMP-ECM. The main improvements above the modular-arithmetic level are as follows: (1) use Edwards curves instead of Montgomery curves; (2) use extended Edwards coordinates; (3) use signed-sliding-window addition-subtraction chains; (4) batch primes to increase the window size; (5) choose curves with small parameters and base points; (6) choose curves with large...
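Improvement (1) rests on the unified Edwards addition law, which the following sketch illustrates on a toy curve (EECM-MPFQ itself works with large moduli and the coordinate-system and windowing optimizations listed above; `pow(x, -1, p)` needs Python 3.8+):

```python
def edwards_add(P, Q, d, p):
    """Unified addition on the Edwards curve x^2 + y^2 = 1 + d*x^2*y^2 over
    GF(p): the same formula adds and doubles, with neutral element (0, 1).
    For non-square d the denominators never vanish (complete addition law)."""
    x1, y1 = P
    x2, y2 = Q
    t = d * x1 * x2 * y1 * y2 % p
    x3 = (x1 * y2 + y1 * x2) * pow((1 + t) % p, -1, p) % p
    y3 = (y1 * y2 - x1 * x2) * pow((1 - t) % p, -1, p) % p
    return (x3, y3)

def on_curve(P, d, p):
    x, y = P
    return (x * x + y * y) % p == (1 + d * x * x * y * y) % p

# toy parameters for illustration only: x^2 + y^2 = 1 + 2*x^2*y^2 over GF(13),
# where 2 is a non-square mod 13, so the addition law is complete
p, d = 13, 2
P = (4, 4)                       # a point of order 8 on this curve
Q = edwards_add(P, P, d, p)      # doubling via the same unified formula
```

The absence of exceptional cases in this addition law is one reason Edwards curves save modular multiplications relative to Montgomery-curve ECM.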
Analysis of Indonesian educational system standard with KSIM cross-impact method
Arridjal, F.; Aldila, D.; Bustamam, A.
2017-07-01
The results of the Programme for International Student Assessment (PISA) in 2012 show that Indonesia ranked 64th of 65 countries in mean mathematics score. In the 2013 Learning Curve mapping, Indonesia is included among the 10 countries with the lowest performance on the cognitive skills aspect, ranking 37th of 40 countries. Competency is built from 3 aspects, one of which is the cognitive aspect. The low mapping result on the cognitive aspect reflects the low competency of graduates, the output of the Indonesia National Education System (INES). INES adopts the concept of Eight Educational System Standards (EESS), one of which is the graduate competency standard, connected directly with Indonesia's students. This research aims to model INES using KSIM cross-impact simulation. Linear regression models of the EESS are constructed using the national accreditation data of senior high schools in Indonesia. The results are then interpreted as impact values in the construction of the KSIM cross-impact model of INES. The construction is used to analyze the interaction of the EESS and to simulate possible public policies in the education sector, i.e. stimulating the growth of the education staff, content, process and infrastructure standards. All public policy simulations were performed with 2 methods, i.e. a multiplier impact method and a constant intervention method. The numerical simulation results show that stimulating the growth of the content standard in the KSIM cross-impact construction of the EESS is the best public policy option for maximizing the growth of the graduate competency standard.
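A minimal sketch of a KSIM cross-impact update (Kane's classical formulation; the impact matrix and state values below are a hypothetical illustration, not the paper's fitted values) looks like this:

```python
def ksim_step(x, A, dt=0.1):
    """One KSIM update: state variables stay in (0, 1) because each is raised
    to a positive power; a positive impact A[i][j] of variable j on variable i
    yields an exponent < 1 and pushes x[i] upward, a negative impact pushes it
    downward."""
    n = len(x)
    new = []
    for i in range(n):
        num = 1 + dt / 2 * sum((abs(A[i][j]) - A[i][j]) * x[j] for j in range(n))
        den = 1 + dt / 2 * sum((abs(A[i][j]) + A[i][j]) * x[j] for j in range(n))
        new.append(x[i] ** (num / den))
    return new

# hypothetical 3-variable illustration: content standard (x0) boosts
# process standard (x1), which boosts graduate competency (x2)
A = [[0.0, 0.0, 0.0],
     [0.8, 0.0, 0.0],
     [0.0, 0.9, 0.0]]
x = [0.6, 0.4, 0.3]
for _ in range(50):
    x = ksim_step(x, A)
```

The bounded exponential update is what keeps every standard's level interpretable as a fraction of its ideal value throughout the simulation, which matches the way such policy scenarios are compared.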
Standardized Method for High-throughput Sterilization of Arabidopsis Seeds.
Lindsey, Benson E; Rivero, Luz; Calhoun, Chistopher S; Grotewold, Erich; Brkljacic, Jelena
2017-10-17
Arabidopsis thaliana (Arabidopsis) seedlings often need to be grown on sterile media. This requires prior seed sterilization to prevent the growth of microbial contaminants present on the seed surface. Currently, Arabidopsis seeds are sterilized using two distinct sterilization techniques in conditions that differ slightly between labs and have not been standardized, often resulting in only partially effective sterilization or in excessive seed mortality. Most of these methods are also not easily scalable to a large number of seed lines of diverse genotypes. As technologies for high-throughput analysis of Arabidopsis continue to proliferate, standardized techniques for sterilizing large numbers of seeds of different genotypes are becoming essential for conducting these types of experiments. The response of a number of Arabidopsis lines to two different sterilization techniques was evaluated based on seed germination rate and the level of seed contamination with microbes and other pathogens. The treatments included different concentrations of sterilizing agents and times of exposure, combined to determine optimal conditions for Arabidopsis seed sterilization. Optimized protocols have been developed for two different sterilization methods: bleach (liquid-phase) and chlorine (Cl2) gas (vapor-phase), both resulting in high seed germination rates and minimal microbial contamination. The utility of these protocols was illustrated through the testing of both wild type and mutant seeds with a range of germination potentials. Our results show that seeds can be effectively sterilized using either method without excessive seed mortality, although detrimental effects of sterilization were observed for seeds with lower than optimal germination potential. In addition, an equation was developed to enable researchers to apply the standardized chlorine gas sterilization conditions to airtight containers of different sizes. The protocols described here allow easy, efficient, and
The eXtensible Access Method (XAM) Standard
Directory of Open Access Journals (Sweden)
Steve Todd
2009-10-01
Recent developments in the storage industry have resulted in the creation of an industry-standard application programmer's interface (API) known as XAM, the eXtensible Access Method. The XAM API focuses on the creation and management of reference information (otherwise known as fixed content). Storage vendors supporting the XAM API will provide new benefits to applications that create and manage large amounts of fixed content. The benefits described by this paper merit consideration and research by developers creating applications for digital curators.
Approximation by planar elastic curves
DEFF Research Database (Denmark)
Brander, David; Gravesen, Jens; Nørbjerg, Toke Bjerge
2016-01-01
We give an algorithm for approximating a given plane curve segment by a planar elastic curve. The method depends on an analytic representation of the space of elastic curve segments, together with a geometric method for obtaining a good initial guess for the approximating curve. A gradient-driven...
Probing the A1 to L10 transformation in FeCuPt using the first order reversal curve method
Directory of Open Access Journals (Sweden)
Dustin A. Gilbert
2014-08-01
The A1-L10 phase transformation has been investigated in (001) FeCuPt thin films prepared by atomic-scale multilayer sputtering and rapid thermal annealing (RTA). Traditional x-ray diffraction is not always applicable in generating a true order parameter, due to non-ideal crystallinity of the A1 phase. Using the first-order reversal curve (FORC) method, the A1 and L10 phases are deconvoluted into two distinct features in the FORC distribution, whose relative intensities change with the RTA temperature. The L10 ordering takes place via a nucleation-and-growth mode. A magnetization-based phase fraction is extracted, providing a quantitative measure of the L10 phase homogeneity.
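The FORC distribution referred to here is conventionally defined as the mixed second derivative ρ(H_r, H) = -½ ∂²M/(∂H_r ∂H) of the magnetization measured over a grid of reversal fields H_r and applied fields H. A minimal numerical sketch of that standard definition (not the authors' processing pipeline, which typically also includes smoothing):

```python
import numpy as np

def forc_distribution(M, H_r, H):
    """FORC distribution rho = -1/2 * d2M / (dHr dH).

    M   : 2D array, M[i, j] = magnetization at reversal field H_r[i]
          and applied field H[j]
    H_r : 1D array of reversal-field values (axis 0 of M)
    H   : 1D array of applied-field values (axis 1 of M)
    """
    dM_dH = np.gradient(M, H, axis=1)             # inner derivative along H
    rho = -0.5 * np.gradient(dM_dH, H_r, axis=0)  # outer derivative along H_r
    return rho
```

For the bilinear test surface M = H_r·H the mixed derivative is exactly 1, so ρ is the constant -0.5, a quick sanity check on the sign and axis conventions.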
Standard Test Method for Measuring Binocular Disparity in Transparent Parts
American Society for Testing and Materials. Philadelphia
2009-01-01
1.1 This test method covers the amount of binocular disparity that is induced by transparent parts such as aircraft windscreens, canopies, HUD combining glasses, visors, or goggles. This test method may be applied to parts of any size, shape, or thickness, individually or in combination, so as to determine the contribution of each transparent part to the overall binocular disparity present in the total “viewing system” being used by a human operator. 1.2 This test method represents one of several techniques that are available for measuring binocular disparity, but is the only technique that yields a quantitative figure of merit that can be related to operator visual performance. 1.3 This test method employs apparatus currently being used in the measurement of optical angular deviation under Method F 801. 1.4 The values stated in inch-pound units are to be regarded as standard. The values given in parentheses are mathematical conversions to SI units that are provided for information only and are not con...
Farahnak, P.; Urbanek, M.; Džugan, J.
2017-09-01
The Forming Limit Curve (FLC) is a well-known tool for evaluating failure in sheet metal forming processes. However, its experimental determination and evaluation are rather complex. From a theoretical point of view, the FLC describes the initiation of instability, not fracture. In recent years, Digital Image Correlation (DIC) techniques have developed extensively. Throughout this paper, all measurements were done using DIC and, as reported in the literature, different approaches to capturing the necking and fracture phenomena were investigated: the Cross Section Method (CSM), the Time Dependent Method (TDM), and the Thinning Method (TM). Each of these methods has advantages and disadvantages. Moreover, a cruciform specimen was used in order to cover the whole FLC in the range from uniaxial to equi-biaxial tension, as an alternative to the Nakajima test. Given the above-mentioned uncertainty about the fracture strain, some advanced numerical failure models can describe the necking and fracture phenomena accurately while accounting for anisotropic effects. In this paper, dog-bone, notch, and circular disk specimens are used to calibrate the Johnson-Cook (J-C) fracture model. The results are discussed for mild steel DC01.
Directory of Open Access Journals (Sweden)
Konings Maurits K
2012-08-01
Background: In this paper a new non-invasive, operator-free, continuous ventricular stroke volume monitoring device (Hemodynamic Cardiac Profiler, HCP) is presented that measures the average stroke volume (SV) for each period of 20 seconds, as well as ventricular volume-time curves for each cardiac cycle, using a new electric method (Ventricular Field Recognition) with six independent electrode pairs distributed over the frontal thoracic skin. In contrast to existing non-invasive electric methods, our method does not use the algorithms of impedance or bioreactance cardiography. Instead, our method is based on specific 2D spatial patterns on the thoracic skin, representing the distribution, over the thorax, of changes in the applied current field caused by cardiac volume changes during the cardiac cycle. Since total heart volume variation during the cardiac cycle is a poor indicator of ventricular stroke volume, our HCP separates atrial filling effects from ventricular filling effects and retrieves the volume changes of only the ventricles. Methods: Ex-vivo experiments on a post-mortem human heart were performed to measure the effects of increasing the blood volume inside the ventricles in isolation, leaving the atrial volume invariant (which cannot be done in-vivo). These effects were measured as a specific 2D pattern of voltage changes on the thoracic skin. Furthermore, a working prototype of the HCP was developed that uses these ex-vivo results in an algorithm to decompose voltage changes, measured in-vivo by the HCP on the thoracic skin of a human volunteer, into an atrial component and a ventricular component, in almost real time (with a delay of at most 39 seconds). The HCP prototype was tested in-vivo on 7 human volunteers, using G-suit inflation and deflation to provoke stroke volume changes, and LVOT Doppler as a reference technique. Results: The ex-vivo measurements showed that ventricular filling
Standard Test Method for Cavitation Erosion Using Vibratory Apparatus
American Society for Testing and Materials. Philadelphia
2010-01-01
1.1 This test method covers the production of cavitation damage on the face of a specimen vibrated at high frequency while immersed in a liquid. The vibration induces the formation and collapse of cavities in the liquid, and the collapsing cavities produce the damage to and erosion (material loss) of the specimen. 1.2 Although the mechanism for generating fluid cavitation in this method differs from that occurring in flowing systems and hydraulic machines (see 5.1), the nature of the material damage mechanism is believed to be basically similar. The method therefore offers a small-scale, relatively simple and controllable test that can be used to compare the cavitation erosion resistance of different materials, to study in detail the nature and progress of damage in a given material, or—by varying some of the test conditions—to study the effect of test variables on the damage produced. 1.3 This test method specifies standard test conditions covering the diameter, vibratory amplitude and frequency of the...
Standard test method for measurement of soil resistivity using the two-electrode soil box method
American Society for Testing and Materials. Philadelphia
2005-01-01
1.1 This test method covers the equipment and a procedure for the measurement of soil resistivity, for samples removed from the ground, for use in the control of corrosion of buried structures. 1.2 Procedures allow for this test method to be used in the field or in the laboratory. 1.3 The test method procedures are for the resistivity measurement of soil samples in the saturated condition and in the as-received condition. 1.4 The values stated in SI units are to be regarded as the standard. The values given in parentheses are for information only. Soil resistivity values are reported in ohm-centimeter. This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibility of the user of this standard to establish appropriate safety and health practices and to determine the applicability of regulatory limitations prior to use.
Ben Abdessalem, A.; Jenson, F.; Calmon, P.
2016-02-01
This contribution provides an example of the possible advantages of adopting a Bayesian inversion approach to uncertainty quantification in nondestructive inspection methods. In such problems, the uncertainty associated with the random parameters is not always known and needs to be characterized from scattering signal measurements. The uncertainties may then be correctly propagated in order to determine a reliable probability-of-detection curve. To this end, we establish a general Bayesian framework based on a non-parametric maximum likelihood formulation and some priors from expert knowledge. However, the resulting inverse problem is time-consuming and computationally intensive. To cope with this difficulty, we replace the real model with a surrogate in order to speed up model evaluation and make the problem computationally feasible. Least-squares support vector regression is adopted as the metamodelling technique, due to its robustness in dealing with non-linear problems. We illustrate the usefulness of this methodology through the inspection of a tube with an enclosed defect using an ultrasonic method.
International Nuclear Information System (INIS)
Christensen, S.M.
1976-01-01
A method known as covariant geodesic point separation is developed to calculate the vacuum expectation value of the stress tensor for a massive scalar field in an arbitrary gravitational field. The vacuum expectation value will diverge because the stress-tensor operator is constructed from products of field operators evaluated at the same space-time point. To remedy this problem, one of the field operators is taken to a nearby point. The resultant vacuum expectation value is finite and may be expressed in terms of the Hadamard elementary function. This function is calculated using a curved-space generalization of Schwinger's proper-time method for calculating the Feynman Green's function. The expression for the Hadamard function is written in terms of the biscalar of geodetic interval which gives a measure of the square of the geodesic distance between the separated points. Next, using a covariant expansion in terms of the tangent to the geodesic, the stress tensor may be expanded in powers of the length of the geodesic. Covariant expressions for each divergent term and for certain terms in the finite portion of the vacuum expectation value of the stress tensor are found. The properties, uses, and limitations of the results are discussed
Park, Young-Seok; Chang, Mi-Sook; Lee, Seung-Pyo
2011-01-01
This study attempted to establish three-dimensional average curves of the gingival line of maxillary teeth using reconstructed virtual models to utilize as guides for dental implant restorations. Virtual models from 100 full-mouth dental stone cast sets were prepared with a three-dimensional scanner and special reconstruction software. Marginal gingival lines were defined by transforming the boundary points to the NURBS (nonuniform rational B-spline) curve. Using an iterative closest point algorithm, the sample models were aligned and the gingival curves were isolated. Each curve was tessellated by 200 points using a uniform interval. The 200 tessellated points of each sample model were averaged according to the index of each model. In a pilot experiment, regression and fitting analysis of one obtained average curve was performed to depict it as mathematical formulae. The three-dimensional average curves of six maxillary anterior teeth, two maxillary right premolars, and a maxillary right first molar were obtained, and their dimensions were measured. Average curves of the gingival lines of young people were investigated. It is proposed that dentists apply these data to implant platforms or abutment designs to achieve ideal esthetics. The curves obtained in the present study may be incorporated as a basis for implant component design to improve the biologic nature and related esthetics of restorations.
Simulating Supernova Light Curves
Energy Technology Data Exchange (ETDEWEB)
Even, Wesley Paul [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Dolence, Joshua C. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2016-05-05
This report discusses supernova light simulations. A brief review of supernovae, basics of supernova light curves, simulation tools used at LANL, and supernova results are included. Further, it happens that many of the same methods used to generate simulated supernova light curves can also be used to model the emission from fireballs generated by explosions in the earth’s atmosphere.
Linhart, S. Mike; Nania, Jon F.; Sanders, Curtis L.; Archfield, Stacey A.
2012-01-01
-mean-square error ranged from 13.0 to 5.3 percent. Root-mean-square-error observations standard-deviation-ratio values ranged from 0.80 to 0.40. Percent-bias values ranged from 25.4 to 4.0 percent. Untransformed streamflow Nash-Sutcliffe efficiency values ranged from 0.84 to 0.35. The logarithm (base 10) streamflow Nash-Sutcliffe efficiency values ranged from 0.86 to 0.56. For the streamgage with the best agreement between observed and estimated streamflow, higher streamflows appear to be underestimated. For the streamgage with the worst agreement between observed and estimated streamflow, low flows appear to be overestimated whereas higher flows seem to be underestimated. Estimated cumulative streamflows for the period October 1, 2004, to September 30, 2009, are underestimated by -25.8 and -7.4 percent for the closest and poorest comparisons, respectively. For the Flow Duration Curve Transfer method, results of the validation study conducted by using the same six streamgages show that differences between the root-mean-square error and the mean absolute error ranged from 437 to 93.9 ft3/s, with the larger value signifying a greater occurrence of outliers between observed and estimated streamflows. Root-mean-square-error values ranged from 906 to 169 ft3/s. Values of the percent root-mean-square-error ranged from 67.0 to 25.6 percent. The logarithm (base 10) streamflow percent root-mean-square error ranged from 12.5 to 4.4 percent. Root-mean-square-error observations standard-deviation-ratio values ranged from 0.79 to 0.40. Percent-bias values ranged from 22.7 to 0.94 percent. Untransformed streamflow Nash-Sutcliffe efficiency values ranged from 0.84 to 0.38. The logarithm (base 10) streamflow Nash-Sutcliffe efficiency values ranged from 0.89 to 0.48. For the streamgage with the closest agreement between observed and estimated streamflow, there is relatively good agreement between observed and estimated streamflows. 
For the streamgage with the poorest agreement between observed and
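The goodness-of-fit statistics quoted throughout this record (root-mean-square error, Nash-Sutcliffe efficiency, percent bias, and the RMSE-to-observations-standard-deviation ratio) follow standard hydrologic definitions. A sketch assuming those usual formulas (note that the sign convention for percent bias varies between studies):

```python
import numpy as np

def skill_metrics(obs, sim):
    """Common streamflow skill metrics (standard definitions assumed)."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    err = sim - obs
    rmse = np.sqrt(np.mean(err ** 2))                             # root-mean-square error
    nse = 1.0 - np.sum(err ** 2) / np.sum((obs - obs.mean()) ** 2)  # Nash-Sutcliffe efficiency
    pbias = 100.0 * err.sum() / obs.sum()                         # percent bias
    rsr = rmse / obs.std()                                        # RMSE / std-dev of observations
    return {"rmse": rmse, "nse": nse, "pbias": pbias, "rsr": rsr}
```

A perfect simulation gives NSE = 1, RMSE = 0, and percent bias = 0; NSE below 0 means the simulation is worse than simply predicting the observed mean.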
International Nuclear Information System (INIS)
Santos, Calink Indiara do Livramento; Carvalho, Melissa Souza; Raphael, Ellen; Ferrari, Jefferson Luis; Schiavon, Marco Antonio; Dantas, Clecio
2016-01-01
In this work a colloidal approach was applied to synthesize water-soluble CdSe quantum dots (QDs) bearing a surface ligand such as thioglycolic acid (TGA), 3-mercaptopropionic acid (MPA), glutathione (GSH), or thioglycerol (TGH). The synthesized material was characterized by X-ray diffraction (XRD), Fourier-transform infrared spectroscopy (FT-IR), UV-visible spectroscopy (UV-Vis), and fluorescence spectroscopy (PL). Additionally, a comparative study of the optical properties of the different CdSe QDs was performed, demonstrating how the surface ligand affected crystal growth. The particle sizes were calculated from a polynomial function that correlates particle size with the position of the fluorescence maximum. Curve resolution methods (EFA and MCR-ALS) were employed to decompose a series of fluorescence spectra, in order to investigate the CdSe QD size distribution and determine the number of fractions with different particle sizes. The results for the MPA-capped CdSe sample showed only two main fractions with different particle sizes, with emission maxima at 642 and 686 nm. The diameters calculated from these emission maxima were, respectively, 2.74 and 3.05 nm. (author)
Directory of Open Access Journals (Sweden)
Elin Yusibani
2013-12-01
Application of the curved vibrating wire method (CVM) to measure gas viscosity has been widely used. A fine tungsten wire with a diameter of 50 mm is bent into a semi-circular shape and arranged symmetrically in a magnetic field of about 0.2 T. The frequency domain is used for calculating the viscosity as the response to forced oscillation of the wire. Internal friction is one of the parameters of the CVM that has to be measured beforehand. The internal friction coefficient of the wire material, which is the inverse of the quality factor, has to be measured under vacuum. The term involving internal friction actually represents the effective resistance to motion due to all non-viscous damping phenomena, including internal friction and magnetic damping. The internal friction measurements show that, at different induced voltages and elevated temperatures under vacuum, the internal friction of tungsten is around 1×10⁻⁴ to 4×10⁻⁴.
Huang, Norden E. (Inventor)
2004-01-01
A computer implemented physical signal analysis method includes four basic steps and the associated presentation techniques of the results. The first step is a computer implemented Empirical Mode Decomposition that extracts a collection of Intrinsic Mode Functions (IMF) from nonlinear, nonstationary physical signals. The decomposition is based on the direct extraction of the energy associated with various intrinsic time scales in the physical signal. Expressed in the IMF's, they have well-behaved Hilbert Transforms from which instantaneous frequencies can be calculated. The second step is the Hilbert Transform which produces a Hilbert Spectrum. Thus, the invention can localize any event on the time as well as the frequency axis. The decomposition can also be viewed as an expansion of the data in terms of the IMF's. Then, these IMF's, based on and derived from the data, can serve as the basis of that expansion. The local energy and the instantaneous frequency derived from the IMF's through the Hilbert transform give a full energy-frequency-time distribution of the data which is designated as the Hilbert Spectrum. The third step filters the physical signal by combining a subset of the IMFs. In the fourth step, a curve may be fitted to the filtered signal which may not have been possible with the original, unfiltered signal.
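The sifting operation at the heart of the first step above, extracting an Intrinsic Mode Function by repeatedly subtracting the mean of spline envelopes through the local extrema, can be sketched as follows. This is a crude illustration with a fixed number of passes and no stopping criterion, not the patented implementation:

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.signal import argrelextrema

def sift_once(x, t):
    """One sifting pass: subtract the mean of the upper and lower
    envelopes (cubic splines through local maxima/minima)."""
    maxi = argrelextrema(x, np.greater)[0]
    mini = argrelextrema(x, np.less)[0]
    if len(maxi) < 2 or len(mini) < 2:
        return None  # too few extrema: x is a residual trend, not an IMF
    upper = CubicSpline(t[maxi], x[maxi])(t)
    lower = CubicSpline(t[mini], x[mini])(t)
    return x - (upper + lower) / 2.0

def first_imf(x, t, n_sifts=10):
    """Crude first IMF: a fixed number of sifting passes."""
    h = x
    for _ in range(n_sifts):
        nxt = sift_once(h, t)
        if nxt is None:
            break
        h = nxt
    return h
```

Once an IMF is in hand, `scipy.signal.hilbert` gives its analytic signal, whose unwrapped phase derivative is the instantaneous frequency used to build the Hilbert Spectrum described in step two.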
Energy Technology Data Exchange (ETDEWEB)
Santos, Calink Indiara do Livramento; Carvalho, Melissa Souza; Raphael, Ellen; Ferrari, Jefferson Luis; Schiavon, Marco Antonio, E-mail: schiavon@ufsj.edu.br [Universidade Federal de Sao Joao del-Rei (UFSJ), MG (Brazil). Grupo de Pesquisa em Quimica de Materiais; Dantas, Clecio [Universidade Estadual do Maranhao (LQCINMETRIA/UEMA), Caxias, MA (Brazil). Lab. de Quimica Computacional Inorganica e Quimiometria
2016-11-15
In this work a colloidal approach to synthesize water-soluble CdSe quantum dots (QDs) bearing a surface ligand, such as thioglycolic acid (TGA), 3-mercaptopropionic acid (MPA), glutathione (GSH), or thioglycerol (TGH) was applied. The synthesized material was characterized by X-ray diffraction (XRD), Fourier-transform infrared spectroscopy (FT-IR), UV-visible spectroscopy (UV-Vis), and fluorescence spectroscopy (PL). Additionally, a comparative study of the optical properties of different CdSe QDs was performed, demonstrating how the surface ligand affected crystal growth. The particles sizes were calculated from a polynomial function that correlates the particle size with the maximum fluorescence position. Curve resolution methods (EFA and MCR-ALS) were employed to decompose a series of fluorescence spectra to investigate the CdSe QDs size distribution and determine the number of fraction with different particle size. The results for the MPA-capped CdSe sample showed only two main fraction with different particle sizes with maximum emission at 642 and 686 nm. The calculated diameters from these maximum emission were, respectively, 2.74 and 3.05 nm. (author)
Study of adsorption states in ZnO—Ag gas-sensitive ceramics using the ECTV curves method
Directory of Open Access Journals (Sweden)
Lyashkov A. Yu.
2013-12-01
The ZnO-Ag ceramic system was proposed as a material for semiconductor sensors of ethanol vapors quite a long time ago. The main goal of this work was to study the surface electron states of this system and their relation to the electric properties of the material. The Ag2O doping level was varied in the range 0.1-2.0% by mass. Increasing the Ag doping shifts the Fermi level down (closer to the valence band). The paper presents research results on the electrical properties of ZnO-Ag ceramics using the method of thermal vacuum curves of electrical conductivity. Changes in the electrical properties during heating in vacuum in the temperature range 300-800 K were obtained and discussed. Increasing Tvac leads to removal of oxygen from the surface of the samples. The oxygen is adsorbed in the form of O2– and O– ions and acts as an acceptor for ZnO. This results in the lowering of the inter-crystallite potential barriers in the ceramic. The surface electron states (SES) above the Fermi level are virtually uncharged. The increase of the conductivity causes desorption of oxygen from the SES located below the Fermi level of the semiconductor. The model allows evaluating the depth of the Fermi level in inhomogeneous semiconductor materials.
Establishing the standard method of cochlear implant in Rongchang pig.
Chen, Wei; Yi, Haijin; Zhang, Liang; Ji, Fei; Yuan, Shuolong; Zhang, Yue; Ren, Lili; Li, Jianan; Chen, Lei; Guo, Weiwei; Yang, Shiming
2017-05-01
In this investigation, a large mammal, the Rongchang pig, was used to establish a research platform for studying the cochlear implants in routine clinical use. The aim of this study was to establish a standard method of cochlear implantation in a large mammal, the pig. Rongchang pigs were selected and divided into two groups: a normal-hearing group (Mitf +/+) and a mutation group with hearing loss (Mitf -/-). Cochlear implants were inserted, and ABR and EABR were recorded. The implanted electrodes were observed by X-ray and HE staining. Successful cochlear implantation and the best electrode position could be defined in all animals; the coiling of the cochlea reached 1.5-1.75 turns. Immediately after cochlear implantation, the ABR threshold of the operated (right) ear could not be derived for any frequency at 120 dB SPL. Moreover, 7 days after surgery, the low-frequency ABR threshold of the operated (right) ear could be derived partly at 100 dB SPL, but the high-frequency ABR threshold could not be derived at 120 dB SPL. Immediately and 1 week after implantation, the EABR threshold was 90 CL in the Mitf +/+ group, obviously lower than the 190 CL in the Mitf -/- group.
A Method for Developing Standard Patient Education Program.
Lura, Carolina Bryne; Hauch, Sophie Misser Pallesgaard; Gøeg, Kirstine Rosenbeck; Pape-Haugaard, Louise
2018-01-01
In Denmark, patients treated at haematology outpatient departments are instructed to self-manage blood sample collection from their central venous catheter (CVC). However, this is a complex and risky procedure that can jeopardize patient safety. The aim of the study was to suggest a method for developing standard digital patient education programs for patients self-administering blood samples drawn from a CVC. The Design Science Research paradigm was used to develop a digital patient education program, called PAVIOSY, to increase patient safety during execution of the blood sample collection procedure, using videos for teaching as well as procedural support. A step-by-step guide was developed and used as the basis for making the videos. Quality assurance through evaluation with a nurse was conducted on both the step-by-step guide and the videos. The quality assurance evaluation of the videos showed that: 1) errors due to the order of the procedure can be detected by reviewing the videos, even when the guide was followed; 2) videos can be used to identify errors in the procedure - important for patient safety - that are not identifiable in a written script. To ensure correct clinical content of the educational patient system, health professionals must be engaged early in the development of the content and design phases.
Kollmann-Camaiora, A; Brogly, N; Alsina, E; Gilsanz, F
2017-10-01
Although ultrasound is a basic competence for anaesthesia residents (AR), there are few data available on the learning process. This prospective observational study aims to assess the learning process of ultrasound-guided continuous femoral nerve block and to determine the number of procedures a resident needs to perform in order to reach proficiency, using the cumulative sum (CUSUM) method. We recruited 19 AR without previous experience. Learning curves were constructed using the CUSUM method for ultrasound-guided continuous femoral nerve block, considering 2 success criteria: a decrease of pain score >2 on a [0-10] scale after 15 minutes, and the time required to perform the block. We analysed data from 17 AR, for a total of 237 ultrasound-guided continuous femoral nerve blocks. 8/17 AR became proficient for pain relief; however, all the AR who did more than 12 blocks (8/8) became proficient. As for performance time, 5/17 AR achieved the objective of 12 minutes; however, all the AR who did more than 20 blocks (4/4) achieved it. The number of procedures needed to achieve proficiency seems to be 12; however, it takes more procedures to reduce performance time. The CUSUM methodology could be useful in training programs to allow early intervention in case of repeated failures, and to develop competence-based curricula. Copyright © 2017 Sociedad Española de Anestesiología, Reanimación y Terapéutica del Dolor. Publicado por Elsevier España, S.L.U. All rights reserved.
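In its common Bernoulli form, the CUSUM technique referred to here accumulates log-likelihood ratios comparing an acceptable failure rate p0 against an unacceptable rate p1, signalling when the sum crosses a decision limit h. A sketch under that standard formulation (the parameter values below are illustrative, not those of the study):

```python
import math

def cusum_failures(outcomes, p0=0.1, p1=0.3, h=2.0):
    """Bernoulli CUSUM over a failure sequence (1 = failure, 0 = success).

    p0: acceptable failure rate, p1: unacceptable rate, h: decision limit.
    Returns the running CUSUM path and the index of the first alarm
    (or None). Increments are log-likelihood ratios of the two hypotheses.
    """
    w_fail = math.log(p1 / p0)              # positive: failures push the sum up
    w_succ = math.log((1 - p1) / (1 - p0))  # negative: successes pull it down
    c, path, alarm = 0.0, [], None
    for i, x in enumerate(outcomes):
        c = max(0.0, c + (w_fail if x else w_succ))
        path.append(c)
        if alarm is None and c >= h:
            alarm = i
    return path, alarm
```

Plotting the path per trainee gives the learning curve: a trainee is judged proficient once the chart stays below the limit over a sustained run of procedures.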
Feldman, Liane S; Cao, Jiguo; Andalib, Amin; Fraser, Shannon; Fried, Gerald M
2009-08-01
Although the "learning curve" is commonly analyzed by splitting the data into arbitrary chunks of experience, this does not allow for precise estimation of where the curve plateaus or of the rate at which learning is achieved. Our objective was to describe a simple way to characterize the learning curve for a fundamental laparoscopic task. Sixteen medical students performed 40 repetitions of the Fundamentals of Laparoscopic Surgery (FLS) pegboard task and were scored using validated metrics. A learning curve was plotted and nonlinear regression was used to fit an inverse curve (Y = a - b/X), yielding an estimate of a (asymptote) and b (slope) for each subject. Two values were derived from these estimates: the "learning plateau," defined as the theoretical best score achievable (when X = infinity, Y = a), and the "learning rate," defined as the number of trials required to reach 90% of potential (Y = 0.9a when X = 10b/a). Analysis of variance (ANOVA) was used to compare subjects reporting an interest in a surgical career (n = 4) to those not interested (n = 4) or undecided (n = 8). Data are expressed as mean values +/- standard deviations. The raw starting score was 48 +/- 24, increasing to 94 +/- 8 for the 40th trial. The curve-fit estimate of the "learning plateau" was 90 +/- 10 (range, 61-99), whereas the "learning rate," or the number of trials to 90% of potential, was 6 +/- 2 (range, 2-11). Subjects not interested in a surgical career had lower starting scores and learning plateaus and a slower learning rate compared with subjects interested in surgery or undecided (ANOVA; P < .05). Nonlinear regression thus provided estimates of the learning plateau and learning speed for this fundamental laparoscopic task. These parameters allowed comparisons to be made within subgroups of subjects and may have utility as an outcome for educational interventions designed to impact the learning curve.
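The inverse model Y = a - b/X and the two derived quantities are straightforward to fit with off-the-shelf nonlinear regression. A scipy-based sketch (not the authors' code); note that setting Y = 0.9a in the model gives X = b/(0.1a) = 10b/a, matching the abstract's definition of the learning rate:

```python
import numpy as np
from scipy.optimize import curve_fit

def fit_learning_curve(trials, scores):
    """Fit Y = a - b/X and derive the plateau and learning rate.

    Returns (plateau, learning_rate): 'plateau' is the asymptote a,
    'learning_rate' the trial count at which Y reaches 0.9a (X = 10b/a).
    """
    inv = lambda x, a, b: a - b / x
    (a, b), _ = curve_fit(inv, trials, scores, p0=(max(scores), 1.0))
    return a, 10.0 * b / a
```

Because the model is linear in the parameters a and b, the fit is well behaved and insensitive to the starting guess.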
Kobayashi, R.; Koketsu, K.
2008-12-01
Great earthquakes along the Sagami trough, where the Philippine Sea slab is subducting, have repeatedly occurred. The 1703 Genroku and 1923 (Taisho) Kanto earthquakes (M 8.2 and M 7.9, respectively) are known as typical ones, and caused severe damage in the metropolitan area. The recurrence periods of Genroku- and Taisho-type earthquakes inferred from studies of wave-cut terraces are about 200-400 and 2000 years, respectively (e.g., Earthquake Research Committee, 2004). We have inferred the source process of the 1923 Kanto earthquake from geodetic, teleseismic, and strong motion data (Kobayashi and Koketsu, 2005). Two asperities of the 1923 Kanto earthquake are located around the western part of Kanagawa prefecture (the base of the Izu peninsula) and around the Miura peninsula. After we adopted an updated fault plane model, based on a recent model of the Philippine Sea slab, the asperity around the Miura peninsula moved to the north (Sato et al., 2005). We have also investigated the slip distribution of the 1703 Genroku earthquake. We used crustal uplift and subsidence data investigated by Shishikura (2003), and inferred the slip distribution using the same fault geometry as for the 1923 Kanto earthquake. The peak slip of 16 m is located in the southern part of the Boso peninsula. The shape of the upper surface of the Philippine Sea slab is important for constraining the extent of the asperities. Sato et al. (2005) presented the shape in the inland part, but there is less information for the oceanic part, except for Tokyo Bay. Kimura (2006) and Takeda et al. (2007) presented the shape in the oceanic part. In this study, we compiled these slab models and planned to reanalyze the slip distributions of the 1703 and 1923 earthquakes. We developed a new curved fault plane on the plate boundary between the Philippine Sea slab and the inland plate. The curved fault plane was divided into 56 triangular subfaults. Point sources for the Green's function calculations are located at centroids
Song, Jin-Zi; Wan, Min; Xu, Hui; Yao, Xiu-Jun; Zhang, Bo; Wang, Jin-Hong
2009-09-01
This article discusses standardization and normalization for product standards of medical devices. It analyzes problems related to physical performance requirements and test methods during the product standard drafting process and makes corresponding suggestions.
International Nuclear Information System (INIS)
Liang, Fusheng; Zhao, Ji; Ji, Shijun; Zhang, Bing; Fan, Cheng
2017-01-01
The B-spline curve has been widely used in the reconstruction of measurement data. Error-bounded reconstruction of sampling points can be achieved by knot addition method (KAM) based B-spline curve fitting. In KAM, the selection pattern of the initial knot vector determines the number of knots ultimately necessary. This paper provides a novel initial-knot selection method to condense the knot vector required for error-bounded B-spline curve fitting. The initial knots are determined by the distribution of features, namely the chord length (arc length) and bending degree (curvature), contained in the discrete sampling points. Firstly, the sampling points are fitted into an approximate B-spline curve Gs with a dense uniform knot vector to substitute for the description of the features of the sampling points. The feature integral of Gs is built as a monotonically increasing function in analytic form. Then, the initial knots are selected according to constant increments of the feature integral. After that, an iterative knot insertion (IKI) process starting from the initial knots is introduced to improve the fitting precision, and the ultimate knot vector for the error-bounded B-spline curve fitting is achieved. Lastly, two simulations and a measurement experiment are provided, and the results indicate that the proposed knot selection method can reduce the number of knots ultimately required. (paper)
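As a rough illustration of the feature-integral idea described above, the sketch below picks initial knot sites where a cumulative chord-length-plus-bending measure reaches equal increments. The turning-angle proxy for curvature and the weight `w` are assumptions for illustration, not the paper's exact formulation:

```python
import numpy as np

def select_initial_knots(x, y, n_knots, w=1.0):
    """Pick initial knot sites where the cumulative feature (chord length
    plus w * turning angle as a bending proxy) reaches equal increments."""
    dx, dy = np.diff(x), np.diff(y)
    chord = np.hypot(dx, dy)                          # chord-length feature
    theta = np.arctan2(dy, dx)
    bend = np.abs(np.diff(theta, prepend=theta[0]))   # discrete bending degree
    feature = np.concatenate(([0.0], np.cumsum(chord + w * bend)))
    feature /= feature[-1]                            # normalized feature integral
    idx = np.clip(np.searchsorted(feature, np.linspace(0.0, 1.0, n_knots)),
                  0, len(x) - 1)
    return x[idx]

# Points on a half sine wave: knots crowd where the curve bends most.
x = np.linspace(0.0, np.pi, 200)
knots = select_initial_knots(x, np.sin(x), n_knots=7)
print(knots)
```

With a strictly increasing feature integral the selected knots are distinct and ordered, which is what downstream knot-insertion refinement needs.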
Mohammed, Yassene; Pan, Jingxi; Zhang, Suping; Han, Jun; Borchers, Christoph H
2018-03-01
Targeted proteomics using MRM with stable-isotope-labeled internal-standard (SIS) peptides is the current method of choice for protein quantitation in complex biological matrices. Better quantitation can be achieved with the internal standard-addition method, where successive increments of synthesized natural form (NAT) of the endogenous analyte are added to each sample, a response curve is generated, and the endogenous concentration is determined at the x-intercept. Internal NAT-addition, however, requires multiple analyses of each sample, resulting in increased sample consumption and analysis time. To compare the following three methods, an MRM assay for 34 high-to-moderate abundance human plasma proteins is used: classical internal SIS-addition, internal NAT-addition, and external NAT-addition-generated in buffer using NAT and SIS peptides. Using endogenous-free chicken plasma, the accuracy is also evaluated. The internal NAT-addition outperforms the other two in precision and accuracy. However, the curves derived by internal vs. external NAT-addition differ by only ≈3.8% in slope, providing comparable accuracies and precision with good CV values. While the internal NAT-addition method may be "ideal", this new external NAT-addition can be used to determine the concentration of high-to-moderate abundance endogenous plasma proteins, providing a robust and cost-effective alternative for clinical analyses or other high-throughput applications. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
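The x-intercept step of the standard-addition approach described above can be sketched as follows; the peptide amounts and responses are invented for illustration:

```python
import numpy as np

# Hypothetical standard-addition data: relative response (NAT/SIS area ratio)
# after spiking known amounts of the synthetic NAT peptide into the sample.
added = np.array([0.0, 10.0, 20.0, 40.0])        # fmol of NAT added (invented)
response = np.array([0.75, 1.25, 1.75, 2.75])    # measured response (invented)

# Least-squares line through the response curve.
slope, intercept = np.polyfit(added, response, 1)

# The line crosses zero at x = -intercept/slope; the endogenous amount
# is the magnitude of that x-intercept.
endogenous = intercept / slope
print(f"endogenous amount: {endogenous:.1f} fmol")
```

Because the unspiked sample already produces a response, the fitted line is shifted up, and extrapolating back to zero response recovers the endogenous amount.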
Directory of Open Access Journals (Sweden)
Vickers Andrew
2010-09-01
Full Text Available Abstract Background Decision curve analysis (DCA) has been proposed as an alternative method for evaluation of diagnostic tests, prediction models, and molecular markers. However, DCA is based on expected utility theory, which has been routinely violated by decision makers. Decision-making is governed by intuition (system 1) and an analytical, deliberative process (system 2); thus, rational decision-making should reflect both formal principles of rationality and intuition about good decisions. We use the cognitive emotion of regret to serve as a link between systems 1 and 2 and to reformulate DCA. Methods First, we analysed a classic decision tree describing three decision alternatives: treat, do not treat, and treat or do not treat based on a predictive model. We then computed the expected regret for each of these alternatives as the difference between the utility of the action taken and the utility of the action that, in retrospect, should have been taken. For any pair of strategies, we measure the difference in net expected regret. Finally, we employ the concept of acceptable regret to identify the circumstances under which a potentially wrong strategy is tolerable to a decision-maker. Results We developed a novel dual visual analog scale to describe the relationship between regret associated with "omissions" (e.g. failure to treat) vs. "commissions" (e.g. unnecessary treatment) and the decision maker's preferences as expressed in terms of threshold probability. We then proved that the Net Expected Regret Difference, first presented in this paper, is equivalent to net benefits as described in the original DCA. Based on the concept of acceptable regret, we identified the circumstances under which a decision maker tolerates a potentially wrong decision and expressed them in terms of probability of disease. Conclusions We present a novel method for eliciting a decision maker's preferences and an alternative derivation of DCA based on regret theory. Our approach may
THE STANDARD SINGLE COST METHOD AND THE EFFICIENCY OF INDUSTRIAL COMPANIES’ MANAGEMENT
Directory of Open Access Journals (Sweden)
Claudiu C. CONSTANTINESCU
2008-12-01
Full Text Available This article briefly describes the premises for the application of the standard direct cost calculation method in industry, the standard single cost calculation method, the stages of standard cost calculation per product and the calculation methods of standards per product. It also briefly underlines the possibilities of cost calculation and monitoring of deviation of the costs of raw materials and other materials as compared to the pre-established standard costs.
Standard test method for atom percent fission in uranium and plutonium fuel (Neodymium-148 Method)
American Society for Testing and Materials. Philadelphia
1996-01-01
1.1 This test method covers the determination of stable fission product 148Nd in irradiated uranium (U) fuel (with initial plutonium (Pu) content from 0 to 50 %) as a measure of fuel burnup (1-3). 1.2 It is possible to obtain additional information about the uranium and plutonium concentrations and isotopic abundances on the same sample taken for burnup analysis. If this additional information is desired, it can be obtained by precisely measuring the spike and sample volumes and following the instructions in Test Method E267. 1.3 This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibility of the user of this standard to establish appropriate safety and health practices and determine the applicability of regulatory limitations prior to use.
Tsalatsanis, Athanasios; Hozo, Iztok; Vickers, Andrew; Djulbegovic, Benjamin
2010-09-16
Decision curve analysis (DCA) has been proposed as an alternative method for evaluation of diagnostic tests, prediction models, and molecular markers. However, DCA is based on expected utility theory, which has been routinely violated by decision makers. Decision-making is governed by intuition (system 1) and an analytical, deliberative process (system 2); thus, rational decision-making should reflect both formal principles of rationality and intuition about good decisions. We use the cognitive emotion of regret to serve as a link between systems 1 and 2 and to reformulate DCA. First, we analysed a classic decision tree describing three decision alternatives: treat, do not treat, and treat or do not treat based on a predictive model. We then computed the expected regret for each of these alternatives as the difference between the utility of the action taken and the utility of the action that, in retrospect, should have been taken. For any pair of strategies, we measure the difference in net expected regret. Finally, we employ the concept of acceptable regret to identify the circumstances under which a potentially wrong strategy is tolerable to a decision-maker. We developed a novel dual visual analog scale to describe the relationship between regret associated with "omissions" (e.g. failure to treat) vs. "commissions" (e.g. unnecessary treatment) and the decision maker's preferences as expressed in terms of threshold probability. We then proved that the Net Expected Regret Difference, first presented in this paper, is equivalent to net benefits as described in the original DCA. Based on the concept of acceptable regret we identified the circumstances under which a decision maker tolerates a potentially wrong decision and expressed them in terms of probability of disease. We present a novel method for eliciting a decision maker's preferences and an alternative derivation of DCA based on regret theory. Our approach may be intuitively more appealing to a decision-maker, particularly
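The net benefit that the paper shows to be equivalent to the Net Expected Regret Difference can be computed directly from the original DCA definition; the cohort counts and threshold below are hypothetical:

```python
def net_benefit(tp, fp, n, pt):
    """Net benefit at threshold probability pt, as defined in the original
    DCA: true positives credited, false positives weighted by pt/(1-pt)."""
    return tp / n - (fp / n) * (pt / (1.0 - pt))

# Hypothetical cohort of 1000 patients, 200 with disease; the model-based
# "treat" strategy flags 300 patients, 180 of them truly diseased.
n, diseased = 1000, 200
nb_model = net_benefit(180, 120, n, pt=0.2)
nb_treat_all = net_benefit(diseased, n - diseased, n, pt=0.2)  # everyone treated
nb_treat_none = 0.0                                            # no one treated
print(nb_model, nb_treat_all, nb_treat_none)
```

At a given threshold probability, the strategy with the highest net benefit (here the model) is preferred over treat-all and treat-none.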
Standard test method for distribution coefficients of inorganic species by the batch method
American Society for Testing and Materials. Philadelphia
2010-01-01
1.1 This test method covers the determination of distribution coefficients of chemical species to quantify uptake onto solid materials by a batch sorption technique. It is a laboratory method primarily intended to assess sorption of dissolved ionic species subject to migration through pores and interstices of site specific geomedia. It may also be applied to other materials such as manufactured adsorption media and construction materials. Application of the results to long-term field behavior is not addressed in this method. Distribution coefficients for radionuclides in selected geomedia are commonly determined for the purpose of assessing potential migratory behavior of contaminants in the subsurface of contaminated sites and waste disposal facilities. This test method is also applicable to studies for parametric studies of the variables and mechanisms which contribute to the measured distribution coefficient. 1.2 The values stated in SI units are to be regarded as standard. No other units of measurement a...
Fischer, Leonard S; Lumsden, Antoinette; Leung, Felix W
2012-07-01
Water exchange colonoscopy has been reported to reduce examination discomfort and to provide salvage cleansing in unsedated or minimally sedated patients. The prolonged insertion time and perceived difficulty of insertion associated with water exchange have been cited as a barrier to its widespread use. To assess the feasibility of learning and using the water exchange method of colonoscopy in a U.S. community practice setting. Quality improvement program in nonacademic community endoscopy centers. Patients undergoing sedated diagnostic, surveillance, or screening colonoscopy. After direct coaching by a knowledgeable trainer, an experienced colonoscopist initiated colonoscopy using the water method. Whenever >5 min elapsed without advancing the colonoscope, conversion to air insufflation was made to ensure timely completion of the examination. The main outcome measure was the water method intention-to-treat (ITT) cecal intubation rate (CIR). Female patients had a significantly higher rate of past abdominal surgery and a significantly lower ITTCIR. The ITTCIR showed a progressive increase over time in both males and females to 85-90%. Mean insertion time was maintained at 9 to 10 min. The overall CIR was 99%. Use of water exchange did not preclude cecal intubation upon conversion to usual air insufflation in sedated patients examined by an experienced colonoscopist. With practice, ITTCIR increased over time in both male and female patients. Larger volumes of water exchanged were associated with higher ITTCIR and better quality scores of bowel preparation. The data suggest that learning water exchange by a busy colonoscopist in a community practice setting is feasible and outcomes conform to accepted quality standards.
Sun, Yan; Strobel, Johannes; Newby, Timothy J.
2017-01-01
Adopting a two-phase explanatory sequential mixed methods research design, the current study examined the impact of student teaching experiences on pre-service teachers' readiness for technology integration. In phase-1 of quantitative investigation, 2-level growth curve models were fitted using online repeated measures survey data collected from…
29 CFR 1630.7 - Standards, criteria, or methods of administration.
2010-07-01
... Standards, criteria, or methods of administration. It is unlawful for a covered entity to use standards, criteria, or methods of administration, which are not job-related and consistent with business necessity...
Lei, Yuchuan; Chen, Zhenqian; Shi, Juan
2017-12-01
Numerical simulations of condensation heat transfer of R134a in curved triangle microchannels with various curvatures are presented. The model is established on the volume of fluid (VOF) approach and user-defined routines that include mass transfer at the vapor-liquid interface and latent heat. A microgravity operating condition is assumed in order to highlight the effect of surface tension. The predictive accuracy of the model is assessed by comparing the simulated results with available correlations in the literature. Both an increased mass flux and a decreased hydraulic diameter bring better heat transfer performance. No obvious effect of the wall heat flux on the condensation heat transfer coefficient is observed. Changes in geometry and surface tension lead to a reduction of the condensate film thickness at the sides of the channel and accumulation of the condensate film at the corners of the channel. Better heat transfer performance is obtained in the curved triangle microchannels over the straight ones, and the performance can be further improved in curved triangle microchannels with larger curvatures. The minimum film thickness, where most of the heat transfer takes place, occurs near the corners and moves toward the corners in curved triangle microchannels with larger curvatures.
A.R. Ansari; B. Hossain; B. Koren (Barry); G.I. Shishkin (Gregori)
2007-01-01
We investigate the model problem of flow of a viscous incompressible fluid past a symmetric curved surface when the flow is parallel to its axis. This problem is known to exhibit boundary layers. Since the problem does not have solutions in closed form, it is modelled by boundary-layer
Groot, L.F.M.
2008-01-01
The purpose of this paper is twofold. First, it exhibits that standard tools in the measurement of income inequality, such as the Lorenz curve and the Gini-index, can successfully be applied to the issues of inequality measurement of carbon emissions and the equity of abatement policies across
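A minimal sketch of applying the Gini index to per-capita emissions, using the standard sorted-values formula (the emission figures are invented):

```python
import numpy as np

def gini(values):
    """Gini index from sorted values, via the standard formula
    G = 2*sum(i*x_i)/(n*sum(x)) - (n+1)/n, with i = 1..n ascending."""
    x = np.sort(np.asarray(values, dtype=float))
    n = x.size
    i = np.arange(1, n + 1)
    return 2.0 * np.sum(i * x) / (n * np.sum(x)) - (n + 1.0) / n

# Invented per-capita emissions (t CO2) for five countries.
emissions = [2.0, 4.0, 6.0, 8.0, 20.0]
print(gini(emissions))    # moderate inequality
print(gini([5.0] * 4))    # perfect equality gives 0
```

The same function applied to incomes gives the familiar income Gini, which is exactly the transfer of tooling the abstract describes.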
Standard test method for determining atmospheric chloride deposition rate by wet candle method
American Society for Testing and Materials. Philadelphia
2002-01-01
1.1 This test method covers a wet candle device and its use in measuring atmospheric chloride deposition (amount of chloride salts deposited from the atmosphere on a given area per unit time). 1.2 Data on atmospheric chloride deposition can be useful in classifying the corrosivity of a specific area, such as an atmospheric test site. Caution must be exercised, however, to take into consideration the season because airborne chlorides vary widely between seasons. 1.3 This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibility of the user of this standard to establish appropriate safety and health practices and determine the applicability of regulatory limitations prior to use.
Standard or Dialect? A new online elicitation method
Sloos, Marjoleine
2012-01-01
In dialectology, it is often necessary to obtain a measure for the level of dialectal accent shown by individual speakers, especially if statistical analysis is needed. This also applies to studies on standard variants which are "coloured" by regiolects or dialects. In this paper I explore the
Oh, Joo Han; Kim, Woo; Cayetano, Angel A
2017-06-01
Humeral retroversion is variable among individuals, and there are several measurement methods. This study was conducted to compare the concordance and reliability between the standard method and 5 other measurement methods on two-dimensional (2D) computed tomography (CT) scans. CT scans from 21 patients who underwent shoulder arthroplasty (19 women and 2 men; mean age, 70.1 years [range, 42 to 81 years]) were analyzed. The elbow transepicondylar axis was used as a distal reference. Proximal reference points included the central humeral head axis (standard method), the axis from the humeral center to 9 mm posterior to the posterior margin of the bicipital groove (method 1), the central axis of the bicipital groove -30° (method 2), the base axis of the triangular shaped metaphysis +2.5° (method 3), the distal humeral head central axis +2.4° (method 4), and contralateral humeral head retroversion (method 5). Measurements were conducted independently by two orthopedic surgeons. The mean humeral retroversion was 31.42° ± 12.10° using the standard method, and 29.70° ± 11.66° (method 1), 30.64° ± 11.24° (method 2), 30.41° ± 11.17° (method 3), 32.14° ± 11.70° (method 4), and 34.15° ± 11.47° (method 5) for the other methods. Interobserver reliability and intraobserver reliability exceeded 0.75 for all methods. On the test to evaluate the equality of the standard method to the other methods, the intraclass correlation coefficients (ICCs) of method 2 and method 4 differed from the ICC of the standard method for surgeon A, and the ICCs of method 2 and method 3 differed from the ICC of the standard method for surgeon B. Method 1 would be most concordant with the standard method, even though all 5 methods showed excellent agreement.
Comparative evaluation of different methods of setting hygienic standards
International Nuclear Information System (INIS)
Ramzaev, P.V.; Rodionova, L.F.; Mashneva, N.I.
1978-01-01
Long-term experiments were carried out on white mice and rats to study the relative importance of various procedures used in setting hygienic standards for exposure to adverse factors. A variety of radionuclides and chemical substances were tested and the sensitivities to them of various indices of the bodily state were determined. For each index, statistically significant minimal effective concentrations of substances were established
[Reappraisal of the standard method (Light's criteria) for identifying pleural exudates].
Porcel, José M; Peña, José M; Vicente de Vera, Carmina; Esquerda, Aureli
2006-02-18
Light's criteria remain the best method for separating pleural exudates from transudates. We assessed their operating characteristics, as well as those resulting from omitting the pleural fluid to serum lactate dehydrogenase (LDH) ratio from the original criteria (abbreviated Light criteria), in a large series of patients. We also searched for the best combination of pleural fluid parameters, including protein, LDH, and cholesterol, that identifies exudates. We conducted a retrospective study of 1,490 consecutive patients with pleural effusion who underwent a diagnostic thoracentesis. There were 1,192 exudates and 298 transudates. Sensitivity, specificity, area under the ROC curve, and odds ratio for both individual and combined pleural fluid parameters were calculated. Light's criteria yielded 97.5% sensitivity and 80% specificity. Both the abbreviated Light criteria (sensitivity: 95.4%; specificity: 83.3%) and the combined use in an "or" rule of pleural fluid protein and LDH (sensitivity: 95.4%; specificity: 80.2%) had discriminative properties similar to the standard criteria. Diagnostic separation of pleural effusions into exudates or transudates can be done effectively through the abbreviated Light criteria when the serum LDH value is not available. On the other hand, if venipuncture is to be avoided (an unusual circumstance), the combination of pleural fluid protein and LDH represents an alternative to Light's criteria.
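The decision rules compared above can be written down directly; this is a sketch of the classical Light's criteria and the abbreviated variant, with an example upper limit of normal for serum LDH (labs differ):

```python
def is_exudate(pf_protein, s_protein, pf_ldh, s_ldh, ldh_uln=222.0):
    """Classical Light's criteria: exudate if ANY criterion is met.
    ldh_uln is the lab's upper limit of normal for serum LDH (example value)."""
    return (pf_protein / s_protein > 0.5
            or pf_ldh / s_ldh > 0.6
            or pf_ldh > (2.0 / 3.0) * ldh_uln)

def is_exudate_abbreviated(pf_protein, s_protein, pf_ldh, ldh_uln=222.0):
    """Abbreviated criteria: the serum LDH ratio is omitted,
    for when no serum LDH value is available."""
    return (pf_protein / s_protein > 0.5
            or pf_ldh > (2.0 / 3.0) * ldh_uln)

# Example: pleural protein 4.5, serum protein 7.0 (g/dL); pleural LDH 300,
# serum LDH 400 (U/L) -> protein ratio 0.64 > 0.5, so exudate.
print(is_exudate(4.5, 7.0, 300.0, 400.0))   # True
print(is_exudate(2.0, 7.0, 100.0, 400.0))   # False: all three criteria fail
```

The "or" structure is what gives the criteria their high sensitivity at some cost in specificity, as the reported operating characteristics show.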
Standard test methods for arsenic in uranium hexafluoride
American Society for Testing and Materials. Philadelphia
2005-01-01
1.1 These test methods are applicable to the determination of total arsenic in uranium hexafluoride (UF6) by atomic absorption spectrometry. Two test methods are given: Test Method A—Arsine Generation-Atomic Absorption (Sections 5-10), and Test Method B—Graphite Furnace Atomic Absorption (Appendix X1). 1.2 The test methods are equivalent. The limit of detection for each test method is 0.1 μg As/g U when using a sample containing 0.5 to 1.0 g U. Test Method B does not have the complete collection details for precision and bias data thus the method appears as an appendix. 1.3 Test Method A covers the measurement of arsenic in uranyl fluoride (UO2F2) solutions by converting arsenic to arsine and measuring the arsine vapor by flame atomic absorption spectrometry. 1.4 Test Method B utilizes a solvent extraction to remove the uranium from the UO2F2 solution prior to measurement of the arsenic by graphite furnace atomic absorption spectrometry. 1.5 Both insoluble and soluble arsenic are measured when UF6 is...
Directory of Open Access Journals (Sweden)
Jesús A. Rubiano
2012-06-01
Full Text Available The work done in the organization COFFEE COLONIAL.SAS was to evaluate, analyze, and verify key information for the identification and standardization of a method for caffeine in the coffee roasting process by high-performance liquid chromatography (HPLC). Assays were performed at different stages of the roasting process to establish the right conditions for the samples, supported by transport phenomena (energy, fluids, and heat) and following the guidelines of the Colombian standard NTC ISO 2859-1. To obtain a standard curve against which coffee samples could be interpolated, a 40 ppm stock solution was prepared with caffeine (Sigma-Aldrich, analytical reagent) and Milli-Q water (Millipore Merck); from this caffeine stock, further solutions were prepared in steps of 5 micrograms per milliliter of caffeine. To determine the most suitable mobile phase, different caffeine samples from coffee were tested with water-acetonitrile mobile phases in different proportions; according to the results obtained, the mobile phase that allowed the best separation and interaction with the stationary phase was water-acetonitrile in a 20:80 ratio. After obtaining the calibration curve and the most appropriate mobile phase, the injections were made and chromatograms with defined baselines and peaks were obtained, giving a linearity correlation coefficient close to one. The next step was to find the area under the curve to determine the caffeine concentration of the samples and thus express it as a percentage.
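The calibration-and-interpolation step described above can be sketched as follows; the standard concentrations and peak areas are invented (constructed to lie exactly on a line) for illustration:

```python
import numpy as np

# Invented caffeine standards (ug/mL) and peak areas, constructed to lie
# exactly on a line so the linearity check is clean.
std_conc = np.array([5.0, 10.0, 20.0, 30.0, 40.0])
std_area = np.array([12.25, 24.5, 49.0, 73.5, 98.0])

slope, intercept = np.polyfit(std_conc, std_area, 1)
r = np.corrcoef(std_conc, std_area)[0, 1]    # correlation coefficient, ~1 for good linearity

# Interpolate an unknown sample on the calibration curve.
sample_area = 61.25
sample_conc = (sample_area - intercept) / slope
print(f"caffeine: {sample_conc:.2f} ug/mL, r = {r:.4f}")
```

With the sample concentration in hand, dividing by the dissolved coffee mass per milliliter gives the percentage the abstract refers to.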
Directory of Open Access Journals (Sweden)
Zhang Guowei
2014-01-01
Full Text Available Based on a full-scale bookcase fire experiment, a fire development model is proposed for the whole process of localized fires in large-space buildings. We found that for localized fires in large-space buildings full of wooden combustible materials, the fire growth phase can be simplified into a t2 fire with a 0.0346 kW/s2 fire growth coefficient. FDS technology is applied to study the smoke temperature curve for a 2 MW to 25 MW fire occurring within a large space with a height of 6 m to 12 m and a building area of 1 500 m2 to 10 000 m2, based on the proposed fire development model. Through the analysis of smoke temperature in various fire scenarios, a new approach is proposed to predict the smoke temperature curve. Meanwhile, a modified model of steel temperature development in localized fire is built. In the modified model, the localized fire source is treated as a point fire source to evaluate the net heat flux from the flame to the steel. The steel temperature curve in the whole process of a localized fire can be accurately predicted by the above findings. These conclusions could provide a valuable reference for fire simulation, hazard assessment, and fire protection design.
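The t2 design fire with the reported growth coefficient is easy to evaluate; the sketch below computes the heat release rate at a given time and the time to reach a target fire size:

```python
import math

ALPHA = 0.0346  # kW/s^2, fire growth coefficient from the bookcase experiment

def hrr(t):
    """Heat release rate (kW) of the t^2 design fire at time t (s)."""
    return ALPHA * t * t

def time_to_reach(q_kw):
    """Time (s) for the t^2 fire to grow to a target heat release rate (kW)."""
    return math.sqrt(q_kw / ALPHA)

# Growth to the 2 MW lower bound of the simulated fire sizes:
t_2mw = time_to_reach(2000.0)
print(f"2 MW reached after about {t_2mw:.0f} s")
```

For comparison, the standard "fast" t2 fire uses about 0.047 kW/s2, so this bookcase coefficient sits between the conventional "medium" and "fast" growth categories.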
The Standard Days Method: an addition to the arsenal of family planning method choice in Ethiopia.
Bekele, Biruhtesfa; Fantahun, Mesganaw
2012-07-01
The Standard Days Method® (SDM) is a fertility awareness-based method of family planning that helps users to identify the fertile days of the reproductive cycle (Days 8-19). To prevent pregnancy users avoid unprotected sexual intercourse during these days. A cross-sectional community-based study was conducted from December 2007 to June 2008 in four operational areas of Pathfinder International Ethiopia. A total of 184 SDM users were included in the study. Quantitative and qualitative methods of data collection were used. The aim of the study was to examine the experience of introducing the SDM at community level in Ethiopia. Of the 184 participants, 80.4% were still using the SDM at the time of the survey, with 35% having used it for between 6 and 12 months, while 42% had used it for more than a year. The majority (83%) knew that a woman is most likely to conceive halfway through her menstrual cycle, and nearly 91% correctly said that the SDM does not confer protection from sexually transmitted infections/AIDS. A substantial majority (75%) had correctly identified what each colour-coded bead represents in the CycleBeads®, and an aggregate of 90.5% of women practised all the elements of correct use. This study demonstrates the importance of the SDM in increasing the availability and accessibility of family planning, and the potential to improve family planning method choice and method mix by expanding use of the SDM.
Primary standardization of C-14 by means of CIEMAT/NIST, TDCR and 4πβ-γ methods
International Nuclear Information System (INIS)
Kuznetsova, Maria
2016-01-01
In this work, the primary standardization of a 14C solution, which emits beta particles with maximum energy 156 keV, was made by means of three different methods: the CIEMAT/NIST and TDCR (Triple to Double Coincidence Ratio) methods in liquid scintillation systems, and the tracing method in the 4πβ-γ coincidence system. A TRICARB LSC (Liquid Scintillation Counting) system, equipped with two photomultiplier tubes, was used for the CIEMAT/NIST method, with a 3H standard, which emits beta particles with maximum energy of 18.7 keV, as efficiency tracer. A HIDEX 300SL LSC system, equipped with three photomultiplier tubes, was used for the TDCR method. Samples of 14C and 3H for the liquid scintillation systems were prepared using three commercial scintillation cocktails, UltimaGold, Optiphase Hisafe3, and InstaGel-Plus, in order to compare measurement performance. All samples were prepared with 15 mL of scintillator in glass vials with low potassium concentration. Known aliquots of radioactive solution were dropped into the scintillation cocktails. In order to obtain the quenching parameter curve, a nitromethane carrier solution and 1 mL of distilled water were used. For measurements in the 4πβ-γ system, 60Co was used as the beta-gamma emitter. The SCS (software coincidence system) was applied, and the beta efficiency was varied by electronic discrimination. The behavior of the extrapolation curve was predicted with the code ESQUEMA, using the Monte Carlo technique. The 14C activities obtained by the three methods applied in this work were compared, and the results were in agreement within the experimental uncertainty. (author)
Standard test method for creep-fatigue testing
American Society for Testing and Materials. Philadelphia
2009-01-01
1.1 This test method covers the determination of mechanical properties pertaining to creep-fatigue deformation or crack formation in nominally homogeneous materials, or both by the use of test specimens subjected to uniaxial forces under isothermal conditions. It concerns fatigue testing at strain rates or with cycles involving sufficiently long hold times to be responsible for the cyclic deformation response and cycles to crack formation to be affected by creep (and oxidation). It is intended as a test method for fatigue testing performed in support of such activities as materials research and development, mechanical design, process and quality control, product performance, and failure analysis. The cyclic conditions responsible for creep-fatigue deformation and cracking vary with material and with temperature for a given material. 1.2 The use of this test method is limited to specimens and does not cover testing of full-scale components, structures, or consumer products. 1.3 This test method is primarily ...
Standard Test Methods for Constituent Content of Composite Materials
American Society for Testing and Materials. Philadelphia
2009-01-01
1.1 These test methods determine the constituent content of composite materials by one of two approaches. Method I physically removes the matrix by digestion or ignition by one of seven procedures, leaving the reinforcement essentially unaffected and thus allowing calculation of reinforcement or matrix content (by weight or volume) as well as percent void volume. Method II, applicable only to laminate materials of known fiber areal weight, calculates reinforcement or matrix content (by weight or volume), and the cured ply thickness, based on the measured thickness of the laminate. Method II is not applicable to the measurement of void volume. 1.1.1 These test methods are primarily intended for two-part composite material systems. However, special provisions can be made to extend these test methods to filled material systems with more than two constituents, though not all test results can be determined in every case. 1.1.2 The procedures contained within have been designed to be particularly effective for ce...
International Nuclear Information System (INIS)
Brandao, Jose Odinilson de C.; Souza, Priscilla L.G.; Santos, Joelan A.L.; Vilela, Eudice C.; Lima, Fabiana F.; Calixto, Merilane S.; Santos, Neide
2009-01-01
There is increasing concern that airline crew members (about one million worldwide) are exposed to measurable neutron doses. Historically, cytogenetic biodosimetry assays have been based on quantifying asymmetrical chromosome alterations (dicentrics, centric rings, and acentric fragments) in mitogen-stimulated T-lymphocytes in their first mitosis after radiation exposure. Increased levels of chromosome damage in peripheral blood lymphocytes are a sensitive indicator of radiation exposure, and they are routinely exploited for assessing radiation absorbed dose after accidental or occupational exposure. Since radiological accidents are not common, not all nations consider it economically justified to maintain biodosimetry competence. However, dependable access to biological dosimetry capabilities is critical in the event of an accident. In this paper the dose-response curve was measured for the induction of chromosomal alterations in peripheral blood lymphocytes after chronic exposure in vitro to mixed neutron-gamma fields. Blood was obtained from one healthy donor and exposed to two neutron-gamma mixed fields from 241AmBe sources (20 Ci) at the Neutron Calibration Laboratory (NCL-CRCN/NE-PE-Brazil). The evaluated absorbed doses were 0.2 Gy, 1.0 Gy, and 2.5 Gy. The dicentric chromosomes were observed at metaphase, following colcemid accumulation, and 1000 well-spread metaphase figures, stained with 5% Giemsa, were analyzed for the presence of dicentrics by two experienced scorers. Our preliminary results showed a linear dependence between absorbed radiation dose and dicentric chromosome frequencies. The dose-response curve described in this paper will contribute to the construction of the calibration curve that will be used in our laboratory for biological dosimetry. (author)
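Fitting and inverting a linear dose-response curve of the kind described above can be sketched as follows; the dicentric yields are invented for illustration:

```python
import numpy as np

# Invented dicentric yields (dicentrics per cell) at the doses studied,
# following the linear trend reported in the abstract.
dose = np.array([0.0, 0.2, 1.0, 2.5])                # Gy
dic_yield = np.array([0.001, 0.009, 0.041, 0.101])   # per cell (invented)

# Linear calibration Y = background + slope * D.
slope, background = np.polyfit(dose, dic_yield, 1)

def estimate_dose(observed_yield):
    """Invert the linear calibration to estimate absorbed dose (Gy)."""
    return (observed_yield - background) / slope

# A sample showing 50 dicentrics in 1000 scored cells (yield 0.05 per cell):
print(estimate_dose(0.05))
```

In practice, low-LET photon curves are usually linear-quadratic and Poisson counting uncertainty on the yield must be propagated; the linear form here matches the neutron-dominated response the abstract reports.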
International Nuclear Information System (INIS)
Alvarez, M.; Cano, W.
1986-10-01
Radioisotope-induced X-ray fluorescence using Cd-109 was used for the determination of iron, nickel, copper, zinc, lead and mercury in water. These metals were concentrated by precipitation with the chelating agent APDC, and the precipitate formed was filtered using a membrane filter. Cobalt was added as an internal standard. Minimum detection limits, sensitivities and calibration curve linearities were obtained to establish the limits of the method. The usefulness of the method is illustrated by analysing synthetic standard solutions. As an application, analytical results are given for water from a highly polluted river area. (Author)
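The internal-standard normalization mentioned above might look like this in outline; the cobalt amount, the per-element sensitivity factors, and the `concentration_ug` helper are assumed illustrative values, not the paper's calibration data.

```python
# Sketch of internal-standard normalization for XRF quantification.
# The cobalt amount and per-element sensitivity factors (relative
# response per microgram, taken from calibration curves) are assumed
# illustrative numbers.
COBALT_ADDED_UG = 10.0
SENSITIVITY = {"Fe": 0.85, "Ni": 1.10, "Cu": 1.25}   # relative to Co

def concentration_ug(analyte_counts, cobalt_counts, element):
    # Taking the ratio to the Co internal standard cancels variations
    # in sample mass, filter geometry and source intensity.
    return (analyte_counts / cobalt_counts) * COBALT_ADDED_UG / SENSITIVITY[element]
```

The design choice here is the ratio itself: any factor that scales all count rates equally (source decay, detector geometry) cancels out of the analyte/Co quotient.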
A method for developing standard patient education program
DEFF Research Database (Denmark)
Lura, Carolina Bryne; Hauch, Sophie Misser Pallesgaard; Gøeg, Kirstine Rosenbeck
2018-01-01
procedure by using videos for teaching as well as procedural support. A step-by-step guide was developed and used as the basis for making the videos. Quality assurance through evaluation with a nurse was conducted on both the step-by-step guide and the videos. The quality assurance evaluation of the videos...... for developing standard digital patient education programs for patients in self-administration of blood samples drawn from CVC. The Design Science Research Paradigm was used to develop a digital patient education program, called PAVIOSY, to increase patient safety during execution of the blood sample collection...... of the educational patient system, health professionals must be engaged early in the development of the content and design phases.
Tanioka, Y.; Miranda, G. J. A.; Gusman, A. R.
2017-12-01
Recently, tsunami early warning techniques have been improved using tsunami waveforms observed at ocean-bottom pressure gauges such as the NOAA DART system or the DONET and S-NET systems in Japan. However, for early warning of near-field tsunamis, it is essential to determine appropriate source models by seismological analysis before large tsunamis hit the coast, especially for tsunami earthquakes, which generate significantly large tsunamis. In this paper, we develop a technique to determine appropriate source models from which tsunami inundation along the coast can be numerically computed. The technique is tested on four large earthquakes that occurred off Central America: the 1992 Nicaragua tsunami earthquake (Mw 7.7), the 2001 El Salvador earthquake (Mw 7.7), the 2004 El Astillero earthquake (Mw 7.0), and the 2012 El Salvador-Nicaragua earthquake (Mw 7.3). In this study, fault parameters were estimated from the W-phase inversion, and the fault length and width were then determined from scaling relationships. At first, the slip amount was calculated from the seismic moment with a constant rigidity of 3.5 × 10^10 N/m². Tsunami numerical simulations were carried out and compared with the observed tsunamis. For the 1992 Nicaragua tsunami earthquake, the computed tsunami was much smaller than the observed one; for the 2004 El Astillero earthquake, the computed tsunami was overestimated. In order to solve this problem, we constructed a depth-dependent rigidity curve similar to that suggested by Bilek and Lay (1999). The curve, with a central depth estimated by the W-phase inversion, was used to calculate the slip amount of the fault model. Using these new slip amounts, the tsunami numerical simulations were carried out again. The observed tsunami heights, run-up heights, and inundation areas for the 1992 Nicaragua tsunami earthquake were then well explained by the computed ones, and the tsunamis from the other three earthquakes were also reasonably well explained.
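The slip-recalculation step rests on the standard moment relation M0 = μ·L·W·s; a lower rigidity at shallow depth therefore implies a larger slip for the same moment. The fault dimensions and the reduced shallow rigidity below are assumed numbers for illustration, not the study's values.

```python
def slip_from_moment(m0, rigidity, length, width):
    """Average slip (m) from the moment relation M0 = mu * L * W * s."""
    return m0 / (rigidity * length * width)

# Seismic moment for Mw 7.7 via M0 = 10**(1.5*Mw + 9.1)  (N m).
m0 = 10 ** (1.5 * 7.7 + 9.1)

# Assumed fault dimensions; the lower shallow rigidity (1.0e10 N/m^2)
# stands in for a Bilek-and-Lay-style depth-dependent value.
length, width = 100e3, 40e3              # m
slip_const   = slip_from_moment(m0, 3.5e10, length, width)
slip_shallow = slip_from_moment(m0, 1.0e10, length, width)
# Larger slip at shallow depth means a larger computed tsunami
# for the same seismic moment, as the record describes.
```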
Standard guide for three methods of assessing buried steel tanks
American Society for Testing and Materials. Philadelphia
1998-01-01
1.1 This guide covers procedures to be implemented prior to the application of cathodic protection for evaluating the suitability of a tank for upgrading by cathodic protection alone. 1.2 Three procedures are described and identified as Methods A, B, and C. 1.2.1 Method A—Noninvasive with primary emphasis on statistical and electrochemical analysis of external site environment corrosion data. 1.2.2 Method B—Invasive ultrasonic thickness testing with external corrosion evaluation. 1.2.3 Method C—Invasive permanently recorded visual inspection and evaluation including external corrosion assessment. 1.3 This guide presents the methodology and the procedures utilizing site and tank specific data for determining a tank's condition and the suitability for such tanks to be upgraded with cathodic protection. 1.4 The tank's condition shall be assessed using Method A, B, or C. Prior to assessing the tank, a preliminary site survey shall be performed pursuant to Section 8 and the tank shall be tightness test...
Hanh, Vu Thi; Kobayashi, Yutaro; Maebuchi, Motohiro; Nakamori, Toshihiro; Tanaka, Mitsuru; Matsui, Toshiro
2016-01-01
The aim of this study was to establish, through a standard addition method, a convenient quantification assay for dipeptides (GY, YG, SY, YS, and IY) in soybean hydrolysate using 2,4,6-trinitrobenzene sulfonate (TNBS) derivatization-aided LC-TOF-MS. Soybean hydrolysate samples (25.0 mg mL⁻¹) spiked with target standards were subjected to TNBS derivatization. Under the optimal LC-MS conditions, five target dipeptides derivatized with TNBS were successfully detected. Examination of the standard addition curves, with a correlation coefficient of r² > 0.979, provided a reliable quantification of the target dipeptides, GY, YG, SY, YS, and IY, in soybean hydrolysate to be 424 ± 20, 184 ± 9, 2188 ± 199, 327 ± 16, and 2211 ± 133 μg g⁻¹ of hydrolysate, respectively. The proposed LC-MS assay is a reliable and convenient assay method, with no interference from matrix effects in hydrolysate, and with no requirement for the use of an isotope labeled internal standard. Copyright © 2015 Elsevier Ltd. All rights reserved.
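Standard addition quantification works by fitting the spiked-response line and extrapolating it back to zero signal; the magnitude of the x-intercept is the endogenous concentration. The spike levels and peak areas below are hypothetical, not the soybean measurements.

```python
import numpy as np

# Hypothetical standard-addition series for one dipeptide: known spike
# levels added to the hydrolysate and the resulting peak areas.
added  = np.array([0.0, 100.0, 200.0, 300.0])   # spike, ug per g
signal = np.array([4.2, 6.2, 8.2, 10.2])        # peak area (a.u.)

slope, intercept = np.polyfit(added, signal, 1)

# Extrapolating the line to zero signal: the magnitude of the
# x-intercept is the endogenous concentration in the sample.
endogenous = intercept / slope                   # ug per g
```

Because the calibration is built inside the sample matrix itself, matrix effects scale the slope and the intercept together and cancel in the quotient, which is why the method needs no isotope-labeled internal standard.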
Standard method of test for radioactive cesium in water
International Nuclear Information System (INIS)
Anon.
1975-01-01
Concentrations of radioactive Cs greater than 1 μCi/l in water were determined by gamma counting after separation by extraction. The method is limited to 134Cs, 136Cs, 137Cs, and 138Cs. The radioactive Cs is extracted at pH 7.0 as cesium tetraphenylborate in amyl acetate, with EDTA present to prevent the extraction of undesirable fission products. The γ activity of a sample of the organic phase is determined by γ spectroscopy. Large amounts of Na+, K+, Cs+, Rb+, NH4+, Ag+, and free acid interfere with the separation process in the procedure. The overall precision of the method is ±5 percent.
International Nuclear Information System (INIS)
Civalek, Oemer
2005-01-01
The nonlinear dynamic response of doubly curved shallow shells resting on a Winkler-Pasternak elastic foundation has been studied for step and sinusoidal loadings. Dynamic analogues of von Kármán-Donnell-type shell equations are used. Clamped immovable and simply supported immovable boundary conditions are considered. The governing nonlinear partial differential equations of the shell are discretized in the space and time domains using the harmonic differential quadrature (HDQ) and finite difference (FD) methods, respectively. The accuracy of the proposed HDQ-FD coupled methodology is demonstrated by numerical examples. The shear parameter G of the Pasternak foundation and the stiffness parameter K of the Winkler foundation have been found to have a significant influence on the dynamic response of the shell. It is concluded from the present study that the HDQ-FD methodology is a simple, efficient, and accurate method for the nonlinear analysis of doubly curved shallow shells resting on a two-parameter elastic foundation
Standard Test Method for Contamination Outgassing Characteristics of Spacecraft Materials
American Society for Testing and Materials. Philadelphia
2009-01-01
1.1 This test method covers a technique for generating data to characterize the kinetics of the release of outgassing products from materials. This technique will determine both the total mass flux evolved by a material when exposed to a vacuum environment and the deposition of this flux on surfaces held at various specified temperatures. 1.2 This test method describes the test apparatus and related operating procedures for evaluating the total mass flux that is evolved from a material being subjected to temperatures that are between 298 and 398 K. Pressures external to the sample effusion cell are less than 7 × 10−3 Pa (5 × 10−5 torr). Deposition rates are measured during material outgassing tests. A test procedure for collecting data and a test method for processing and presenting the collected data are included. 1.3 This test method can be used to produce the data necessary to support mathematical models used for the prediction of molecular contaminant generation, migration, and deposition. 1.4 Al...
Standard methods for research on Apis mellifera gut symbionts
Gut microbes can play an important role in digestion, disease resistance, and the general health of animals, but little is known about the biology of gut symbionts in Apis mellifera. This paper is part of a series on honey bee research methods, providing protocols for studying gut symbionts. We desc...
International Nuclear Information System (INIS)
Medeiros, Marcos P.C.; Rebello, Wilson F.; Andrade, Edson R.; Silva, Ademir X.
2015-01-01
Nuclear explosions are usually described in terms of their total yield and the associated shock wave, thermal radiation and nuclear radiation effects. The nuclear radiation produced in such events has several components, consisting mainly of alpha and beta particles, neutrinos, X-rays, neutrons and gamma rays. For practical purposes, the radiation from a nuclear explosion is divided into 'initial nuclear radiation', referring to what is emitted within one minute after the detonation, and 'residual nuclear radiation', covering everything else. The initial nuclear radiation can in turn be split between 'instantaneous' or 'prompt' radiation, which involves neutrons and gamma rays from fission and from interactions between neutrons and nuclei of surrounding materials, and 'delayed' radiation, comprising emissions from the decay of fission products and from interactions of neutrons with nuclei of the air. This work presents isodose curve calculations at ground level by Monte Carlo simulation, allowing risk assessment and consequence modeling in a radiation protection context. The isodose curves relate to neutrons produced by the prompt nuclear radiation from a hypothetical nuclear explosion with a total yield of 20 kt. The neutron fluence and emission spectrum were based on data available in the literature. Doses were calculated in the form of neutron ambient dose equivalent, H*(10). (author)
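Converting a simulated neutron fluence spectrum into ambient dose equivalent amounts to folding the spectrum with fluence-to-dose conversion coefficients; the two-bin spectrum and the h*(10) coefficients below are illustrative stand-ins, not the tabulated reference values.

```python
# Folding a neutron fluence spectrum with fluence-to-ambient-dose
# conversion coefficients h*(10). The two-bin spectrum and the
# coefficients are illustrative, not tabulated reference values.
fluence = {1.0: 2.0e7, 2.0: 1.0e7}     # n/cm^2 in each energy bin (MeV)
h10     = {1.0: 4.2e-4, 2.0: 4.6e-4}   # uSv * cm^2 per neutron (assumed)

# Ambient dose equivalent H*(10) as a discrete sum over energy bins.
dose_usv = sum(fluence[e] * h10[e] for e in fluence)
```

In a Monte Carlo tally the same fold is applied bin by bin to the scored fluence at each ground-level position, which is what produces the isodose curves.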
Energy Technology Data Exchange (ETDEWEB)
Medeiros, Marcos P.C.; Rebello, Wilson F.; Andrade, Edson R., E-mail: rebello@ime.eb.br, E-mail: daltongirao@yahoo.com.br [Instituto Militar de Engenharia (IME), Rio de Janeiro, RJ (Brazil). Secao de Engenharia Nuclear; Silva, Ademir X., E-mail: ademir@nuclear.ufrj.br [Corrdenacao dos Programas de Pos-Graduacao em Egenharia (COPPE/UFRJ), Rio de Janeiro, RJ (Brazil). Programa de Engenharia Nuclear
2015-07-01
Nuclear explosions are usually described in terms of their total yield and the associated shock wave, thermal radiation and nuclear radiation effects. The nuclear radiation produced in such events has several components, consisting mainly of alpha and beta particles, neutrinos, X-rays, neutrons and gamma rays. For practical purposes, the radiation from a nuclear explosion is divided into 'initial nuclear radiation', referring to what is emitted within one minute after the detonation, and 'residual nuclear radiation', covering everything else. The initial nuclear radiation can in turn be split between 'instantaneous' or 'prompt' radiation, which involves neutrons and gamma rays from fission and from interactions between neutrons and nuclei of surrounding materials, and 'delayed' radiation, comprising emissions from the decay of fission products and from interactions of neutrons with nuclei of the air. This work presents isodose curve calculations at ground level by Monte Carlo simulation, allowing risk assessment and consequence modeling in a radiation protection context. The isodose curves relate to neutrons produced by the prompt nuclear radiation from a hypothetical nuclear explosion with a total yield of 20 kt. The neutron fluence and emission spectrum were based on data available in the literature. Doses were calculated in the form of neutron ambient dose equivalent, H*(10). (author)
International Nuclear Information System (INIS)
O’Brien, Donal; Shalloo, Laurence; Crosson, Paul; Donnellan, Trevor; Farrelly, Niall; Finnan, John; Hanrahan, Kevin; Lalor, Stan; Lanigan, Gary; Thorne, Fiona; Schulte, Rogier
2014-01-01
Highlights: • Improving productivity was the most effective strategy to reduce emissions and costs. • The accounting methods disagreed on the total abatement potential of mitigation measures. • Thus, it may be difficult to convince farmers to adopt certain abatement measures. • Domestic offsetting and consumption-based accounting are options to overcome current methodological issues. - Abstract: Marginal abatement cost curve (MACC) analysis allows the evaluation of strategies to reduce agricultural greenhouse gas (GHG) emissions relative to some reference scenario and encompasses their costs or benefits. A popular approach to quantifying the potential to abate national agricultural emissions is the Intergovernmental Panel on Climate Change guidelines for national GHG inventories (IPCC-NI method). This methodology is the standard for assessing compliance with binding national GHG reduction targets and uses a sector-based framework to attribute emissions. There is, however, an alternative to the IPCC-NI method, known as life cycle assessment (LCA), which is the preferred method to assess the GHG intensity of food production (kg of GHG/unit of food). The purpose of this study was to compare the effect of using the IPCC-NI and LCA methodologies when completing a MACC analysis of national agricultural GHG emissions. The MACC was applied to the Irish agricultural sector, and mitigation measures were constrained only by the biophysical environment. The reference scenario chosen assumed that the 2020 growth targets set by the Irish agricultural industry would be achieved. The comparison of methodologies showed that only 1.1 Mt of the annual GHG abatement potential achievable at zero or negative cost could be attributed to the agricultural sector using the IPCC-NI method, which was only 44% of the zero or negative cost abatement potential attributed to the sector using the LCA method. The difference between methodologies arose because the IPCC-NI method attributes the
Directory of Open Access Journals (Sweden)
F. Schöpfer
2010-01-01
Dispersion curves of elastic guided waves in plates can be efficiently computed by the Strip-Element Method. This method is based on a finite-element discretization in the thickness direction of the plate and leads to an eigenvalue problem relating frequencies to wavenumbers of the wave modes. In this paper we present a rigorous mathematical background of the Strip-Element Method for anisotropic media including a thorough analysis of the corresponding infinite-dimensional eigenvalue problem as well as a proof of the existence of eigenvalues.
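The strip-element discretization leads, for each wavenumber, to an eigenvalue problem whose roots are the modal frequencies; sweeping the wavenumber traces the dispersion curves. The 2×2 matrices below are toy placeholders, not a real elastic discretization.

```python
import numpy as np

# Toy strip-element-style computation: for each wavenumber k, assemble
# a stiffness matrix K(k) and mass matrix M from a discretization of
# the plate thickness, then solve K(k) u = w^2 M u for the modal
# frequencies. The 2x2 matrices are placeholders only.
def frequencies(k):
    K = np.array([[2.0 + k**2, -1.0],
                  [-1.0, 2.0 + k**2]])
    M = np.eye(2)
    w2 = np.linalg.eigvalsh(K)   # M is the identity here, so a plain
    return np.sqrt(w2)           # symmetric eigensolve suffices

# Sweeping k traces the dispersion curves w_n(k), one branch per mode.
curves = [frequencies(k) for k in (0.0, 0.5, 1.0)]
```

For a non-identity mass matrix one would solve the generalized problem instead; the structure of the sweep is unchanged.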
International Nuclear Information System (INIS)
Vo Dac Bang; Phan Thu Huong.
1983-01-01
An application of a new standardization method for rapid activation mass analysis with registration of strongly absorbed low-energy gamma radiation is described. This method makes it possible to avoid the use of the time-consuming and laborious internal standard method.
Electron microscopy of flatworms: standard and cryo-preparation methods.
Salvenmoser, Willi; Egger, Bernhard; Achatz, Johannes G; Ladurner, Peter; Hess, Michael W
2010-01-01
Electron microscopy (EM) has long been indispensable for flatworm research, as most of these worms are microscopic in dimension and provide only a handful of characters recognizable by eye or light microscopy. Therefore, major progress in understanding the histology, systematics, and evolution of this animal group relied on methods capable of visualizing ultrastructure. The rise of molecular and cellular biology renewed interest in such ultrastructural research. In the light of recent developments, we offer a best-practice guide for users of transmission EM and provide a comparison of well-established chemical fixation protocols with cryo-processing methods (high-pressure freezing/freeze-substitution, HPF/FS). The organisms used in this study include the rhabditophorans Macrostomum lignano, Polycelis nigra and Dugesia gonocephala, as well as the acoel species Isodiametra pulchra. Copyright © 2010 Elsevier Inc. All rights reserved.
Dabiri, M.; Ghafouri, M.; Rohani Raftar, H. R.; Björk, T.
2018-03-01
Methods to estimate the strain-life curve, divided into three categories (simple approximations, artificial neural network-based approaches, and continuum damage mechanics models), were examined, and their accuracy was assessed in the strain-life evaluation of a direct-quenched high-strength steel. All the prediction methods claim to be able to perform low-cycle fatigue analysis using available or easily obtainable material properties, thus eliminating the need for costly and time-consuming fatigue tests. Simple approximations were able to estimate the strain-life curve with satisfactory accuracy using only monotonic properties. The tested neural network-based model, although yielding acceptable results for the material in question, was found to be overly sensitive to the data sets used for training and showed inconsistency in estimating the fatigue life and fatigue properties. The studied continuum damage-based model was able to produce a curve detecting early stages of crack initiation. This model requires more experimental data for calibration than the simple approximations. Because different theories underlie the analyzed methods, each approach has its own strengths and weaknesses. However, it was found that the group of parametric equations categorized as simple approximations is the easiest for practical use, with their applicability having already been verified for a broad range of materials.
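The quantity being estimated by all three method categories is the usual strain-life relation, the Basquin plus Coffin-Manson form; the fatigue constants below are assumed illustrative values, not those of the direct-quenched steel in the paper.

```python
def strain_amplitude(reversals, sigma_f, E, b, eps_f, c):
    """Basquin + Coffin-Manson strain-life relation:
    de/2 = (sigma_f'/E)*(2Nf)**b + eps_f'*(2Nf)**c."""
    elastic = (sigma_f / E) * reversals ** b
    plastic = eps_f * reversals ** c
    return elastic + plastic

# Assumed illustrative fatigue constants for a high-strength steel;
# 'simple approximation' methods estimate these from monotonic
# properties instead of fatigue tests.
amp = strain_amplitude(reversals=2 * 10_000,   # 2Nf for Nf = 10^4 cycles
                       sigma_f=1500e6, E=210e9, b=-0.09,
                       eps_f=0.6, c=-0.6)
```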
Standard Test Methods for Determining Mechanical Integrity of Photovoltaic Modules
American Society for Testing and Materials. Philadelphia
2009-01-01
1.1 These test methods cover procedures for determining the ability of photovoltaic modules to withstand the mechanical loads, stresses and deflections used to simulate, on an accelerated basis, high wind conditions, heavy snow and ice accumulation, and non-planar installation effects. 1.1.1 A static load test to 2400 Pa is used to simulate wind loads on both module surfaces 1.1.2 A static load test to 5400 Pa is used to simulate heavy snow and ice accumulation on the module front surface. 1.1.3 A twist test is used to simulate the non-planar mounting of a photovoltaic module by subjecting it to a twist angle of 1.2°. 1.1.4 A cyclic load test of 10 000 cycles duration and peak loading to 1440 Pa is used to simulate dynamic wind or other flexural loading. Such loading might occur during shipment or after installation at a particular location. 1.2 These test methods define photovoltaic test specimens and mounting methods, and specify parameters that must be recorded and reported. 1.3 Any individual mech...
International Nuclear Information System (INIS)
Todoriki, Setsuko; Saito, Kimie; Tsujimoto, Yuka
2008-01-01
The effect of the integration temperature interval of TL intensities on the TL glow ratio was examined by comparing the method notified by the Ministry of Health, Labour and Welfare (MHLW method) with EN 1788. Two kinds of un-irradiated geological standard rock and three kinds of spices (black pepper, turmeric, and oregano) irradiated at 0.3 kGy or 1.0 kGy were subjected to TL analysis. Although the TL glow ratio exceeded 0.1 in the andesite when calculated by the MHLW notified method (integration interval 70-490°C), the maximum of the first glow was observed at 300°C or more; this was attributed to the influence of natural radioactivity and could be distinguished from food irradiation. When the integration interval was set to 166-227°C according to EN 1788, the TL glow ratio became remarkably smaller than 0.1, and the evaluation of the un-irradiated sample became clearer. For the spices, the TL glow ratios by the MHLW notified method fell below 0.1 in un-irradiated samples and exceeded 0.1 in irradiated ones. Moreover, the glow 1 maximum temperatures of the irradiated samples were observed in the range of 168-196°C, while those of un-irradiated samples were 258°C or more. Therefore, all samples were correctly judged by the criteria of the MHLW method. However, with the integration interval defined by EN 1788, the TL glow ratio of un-irradiated samples became remarkably small compared with that of the MHLW method, and the discrimination of irradiated from un-irradiated samples became clearer. (author)
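The TL glow ratio is the integral of the first glow curve divided by that of the normalization glow over a chosen temperature interval. The Gaussian glow curves below are synthetic, constructed so that a high-temperature "natural radioactivity" peak inflates the wide MHLW interval but not the narrow EN 1788 one, mirroring the andesite case.

```python
import numpy as np

temps = np.arange(70.0, 491.0, 1.0)          # deg C, 1-degree grid

def peak(center, height, width=30.0):
    # Synthetic Gaussian glow peak; real glow curves come from the reader.
    return height * np.exp(-((temps - center) / width) ** 2)

# Glow 1 (first readout): a radiation-induced peak near 190 deg C plus
# a high-temperature peak from natural radioactivity. Glow 2: the
# normalization readout after a calibration dose.
glow1 = peak(190, 1.0) + peak(350, 0.8)
glow2 = peak(190, 5.0)

def glow_ratio(lo, hi):
    mask = (temps >= lo) & (temps <= hi)
    # Uniform 1-degree grid, so plain sums approximate the integrals.
    return glow1[mask].sum() / glow2[mask].sum()

ratio_mhlw = glow_ratio(70, 490)    # wide MHLW interval
ratio_en   = glow_ratio(166, 227)   # narrow EN 1788 interval
# The wide interval picks up the natural-radioactivity peak; the
# narrow one isolates the radiation-induced peak.
```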
Rukanova, B.D.
2005-01-01
To summarize, with respect to research question one we constructed a system of concepts, while in answer to research question two we proposed a method of how to apply this system of concepts in practice in order to identify potential problems in early stages of standard implementation projects.
An overview of failure assessment methods in codes and standards
International Nuclear Information System (INIS)
Zerbst, U.; Ainsworth, R.A.
2003-01-01
This volume provides comprehensive up-to-date information on the assessment of the integrity of engineering structures containing crack-like flaws, in the absence of effects of creep at elevated temperatures (see volume 5) and of environment (see volume 6). Key methods are extensively reviewed and background information as well as validation is given. However, it should be kept in mind that for actual detailed assessments the relevant documents have to be consulted. In classical engineering design, an applied stress is compared with the appropriate material resistance expressed in terms of a limit stress, such as the yield strength or fatigue endurance limit. As long as the material resistance exceeds the applied stress, integrity of the component is assured. It is implicitly assumed that the component is defect-free but design margins provide some protection against defects. Modern design and operation philosophies, however, take explicit account of the possible presence of defects in engineering components. Such defects may arise from fabrication, e.g., during casting, welding, or forming processes, or may develop during operation. They may extend during operation and eventually lead to failure, which in the ideal case occurs beyond the design life of the component. Failure assessment methods are based upon the behavior of sharp cracks in structures, and for this reason all flaws or defects found in structures have to be treated as if they are sharp planar cracks. Hence the terms flaw or defect should be regarded as being interchangeable with the term crack throughout this volume. (orig.)
Mitochondrial structure and function are disrupted by standard isolation methods.
Directory of Open Access Journals (Sweden)
Martin Picard
Mitochondria regulate critical components of cellular function via ATP production, reactive oxygen species production, Ca²⁺ handling and apoptotic signaling. Two classical methods exist to study mitochondrial function of skeletal muscles: isolated mitochondria and permeabilized myofibers. Whereas mitochondrial isolation removes a portion of the mitochondria from their cellular environment, myofiber permeabilization preserves mitochondrial morphology and functional interactions with other intracellular components. Despite this, isolated mitochondria remain the most commonly used preparation to infer in vivo mitochondrial function. In this study, we directly compared measures of several key aspects of mitochondrial function in both isolated mitochondria and permeabilized myofibers of rat gastrocnemius muscle. Here we show that mitochondrial isolation (i) induced fragmented organelle morphology; (ii) dramatically sensitized the permeability transition pore to a Ca²⁺ challenge; (iii) differentially altered mitochondrial respiration depending upon the respiratory conditions; and (iv) dramatically increased H₂O₂ production. These alterations are qualitatively similar to the changes in mitochondrial structure and function observed in vivo after cellular stress-induced mitochondrial fragmentation, but are generally of much greater magnitude. Furthermore, mitochondrial isolation markedly altered electron transport chain protein stoichiometry. Collectively, our results demonstrate that isolated mitochondria possess functional characteristics that differ fundamentally from those of intact mitochondria in permeabilized myofibers. Our work and that of others underscores the importance of studying mitochondrial function in tissue preparations where mitochondrial structure is preserved and all mitochondria are represented.
Standardized method for reproducing the sequential X-rays flap
International Nuclear Information System (INIS)
Brenes, Alejandra; Molina, Katherine; Gudino, Sylvia
2009-01-01
A method is validated to standardize the taking, developing and analysis of bite-wing radiographs taken sequentially, in order to compare and evaluate detectable changes in the evolution of interproximal lesions through time. A radiographic positioner (XCP®) is modified by means of a rigid acrylic guide to achieve proper positioning of the X-ray equipment cone relative to the XCP® ring and its reorientation during the sequential radiographic process. 16 subjects aged 4 to 40 years were studied, for a total of 32 records. Two radiographs of the same block of teeth of each subject were taken sequentially, with a minimum interval of 30 minutes between them, before the placement of the radiographic attachment. The images were digitized with a Super Cam® scanner and imported into software. Measurements along the X and Y axes of both radiographs were performed for comparison. The intraclass correlation index (ICI) showed that the proposed method is statistically related to the measurements (mm) obtained along the X and Y axes for both sequential series of radiographs (p=0.01). The measures of central tendency and dispersion showed that the usual occurrence is indifferent between the two measurements (mode 0.000 and S = 0.083 and 0.109) and that the probability of occurrence of different values is lower than expected. (author) [es
Standard test method for creep-fatigue crack growth testing
American Society for Testing and Materials. Philadelphia
2010-01-01
1.1 This test method covers the determination of creep-fatigue crack growth properties of nominally homogeneous materials by use of pre-cracked compact type, C(T), test specimens subjected to uniaxial cyclic forces. It concerns fatigue cycling with sufficiently long loading/unloading rates or hold-times, or both, to cause creep deformation at the crack tip and the creep deformation be responsible for enhanced crack growth per loading cycle. It is intended as a guide for creep-fatigue testing performed in support of such activities as materials research and development, mechanical design, process and quality control, product performance, and failure analysis. Therefore, this method requires testing of at least two specimens that yield overlapping crack growth rate data. The cyclic conditions responsible for creep-fatigue deformation and enhanced crack growth vary with material and with temperature for a given material. The effects of environment such as time-dependent oxidation in enhancing the crack growth ra...
Standard test method for measurement of fatigue crack growth rates
American Society for Testing and Materials. Philadelphia
2015-01-01
1.1 This test method covers the determination of fatigue crack growth rates from near-threshold to Kmax controlled instability. Results are expressed in terms of the crack-tip stress-intensity factor range (ΔK), defined by the theory of linear elasticity. 1.2 Several different test procedures are provided, the optimum test procedure being primarily dependent on the magnitude of the fatigue crack growth rate to be measured. 1.3 Materials that can be tested by this test method are not limited by thickness or by strength so long as specimens are of sufficient thickness to preclude buckling and of sufficient planar size to remain predominantly elastic during testing. 1.4 A range of specimen sizes with proportional planar dimensions is provided, but size is variable to be adjusted for yield strength and applied force. Specimen thickness may be varied independent of planar size. 1.5 The details of the various specimens and test configurations are shown in Annex A1-Annex A3. Specimen configurations other than t...
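Growth-rate data of the kind this test method produces are commonly summarized by a Paris-type power law, da/dN = C(ΔK)^m, in the intermediate-rate regime between threshold and instability. This summary law and the constants below are illustrative context, not part of the standard's own text.

```python
# Paris-law summary of fatigue crack growth data: da/dN = C * dK**m.
# C and m are illustrative fit constants for a generic steel, not
# values taken from the standard.
def growth_rate(delta_k, C=1.0e-11, m=3.0):
    """Crack growth per cycle (m/cycle) for delta_k in MPa*sqrt(m)."""
    return C * delta_k ** m

rate = growth_rate(20.0)
```

Because the law is a power law, test procedures differ by regime: near-threshold data need decreasing-ΔK tests, while the steeply rising region near instability is covered by constant-force-amplitude tests.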
Kronberg, Max; Soomro, Muhammad Afzal; Top, Jaap
2017-10-01
In this note we extend the theory of twists of elliptic curves, as presented in various standard texts for characteristic not equal to two or three, to the remaining characteristics. For this, we make explicit use of the correspondence between the twists and the Galois cohomology set $H^1(G_{\overline{K}/K}, \operatorname{Aut}_{\overline{K}}(E))$. The results are illustrated by examples.
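For orientation, the tame case handled by the standard texts (standard background material, not taken from the note itself): in characteristic not two or three, for $j(E) \neq 0, 1728$, one has $\operatorname{Aut}_{\overline{K}}(E) = \{\pm 1\} \cong \mu_2$, so the twists of $E$ are classified by $H^1(G_{\overline{K}/K}, \mu_2) \cong K^*/(K^*)^2$, and the twist attached to the class of $d$ is the quadratic twist

```latex
E:\; y^2 = x^3 + a x + b,
\qquad
E^{(d)}:\; d\,y^2 = x^3 + a x + b,
\qquad d \in K^*/(K^*)^2,
```

which becomes isomorphic to $E$ over $K(\sqrt{d})$. In characteristics two and three the automorphism group and the cohomology set are larger, which is the case the note works out.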
Standard methods for sampling and sample preparation for gamma spectroscopy
International Nuclear Information System (INIS)
Taskaeva, M.; Taskaev, E.; Nikolov, P.
1993-01-01
The strategy for sampling and sample preparation is outlined: the necessary number of samples; analysis and treatment of the results received; the quantity of analysed material according to the radionuclide concentrations and analytical methods; and the minimal quantity and kind of data needed for drawing final conclusions and decisions on the basis of the results received. This strategy was tested in the gamma spectroscopic analysis of radionuclide contamination of the region of the Eleshnitsa Uranium Mines. The water samples were taken and stored according to ASTM D 3370-82. The general sampling procedures were in conformity with the recommendations of ISO 5667. The radionuclides were concentrated by coprecipitation with iron hydroxide and by ion exchange. The sampling of soil samples complied with the rules of ASTM C 998, and their sample preparation with ASTM C 999. After preparation the samples were sealed hermetically and measured. (author)
Malau-Aduli, B.S.; Teague, P.A.; D'Souza, K.; Heal, C.; Turner, R.; Garne, D.L.; Vleuten, C. van der
2017-01-01
BACKGROUND: A key issue underpinning the usefulness of the OSCE assessment to medical education is standard setting, but the majority of standard-setting methods remain challenging for performance assessment because they produce varying passing marks. Several studies have compared standard-setting
Retinoblastoma: Achieving new standards with methods of chemotherapy
Directory of Open Access Journals (Sweden)
Swathi Kaliki
2015-01-01
The management of retinoblastoma (RB) has dramatically changed over the past two decades from previous radiotherapy methods to current chemotherapy strategies. RB is a remarkably chemotherapy-sensitive tumor. Chemotherapy is currently used as a first-line approach for children with this malignancy and can be delivered by intravenous, intra-arterial, periocular, and intravitreal routes. The choice of route for chemotherapy administration depends upon the tumor laterality and tumor staging. Intravenous chemotherapy (IVC) is used most often in bilateral cases, orbital RB, and as an adjuvant treatment in high-risk RB. Intra-arterial chemotherapy (IAC) is used in cases with group C or D RB and selected cases of group E tumor. Periocular chemotherapy is used as an adjunct treatment in eyes with group D and E RB and those with persistent/recurrent vitreous seeds. Intravitreal chemotherapy is reserved for eyes with persistent/recurrent vitreous seeds. In this review, we describe the various forms of chemotherapy used in the management of RB. A database search was performed on PubMed, using the terms "RB," and "treatment," "chemotherapy," "systemic chemotherapy," "IVC," "IAC," "periocular chemotherapy," or "intravitreal chemotherapy." Relevant English language articles were extracted, reviewed, and referenced appropriately.
Retinoblastoma: Achieving new standards with methods of chemotherapy
Kaliki, Swathi; Shields, Carol L
2015-01-01
The management of retinoblastoma (RB) has dramatically changed over the past two decades from previous radiotherapy methods to current chemotherapy strategies. RB is a remarkably chemotherapy-sensitive tumor. Chemotherapy is currently used as a first-line approach for children with this malignancy and can be delivered by intravenous, intra-arterial, periocular, and intravitreal routes. The choice of route for chemotherapy administration depends upon the tumor laterality and tumor staging. Intravenous chemotherapy (IVC) is used most often in bilateral cases, orbital RB, and as an adjuvant treatment in high-risk RB. Intra-arterial chemotherapy (IAC) is used in cases with group C or D RB and selected cases of group E tumor. Periocular chemotherapy is used as an adjunct treatment in eyes with group D and E RB and those with persistent/recurrent vitreous seeds. Intravitreal chemotherapy is reserved for eyes with persistent/recurrent vitreous seeds. In this review, we describe the various forms of chemotherapy used in the management of RB. A database search was performed on PubMed, using the terms “RB,” and “treatment,” “chemotherapy,” “systemic chemotherapy,” “IVC,” “IAC,” “periocular chemotherapy,” or “intravitreal chemotherapy.” Relevant English language articles were extracted, reviewed, and referenced appropriately. PMID:25827539
Standard test methods for bend testing of material for ductility
American Society for Testing and Materials. Philadelphia
2009-01-01
1.1 These test methods cover bend testing for ductility of materials. Included in the procedures are four conditions of constraint on the bent portion of the specimen; a guided-bend test using a mandrel or plunger of defined dimensions to force the mid-length of the specimen between two supports separated by a defined space; a semi-guided bend test in which the specimen is bent, while in contact with a mandrel, through a specified angle or to a specified inside radius (r) of curvature, measured while under the bending force; a free-bend test in which the ends of the specimen are brought toward each other, but in which no transverse force is applied to the bend itself and there is no contact of the concave inside surface of the bend with other material; a bend and flatten test, in which a transverse force is applied to the bend such that the legs make contact with each other over the length of the specimen. 1.2 After bending, the convex surface of the bend is examined for evidence of a crack or surface irregu...
Directory of Open Access Journals (Sweden)
Yu Xiu-Juan
2007-10-01
Abstract Background: The nucleotide compositional asymmetry between the leading and lagging strands in bacterial genomes has been the subject of intensive study in the past few years. It is interesting to note that almost all bacterial genomes exhibit the same kind of base asymmetry. This work aims to investigate the strand biases in the Chlamydia muridarum genome and show the potential of the Z curve method for quantitatively differentiating genes on the leading and lagging strands. Results: The occurrence frequencies of bases of protein-coding genes in the C. muridarum genome were analyzed by the Z curve method. It was found that genes located on the two strands of replication have distinct base usages in the C. muridarum genome. According to their positions in the 9-D space spanned by the variables u1–u9 of the Z curve method, the K-means clustering algorithm can assign about 94% of genes to the correct strands, which is a few percent higher than the proportion correctly classified by K-means based on the RSCU. The base usage and codon usage analyses show that genes on the leading strand have more G than C and more T than A, particularly at the third codon position; for genes on the lagging strand the bias is reversed. The y component of the Z curves for the complete chromosome sequences shows that the excesses of G over C and T over A are more remarkable in the C. muridarum genome than in other bacterial genomes without separating base and/or codon usages. Furthermore, for the genomes of Borrelia burgdorferi, Treponema pallidum, Chlamydia muridarum and Chlamydia trachomatis, in which distinct base and/or codon usages have been observed, closer phylogenetic distance is found compared with other bacterial genomes. Conclusion: The nature of the strand biases of base composition in C. muridarum is similar to that in most other bacterial genomes. However, the base composition asymmetry between the leading and lagging strands in C. muridarum is more significant than that in…
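As a hedged illustration of the Z curve variables described in the abstract above, the following sketch computes nine phase-specific composition variables for a coding sequence. The mapping of these values onto u1–u9 and their ordering are assumptions for illustration; the K-means clustering step is omitted.

```python
def z_curve_codon_variables(cds):
    """Compute nine phase-specific Z curve variables for a coding sequence.

    For each codon position k (1..3), using base frequencies at that position:
      x_k = (A+G) - (C+T)  (purine vs pyrimidine),
      y_k = (A+C) - (G+T)  (amino vs keto),
      z_k = (A+T) - (C+G)  (weak vs strong H-bonding).
    """
    cds = cds.upper()
    out = []
    for k in range(3):
        bases = cds[k::3]          # bases at codon position k+1
        n = len(bases)
        f = {b: bases.count(b) / n for b in "ACGT"}
        out += [
            (f["A"] + f["G"]) - (f["C"] + f["T"]),  # x_k
            (f["A"] + f["C"]) - (f["G"] + f["T"]),  # y_k
            (f["A"] + f["T"]) - (f["C"] + f["G"]),  # z_k
        ]
    return out
```

Each gene then becomes a point in the 9-D space, to which any standard clustering routine can be applied.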
Standard setting in student assessment: is a defensible method yet to come?
Barman, A
2008-11-01
Setting, maintaining and periodically re-evaluating assessment standards are important issues in medical education. Cut-off scores are often "pulled from the air" or set to an arbitrary percentage. A large number of methods/procedures used to set a standard or cut score are described in the literature. There is a high degree of uncertainty in performance standards set by using these methods. Standards set using the existing methods reflect the subjective judgment of the standard setters. This review does not describe the existing standard-setting methods/procedures but narrates the validity, reliability, feasibility and legal issues relating to standard setting. It addresses some of the issues in standard setting based on the published articles of educational assessment researchers. The standard or cut-off score should determine whether the examinee has attained the requirements to be certified competent. There is no perfect method to determine a cut score on a test and none is agreed upon as the best method. Setting standards is not an exact science. Legitimacy of the standard is supported when the performance standard is linked to the requirements of practice. Test-curriculum alignment and content validity are important for most educational test validity arguments. A representative percentage of must-know learning objectives in the curriculum may be the basis of test items and pass/fail marks. Practice analysis may help in identifying the must-know areas of the curriculum. A cut score set by this procedure may give credibility, validity, defensibility and comparability to the standard. Constructing the test items by subject experts and vetting them by multi-disciplinary faculty members may ensure the reliability of the test as well as the standard.
42 CFR 440.260 - Methods and standards to assure quality of services.
2010-10-01
... 42 Public Health 4 2010-10-01 2010-10-01 false Methods and standards to assure quality of services. 440.260 Section 440.260 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH... and Limits Applicable to All Services § 440.260 Methods and standards to assure quality of services...
International Nuclear Information System (INIS)
Morales S, E.; Aguilar S, E.
1989-11-01
A method for bio-available sulfate analysis in soils is described. A Ca(H2PO4)2 leaching solution was used for soil sample treatment. A standard Na2SO4 solution was used for preparing a calibration curve, and the fundamental parameters method approach was also employed. An Am-241 (100 mCi) source and a Si-Li detector were employed. Analysis could be done in 5 minutes; good reproducibility (5%) and accuracy (5%) were obtained. The method is very competitive with conventional nephelometry, where good and reproducible suspensions are difficult to obtain. (author)
Moore, K. M.; Jaeger, W. K.; Jones, J. A.
2013-12-01
A central characteristic of large river basins in the western US is the spatial and temporal disjunction between the supply of and demand for water. Water sources are typically concentrated in forested mountain regions distant from municipal and agricultural water users, while precipitation is super-abundant in winter and deficient in summer. To cope with these disparities, systems of reservoirs have been constructed throughout the West. These reservoir systems are managed to serve two main competing purposes: to control flooding during winter and spring, and to store spring runoff and deliver it to populated, agricultural valleys during the summer. The reservoirs also provide additional benefits, including recreation, hydropower and instream flows for stream ecology. Since the storage capacity of the reservoirs cannot be used for both flood control and storage at the same time, these uses are traded-off during spring, as the most important, or dominant use of the reservoir, shifts from buffering floods to storing water for summer use. This tradeoff is expressed in the operations rule curve, which specifies the maximum level to which a reservoir can be filled throughout the year, apart from real-time flood operations. These rule curves were often established at the time a reservoir was built. However, climate change and human impacts may be altering the timing and amplitude of flood events and water scarcity is expected to intensify with anticipated changes in climate, land cover and population. These changes imply that reservoir management using current rule curves may not match future societal values for the diverse uses of water from reservoirs. Despite a broad literature on mathematical optimization for reservoir operation, these methods are not often used because they 1) simplify the hydrologic system, raising doubts about the real-world applicability of the solutions, 2) exhibit perfect foresight and assume stationarity, whereas reservoir operators face
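The operations rule curve described above is essentially a calendar-indexed cap on storage; a minimal sketch of looking it up (the breakpoint values are hypothetical, and linear interpolation between breakpoints is an assumption):

```python
from bisect import bisect_right

def rule_curve_limit(day_of_year, curve):
    """Maximum allowed reservoir storage for a given day, linearly
    interpolated from a rule curve given as sorted (day, max_storage) points."""
    days = [d for d, _ in curve]
    i = bisect_right(days, day_of_year) - 1
    if i >= len(curve) - 1:
        return curve[-1][1]
    (d0, s0), (d1, s1) = curve[i], curve[i + 1]
    return s0 + (s1 - s0) * (day_of_year - d0) / (d1 - d0)

# Hypothetical curve: draw down for winter/spring flood control,
# refill for summer deliveries (storage in arbitrary units).
curve = [(1, 80.0), (90, 60.0), (180, 100.0), (365, 80.0)]
```

The tradeoff the abstract describes is visible in the curve itself: the cap is lowest when flood-buffering space matters most and highest when stored water is most valuable.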
A note on families of fragility curves
International Nuclear Information System (INIS)
Kaplan, S.; Bier, V.M.; Bley, D.C.
1989-01-01
In the quantitative assessment of seismic risk, uncertainty in the fragility of a structural component is usually expressed by putting forth a family of fragility curves, with probability serving as the parameter of the family. Commonly, a lognormal shape is used both for the individual curves and for the expression of uncertainty over the family. A so-called composite single curve can also be drawn and used for purposes of approximation. This composite curve is often regarded as equivalent to the mean curve of the family. The equality seems intuitively reasonable but, according to the authors, has never been proven. The paper proves this equivalence hypothesis mathematically. Moreover, the authors show that this equivalence hypothesis between fragility curves is itself equivalent to an identity property of the standard normal probability curve. Thus, in the course of proving the fragility curve hypothesis, the authors have also proved a rather obscure, but interesting and perhaps previously unrecognized, property of the standard normal curve.
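The equivalence discussed above can be checked numerically: for a lognormal family with aleatory spread β_r and epistemic spread β_u on the median, the mean curve should coincide with the composite curve built with β_c = sqrt(β_r² + β_u²). A hedged Monte Carlo sketch (the parameter values are illustrative only):

```python
import math, random

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def composite(a, a_m, beta_r, beta_u):
    """Composite fragility curve: lognormal with beta_c = hypot(beta_r, beta_u)."""
    return phi(math.log(a / a_m) / math.hypot(beta_r, beta_u))

def mean_curve(a, a_m, beta_r, beta_u, n=200_000, seed=1):
    """Mean of the family: the median capacity A is itself lognormal about a_m
    with logarithmic standard deviation beta_u."""
    rng = random.Random(seed)
    s = 0.0
    for _ in range(n):
        ln_a_median = math.log(a_m) + beta_u * rng.gauss(0.0, 1.0)
        s += phi((math.log(a) - ln_a_median) / beta_r)
    return s / n
```

With illustrative values (a = 0.6, a_m = 0.8, β_r = 0.3, β_u = 0.4) the Monte Carlo mean agrees with the composite value to well within sampling error.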
Directory of Open Access Journals (Sweden)
Sarita Bajaj
2016-08-01
BACKGROUND: EZSCAN is a new, noninvasive technique to detect sudomotor dysfunction, and thus neuropathy, in diabetes patients at an early stage. It further predicts the chances of development of other microvascular complications. In this study, we evaluated EZSCAN for detection of microvascular complications in Type 2 diabetes patients and compared the accuracy of EZSCAN with standard screening methods. MATERIALS AND METHODS: 104 known diabetes patients, 56 males and 48 females, were studied. All cases underwent the EZSCAN test, nerve conduction study (NCS), vibration perception threshold (VPT) test, monofilament test, fundus examination and urine micral test. The results of EZSCAN were compared with standard screening methods. The data were analysed and assessed by applying appropriate statistical tests within different groups. RESULTS: Mean age of the subjects was 53.5 ± 11.4 years. For detection of diabetic neuropathy, sensitivity and specificity of EZSCAN were found to be 77.0% and 95.3%, respectively. The odds ratio (OR) was 68.82 with p < 0.0001. AUC in the ROC curve was 0.930. Sensitivity and specificity of EZSCAN for detection of nephropathy were 67.1% and 94.1%, respectively; OR = 32.69 with p < 0.0001; AUC was 0.926. Sensitivity of EZSCAN for detection of retinopathy was 90% while specificity was 70.3%; OR = 21.27; p < 0.0001; AUC came out to be 0.920. CONCLUSION: Results of the EZSCAN test compared significantly to the standard screening methods for the detection of microvascular complications of diabetes, and it can be used as a simple, noninvasive and quick method to detect microvascular complications of diabetes.
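The screening-test statistics reported above (sensitivity, specificity, odds ratio) come from a 2×2 table of the index test against the reference standard; a generic sketch (the counts are illustrative, not the study's data):

```python
def diagnostic_metrics(tp, fn, fp, tn):
    """Sensitivity, specificity and odds ratio from a 2x2 table
    (tp, fn, fp, tn) comparing a screening test with a reference standard."""
    sensitivity = tp / (tp + fn)        # true-positive rate
    specificity = tn / (tn + fp)        # true-negative rate
    odds_ratio = (tp * tn) / (fn * fp)  # cross-product ratio
    return sensitivity, specificity, odds_ratio
```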
Zhang, Wanjun; Gao, Shanping; Cheng, Xiyan; Zhang, Feng
2017-04-01
A novel high-speed interpolation algorithm for B-spline curves on high-grade CNC machine tools is introduced. In existing high-grade CNC systems, handling the data points of the curve is troublesome and the control precision is not strong; this method is intended to solve that problem. Simulation of a specific example in MATLAB 7.0 showed that the interpolation error is significantly reduced, the control precision is improved markedly, and the real-time requirements of high-speed, high-accuracy interpolation are satisfied.
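The abstract gives no algorithmic detail, so as a generic, hedged sketch of B-spline curve evaluation (de Boor's algorithm, not the authors' interpolation scheme):

```python
def de_boor(t, degree, knots, ctrl):
    """Evaluate a B-spline curve at parameter t via de Boor's algorithm.
    knots: non-decreasing knot vector; ctrl: control points (scalars here)."""
    n = len(ctrl) - 1
    # Find the knot span k with knots[k] <= t < knots[k+1] (clamp at the end).
    if t >= knots[n + 1]:
        k = n
    else:
        k = next(i for i in range(degree, n + 1) if knots[i] <= t < knots[i + 1])
    d = [ctrl[j] for j in range(k - degree, k + 1)]
    for r in range(1, degree + 1):
        for j in range(degree, r - 1, -1):
            i = j + k - degree
            denom = knots[i + degree - r + 1] - knots[i]
            alpha = 0.0 if denom == 0 else (t - knots[i]) / denom
            d[j] = (1 - alpha) * d[j - 1] + alpha * d[j]
    return d[degree]

# Clamped cubic: on [0, 1] this reduces to a single Bezier segment.
knots = [0.0] * 4 + [1.0] * 4
ctrl = [0.0, 1.0, 2.0, 3.0]
```

In a real interpolator the control points would be 2-D or 3-D and the same recurrence would be applied componentwise at each sampling period.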
Energy Technology Data Exchange (ETDEWEB)
Groot, L. [Utrecht University, Utrecht School of Economics, Janskerkhof 12, 3512 BL Utrecht (Netherlands)
2008-11-15
The purpose of this paper is twofold. First, it exhibits that standard tools in the measurement of income inequality, such as the Lorenz curve and the Gini-index, can successfully be applied to the issues of inequality measurement of carbon emissions and the equity of abatement policies across countries. These tools allow policy-makers and the general public to grasp at a single glance the impact of conventional distribution rules such as equal caps or grandfathering, or more sophisticated ones, on the distribution of greenhouse gas emissions. Second, using the Samuelson rule for the optimal provision of a public good, the Pareto-optimal distribution of carbon emissions is compared with the distribution that follows if countries follow Nash-Cournot abatement strategies. It is shown that the Pareto-optimal distribution under the Samuelson rule can be approximated by the equal cap division, represented by the diagonal in the Lorenz curve diagram.
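The Lorenz curve and Gini index mentioned above apply directly to per-capita emissions; a minimal sketch (equal population weights are assumed for simplicity):

```python
def lorenz_points(x):
    """Points of the Lorenz curve: cumulative population share vs cumulative
    emission share, with emissions sorted in ascending order."""
    xs = sorted(x)
    total, n = sum(xs), len(xs)
    cum, pts = 0.0, [(0.0, 0.0)]
    for i, v in enumerate(xs, 1):
        cum += v
        pts.append((i / n, cum / total))
    return pts

def gini(x):
    """Gini index via the sorted-rank formula
    G = 2 * sum(i * x_i) / (n * sum(x)) - (n + 1) / n."""
    xs = sorted(x)
    n = len(xs)
    return 2.0 * sum(i * v for i, v in enumerate(xs, 1)) / (n * sum(xs)) - (n + 1) / n
```

Under an equal-caps rule the Lorenz curve collapses onto the diagonal and the Gini index is zero, which is the benchmark the paper's comparison turns on.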
DEFF Research Database (Denmark)
Villanueva, Héctor; Gómez Arranz, Paula
This report describes the analysis carried out with data from a given turbine in a wind farm and a chosen period. The purpose of the analysis is to correlate the power output of the wind turbine to the wind speed measured by a nacelle-mounted anemometer. The measurements and analysis are not performed according to IEC 61400-12-1 [1]. Therefore, the results presented in this report cannot be considered a power curve according to the reference standard, and are referred to as a "power curve investigation" instead. The measurements have been performed by a customer and the data analysis has been…
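For contrast with the report's non-standard setup, the core of the IEC 61400-12-1 "method of bins" can be sketched as follows (0.5 m/s bins are the standard's choice; air-density normalisation and data filtering are omitted here):

```python
from collections import defaultdict

def power_curve_bins(samples, bin_width=0.5):
    """Method of bins: average measured power in wind-speed bins.
    samples: iterable of (wind_speed_m_s, power_kW) 10-min averages.
    Returns {bin centre: (mean wind speed, mean power)}."""
    acc = defaultdict(lambda: [0.0, 0.0, 0])  # bin index -> [sum_v, sum_p, count]
    for v, p in samples:
        b = int(v / bin_width)
        acc[b][0] += v
        acc[b][1] += p
        acc[b][2] += 1
    return {
        (b + 0.5) * bin_width: (sv / n, sp / n)
        for b, (sv, sp, n) in sorted(acc.items())
    }
```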
Standardization and validation of a novel and simple method to assess lumbar dural sac size
International Nuclear Information System (INIS)
Daniels, M.L.A.; Lowe, J.R.; Roy, P.; Patrone, M.V.; Conyers, J.M.; Fine, J.P.; Knowles, M.R.; Birchard, K.R.
2015-01-01
Aim: To develop and validate a simple, reproducible method to assess dural sac size using standard imaging technology. Materials and methods: This study was institutional review board-approved. Two readers, blinded to the diagnoses, measured anterior–posterior (AP) and transverse (TR) dural sac diameter (DSD), and AP vertebral body diameter (VBD) of the lumbar vertebrae using MRI images from 53 control patients with pre-existing MRI examinations, 19 prospectively MRI-imaged healthy controls, and 24 patients with Marfan syndrome with prior MRI or CT lumbar spine imaging. Statistical analysis utilized linear and logistic regression, Pearson correlation, and receiver operating characteristic (ROC) curves. Results: AP-DSD and TR-DSD measurements were reproducible between two readers (r = 0.91 and 0.87, respectively). DSD (L1–L5) was not different between male and female controls in the AP or TR plane (p = 0.43; p = 0.40, respectively), and did not vary by age (p = 0.62; p = 0.25) or height (p = 0.64; p = 0.32). AP-VBD was greater in males versus females (p = 1.5 × 10^−8), resulting in a smaller dural sac ratio (DSR) (DSD/VBD) in males (p = 5.8 × 10^−6). Marfan patients had larger AP-DSDs and TR-DSDs than controls (p = 5.9 × 10^−9; p = 6.5 × 10^−9, respectively). Compared to DSR, AP-DSD and TR-DSD better discriminate Marfan from control subjects based on area under the curve (AUC) values from unadjusted ROCs (AP-DSD p < 0.01; TR-DSD p = 0.04). Conclusion: Individual vertebrae and L1–L5 (average) AP-DSD and TR-DSD measurements are simple, reliable, and reproducible for quantitating dural sac size without needing to control for gender, age, or height. - Highlights: • DSD (L1–L5) does not differ in the AP or TR plane by gender, height, or age. • AP- and TR-DSD measures correlate well between readers with different experience. • Height is positively correlated to AP-VBD in both males and females. • Varying…
Kentel, E.; Dogulu, N.
2015-12-01
In Turkey the experience and data required for a hydrological model setup are limited and very often not available. Moreover, there are many ungauged catchments with many planned projects aimed at utilization of water resources, including development of the existing hydropower potential. This situation makes runoff prediction at locations with a lack of data, and at ungauged locations where small hydropower plants, reservoirs, etc. are planned, an increasingly significant challenge and concern in the country. Flow duration curves have many practical applications in hydrology and integrated water resources management. Estimation of the flow duration curve (FDC) at ungauged locations is essential, particularly for hydropower feasibility studies and selection of the installed capacities. In this study, we test and compare the performances of two methods for estimating FDCs in the Western Black Sea catchment, Turkey: (i) FDC based on Map Correlation Method (MCM) flow estimates. MCM is a recently proposed method (Archfield and Vogel, 2010) which uses geospatial information to estimate flow. Flow measurements of stream gauging stations near the ungauged location are the only data requirement for this method, which makes MCM very attractive for flow estimation in Turkey. (ii) The Adaptive Neuro-Fuzzy Inference System (ANFIS) is a data-driven method which is used to relate the FDC to a number of variables representing catchment and climate characteristics; its ease of implementation makes it very useful for practical purposes. Both methods use easily collectable data and are computationally efficient. Comparison of the results is realized based on two different measures: the root mean squared error (RMSE) and the Nash–Sutcliffe efficiency (NSE) value. Ref: Archfield, S. A., and R. M. Vogel (2010), Map correlation method: Selection of a reference streamgage to estimate daily streamflow at ungaged catchments, Water Resour. Res., 46, W10513, doi:10.1029/2009WR008481.
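The two comparison measures named above have standard closed forms, sketched here:

```python
import math

def rmse(obs, sim):
    """Root mean squared error between observed and simulated series."""
    return math.sqrt(sum((o - s) ** 2 for o, s in zip(obs, sim)) / len(obs))

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 - SSE / (variance of observations about
    their mean). 1 is a perfect fit; 0 means no better than the mean."""
    mean_obs = sum(obs) / len(obs)
    sse = sum((o - s) ** 2 for o, s in zip(obs, sim))
    denom = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - sse / denom
```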
Using commercial simulators for determining flash distillation curves for petroleum fractions
Directory of Open Access Journals (Sweden)
Eleonora Erdmann
2008-01-01
This work describes a new method for estimating the equilibrium flash vaporisation (EFV) distillation curve for petroleum fractions by using commercial simulators. A commercial simulator was used for implementing a stationary model for flash distillation; this model was adjusted by using a distillation curve obtained from standard laboratory analytical assays. Such a curve can be one of many types (e.g. ASTM D86, D1160 or D2887) and involves an experimental procedure simpler than that required for obtaining an EFV curve. Any commercial simulator able to model petroleum can be used for the simulation (HYSYS and CHEMCAD simulators were used here). Several types of petroleum and fractions were experimentally analysed for evaluating the proposed method; this data was then put into a process simulator (according to the proposed method) to estimate the corresponding EFV curves. HYSYS- and CHEMCAD-estimated curves were compared to those produced by two traditional estimation methods (Edmister's and Maxwell's methods). Simulation-estimated curves were close to the average Edmister and Maxwell curves in all cases. The proposed method has several advantages: it avoids the need for experimentally obtaining an EFV curve, it does not depend on the type of experimental curve used to fit the model, and it enables estimating several pressures by using just one experimental curve as data.
Directory of Open Access Journals (Sweden)
G. Nicoletto
2009-07-01
Fatigue design of welded structures is primarily based on nominal stress, hot-spot stress or local approaches, each of which has several limitations when coupled with finite element modeling. An alternative recent structural stress definition is discussed and implemented in a post-processor. It provides an effective means for the direct coupling of finite element results to the fatigue assessment of welded joints in complex structures. The applications presented in this work confirm the main features of the method: mesh-insensitivity, accurate crack location and life-to-failure predictions.
Gawart, Matthew; Dupitron, Sabine; Lutfi, Rami
2012-03-01
We aimed to evaluate our learning curve comparing surgical time of laparoendoscopic single-site (LESS) banding with multiport laparoscopy. We performed a retrospective analysis of prospectively collected data comparing our first 48 LESS bands with our first 50 multiport laparoscopic bands at our institution. We then compared the first 24 LESS bands with the last 24 bands. The average body mass index for the LESS group was significantly lower than for the laparoscopic group (43.19 vs 48.3; P < .0001). The surgical time was much faster toward the second half of our experience performing the LESS procedure (85.34 vs 68.8; P = .0055). LESS banding took significantly longer than our early traditional laparoscopic adjustable gastric banding (76.85 vs 64.4; P = .0015). We conclude that in experienced hands, single-incision banding is feasible and safe to perform. Long-term data are needed to prove that LESS banding is as good a surgery as traditional laparoscopic surgery. Copyright © 2012 Elsevier Inc. All rights reserved.
Validation of the Standard Method for Assessing Flicker From Wind Turbines
DEFF Research Database (Denmark)
Barahona Garzon, Braulio; Sørensen, Poul Ejnar; Christensen, L.
2011-01-01
This paper studies the validity of the standard method in IEC 61400-21 for assessing the flicker emission from multiple wind turbines. The standard method is based on testing a single wind turbine and then using the results of this test to assess the flicker emission from a number of wind turbines. … the flicker emission at the collection line; this assessment is then compared to the actual measurements in order to study the accuracy of the estimation. It was observed in both wind farms that the assessment based on the standard method is statistically conservative compared to the measurements. The reason for this is the statistical characteristics of flicker emission.
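The single-turbine-to-farm aggregation referred to above uses the quadratic summation commonly stated in IEC 61400-21 for continuous operation; a hedged sketch (treat the exact normalisation as an assumption, and note that the flicker coefficient c_i in the standard depends on the grid impedance angle and the mean wind speed):

```python
import math

def pst_wind_farm(flicker_terms, s_k):
    """Short-term flicker P_st at the point of common coupling from N turbines:
    (1/S_k) * sqrt(sum((c_i * S_n_i)^2)), where c_i is turbine i's flicker
    coefficient, S_n_i its rated apparent power, and s_k the short-circuit
    apparent power of the grid (all apparent powers in the same units)."""
    return math.sqrt(sum((c * s_n) ** 2 for c, s_n in flicker_terms)) / s_k
```

With identical turbines the farm emission grows only as sqrt(N), which is one reason per-turbine test results can be extrapolated conservatively.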
Reid, J. C.; Seibert, Warren F.
The analysis of previously obtained data concerning short-term visual memory and cognition by a method suggested by Tucker is proposed. Although interesting individual differences undoubtedly exist in people's ability and capacity to process short-term visual information, studies have not generally examined these differences. In fact, conventional…
Laffosse, Jean-Michel; Chiron, Philippe; Accadbled, Franck; Molinier, François; Tricoire, Jean-Louis; Puget, Jean
2006-12-01
We analysed the learning curve of an anterolateral minimally invasive (ALMI) approach for primary total hip replacement (THR). The first 42 THR's with large-diameter heads implanted through this approach (group 1) were compared to a cohort of 58 THR's with a 28-mm head performed through a standard-incision posterior approach (group 2). No selection was made and the groups were comparable. Implant positioning as well as early clinical results were satisfactory and were comparable in the two groups. In group 1, the rate of intraoperative complications was significantly higher (greater trochanter fracture in 4 cases, cortical perforation in 3 cases, calcar fracture in one case, nerve palsy in one case, secondary tilting of the metal back in 2 cases) than in group 2 (one nerve palsy and one calcar crack). At 6 months, one revision of the acetabular cup was performed in group 1 for persistent pain, whereas in group 2, we noted 3 dislocations (2 were revised) and 2 periprosthetic femoral fractures. Our study showed a high rate of intra- and perioperative complications during the learning curve for an ALMI approach. These are more likely to occur in obese or osteoporotic patients, and in those with bulky muscles or very stiff hips. Postoperative complications were rare. The early clinical results are excellent and we may expect to achieve better results with a more standardised procedure. During the initial period of the learning curve, it would be preferable to select patients with an appropriate morphology.
Malau-Aduli, Bunmi Sherifat; Teague, Peta-Ann; D'Souza, Karen; Heal, Clare; Turner, Richard; Garne, David L; van der Vleuten, Cees
2017-12-01
A key issue underpinning the usefulness of the OSCE assessment to medical education is standard setting, but the majority of standard-setting methods remain challenging for performance assessment because they produce varying passing marks. Several studies have compared standard-setting methods; however, most of these studies are limited by their experimental scope, or use data on examinee performance at a single OSCE station or from a single medical school. This collaborative study between 10 Australian medical schools investigated the effect of standard-setting methods on OSCE cut scores and failure rates. This research used 5256 examinee scores from seven shared OSCE stations to calculate cut scores and failure rates using two different compromise standard-setting methods, namely the Borderline Regression and Cohen's methods. The results of this study indicate that Cohen's method yields similar outcomes to the Borderline Regression method, particularly for large examinee cohort sizes. However, with lower examinee numbers on a station, the Borderline Regression method resulted in higher cut scores and larger difference margins in the failure rates. Cohen's method yields similar outcomes to the Borderline Regression method, and its application for benchmarking purposes and in resource-limited settings is justifiable, particularly with large examinee numbers.
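The two compromise methods compared above can be sketched in miniature. The borderline-regression cut is the OLS prediction of checklist score at the "borderline" global rating; the Cohen-style cut takes a fixed fraction of a high-percentile score (60% of the 95th percentile is one commonly used variant, assumed here for illustration):

```python
def borderline_regression_cut(global_ratings, checklist_scores, borderline=2.0):
    """OLS regression of checklist score on examiner global rating; the cut
    score is the predicted checklist score at the 'borderline' rating."""
    n = len(global_ratings)
    mx = sum(global_ratings) / n
    my = sum(checklist_scores) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(global_ratings, checklist_scores))
    sxx = sum((x - mx) ** 2 for x in global_ratings)
    slope = sxy / sxx
    return my + slope * (borderline - mx)

def cohen_cut(scores, fraction=0.60):
    """Cohen-style cut score: a fixed fraction of the 95th-percentile score
    (nearest-rank percentile used here for simplicity)."""
    xs = sorted(scores)
    p95 = xs[min(len(xs) - 1, int(round(0.95 * (len(xs) - 1))))]
    return fraction * p95
```

Note how the borderline-regression cut depends on the cohort's rating–score relationship, which is why small cohorts can move it, while the Cohen cut depends only on one percentile of the score distribution.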
Larrechi, M S; Rius, F X
2004-01-01
When applied to near-infrared (NIR) data, multivariate curve resolution methods, in particular alternating least squares (ALS), make it possible to calculate the concentration profiles and the spectra of all species involved in the reaction of curing epoxy resins. In this paper, the model reaction between phenyl glycidyl ether and aniline (2:1) was studied at 95 degrees C. A NIR spectrum was recorded every five minutes throughout the eight-hour reaction process. The data display rank deficiency. This problem was overcome by supplying additional information to the system in the form of known spectra of some reactants. The recovered spectra and concentration profiles satisfactorily reproduced the experimental data. In this way, 99.99% of the variance associated with the experimental matrix was reproduced. A value of 0.87% was obtained for lack of fit, while the similarity coefficients r between the recovered spectra and the spectra corresponding to the three pure species involved in the reaction were PGE (r = 0.994), aniline (r = 0.994), and tertiary amine (r = 0.999). The maximum and minimum limits associated with the ALS solutions were calculated, which made it possible to limit to a considerable extent the ambiguity that is characteristic of these curve resolution methods.
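One half-step of the ALS scheme described above, solving for concentration profiles given (known) spectra, can be sketched for a two-component system via the normal equations C_t = D_t Sᵀ (S Sᵀ)⁻¹ (synthetic data; constraints such as non-negativity, which real MCR-ALS applies, are omitted):

```python
def solve_concentrations(D, S):
    """Least-squares concentrations for D ~= C @ S, with S a 2 x n_wavelengths
    matrix of known spectra and D a n_times x n_wavelengths data matrix.
    Solved row by row with the 2x2 Gram matrix G = S S^T."""
    g = [[sum(a * b for a, b in zip(S[i], S[j])) for j in range(2)] for i in range(2)]
    det = g[0][0] * g[1][1] - g[0][1] * g[1][0]
    inv = [[g[1][1] / det, -g[0][1] / det],
           [-g[1][0] / det, g[0][0] / det]]
    C = []
    for row in D:
        b = [sum(r * s for r, s in zip(row, S[i])) for i in range(2)]  # D_t S^T
        C.append([b[0] * inv[0][0] + b[1] * inv[1][0],
                  b[0] * inv[0][1] + b[1] * inv[1][1]])
    return C
```

The full algorithm alternates this step with the symmetric solve for S given C until the lack of fit stops decreasing.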
An automatic method to analyze the Capacity-Voltage and Current-Voltage curves of a sensor
AUTHOR|(CDS)2261553
2017-01-01
An automatic method to perform capacitance-versus-voltage analysis for all kinds of silicon sensors is provided. It successfully calculates the depletion voltage for unirradiated and irradiated sensors, and for measurements with outliers or measurements reaching breakdown. It is built using C++ and ROOT trees, with a skeleton analogous to TRICS, in which the data as well as the results of the fits are saved to allow further analysis.
International Nuclear Information System (INIS)
Crevoisier, D.; Voltz, M.; Chanzy, A.
2009-01-01
Ross [Ross PJ. Modeling soil water and solute transport - fast, simplified numerical solutions. Agron J 2003;95:1352-61] developed a fast, simplified method for solving Richards' equation. This non-iterative 1D approach, using the Brooks and Corey [Brooks RH, Corey AT. Hydraulic properties of porous media. Hydrol. papers, Colorado St. Univ., Fort Collins; 1964] hydraulic functions, allows a significant reduction in computing time while maintaining the accuracy of the results. The first aim of this work is to confirm these results on a more extensive set of problems, including those that would lead to serious numerical difficulties for the standard numerical method. The second aim is to validate a generalisation of the Ross method to other mathematical representations of the hydraulic functions. The Ross method is compared with the standard finite element model, Hydrus-1D [Simunek J, Sejna M, Van Genuchten MTh. The HYDRUS-1D and HYDRUS-2D codes for estimating unsaturated soil hydraulic and solute transport parameters. Agron Abstr 357; 1999]. Computing time, accuracy of results and robustness of the numerical schemes are monitored in 1D simulations involving different types of homogeneous soils, grids and hydrological conditions. The Ross method associated with modified Van Genuchten hydraulic functions [Vogel T, Cislerova M. On the reliability of unsaturated hydraulic conductivity calculated from the moisture retention curve. Transport Porous Media 1988;3:1-15] proved more numerically robust in every tested scenario, and the computing time/accuracy compromise is particularly improved on coarse grids. The Ross method ran from 1.25 to 14 times faster than Hydrus-1D. (authors)
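The two families of hydraulic functions named in the abstract have standard closed forms, sketched here (sign conventions and the illustrative parameter values are assumptions; real soils need fitted parameters):

```python
def van_genuchten_theta(h, theta_r, theta_s, alpha, n):
    """Van Genuchten water retention curve: effective saturation
    Se = (1 + (alpha*|h|)^n)^(-m) with m = 1 - 1/n, for pressure head h < 0;
    saturated (theta_s) at h >= 0."""
    if h >= 0.0:
        return theta_s
    m = 1.0 - 1.0 / n
    se = (1.0 + (alpha * abs(h)) ** n) ** (-m)
    return theta_r + se * (theta_s - theta_r)

def brooks_corey_k(h, k_s, h_b, lam):
    """Brooks-Corey hydraulic conductivity: K(h) = K_s * (h_b/h)^(2 + 3*lam)
    for h < h_b (both negative, h_b the bubbling pressure head), else K_s;
    lam is the pore-size distribution index."""
    if h >= h_b:
        return k_s
    return k_s * (h_b / h) ** (2.0 + 3.0 * lam)
```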
40 CFR 1043.50 - Approval of methods to meet Tier 1 retrofit NOX standards.
2010-07-01
... retrofit NOX standards. 1043.50 Section 1043.50 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR POLLUTION CONTROLS CONTROL OF NOX, SOX, AND PM EMISSIONS FROM MARINE ENGINES AND VESSELS SUBJECT TO THE MARPOL PROTOCOL § 1043.50 Approval of methods to meet Tier 1 retrofit NOX standards...
Developing content standards for teaching research skills using a delphi method
Schaaf, M.F. van der; Stokking, K.M.; Verloop, N.
2005-01-01
The increased attention for teacher assessment and current educational reforms ask for procedures to develop adequate content standards. For the development of content standards on teaching research skills, a Delphi method based on stakeholders’ judgments has been designed and tested. In three
Analysis of a non-standard mixed finite element method with applications to superconvergence
Brandts, J.H.
2009-01-01
We show that a non-standard mixed finite element method proposed by Barrios and Gatica in 2007, is a higher order perturbation of the least-squares mixed finite element method. Therefore, it is also superconvergent whenever the least-squares mixed finite element method is superconvergent.
Combining the Best of Two Standard Setting Methods: The Ordered Item Booklet Angoff
Smith, Russell W.; Davis-Becker, Susan L.; O'Leary, Lisa S.
2014-01-01
This article describes a hybrid standard setting method that combines characteristics of the Angoff (1971) and Bookmark (Mitzel, Lewis, Patz & Green, 2001) methods. The proposed approach utilizes strengths of each method while addressing weaknesses. An ordered item booklet, with items sorted based on item difficulty, is used in combination…
Lagrangian Curves on Spectral Curves of Monopoles
International Nuclear Information System (INIS)
Guilfoyle, Brendan; Khalid, Madeeha; Ramon Mari, Jose J.
2010-01-01
We study Lagrangian points on smooth holomorphic curves in TP¹ equipped with a natural neutral Kähler structure, and prove that they must form real curves. By virtue of the identification of TP¹ with the space L(E³) of oriented affine lines in Euclidean 3-space E³, these Lagrangian curves give rise to ruled surfaces in E³, which we prove have zero Gauss curvature. Each ruled surface is shown to be the tangent lines to a curve in E³, called the edge of regression of the ruled surface. We give an alternative characterization of these curves as the points in E³ where the number of oriented lines in the complex curve Σ that pass through the point is less than the degree of Σ. We then apply these results to the spectral curves of certain monopoles and construct the ruled surfaces and edges of regression generated by the Lagrangian curves.
Directory of Open Access Journals (Sweden)
A. Sommella
2013-09-01
Full Text Available Within the framework of a research project examining the spatial variability of hydraulic characteristics of soil intended for irrigation, some of the more frequently used analytical expressions describing the laws linking diffusivity D to the water content θ of the soil were verified. By studying the flow field of soil samples tested in the laboratory, under one-dimensional wetting and drying cycles, it has been found that hydraulic diffusivity laws of the exponential type can be ascribed to them. Finally, a simplified laboratory method was proposed which, with the aid of nomographs, allows the definition of the law D(θ) to be easily arrived at.
Directory of Open Access Journals (Sweden)
Musa Atiyyah Binti Haji
2018-03-01
Full Text Available Fabric Touch Tester (FTT is a relatively new device from SDL Atlas to determine touch properties of fabrics. It simultaneously measures 13 touch-related fabric physical properties in four modules that include bending and thickness measurements. This study aims to comparatively analyze the thickness and bending measurements made by the FTT and the common standard methods used in the textile industry. The results obtained with the FTT for 11 different fabrics were compared with that of standard methods. Despite the different measurement principle, a good correlation was found between the two methods used for the assessment of thickness and bending. As FTT is a new tool for textile comfort measurement and no standard yet exists, these findings are essential to determine the reliability of the measurements and how they relate to the well-established standard methods.
American Society for Testing and Materials. Philadelphia
2011-01-01
1.1 This test method is applicable to the determination of uranium in urine at levels of detection dependent on sample size, count time, detector background, and tracer yield. It is designed as a screening tool for detection of possible exposure of occupational workers. 1.2 This test method is designed for 50 mL of urine. This test method does not address the sampling protocol or sample preservation methods associated with its use. 1.3 The values stated in SI units are to be regarded as standard. No other units of measurement are included in this standard. 1.4 This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibility of the user of this standard to establish appropriate safety and health practices and determine the applicability of regulatory limitations prior to use.
Current federal regulations require monitoring for fecal coliforms or Salmonella in biosolids destined for land application. Methods used for analysis of fecal coliforms and Salmonella were reviewed and a standard protocol was developed. The protocols were then evaluated by testi...
Curves from Motion, Motion from Curves
2000-01-01
tautochrone and brachistochrone properties. To Descartes, however, the rectification of curves such as the spiral (3) and the cycloid (4) was suspect... UNCLASSIFIED Defense Technical Information Center Compilation Part Notice ADP012017. Approved for public release, distribution unlimited. This paper is part of the following report: International Conference on Curves and Surfaces [4th]
International Nuclear Information System (INIS)
Gao Jinsheng; Zheng Siying; Cai Feng
1993-08-01
The cytokinesis-block micronucleus technique has been proposed as a new method to measure chromosome damage in cytogenetics. Cytokinesis is blocked by using cytochalasin B (Cyt-B), and micronuclei are scored in cytokinesis-blocked (CB) cells. This can easily be done owing to the appearance of binucleate cells, which accumulate in large numbers when 3.0 μg/ml cytochalasin B is added at 44 hours and scoring is done at 72 hours. The results show that the optimum concentration of Cyt-B is 3.0 μg/ml and that Cyt-B itself does not induce an increase of micronuclei. Above the micronucleus frequency of normal individuals in vivo, there is an approximately linear relationship between the frequency of induced micronuclei and irradiation dose: Y = 0.36 D + 2.74 (γ² = 0.995, P < 0.01). Because the cytokinesis-block method is simple and reliable, it is effective for assaying chromosome damage caused by genotoxic materials
Chun, Sehun
2017-07-01
Applying the method of moving frames to Maxwell's equations yields two important advancements for scientific computing. The first is the use of upwind flux for anisotropic materials in Maxwell's equations, especially in the context of discontinuous Galerkin (DG) methods. Upwind flux has been available only to isotropic material, because of the difficulty of satisfying the Rankine-Hugoniot conditions in anisotropic media. The second is to solve numerically Maxwell's equations on curved surfaces without the metric tensor and composite meshes. For numerical validation, spectral convergences are displayed for both two-dimensional anisotropic media and isotropic spheres. In the first application, invisible two-dimensional metamaterial cloaks are simulated with a relatively coarse mesh by both the lossless Drude model and the piecewisely-parametered layered model. In the second application, extremely low frequency propagation on various surfaces such as spheres, irregular surfaces, and non-convex surfaces is demonstrated.
Energy Technology Data Exchange (ETDEWEB)
Sokolov, Mikhail A. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)
2017-05-01
Small specimens are playing the key role in evaluating properties of irradiated materials. The use of small specimens provides several advantages. Typically, only a small volume of material can be irradiated in a reactor at desirable conditions in terms of temperature, neutron flux, and neutron dose. A small volume of irradiated material may also allow for easier handling of specimens. Smaller specimens reduce the amount of radioactive material, minimizing personnel exposures and waste disposal. However, use of small specimens imposes a variety of challenges as well. These challenges are associated with proper accounting for size effects and transferability of small specimen data to the real structures of interest. Any fracture toughness specimen that can be made out of the broken halves of standard Charpy specimens may have exceptional utility for evaluation of reactor pressure vessels (RPVs) since it would allow one to determine and monitor directly actual fracture toughness instead of requiring indirect predictions using correlations established with impact data. The Charpy V-notch specimen is the most commonly used specimen geometry in surveillance programs. Validation of the mini compact tension specimen (mini-CT) geometry has been performed on previously well characterized Midland beltline Linde 80 (WF-70) weld in the unirradiated condition. It was shown that the fracture toughness transition temperature, To, measured by these Mini-CT specimens is almost the same as To value that was derived from various larger fracture toughness specimens. Moreover, an International collaborative program has been established to extend the assessment and validation efforts to irradiated Linde 80 weld metal. The program is underway and involves the Oak Ridge National Laboratory (ORNL), Central Research Institute for Electrical Power Industry (CRIEPI), and Electric Power Research Institute (EPRI). The irradiated Mini-CT specimens from broken halves of previously tested Charpy
Using commercial simulators for determining flash distillation curves for petroleum fractions
Eleonora Erdmann; Demetrio Humana; Samuel Franco Domínguez; Lorgio Mercado Fuentes
2010-01-01
This work describes a new method for estimating the equilibrium flash vaporisation (EFV) distillation curve for petroleum fractions by using commercial simulators. A commercial simulator was used for implementing a stationary model for flash distillation; this model was adjusted by using a distillation curve obtained from standard laboratory analytical assays. Such a curve can be one of many types (e.g. ASTM D86, D1160 or D2887) and involves an experimental procedure simpler than that required...
A Standardized Method for 4D Ultrasound-Guided Peripheral Nerve Blockade and Catheter Placement
Directory of Open Access Journals (Sweden)
N. J. Clendenen
2014-01-01
Full Text Available We present a standardized method for using four-dimensional ultrasound (4D US guidance for peripheral nerve blocks. 4D US allows for needle tracking in multiple planes simultaneously and accurate measurement of the local anesthetic volume surrounding the nerve following injection. Additionally, the morphology and proximity of local anesthetic spread around the target nerve is clearly seen with the described technique. This method provides additional spatial information in real time compared to standard two-dimensional ultrasound.
Standard Test Method for Measuring Heat Flux Using a Water-Cooled Calorimeter
American Society for Testing and Materials. Philadelphia
2005-01-01
1.1 This test method covers the measurement of a steady heat flux to a given water-cooled surface by means of a system energy balance. 1.2 The values stated in SI units are to be regarded as standard. No other units of measurement are included in this standard. 1.3 This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibility of the user of this standard to establish appropriate safety and health practices and determine the applicability of regulatory limitations prior to use.
Standard test method for uranium analysis in natural and waste water by X-ray fluorescence
American Society for Testing and Materials. Philadelphia
2004-01-01
1.1 This test method applies for the determination of trace uranium content in waste water. It covers concentrations of U between 0.05 mg/L and 2 mg/L. 1.2 The values stated in SI units are to be regarded as standard. No other units of measurement are included in this standard. 1.3 This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibility of the user of this standard to establish appropriate safety and health practices and determine the applicability of regulatory limitations prior to use.
Energy Technology Data Exchange (ETDEWEB)
Mueller, Martin; /SLAC
2010-12-16
The study of the power density spectrum (PDS) of fluctuations in the X-ray flux from active galactic nuclei (AGN) complements spectral studies in giving us a view into the processes operating in accreting compact objects. An important line of investigation is the comparison of the PDS from AGN with those from galactic black hole binaries; a related area of focus is the scaling relation between time scales for the variability and the black hole mass. The PDS of AGN is traditionally modeled using segments of power laws joined together at so-called break frequencies; associations of the break time scales, i.e., the inverses of the break frequencies, with time scales of physical processes thought to operate in these sources are then sought. I analyze the Method of Light Curve Simulations that is commonly used to characterize the PDS in AGN with a view to making the method as sensitive as possible to the shape of the PDS. I identify several weaknesses in the current implementation of the method and propose alternatives that can substitute for some of the key steps in the method. I focus on the complications introduced by uneven sampling in the light curve, the development of a fit statistic that is better matched to the distributions of power in the PDS, and the statistical evaluation of the fit between the observed data and the model for the PDS. Using archival data on one AGN, NGC 3516, I validate my changes against previously reported results. I also report new results on the PDS in NGC 4945, a Seyfert 2 galaxy with a well-determined black hole mass. This source provides an opportunity to investigate whether the PDS of Seyfert 1 and Seyfert 2 galaxies differ. It is also an attractive object for placement on the black hole mass-break time scale relation. Unfortunately, with the available data on NGC 4945, significant uncertainties on the break frequency in its PDS remain.
Directory of Open Access Journals (Sweden)
Carly A Conran
2016-01-01
Full Text Available Several different approaches are available to clinicians for determining prostate cancer (PCa) risk. The clinical validity of various PCa risk assessment methods utilizing single nucleotide polymorphisms (SNPs) has been established; however, these SNP-based methods have not been compared. The objective of this study was to compare the three most commonly used SNP-based methods for PCa risk assessment. Participants were men (n = 1654) enrolled in a prospective study of PCa development. Genotypes of 59 PCa risk-associated SNPs were available in this cohort. Three methods of calculating SNP-based genetic risk scores (GRSs) were used for the evaluation of individual disease risk: risk allele count (GRS-RAC), weighted risk allele count (GRS-wRAC), and population-standardized genetic risk score (GRS-PS). Mean GRSs were calculated, and performances were compared using area under the receiver operating characteristic curve (AUC) and positive predictive value (PPV). All SNP-based methods were found to be independently associated with PCa (all P < 0.05). AUCs did not differ significantly (all P > 0.05 for comparisons between the three methods), and all three SNP-based methods had a significantly higher AUC than family history (all P < 0.05). Results from this study suggest that while the three most commonly used SNP-based methods performed similarly in discriminating PCa from non-PCa at the population level, GRS-PS is the method of choice for risk assessment at the individual level because its value (where 1.0 represents average population risk) can be easily interpreted regardless of the number of risk-associated SNPs used in the calculation.
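Computationally, the three scores differ only in how each SNP contributes. A sketch following the usual definitions of these scores (the odds ratios and allele frequencies below are invented, and the population standardization assumes Hardy-Weinberg equilibrium and a multiplicative allele model):

```python
import math

def grs_scores(genotypes, ors, freqs):
    """Three SNP-based genetic risk scores for one individual.

    genotypes : risk-allele counts per SNP (0, 1 or 2)
    ors       : per-allele odds ratios
    freqs     : population risk-allele frequencies (for standardization)
    Returns (GRS-RAC, GRS-wRAC, GRS-PS); GRS-PS near 1.0 means average risk.
    """
    rac = sum(genotypes)                                   # plain allele count
    wrac = sum(g * math.log(o) for g, o in zip(genotypes, ors))
    ps = 1.0
    for g, o, p in zip(genotypes, ors, freqs):
        # mean relative risk of this SNP in the population under HWE
        mean_rr = (1 - p) ** 2 + 2 * p * (1 - p) * o + p ** 2 * o ** 2
        ps *= o ** g / mean_rr
    return rac, wrac, ps
```

With all odds ratios equal to 1 the weighted score is 0 and the standardized score is 1, which is the "average population risk" reading the abstract highlights.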
American Society for Testing and Materials. Philadelphia
1995-01-01
1.1 This test method covers the interferometric determination of linear thermal expansion of premelted glaze frits and fired ceramic whiteware materials at temperatures lower than 1000°C (1830°F). 1.2 This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibility of the user of this standard to establish appropriate safety and health practices and determine the applicability of regulatory limitations prior to use.
International Nuclear Information System (INIS)
Cacais, F.L.; Delgado, J.U.; Loayza, V.M.
2016-01-01
In preparing solutions for the production of radionuclide metrology standards, it is necessary to measure the quantity activity by mass. The gravimetric method by elimination is applied to perform weighings with smaller uncertainties. In this work the validation, by the Monte Carlo method, of the uncertainty calculation approach implemented by Lourenco and Bobin according to the ISO GUM for the method by elimination is carried out. The results obtained by both uncertainty calculation methods were consistent, indicating that the conditions for the application of the ISO GUM in the preparation of radioactive standards were fulfilled. (author)
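The comparison logic can be sketched for a simple activity-per-mass model y = a/m: propagate uncertainty once with the first-order GUM formula and once by Monte Carlo, then check agreement. The numbers are invented for illustration and the model is far simpler than the elimination-weighing budget in the paper:

```python
import math
import random

random.seed(1)

a, u_a = 5000.0, 25.0      # activity (Bq) and its standard uncertainty (invented)
m, u_m = 0.9800, 0.0005    # solution mass (g) and its uncertainty (invented)

# GUM first-order propagation for y = a / m
y = a / m
u_gum = y * math.sqrt((u_a / a) ** 2 + (u_m / m) ** 2)

# Monte Carlo propagation of the same measurement model
ys = [random.gauss(a, u_a) / random.gauss(m, u_m) for _ in range(200_000)]
mean = sum(ys) / len(ys)
u_mc = math.sqrt(sum((v - mean) ** 2 for v in ys) / (len(ys) - 1))
```

For a nearly linear model with small relative uncertainties the two results agree closely, which is the "consistent" outcome the abstract reports; large or asymmetric uncertainties are where the Monte Carlo check earns its keep.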
Directory of Open Access Journals (Sweden)
WANG Yong
2016-05-01
Full Text Available As points of interest (POI) on the internet widely exhibit incomplete addresses and inconsistent literal expressions, a fast standardization processing method for network POI address information based on spatial constraints is proposed. Based on a model of extensible address expression, the address information of a POI is first segmented and extracted. Address elements are updated by matching against the address tree layer by layer. Then, by defining four types of positional relations, corresponding sets are selected from the standard POI library as candidates for the enrichment and amendment of non-standard addresses. Finally, fast standardized processing of POI address information is achieved with the help of backtracking address elements with minimum granularity. Experiments in this paper proved that address standardization can be realized by means of this method with higher accuracy, in order to build the address database.
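The layer-by-layer matching step can be sketched as a walk down an address tree; the tree contents and the substring-matching rule below are invented placeholders, not the paper's standard address library:

```python
# Hypothetical standard address tree: province -> city/district -> street
TREE = {"Beijing": {"Haidian": {"Zhongguancun St": {}}},
        "Jiangsu": {"Nanjing": {"Gulou District": {}}}}

def standardize(elements):
    """Match segmented address elements against the tree layer by layer.

    Returns the standardized chain of matched elements and any
    unmatched remainder (candidates for amendment).
    """
    node, chain = TREE, []
    for el in elements:
        # crude matching rule: element and standard name contain one another
        hit = next((k for k in node if el in k or k in el), None)
        if hit is None:
            break
        chain.append(hit)
        node = node[hit]
    return chain, elements[len(chain):]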
Standard Test Method for Gel Time of Carbon Fiber-Epoxy Prepreg
American Society for Testing and Materials. Philadelphia
1999-01-01
1.1 This test method covers the determination of gel time of carbon fiber-epoxy tape and sheet. The test method is suitable for the measurement of gel time of resin systems having either high or low viscosity. 1.2 The values stated in SI units are to be regarded as standard. The values in parentheses are for reference only. 1.3 This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibility of the user of this standard to establish appropriate safety and health practices and determine the applicability of regulatory limitations prior to use.
Standard test methods for conducting time-for-rupture notch tension tests of materials
American Society for Testing and Materials. Philadelphia
2009-01-01
1.1 These test methods cover the determination of the time for rupture of notched specimens under conditions of constant load and temperature. These test methods also include the essential requirements for testing equipment. 1.2 The values stated in inch-pound units are to be regarded as the standard. The units in parentheses are for information only. 1.3 This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibility of the user of this standard to establish appropriate safety and health practices and determine the applicability of regulatory limitations prior to use.
The standard deviation method: data analysis by classical means and by neural networks
International Nuclear Information System (INIS)
Bugmann, G.; Stockar, U. von; Lister, J.B.
1989-08-01
The Standard Deviation Method is a method for determining particle size which can be used, for instance, to determine air-bubble sizes in a fermentation bio-reactor. The transmission coefficient of an ultrasound beam through a gassy liquid is measured repetitively. Due to the displacements and random positions of the bubbles, the measurements show a scatter whose standard deviation is dependent on the bubble-size. The precise relationship between the measured standard deviation, the transmission and the particle size has been obtained from a set of computer-simulated data. (author) 9 figs., 5 refs
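The final step of such a method, turning a measured standard deviation back into a particle size, amounts to inverting a calibration curve. A sketch with an invented calibration table (real values would come from the computer-simulated relationship the abstract describes):

```python
from bisect import bisect_left

# Hypothetical calibration from simulation: (std of transmission, bubble diameter mm)
CAL = [(0.01, 0.2), (0.03, 0.5), (0.06, 1.0), (0.10, 2.0)]

def bubble_size(measured_std):
    """Invert the (size -> expected std) calibration by linear interpolation;
    values outside the table are clamped to its ends."""
    stds = [s for s, _ in CAL]
    i = bisect_left(stds, measured_std)
    if i == 0:
        return CAL[0][1]
    if i == len(CAL):
        return CAL[-1][1]
    (s0, d0), (s1, d1) = CAL[i - 1], CAL[i]
    t = (measured_std - s0) / (s1 - s0)
    return d0 + t * (d1 - d0)
```

In practice the measured standard deviation would itself come from many repeated transmission measurements, as described above.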
Jirschitzka, Jens; Kimmerle, Joachim; Cress, Ulrike
2016-01-01
In four studies we tested a new methodological approach to the investigation of evaluation bias. The usage of piecewise growth curve modeling allowed for investigation into the impact of people's attitudes on their persuasiveness ratings of pro- and con-arguments, measured over the whole range of the arguments' polarity from an extreme con to an extreme pro position. Moreover, this method provided the opportunity to test specific hypotheses about the course of the evaluation bias within certain polarity ranges. We conducted two field studies with users of an existing online information portal (Studies 1a and 2a) as participants, and two Internet laboratory studies with mostly student participants (Studies 1b and 2b). In each of these studies we presented pro- and con-arguments, either for the topic of MOOCs (massive open online courses, Studies 1a and 1b) or for the topic of M-learning (mobile learning, Studies 2a and 2b). Our results indicate that using piecewise growth curve models is more appropriate than simpler approaches. An important finding of our studies was an asymmetry of the evaluation bias toward pro- or con-arguments: the evaluation bias appeared over the whole polarity range of pro-arguments and increased with more and more extreme polarity. This clear-cut result pattern appeared only on the pro-argument side. For the con-arguments, in contrast, the evaluation bias did not feature such a systematic picture.
Energy Technology Data Exchange (ETDEWEB)
Cardoso, Vanderlei
2002-07-01
The present work describes a few methodologies developed for fitting efficiency curves obtained by means of an HPGe gamma-ray spectrometer. The interpolated values were determined by simple polynomial fitting and by polynomial fitting of the ratio of experimental peak efficiency to total efficiency, calculated by the Monte Carlo technique, as a function of gamma-ray energy. Moreover, non-linear fitting has been performed using a segmented polynomial function and applying the Gauss-Marquardt method. For obtaining the peak area, different methodologies were developed to estimate the background area under the peak. This information was obtained by numerical integration or by using analytical functions associated with the background. One non-calibrated radioactive source has been included in the efficiency curve in order to provide additional calibration points. As a by-product, it was possible to determine the activity of this non-calibrated source. For all fittings developed in the present work the covariance matrix methodology was used, which is an essential procedure in order to give a complete description of the partial uncertainties involved. (author)
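The core of the covariance bookkeeping, a polynomial fit of ln(efficiency) versus ln(energy) whose coefficient covariance matrix propagates to any interpolated point, can be sketched with NumPy; the calibration points below are invented, not from this work:

```python
import numpy as np

# Illustrative calibration points: energy (keV) and measured peak efficiency
energy = np.array([122., 245., 344., 662., 779., 964., 1112., 1408.])
eff = np.array([0.012, 0.0085, 0.0068, 0.0042, 0.0037, 0.0031, 0.0028, 0.0023])

# Fit ln(eff) as a polynomial in ln(E); cov=True also returns the
# covariance matrix of the fitted coefficients.
coef, cov = np.polyfit(np.log(energy), np.log(eff), deg=2, cov=True)

def efficiency(e):
    return np.exp(np.polyval(coef, np.log(e)))

def efficiency_unc(e):
    """Propagate the coefficient covariance to the interpolated efficiency."""
    x = np.log(e)
    g = np.array([x**2, x, 1.0])      # gradient of ln(eff) w.r.t. coefficients
    var_ln = g @ cov @ g
    return efficiency(e) * np.sqrt(var_ln)
```

The same covariance-matrix recipe extends to the segmented polynomial and ratio fits described in the abstract.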
Color Standardization Method and System for Whole Slide Imaging Based on Spectral Sensing
Directory of Open Access Journals (Sweden)
Shinsuke Tani
2012-01-01
Full Text Available In the field of whole slide imaging, the imaging device or staining process causes color variations from slide to slide that affect the results of image analysis made by pathologists. In order to stabilize the analysis, we developed a color standardization method and system as described below: (1) a color standardization method based on RGB imaging and multispectral sensing, which utilizes fewer bands (16 bands) than the conventional method (60 bands); (2) a high-speed spectral sensing module. As a result, we confirmed the following effects: (1) we confirmed the performance improvement of nucleus detection by the color standardization, and it can be conducted without the training data set needed in the conventional method; (2) we obtained detection performance for the H&E component equivalent to the conventional method (60 bands), with a measurement process more than 255 times faster.
Besley, Aiken; Vijver, Martina G; Behrens, Paul; Bosker, Thijs
2017-01-15
Microplastics are ubiquitous in the environment, are frequently ingested by organisms, and may potentially cause harm. A range of studies have found significant levels of microplastics in beach sand. However, there is a considerable amount of methodological variability among these studies. Methodological variation currently limits comparisons as there is no standard procedure for sampling or extraction of microplastics. We identify key sampling and extraction procedures across the literature through a detailed review. We find that sampling depth, sampling location, number of repeat extractions, and settling times are the critical parameters of variation. Next, using a case-study we determine whether and to what extent these differences impact study outcomes. By investigating the common practices identified in the literature with the case-study, we provide a standard operating procedure for sampling and extracting microplastics from beach sand. Copyright © 2016 Elsevier Ltd. All rights reserved.
International Nuclear Information System (INIS)
Stepanov, A.V.; Stepanov, D.A.; Nikitina, S.A.; Gogoleva, T.D.; Grigor'eva, M.G.; Bulyanitsa, L.S.; Panteleev, Yu.A.; Pevtsova, E.V.; Domkin, V.D.; Pen'kin, M.V.
2006-01-01
A precision spectrophotometric method with internal standardization is used for the analysis of pure Pu solutions. The spectrophotometer and the spectrophotometric method of analysis were improved to decrease the random component of the relative error of the method. The influence of U and Np impurities and of corrosion products on the systematic component of the error of the method, and the effect of fluoride ion on the completeness of Pu oxidation during sample preparation, are studied [ru]
International Nuclear Information System (INIS)
Dietrich, R.
1984-01-01
The basic concepts of the finite element method are explained. The results are compared to existing calibration curves for such test piece geometries derived using experimental procedures. (orig./HP) [de]
American Society for Testing and Materials. Philadelphia
1971-01-01
1.1 These test methods cover the measurement of solar energy transmittance and reflectance (terrestrial) of materials in sheet form. Method A, using a spectrophotometer, is applicable for both transmittance and reflectance and is the referee method. Method B is applicable only for measurement of transmittance using a pyranometer in an enclosure and the sun as the energy source. Specimens for Method A are limited in size by the geometry of the spectrophotometer while Method B requires a specimen 0.61 m2 (2 ft2). For the materials studied by the drafting task group, both test methods give essentially equivalent results. 1.2 This standard does not purport to address all of the safety problems, if any, associated with its use. It is the responsibility of the user of this standard to establish appropriate safety and health practices and determine the applicability of regulatory limitations prior to use.
24 CFR Appendix II to Subpart C of... - Development of Standards; Calculation Methods
2010-04-01
...; Calculation Methods II Appendix II to Subpart C of Part 51 Housing and Urban Development Office of the...; Calculation Methods I. Background Information Concerning the Standards (a) Thermal Radiation: (1) Introduction... radiation being emitted. The radiation can cause severe burns, injuries, and even death to exposed persons...
Addressing Next Generation Science Standards: A Method for Supporting Classroom Teachers
Pellien, Tamara; Rothenburger, Lisa
2014-01-01
The Next Generation Science Standards (NGSS) will define science education for the foreseeable future, yet many educators struggle to see the bridge between current practice and future practices. The inquiry-based methods used by Extension professionals (Kress, 2006) can serve as a guide for classroom educators. Described herein is a method of…
Standard Error Estimation of 3PL IRT True Score Equating with an MCMC Method
Liu, Yuming; Schulz, E. Matthew; Yu, Lei
2008-01-01
A Markov chain Monte Carlo (MCMC) method and a bootstrap method were compared in the estimation of standard errors of item response theory (IRT) true score equating. Three test form relationships were examined: parallel, tau-equivalent, and congeneric. Data were simulated based on Reading Comprehension and Vocabulary tests of the Iowa Tests of…
Zhang, Dabing; Guo, Jinchao
2011-07-01
As the worldwide commercialization of genetically modified organisms (GMOs) increases and consumers are concerned about the safety of GMOs, many countries and regions are issuing labeling regulations on GMOs and their products. Analytical methods and their standardization for GM ingredients in foods and feed are essential for the implementation of labeling regulations. To date, GMO testing methods are mainly based on the inserted DNA sequences and newly produced proteins in GMOs. This paper presents an overview of GMO testing methods as well as their standardization. © 2011 Institute of Botany, Chinese Academy of Sciences.
Vo, Martin
2017-08-01
Light Curves Classifier uses data mining and machine learning to obtain and classify desired objects. This task can be accomplished by attributes of light curves or any time series, including shapes, histograms, or variograms, or by other available information about the inspected objects, such as color indices, temperatures, and abundances. After specifying features which describe the objects to be searched, the software trains on a given training sample, and can then be used for unsupervised clustering for visualizing the natural separation of the sample. The package can also be used for automatic tuning of the parameters of the methods used (for example, the number of hidden neurons or the binning ratio). Trained classifiers can be used for filtering outputs from astronomical databases or data stored locally. Light Curves Classifier can also be used for simple downloading of light curves and all available information about queried stars. It can natively connect to OgleII, OgleIII, ASAS, CoRoT, Kepler, Catalina and MACHO, and new connectors or descriptors can be implemented. In addition to direct usage of the package and a command line UI, the program can be used through a web interface. Users can create jobs for "training" methods on given objects, querying databases and filtering outputs by trained filters. Preimplemented descriptors, classifiers and connectors can be picked by simple clicks and their parameters can be tuned by giving ranges of these values. All combinations are then calculated and the best one is used for creating the filter. Natural separation of the data can be visualized by unsupervised clustering.
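The general recipe, describing each light curve by a few features and then training and classifying in that feature space, can be sketched with a tiny nearest-centroid example; the features, labels, and data here are illustrative, not the package's own descriptors or API:

```python
import statistics

def features(lc):
    """Two toy descriptors: amplitude and mean point-to-point change."""
    amp = max(lc) - min(lc)
    jitter = statistics.mean(abs(a - b) for a, b in zip(lc, lc[1:]))
    return (amp, jitter)

def train(labeled):
    """labeled: {label: [light curves]}; returns one feature-space centroid per label."""
    cents = {}
    for label, curves in labeled.items():
        fs = [features(c) for c in curves]
        cents[label] = tuple(statistics.mean(f[i] for f in fs) for i in range(2))
    return cents

def classify(lc, centroids):
    """Assign the label whose centroid is nearest in feature space."""
    f = features(lc)
    return min(centroids,
               key=lambda lab: sum((a - b) ** 2 for a, b in zip(f, centroids[lab])))

centroids = train({
    "pulsator": [[0.0, 1.0, 0.0, 1.0, 0.0], [0.1, 0.9, 0.0, 1.0, 0.1]],
    "quiet":    [[0.50, 0.52, 0.49, 0.51, 0.50], [0.48, 0.50, 0.49, 0.51, 0.50]],
})
```

Real descriptors (histograms, variograms, color indices) and stronger classifiers slot into the same train-then-filter workflow described above.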
A Mapmark method of standard setting as implemented for the National Assessment Governing Board.
Schulz, E Matthew; Mitzel, Howard C
2011-01-01
This article describes a Mapmark standard setting procedure, developed under contract with the National Assessment Governing Board (NAGB). The procedure enhances the bookmark method with spatially representative item maps, holistic feedback, and an emphasis on independent judgment. A rationale for these enhancements, and for the bookmark method, is presented, followed by a detailed description of the materials and procedures used in a meeting to set standards for the 2005 National Assessment of Educational Progress (NAEP) in Grade 12 mathematics. The use of difficulty-ordered content domains to provide holistic feedback is a particularly novel feature of the method. Process evaluation results comparing Mapmark to Angoff-based methods previously used for NAEP standard setting are also presented.
DEFF Research Database (Denmark)
Pavlovic, M; Holstein-Rathlou, N H; Madsen, F
1985-01-01
We compared the provocative concentration (PC) values obtained by two different methods of performing bronchial histamine challenge. One test was done on an APTA, an apparatus which allows simultaneous provocation with histamine and measurement of airway resistance (Rtot) by the interrupter method. ... The second test was a conventional tidal breathing method, with measurement of the FEV1. There was a high correlation between the PC20-FEV1 and the PC30-, PC40- and PC50-Rtot values. The correlation coefficients were 0.85, 0.71 and 0.70 (P less than 0.05), respectively. We further tested the reproducibility ...
American Society for Testing and Materials. Philadelphia
2010-01-01
1.1 This test method covers the measurement of the optical angular deviation of a light ray imposed by flat transparent parts such as a commercial or military aircraft windshield, canopy or cabin window. 1.2 The values stated in SI units are to be regarded as standard. No other units of measurement are included in this standard. 1.2.1 Exceptions—The values given in parentheses are for information only. Also, print size is provided in inch-pound measurements. 1.3 This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibility of the user of this standard to establish appropriate safety and health practices and determine the applicability of regulatory limitations prior to use.
International Nuclear Information System (INIS)
Sharma, R.B.; Ghildyal, B.P.
1976-01-01
The root distribution of wheat variety UP 301 was obtained by determining the 32P activity in soil-root cores by two methods, viz., ignition and triacid digestion. Root distribution obtained by these two methods was compared with that obtained by the standard root-core washing procedure. The percent error in root distribution as determined by the triacid digestion method was within ±2.1 to ±9.0, as against ±5.5 to ±21.2 by the ignition method. The triacid digestion method thus proved better than the ignition method. (author)
Han, Yang; Hou, Shao-Yang; Ji, Shang-Zhi; Cheng, Juan; Zhang, Meng-Yue; He, Li-Juan; Ye, Xiang-Zhong; Li, Yi-Min; Zhang, Yi-Xuan
2017-11-15
A novel method, real-time reverse transcription PCR (real-time RT-PCR) coupled with probe-melting curve analysis, has been established to detect two kinds of samples within one fluorescence channel. Besides a conventional TaqMan probe, this method employs a specially designed melting-probe with a 5' terminus modification that carries the same fluorescent label. By using an asymmetric PCR method, the melting-probe can detect an extra sample in the melting stage while having little influence on the amplification detection. Thus, this method allows both the amplification stage and the melting stage to be employed for detecting samples in one reaction. A demonstration, the simultaneous detection of human immunodeficiency virus (HIV) and hepatitis C virus (HCV) in one channel as a model system, is presented in this paper. The sensitivity of detection by real-time RT-PCR coupled with probe-melting analysis proved equal to that of conventional real-time RT-PCR. Because real-time RT-PCR coupled with probe-melting analysis can double the detection throughput within one fluorescence channel, it is expected to be a good solution to the low-throughput problem of current real-time PCR. Copyright © 2017 Elsevier Inc. All rights reserved.
Yamamoto, Kazuya; Takaoka, Toshimitsu; Fukui, Hidetoshi; Haruta, Yasuyuki; Yamashita, Tomoya; Kitagawa, Seiichiro
2016-03-01
In general, a thin-film coating is widely applied to optical lens surfaces to provide an anti-reflection function. In the normal production process, the lens is first manufactured by molding, and the anti-reflection function is then added by thin-film coating. In recent years, instead of thin-film coating, sub-wavelength structures added to the surface of the molding die have been widely studied and developed to maintain anti-reflection performance. By applying a sub-wavelength structure, the coating process becomes unnecessary and man-hour costs can be reduced. Beyond the cost merit, there are technical advantages: the adhesion of a coating depends on the plastic material, so an anti-reflection function cannot be applied to an arbitrary surface, whereas a sub-wavelength structure can solve both problems. Manufacturing methods for anti-reflection structures can be divided into two main types: one uses resist patterning, and the other is a mask-less method that does not require patterning. What we have developed is a new mask-less method that needs no resist patterning, can impart an anti-reflection structure to large-area and curved lens surfaces, and can be expected to find application in various market segments. We report the developed technique and the characteristics of production lenses.
International Nuclear Information System (INIS)
M’kacher, Radhia; Maalouf, Elie E.L.; Ricoul, Michelle; Heidingsfelder, Leonhard; Laplagne, Eric; Cuceu, Corina; Hempel, William M.; Colicchio, Bruno; Dieterlen, Alain; Sabatier, Laure
2014-01-01
Graphical abstract: - Highlights: • We have applied telomere and centromere (TC) staining to the scoring of dicentrics. • TC staining renders the scoring of dicentrics more rapid and robust. • TC staining allows the scoring of not only dicentrics but all chromosomal anomalies. • TC staining has led to a reevaluation of the radiation dose–response curve. • TC staining allows automation of the scoring of chromosomal aberrations. • Automated scoring of dicentrics after TC staining was as efficient as manual scoring. - Abstract: Purpose: The dicentric chromosome (dicentric) assay is the international gold-standard method for biological dosimetry and classification of genotoxic agents. The introduction of telomere and centromere (TC) staining offers the potential to render dicentric scoring more efficient and robust. In this study, we improved the detection of dicentrics and all unstable chromosomal aberrations (CA), leading to a significant reevaluation of the dose–effect curve, and developed an automated approach following TC staining. Material and methods: Blood samples from 16 healthy donors were exposed to ¹³⁷Cs at 8 doses from 0.1 to 6 Gy. CA were manually and automatically scored following uniform (Giemsa) or TC staining. The detection of centromeric regions and telomeric sequences using PNA probes allowed the detection of all unstable CA: dicentrics, centric and acentric rings, and all acentric fragments (with 2, 4 or no telomeres), leading to the precise quantification of estimated double strand breaks (DSB). Results: Manual scoring following TC staining revealed a significantly higher frequency of dicentrics (p < 10⁻³) (up to 30%) and estimated DSB (p < 10⁻⁴) compared to uniform staining, due to improved detection of dicentrics with centromeres juxtaposed with other centromeres or telomeres. This improvement permitted the development of the software TCScore, which detected 95% of manually scored dicentrics, compared to 50% for the best ...
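Dicentric dose–response calibration curves of the kind reevaluated here are conventionally fitted with a linear-quadratic model, Y = C + αD + βD². A minimal least-squares sketch (the yields below are synthetic, generated from assumed coefficients rather than taken from this study):

```python
import numpy as np

# Hypothetical dicentric yields per cell over the 0.1-6 Gy dose range;
# a real calibration would use yields from scored metaphases.
doses = np.array([0.1, 0.25, 0.5, 1.0, 2.0, 3.0, 4.0, 6.0])
alpha_true, beta_true, c_true = 0.03, 0.06, 0.001   # assumed coefficients
yields = c_true + alpha_true * doses + beta_true * doses**2

# Least-squares fit of Y = C + alpha*D + beta*D^2 (design matrix [1, D, D^2]).
A = np.column_stack([np.ones_like(doses), doses, doses**2])
c_fit, alpha_fit, beta_fit = np.linalg.lstsq(A, yields, rcond=None)[0]
print(round(alpha_fit, 3), round(beta_fit, 3))  # recovers 0.03 and 0.06
```

With noise-free synthetic data the fit recovers the generating coefficients exactly; with real scoring data one would weight by the Poisson counting uncertainty.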
Standard Test Method for Bird Impact Testing of Aerospace Transparent Enclosures
American Society for Testing and Materials. Philadelphia
2010-01-01
1.1 This test method covers conducting bird impact tests under a standard set of conditions by firing a packaged bird at a stationary transparency mounted in a support structure. 1.2 The values stated in inch-pound units are to be regarded as standard. The values given in parentheses are mathematical conversions to SI units that are provided for information only and are not considered standard. 1.3 This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibility of the user of this standard to establish appropriate safety and health practices and determine the applicability of regulatory limitations prior to use. For specific hazard statements, see Section 8.
ANSI/ASHRAE/IES Standard 90.1-2016 Performance Rating Method Reference Manual
Energy Technology Data Exchange (ETDEWEB)
Goel, Supriya [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Rosenberg, Michael I. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Eley, Charles [Eley and Associates, Hobe Sound, FL (United States)
2017-09-29
This document is intended to be a reference manual for the Appendix G Performance Rating Method (PRM) of ANSI/ASHRAE/IES Standard 90.1-2016 (Standard 90.1-2016). The PRM can be used to demonstrate compliance with the standard and to rate the energy efficiency of commercial and high-rise residential buildings with designs that exceed the requirements of Standard 90.1. Use of the PRM for demonstrating compliance with Standard 90.1 is a new feature of the 2016 edition. The procedures and processes described in this manual are designed to provide consistency and accuracy by filling in gaps and providing additional details needed by users of the PRM.
Standard test method for measurement of web/roller friction characteristics
American Society for Testing and Materials. Philadelphia
2003-01-01
1.1 This test method covers the simulation of a roller/web transport tribosystem and the measurement of the static and kinetic coefficient of friction of the web/roller couple when sliding occurs between the two. The objective of this test method is to provide users with web/roller friction information that can be used for process control, design calculations, and for any other function where web/roller friction needs to be known. 1.2 The values stated in SI units are to be regarded as standard. No other units of measurement are included in this standard. 1.3 This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibility of the user of this standard to establish appropriate safety and health practices and determine the applicability of regulatory limitations prior to use.
Standard test method for pin-type bearing test of metallic materials
American Society for Testing and Materials. Philadelphia
1984-01-01
1.1 This test method covers a pin-type bearing test of metallic materials to determine bearing yield strength and bearing strength. Note 1—The presence of incidental lubricants on the bearing surfaces may significantly lower the value of bearing yield strength obtained by this method. 1.2 Units—The values stated in inch-pound units are to be regarded as standard. The values given in parentheses are mathematical conversions to SI units that are provided for information only and are not considered standard. 1.3 This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibility of the user of this standard to establish appropriate safety and health practices and determine the applicability of regulatory limitations prior to use.
American Society for Testing and Materials. Philadelphia
2007-01-01
1.1 This method covers the determination of americium–241 in soil by means of chemical separations and alpha spectrometry. It is designed to analyze up to ten grams of soil or other sample matrices that contain up to 30 mg of combined rare earths. This method allows the determination of americium–241 concentrations from ambient levels to applicable standards. The values stated in SI units are to be regarded as standard. 1.2 This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibility of the user of this standard to establish appropriate safety and health practices and determine the applicability of regulatory limitations prior to use. For specific precaution statements, see Section 10.
Standard Test Method for Bond Strength of Ceramic Tile to Portland Cement Paste
American Society for Testing and Materials. Philadelphia
2002-01-01
1.1 This test method covers the determination of the ability of glazed ceramic wall tile, ceramic mosaic tile, quarry tile, and pavers to be bonded to portland cement paste. This test method includes both face-mounted and back-mounted tile. 1.2 The values stated in inch-pound units are to be regarded as standard. The values given in parentheses are mathematical conversions to SI units that are provided for information only and are not considered standard. 1.3 This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibility of the user of this standard to establish appropriate safety and health practices and determine the applicability of regulatory limitations prior to use.
Standard Test Method for Measuring Heat-Transfer Rate Using a Thermal Capacitance (Slug) Calorimeter
American Society for Testing and Materials. Philadelphia
2008-01-01
1.1 This test method describes the measurement of heat transfer rate using a thermal capacitance-type calorimeter which assumes one-dimensional heat conduction into a cylindrical piece of material (slug) with known physical properties. 1.2 This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibility of the user of this standard to establish appropriate safety and health practices and determine the applicability of regulatory limitations prior to use. 1.3 The values stated in SI units are to be regarded as standard. No other units of measurement are included in this standard. Note 1—For information see Test Methods E 285, E 422, E 458, E 459, and E 511.
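The one-dimensional slug-calorimeter reduction is commonly given as q = (M·c_p/A)·dT/dt, where M is the slug mass, c_p its specific heat, A the sensing face area, and dT/dt the slope of the temperature-time trace. The numbers below are illustrative, not values from the standard:

```python
# Slug (thermal capacitance) calorimeter: heat flux from the slug's
# heat capacity and the measured temperature rise rate,
#   q = (M * c_p / A) * dT/dt
# All values below are hypothetical.
M = 0.010        # slug mass, kg (small copper slug)
c_p = 385.0      # specific heat of copper, J/(kg*K)
A = 1.0e-4       # sensing face area, m^2
dT_dt = 20.0     # measured temperature rise rate, K/s

q = M * c_p / A * dT_dt   # heat flux, W/m^2
print(q)  # 770000.0
```

The approximation holds only while conduction stays one-dimensional and losses from the slug's sides and back face are negligible, which is why the method restricts the usable portion of the temperature trace.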
The Heating Curve Adjustment Method
Kornaat, W.; Peitsman, H.C.
1995-01-01
In apartment buildings with a collective heating system usually a weather compensator is used for controlling the heat delivery to the various apartments. With this weather compensator the supply water temperature to the apartments is regulated depending on the outside air temperature. With
Kamath, M Ganesh; Pallath, Vinod; Ramnarayan, K; Kamath, Asha; Torke, Sharmila; Gonsalves, James
2016-01-01
The undergraduate curriculum at our institution is divided system-wise into four blocks, each block ending with a theory and objective structured practical examination (OSPE). The OSPE in Physiology consists of 12 stations, and the conventional minimum score to qualify is 50%. We aimed to incorporate standard setting using the modified Angoff method in the OSPE to differentiate the competent from the non-competent student, and to explore the possibility of introducing standard setting in the Physiology OSPE at our institution. Experts rated the OSPE using the modified Angoff method to obtain the standard-set cut-off in two of the four blocks. We assessed the OSPE marks of 110 first-year medical students. The chi-square test was used to compare the numbers of students who scored less than the standard-set cut-off and the conventional cut-off; the correlation coefficient was used to assess the relation between OSPE and theory marks in both blocks. Feedback was obtained from the experts. The standard-set cut-off was 62% and 67% for blocks II and III, respectively. The use of the standard-set cut-off resulted in 16.3% (n=18) and 22.7% (n=25) of students being declared unsuccessful in blocks II and III, respectively. The comparison between the numbers who scored less than the standard-set and conventional cut-offs was statistically significant (p=0.001). The correlation coefficient was 0.65 (p=0.003) and 0.52 (p<0.001) in blocks II and III, respectively. The experts welcomed the idea of standard setting. Standard setting helped in differentiating the competent from the non-competent student, indicating that standard setting enhances the quality of the OSPE as an assessment tool.
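The generic modified-Angoff computation behind a cut-off like the 62-67% reported here is an average of expert judgments: each expert estimates, per station, the probability that a minimally competent student succeeds, and the cut score is the mean of the experts' totals as a percentage of the maximum. The ratings below are hypothetical, not the study's data:

```python
# Modified Angoff sketch: 3 hypothetical experts rate 12 OSPE stations
# with the probability that a borderline (minimally competent) student
# passes each station.
ratings = [
    [0.7, 0.6, 0.8, 0.5, 0.7, 0.6, 0.9, 0.5, 0.6, 0.7, 0.8, 0.6],
    [0.6, 0.7, 0.7, 0.6, 0.8, 0.5, 0.8, 0.6, 0.7, 0.6, 0.7, 0.5],
    [0.8, 0.5, 0.9, 0.6, 0.6, 0.7, 0.7, 0.5, 0.6, 0.8, 0.7, 0.6],
]
expert_totals = [sum(r) for r in ratings]   # each expert's expected raw cut score
cut_percent = 100 * sum(expert_totals) / (len(ratings) * len(ratings[0]))
print(round(cut_percent, 1))
```

With these hypothetical ratings the cut-off lands in the mid-60s, comfortably above the conventional 50% threshold, which mirrors why standard setting reclassified some students as unsuccessful.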
American Society for Testing and Materials. Philadelphia
2002-01-01
1.1 This test method covers the determination of the stability in storage, of liquid, water-base chemical cleaning compounds, used to clean the exterior surfaces of aircraft. 1.2 This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibility of the user of this standard to establish appropriate safety and health practices and determine the applicability of regulatory limitations prior to use.
Method of Fabricating NASA-Standard Macro-Fiber Composite Piezoelectric Actuators
High, James W.; Wilkie, W. Keats
2003-01-01
The NASA Macro-Fiber Composite actuator is a flexible piezoelectric composite device designed for controlling vibrations and shape deformations in high performance aerospace structures. A complete method for fabricating the standard NASA Macro-Fiber Composite actuator is presented in this document. When followed precisely, these procedures will yield devices with electromechanical properties identical to the standard actuator manufactured by NASA Langley Research Center.
Standard Test Method for Resin Flow of Carbon Fiber-Epoxy Prepreg
American Society for Testing and Materials. Philadelphia
1999-01-01
1.1 This test method covers the determination of the amount of resin flow that will take place from prepreg tape or sheet under given conditions of temperature and pressure. 1.2 The values stated in SI units are to be regarded as standard. The values in parentheses are for reference only. 1.3 This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibility of the user of this standard to establish appropriate safety and health practices and determine the applicability of regulatory limitations prior to use.
Implementation of sum-peak method for standardization of positron emission radionuclides
International Nuclear Information System (INIS)
Fragoso, Maria da Conceicao de Farias; Oliveira, Mercia Liane de; Lima, Fernando Roberto de Andrade
2015-01-01
Positron Emission Tomography (PET) is being increasingly recognized as an important quantitative imaging tool for diagnosis and assessing response to therapy. As correct dose administration plays a crucial part in nuclear medicine, it is important that the instruments used to assay the activity of the short-lived radionuclides are calibrated accurately, with traceability to the national or international standards. The sum-peak method has been widely used for radionuclide standardization. The purpose of this study was to implement the methodology for standardization of PET radiopharmaceuticals at the Regional Center for Nuclear Sciences of the Northeast (CRCN-NE). (author)
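For a positron emitter whose two 511 keV annihilation photons are detected in coincidence, a Brinkman-type sum-peak estimate of the disintegration rate is N0 ≈ T + Np²/Ns, with T the total count rate, Np the 511 keV photopeak rate, and Ns the 1022 keV sum-peak rate. This is the textbook form of the method, not necessarily the exact formulation used at CRCN-NE, and the count rates below are illustrative:

```python
# Sum-peak activity estimate for an annihilation emitter (illustrative rates):
#   N0 = T + Np**2 / Ns
T = 5200.0    # total count rate, s^-1
Np = 3000.0   # 511 keV photopeak count rate, s^-1
Ns = 1500.0   # 1022 keV sum-peak count rate, s^-1

N0 = T + Np**2 / Ns   # estimated disintegration rate, s^-1 (Bq)
print(N0)  # 11200.0
```

The appeal of the sum-peak method is that detector efficiency cancels out of the ratio, which is what makes it suitable for primary standardization.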
Transition curves for highway geometric design
Kobryń, Andrzej
2017-01-01
This book provides concise descriptions of the various solutions of transition curves, which can be used in the geometric design of roads and highways. It presents mathematical methods and curvature functions for defining transition curves.
Stolzer, Alan J.; Halford, Carl
2007-01-01
In a previous study, multiple regression techniques were applied to Flight Operations Quality Assurance-derived data to develop parsimonious model(s) for fuel consumption on the Boeing 757 airplane. The present study examined several data mining algorithms, including neural networks, on the fuel consumption problem and compared them to the multiple regression results obtained earlier. Using regression methods, parsimonious models were obtained that explained approximately 85% of the variation in fuel flow. In general, data mining methods were more effective in predicting fuel consumption. Classification and Regression Tree methods reported correlation coefficients of .91 to .92, and General Linear Models and Multilayer Perceptron neural networks reported correlation coefficients of about .99. These data mining models show great promise for use in further examining large FOQA databases for operational and safety improvements.
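The linear-versus-flexible comparison in this abstract can be sketched on synthetic data. The k-nearest-neighbours learner below stands in for the study's tree and neural-network models, and the two-parameter "fuel flow" function is invented, not FOQA data:

```python
import numpy as np

# Synthetic nonlinear "fuel flow" as a function of two scaled flight parameters.
rng = np.random.default_rng(1)
X = rng.uniform(0, 1, size=(300, 2))
y = 2.0 * X[:, 0] + np.sin(3 * X[:, 1]) + rng.normal(0, 0.05, 300)
Xtr, ytr, Xte, yte = X[:200], y[:200], X[200:], y[200:]

# Baseline: linear regression via least squares.
A = np.column_stack([np.ones(len(Xtr)), Xtr])
coef = np.linalg.lstsq(A, ytr, rcond=None)[0]
pred_lin = np.column_stack([np.ones(len(Xte)), Xte]) @ coef

# Flexible learner: k-nearest-neighbours regression (k=5).
def knn_predict(x, k=5):
    d = np.linalg.norm(Xtr - x, axis=1)
    return ytr[np.argsort(d)[:k]].mean()
pred_knn = np.array([knn_predict(x) for x in Xte])

# Correlation between predictions and held-out truth, as in the study.
r_lin = np.corrcoef(yte, pred_lin)[0, 1]
r_knn = np.corrcoef(yte, pred_knn)[0, 1]
print(round(r_lin, 2), round(r_knn, 2))
```

Because the sine term is invisible to the linear model, the flexible learner reports a noticeably higher correlation coefficient, the same qualitative gap the study found between regression and data mining models.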
Normalization method for metabolomics data using optimal selection of multiple internal standards
Directory of Open Access Journals (Sweden)
Yetukuri Laxman
2007-03-01
Background: Success of metabolomics as a phenotyping platform largely depends on its ability to detect various sources of biological variability. Removal of platform-specific sources of variability, such as systematic error, is therefore one of the foremost priorities in data preprocessing. However, the chemical diversity of molecular species included in typical metabolic profiling experiments leads to different responses to variations in experimental conditions, making normalization a very demanding task. Results: With the aim of removing unwanted systematic variation, we present an approach that utilizes variability information from multiple internal standard compounds to find an optimal normalization factor for each individual molecular species detected by the metabolomics approach (NOMIS). We demonstrate the method on mouse liver lipidomic profiles using Ultra Performance Liquid Chromatography coupled to high-resolution mass spectrometry, and compare its performance to two commonly utilized normalization methods: normalization by l2 norm and by retention-time-region-specific standard compound profiles. The NOMIS method proved superior in its ability to reduce the effect of systematic error across the full spectrum of metabolite peaks. We also demonstrate that the method can be used to select the best combinations of standard compounds for normalization. Conclusion: Depending on experiment design and biological matrix, the NOMIS method is applicable either as a one-step normalization method or as a two-step method where the normalization parameters, influenced by the variabilities of internal standard compounds and their correlation to metabolites, are first calculated from a study conducted under repeatability conditions. The method can also be used in analytical development of metabolomics methods by helping to select the best combinations of standard compounds for a particular biological matrix and analytical platform.
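The core idea of normalizing by multiple internal standards can be sketched simply. The scheme below (per-sample factor = median log-deviation of the internal standards from their across-sample medians) is a generic multi-standard normalization, not the published NOMIS algorithm, and the data are synthetic:

```python
import numpy as np

# Synthetic lipidomics-style data: a per-sample multiplicative drift
# (systematic error) corrupts both internal standards and metabolites.
rng = np.random.default_rng(2)
n_samples, n_is, n_met = 6, 3, 4
drift = rng.uniform(0.7, 1.4, n_samples)              # systematic per-sample error
is_true = rng.uniform(1e4, 5e4, n_is)                 # fixed spiked-in amounts
met_true = rng.uniform(1e3, 1e5, (n_samples, n_met))  # biological signal

is_obs = drift[:, None] * is_true[None, :]
met_obs = drift[:, None] * met_true

# Per-sample factor: median log-deviation of the internal standards
# from their across-sample medians.
log_dev = np.log(is_obs) - np.log(np.median(is_obs, axis=0))[None, :]
factor = np.exp(np.median(log_dev, axis=1))
met_norm = met_obs / factor[:, None]

# After normalization the drift is removed up to one global constant.
ratio = met_norm / met_true
print(np.allclose(ratio, ratio[0, 0]))  # True
```

NOMIS goes further by weighting the standards per metabolite according to their correlation structure; this sketch only shows why pooling several standards beats relying on any single one.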
Marjanovic, Ljiljana; McCrindle, Robert I; Botha, Barend M; Potgieter, Herman J
2004-05-01
The simplified generalized standard additions method (GSAM) was investigated as an alternative method for the ICP-OES analysis of solid materials, introduced into the plasma in the form of slurries. The method is an expansion of the conventional standard additions method. It is based on the principle of varying both the sample mass and the amount of standard solution added. The relationship between the sample mass, standard solution added and signal intensity is assumed to be linear. Concentration of the analyte can be found either geometrically from the slope of the two-dimensional response plane in a three-dimensional space or mathematically from the ratio of the parameters estimated by multiple linear regression. The analysis of a series of certified reference materials (CRMs) (cement CRM-BCS No 353, gypsum CRM-Gyp A and basic slag CRM No 382/I) introduced into the plasma in the form of slurry is described. The slurries contained glycerol and hydrochloric acid and were placed in an ultrasonic bath to ensure good dispersion. "Table curve 3D" software was used to fit the data. Results obtained showed that the method could be successfully applied to the analysis of cement, gypsum and slag samples, without the need to dissolve them. In this way, we could avoid the use of hazardous chemicals (concentrated acids), incomplete dissolution and loss of some volatiles. The application of the simplified GSAM for the analysis did not require a CRM with similar chemical and mineralogical properties for the calibration of the instrument.
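The response-plane idea of the simplified GSAM can be sketched numerically. Assuming the stated linear model, signal I = k·(m·C + V·Cs) for sample mass m and added standard volume V, fitting the plane I = a·m + b·V by multiple linear regression gives the analyte concentration as C = (a/b)·Cs. All numbers below are synthetic, not the paper's slurry data:

```python
import numpy as np

# Simplified GSAM sketch: vary both sample mass and added standard.
k, C_true, Cs = 120.0, 0.8, 10.0   # sensitivity, analyte conc., standard conc. (assumed)
m = np.array([0.1, 0.1, 0.2, 0.2, 0.3, 0.3])   # sample masses, g
V = np.array([0.0, 0.5, 0.0, 0.5, 0.0, 1.0])   # standard volumes, mL
I = k * (m * C_true + V * Cs)                  # simulated emission intensities

# Fit the response plane I = a*m + b*V; then C = (a/b) * Cs.
A = np.column_stack([m, V])
a, b = np.linalg.lstsq(A, I, rcond=None)[0]
C_est = (a / b) * Cs
print(round(C_est, 3))  # 0.8
```

Note how the unknown sensitivity k cancels in the ratio a/b, which is what frees the method from needing a matrix-matched reference material for calibration.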
Standardization is superior to traditional methods of teaching open vascular simulation.
Bath, Jonathan; Lawrence, Peter; Chandra, Ankur; O'Connell, Jessica; Uijtdehaage, Sebastian; Jimenez, Juan Carlos; Davis, Gavin; Hiatt, Jonathan
2011-01-01
Standardizing surgical skills teaching has been proposed as a method to rapidly attain technical competence. This study compared acquisition of vascular skills by standardized vs traditional teaching methods. The study randomized 18 first-year surgical residents to a standardized or traditional group. Participants were taught technical aspects of vascular anastomosis using femoral anastomosis simulation (Limbs & Things, Savannah, Ga), supplemented with factual information. One expert instructor taught a standardized anastomosis technique using the same method each time to one group over four sessions, while, similar to current vascular training, four different expert instructors each taught one session to the other (traditional) group. Knowledge and technical skill were assessed at study completion by an independent vascular expert using Objective Structured Assessment of Technical Skill (OSATS) performance metrics. Participants also provided a written evaluation of the study experience. The standardized group had significantly higher mean overall technical (95.7% vs 75.8%; P = .038) and global skill scores (83.4% vs 67%; P = .006). Tissue handling, efficiency of motion, overall technical skill, and flow of operation were rated significantly higher in the standardized group (mean range, 88%-96% vs 67.6%-77.6%; P ... teaching methods on performance outcome. Findings from this report suggest that for simulation training, standardized methods may be more effective than traditional methods of teaching. Transferability of simulator-acquired skills to the clinical setting will be required before open simulation can be unequivocally recommended as a major component of resident technical skill training. Copyright © 2011 Society for Vascular Surgery. Published by Mosby, Inc. All rights reserved.
Standard test method for determining residual stresses by the hole-drilling strain-gage method
American Society for Testing and Materials. Philadelphia
2008-01-01
1.1 Residual Stress Determination: 1.1.1 This test method specifies a hole-drilling procedure for determining residual stress profiles near the surface of an isotropic linearly elastic material. The test method is applicable to residual stress profile determinations where in-plane stress gradients are small. The stresses may remain approximately constant with depth (“uniform” stresses) or they may vary significantly with depth (“non-uniform” stresses). The measured workpiece may be “thin” with thickness much less than the diameter of the drilled hole or “thick” with thickness much greater than the diameter of the drilled hole. Only uniform stress measurements are specified for thin workpieces, while both uniform and non-uniform stress measurements are specified for thick workpieces. 1.2 Stress Measurement Range: 1.2.1 The hole-drilling method can identify in-plane residual stresses near the measured surface of the workpiece material. The method gives localized measurements that indicate the...
American Society for Testing and Materials. Philadelphia
1996-01-01
1.1 This test method covers an electronic hydrogen detection instrument procedure for measurement of plating permeability to hydrogen. This method measures a variable related to hydrogen absorbed by steel during plating and to the hydrogen permeability of the plate during post plate baking. A specific application of this method is controlling cadmium-plating processes in which the plate porosity relative to hydrogen is critical, such as cadmium on high-strength steel. This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibility of the user of this standard to establish appropriate safety and health practices and determine the applicability of regulatory limitations prior to use. For specific hazard statement, see Section 8. 1.2 The values stated in SI units are to be regarded as the standard. The values given in parentheses are for information only.
Standard test method for determination of surface lubrication on flexible webs
American Society for Testing and Materials. Philadelphia
1999-01-01
1.1 This test method has been used since 1988 as an ANSI/ISO standard test for determination of lubrication on processed photographic films. Its purpose was to determine the presence of process-surviving lubricants on photographic films. This test method now expands that applicability to other flexible webs that may need lubrication for suitable performance. The test measures the breakaway (static) coefficient of friction of a metal rider on the web by the inclined plane method. The objective of the test is to determine whether a web surface has a lubricant present or not. It is not intended to assign a friction coefficient to a material, nor to rank lubricants. 1.2 The values stated in SI units are to be regarded as standard. No other units of measurement are included in this standard. 1.3 This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibility of the user of this standard to establish ...
American Society for Testing and Materials. Philadelphia
2008-01-01
1.1 This standard describes three test methods for determining the modulus of elasticity in bending and the bending strength of metallic strips or sheets intended for use in flat springs: 1.1.1 Test Method A—a cantilever beam, 1.1.2 Test Method B—a three-point loaded beam (that is, a beam resting on two supports and centrally loaded), and 1.1.3 Test Method C—a four-point loaded beam (that is, a beam resting on two supports and loaded at two points equally spaced from each support). 1.2 The values stated in inch-pound units are to be regarded as standard. The values given in parentheses are mathematical conversions to SI units that are provided for information only and are not considered standard. 1.3 This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibility of the user of this standard to establish appropriate safety and health practices and determine the applicability of regulatory limitations prior to use. 6.1 This test me...
Multiphasic growth curve analysis.
Koops, W.J.
1986-01-01
Application of a multiphasic growth curve is demonstrated with 4 data sets adopted from the literature. The growth curve used is a summation of n logistic growth functions. Human height growth curves of this type are known as "double logistic" (n = 2) and "triple logistic" (n = 3) growth curves (Bock
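The summation of logistic phases described above can be sketched as follows. This is a minimal illustration, not the paper's fitting procedure; the parameter values (asymptotic gain, rate, inflection age per phase) are hypothetical.

```python
import math

def multiphasic_logistic(t, phases):
    """Sum of n logistic phases. Each phase is a tuple (a, b, c):
    a = asymptotic gain of the phase, b = rate constant, c = inflection age."""
    return sum(a / (1.0 + math.exp(-b * (t - c))) for a, b, c in phases)

# Hypothetical "double logistic" height-style parameters (cm, 1/yr, yr):
phases = [(150.0, 0.7, 5.0), (30.0, 1.2, 13.0)]
adult = multiphasic_logistic(60.0, phases)  # far past both inflection points
print(round(adult, 1))  # approaches 150 + 30 = 180.0
```

Fitting such a curve to data would estimate the 3n parameters, for example by nonlinear least squares.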
Energy Technology Data Exchange (ETDEWEB)
Sanchez, Maria Ambert [Iowa State Univ., Ames, IA (United States)
2003-12-12
The implementation of x-ray computerized tomography (CT) on agricultural soils has been used in this research to quantify soil physical properties to be compared with standard laboratory (STD) methods. The overall research objective was to more accurately quantify soil physical properties for long-term management systems. Two field studies were conducted at Iowa State University's Northeast Research and Demonstration Farm near Nashua, IA using two different soil management strategies. The first field study was conducted in 1999 using continuous corn crop rotation for soil under chisel plow with no-till treatments. The second study was conducted in 2001 on a soybean crop rotation for the same soil but under chisel plow and no-till practices with wheel track and no-wheel track compaction treatments induced by a tractor-manure wagon. In addition, saturated hydraulic conductivity (K{sub s}) and the convection-dispersion equation (CDE) model were also applied using long-term soil management systems only during 2001. The results obtained for the 1999 field study revealed no significant differences between treatments and laboratory methods, but significant differences were found at deeper depths of the soil column for tillage treatments. The results for the standard laboratory procedure versus the CT method showed significant differences at deeper depths for the chisel plow treatment and at the second lower depth for the no-till treatment for both laboratory methods. The macroporosity distribution experiment showed significant differences at the two lower depths between tillage practices. Bulk density and percent porosity had significant differences at the two lower depths of the soil column. The results obtained for the 2001 field study showed no significant differences between tillage practices and compaction practices for both laboratory methods, but significant differences between tillage practices with wheel track and no-wheel compaction treatments were found along the soil
[Blood pressure measurement by primary care physicians: comparison with the standard method].
Asai, Y; Kawamoto, R; Nago, N; Kajii, E
2000-04-01
To examine the usual methods of blood pressure (BP) measurement by primary care physicians and to compare them with the standard methods. Cross-sectional survey by self-administered questionnaire. Primary care physicians who graduated from Jichi Medical School and were working at clinics. Each standard method for 20 items was defined as the one that was most frequently recommended by 6 guidelines (USA 3, UK 1, Canada 1, Japan 1) and a recent comprehensive review about BP measurement. Of 333 physicians, 190 (58%) responded (median age 33, range 26 to 45 years). Standard methods and percentages of physicians who follow them are: [BP measurement, 17 items] supported arm 96%; measurement to 2 mmHg 91%; sitting position 86%; mercury sphygmomanometer 83%; waiting > or = 1 minute between readings 58%; palpation to assess systolic BP before auscultation 57%; check accuracy of home BP monitor 56%; Korotkoff Phase V for diastolic BP 51%; bilateral measurements on initial visit 44%; small cuff available 41%; > or = 2 readings in patients with atrial fibrillation 38%; > or = 2 readings on one visit 20%; cuff deflation rate of 2 mmHg/pulse 14%; large cuff available 13%; check accuracy of monitor used for home visit 8%; waiting time > or = 5 minutes 3%; readings from the arm with the higher BP 1%. [Knowledge about BP monitor, 2 items] appropriate size bladder: length 11%; width 11%. [Check of sphygmomanometer for leakage, inflate to 200 mmHg then close valve for 1 minute] leakage < 2 mmHg 6%; median 10 (range 0-200) mmHg. Average percentage of all 20 items was 39%. Number of methods physicians followed as standard: median 8 (range 4 to 15); this number did not correlate with any background characteristics of the physicians. Furthermore, we also obtained information on methods not compared with the standard. Fifty-four percent of physicians used more standard methods in deciding the start or change of treatment than in measuring BP of patients with good control. About 80% of
Another Look at the Method of Y-Standardization in Logit and Probit Models
DEFF Research Database (Denmark)
Karlson, Kristian Bernt
2015-01-01
This paper takes another look at the derivation of the method of Y-standardization used in sociological analysis involving comparisons of coefficients across logit or probit models. It shows that the method can be derived under less restrictive assumptions than hitherto suggested. Rather than assuming that the logit or probit fixes the variance of the latent error at a known constant, it suffices to assume that the variance of the error is unknown. A further result suggests that using Y-standardization for cross-model comparisons is likely to be biased by model differences in the fit...
Primary activity standardization of {sup 57}Co by sum-peak method
Energy Technology Data Exchange (ETDEWEB)
Iwahara, A. [Laboratorio Nacional de Metrologia das Radiacoes Ionizantes (LNMRI)/Instituto de Radioprotecao e Dosimetria (IRD)/Comissao Nacional de Energia Nuclear - CNEN, Av. Salvador Allende, s/no. Recreio dos Bandeirantes - CEP 22780-160 Rio de Janeiro (Brazil)], E-mail: iwahara@ird.gov.br; Poledna, R.; Silva, C.J. da; Tauhata, L. [Laboratorio Nacional de Metrologia das Radiacoes Ionizantes (LNMRI)/Instituto de Radioprotecao e Dosimetria (IRD)/Comissao Nacional de Energia Nuclear - CNEN, Av. Salvador Allende, s/no. Recreio dos Bandeirantes - CEP 22780-160 Rio de Janeiro (Brazil)
2009-10-15
The sum-peak method was applied to standardize a {sup 57}Co solution within the framework of an international comparison organized by the International Atomic Energy Agency in 2008, aimed at international traceability of activity measurements. A planar germanium detector was used, with the sources placed on top of the detector, for the activity determination measurements. An analytical expression for the accidental summing correction was derived, and the effect of the germanium characteristic K X-ray escape peak at 112 keV was taken into account. The standard uncertainty associated with the activity concentration value was 0.37%, and the result was compared with other measurement methods.
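For orientation, the basic (uncorrected) sum-peak relation for a two-photon cascade, often attributed to Brinkman, can be sketched as below. The accidental-summing and X-ray escape corrections derived in the paper are not reproduced here, and the count values are illustrative only.

```python
def sum_peak_activity(n1, n2, n12, total, live_time):
    """Generic two-photon sum-peak (Brinkman) estimate of source activity:
    A = (T + N1*N2/N12) / t,
    where N1, N2 are net full-energy peak counts, N12 the sum-peak counts,
    T the total spectrum counts, and t the live time in seconds.
    Textbook relation only; real standardizations add further corrections."""
    return (total + n1 * n2 / n12) / live_time

# Illustrative numbers, not measurement data:
a = sum_peak_activity(n1=80000, n2=75000, n12=6000, total=200000, live_time=600)
print(round(a, 1))  # activity in Bq for these made-up counts: 2000.0
```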
A procedure for the improvement in the determination of a TXRF spectrometer sensitivity curve
International Nuclear Information System (INIS)
Bennun, Leonardo; Sanhueza, Vilma
2010-01-01
A simple procedure is proposed to determine the total reflection X-ray fluorescence (TXRF) spectrometer sensitivity curve; this procedure provides better accuracy and precision than the established standard method. It uses individual pure substances instead of vendor-certified reference calibration standards, which are expensive and offer no means of checking their quality. This method avoids problems such as uncertainties in the determination of the sensitivity curve arising from the use of different standards. It also avoids the need for validation studies between different techniques in order to assure the quality of TXRF results. (author)
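The quantity underlying a TXRF sensitivity curve is the relative sensitivity of each element with respect to a reference element. A minimal sketch of that definition follows; the count rates and concentrations are hypothetical, and the paper's actual procedure fits a curve through sensitivities measured from individual pure substances.

```python
def relative_sensitivity(counts, conc, counts_ref, conc_ref):
    """Element sensitivity relative to a reference element, as commonly
    used in TXRF quantification: S_i = (N_i / c_i) / (N_ref / c_ref).
    counts are net peak intensities, conc the known concentrations."""
    return (counts / conc) / (counts_ref / conc_ref)

# Hypothetical count rates (cps) for two 10 mg/L single-element solutions:
s_fe = relative_sensitivity(counts=5200.0, conc=10.0,
                            counts_ref=2600.0, conc_ref=10.0)
print(s_fe)  # 2.0: this element yields twice the counts per unit concentration
```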
Ice detection on wind turbines using observed power curve
DEFF Research Database (Denmark)
Davis, Neil; Byrkjedal, Øyvind; Hahmann, Andrea N.
2016-01-01
Icing on the blades of a wind turbine can lead to significant production losses during the winter months for wind parks in cold climate regions. However, there is no standard way of identifying ice-induced power loss. This paper describes three methods for creating power threshold curves that can be used to separate iced production periods from non-iced production periods. The first approach relies on a percentage deviation from the manufacturer's power curve. The other two approaches fit threshold curves based on the observed variance of non-iced production data. These approaches are applied to turbines in four wind parks and compared with each other and to observations of icing on the nacelle of one of the turbines in each park. It is found that setting an ice threshold curve using the 0.1 quantile of the observed power data during normal operation, with a 2-h minimum duration, is the best approach...
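The quantile-threshold idea can be sketched as below. This is an illustration under stated assumptions, not the paper's implementation: the function names, wind-speed binning, and the use of 12 consecutive 10-min records as a stand-in for the 2-h minimum duration are all choices made here.

```python
import numpy as np

def ice_threshold_curve(wind, power, bins, q=0.1):
    """Threshold curve as the q-quantile of normally operating power
    in each wind-speed bin (sketch of the quantile-based approach)."""
    idx = np.digitize(wind, bins)
    return np.array([np.quantile(power[idx == i], q) if np.any(idx == i) else np.nan
                     for i in range(1, len(bins))])

def flag_icing(power, thresholds, bin_index, min_samples=12):
    """Flag samples whose power falls below the threshold curve, keeping
    only runs of at least min_samples (e.g. 12 x 10-min records ~ 2 h)."""
    below = power < thresholds[bin_index]
    flagged = np.zeros_like(below)
    run = 0
    for i, b in enumerate(below):
        run = run + 1 if b else 0
        if run >= min_samples:
            flagged[i - min_samples + 1:i + 1] = True
    return flagged
```

With real SCADA data one would first filter out curtailment and downtime before fitting the thresholds.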
Directory of Open Access Journals (Sweden)
Yasuhisa Fujiki
Functional fluorescence imaging has been widely applied to analyze spatio-temporal patterns of cellular dynamics in the brain and spinal cord. However, it is difficult to integrate spatial information obtained from imaging data in specific regions of interest across multiple samples, due to large variability in the size, shape and internal structure of samples. To solve this problem, we attempted to standardize transversely sectioned spinal cord images focusing on the laminar structure in the gray matter. We employed three standardization methods: the affine transformation (AT), the angle-dependent transformation (ADT), and the combination of these two methods (AT+ADT). The ADT is a novel non-linear transformation method developed in this study to adjust an individual image onto the template image in the polar coordinate system. We next compared the accuracy of these three standardization methods. We evaluated two indices, i.e., the spatial distribution of pixels that are not categorized to any layer and the error ratio by the leave-one-out cross validation method. In this study, we used neuron-specific marker (NeuN)-stained histological images of transversely sectioned cervical spinal cord slices (21 images obtained from 4 rats) to create the standard atlas and also to serve for benchmark tests. We found that the AT+ADT outperformed the other two methods, though the accuracy of each method varied depending on the layer. This novel image standardization technique would be applicable to optical recording such as voltage-sensitive dye imaging, and will enable statistical evaluations of neural activation across multiple samples.
Standard test method for plutonium assay by plutonium (III) diode array spectrophotometry
American Society for Testing and Materials. Philadelphia
2002-01-01
1.1 This test method describes the determination of total plutonium as plutonium(III) in nitrate and chloride solutions. The technique is applicable to solutions of plutonium dioxide powders and pellets (Test Methods C 697), nuclear grade mixed oxides (Test Methods C 698), plutonium metal (Test Methods C 758), and plutonium nitrate solutions (Test Methods C 759). Solid samples are dissolved using the appropriate dissolution techniques described in Practice C 1168. The use of this technique for other plutonium-bearing materials has been reported (1-5), but final determination of applicability must be made by the user. The applicable concentration range for plutonium sample solutions is 10–200 g Pu/L. 1.2 The values stated in SI units are to be regarded as standard. No other units of measurement are included in this standard. 1.3 This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibility of the user of this standard to establish appropria...
Use of sum-peak and coincidence counting methods for activity standardization of {sup 22}Na
Energy Technology Data Exchange (ETDEWEB)
Oliveira, E.M. de, E-mail: estela@ird.gov.br [Laboratorio Nacional de Metrologia das Radiacoes Ionizantes (LNMRI/IRD/CNEN), Av. Salvador Allende, s/n, Recreio, CEP 22780-160 Rio de Janeiro (Brazil); Iwahara, A.; Poledna, R. [Laboratorio Nacional de Metrologia das Radiacoes Ionizantes (LNMRI/IRD/CNEN), Av. Salvador Allende, s/n, Recreio, CEP 22780-160 Rio de Janeiro (Brazil); Silva, M.A.L. da [Coordenacao Geral de Instalacoes Nucleares/Comissao Nacional de Energia Nuclear, R. Gal. Severiano, 90 - Botafogo, CEP 22290-901 Rio de Janeiro (Brazil); Tauhata, L. [Fundacao Carlos Chagas Filho de Amparo a Pesquisa do Estado do Rio de Janeiro (FAPERJ), Av. Erasmo Braga, 118 - 6° andar, CEP 20020-000 Centro, Rio de Janeiro (Brazil); Delgado, J.U. [Laboratorio Nacional de Metrologia das Radiacoes Ionizantes (LNMRI/IRD/CNEN), Av. Salvador Allende, s/n, Recreio, CEP 22780-160 Rio de Janeiro (Brazil); Lopes, R.T. [Laboratorio de Instrumentacao Nuclear (LIN/PEN/COPPE/UFRJ), Caixa Postal 68509, CEP 21945-970 Rio de Janeiro (Brazil)]
2012-09-21
A solution containing the positron emitter {sup 22}Na has been absolutely standardized using the 4{pi}{beta}-{gamma} coincidence counting method and the sum-peak spectrometry counting method. In the 4{pi}{beta}-{gamma} coincidence method, two ways of measuring the activity concentration were used: gating on the 1275 keV photopeak and gating on the 1786 keV sum-peak, where knowledge of the {beta}{sup +}-branching ratio is required. In the sum-peak method the measurements were carried out using three experimental arrangements: the first composed of a well-type 5 in. × 5 in. NaI(Tl) scintillation crystal; the second of a 3 in. × 3 in. NaI(Tl) scintillation crystal placed on top of the first, resulting in a 4{pi} counting geometry; and the third a high-purity coaxial germanium detector. The results obtained by these two methods are compatible within the standard uncertainty values with a coverage factor of k=2 ({approx}95% confidence level). This means that sum-peak counting, with its much simpler experimental setup, gives results consistent with the complex 4{pi}{beta}-{gamma} coincidence counting system for the activity standardization of {sup 22}Na, with smaller uncertainties. Moreover, the time needed to obtain the standardization result was considerably shorter than for the coincidence measurements used in this work.
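The idealized relation behind 4{pi}{beta}-{gamma} coincidence counting can be sketched as below. This is the textbook zero-dead-time, unit-efficiency-extrapolation form; the paper's actual measurement applies corrections (dead time, resolving time, efficiency extrapolation) that are omitted here, and the numbers are illustrative.

```python
def coincidence_activity(n_beta, n_gamma, n_coinc, live_time):
    """Idealized 4*pi beta-gamma coincidence relation:
    A = (N_beta * N_gamma) / (N_coinc * t),
    where N_beta and N_gamma are the beta- and gamma-channel counts,
    N_coinc the coincidence counts, and t the live time in seconds."""
    return (n_beta * n_gamma) / (n_coinc * live_time)

# Illustrative counts only:
a = coincidence_activity(n_beta=9.0e5, n_gamma=4.0e5, n_coinc=3.0e5, live_time=600.0)
print(a)  # activity in Bq for these made-up counts: 2000.0
```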
CYCLING CURVES AND THEIR APPLICATIONS
Directory of Open Access Journals (Sweden)
RAICU Lucian
2015-06-01
This paper proposes an analysis of cyclic curves, which can be considered among the most important curves with regard to their applications in science, technology, design, architecture and art. These curves include the cycloid, epicycloid, hypocycloid, spherical cycloid and special cases thereof. In the first part of the paper the main curves of the cycloid family are presented together with their methods of generation and their parametric equations. In the last part, applications of cycloids are highlighted in different areas of science, technology and art.
A simple web-based tool to compare freshwater fish data collected using AFS standard methods
Bonar, Scott A.; Mercado-Silva, Norman; Rahr, Matt; Torrey, Yuta T.; Cate, Averill
2016-01-01
The American Fisheries Society (AFS) recently published Standard Methods for Sampling North American Freshwater Fishes. Enlisting the expertise of 284 scientists from 107 organizations throughout Canada, Mexico, and the United States, this text was developed to facilitate comparisons of fish data across regions or time. Here we describe a user-friendly web tool that automates among-sample comparisons in individual fish condition, population length-frequency distributions, and catch per unit effort (CPUE) data collected using AFS standard methods. Currently, the web tool (1) provides instantaneous summaries of almost 4,000 data sets of condition, length frequency, and CPUE of common freshwater fishes collected using standard gears in 43 states and provinces; (2) is easily appended with new standardized field data to update subsequent queries and summaries; (3) compares fish data from a particular water body with continent, ecoregion, and state data summaries; and (4) provides additional information about AFS standard fish sampling including benefits, ongoing validation studies, and opportunities to comment on specific methods. The web tool—programmed in a PHP-based Drupal framework—was supported by several AFS Sections, agencies, and universities and is freely available from the AFS website and fisheriesstandardsampling.org. With widespread use, the online tool could become an important resource for fisheries biologists.
[Establish quality evaluation method based on standard decoction of Danshen extract].
Dong, Qing; Yu, Hua-Tao; Dai, Yun-Tao; Chao, Jung; Fan, Zi-Quan; Wang, Dan-Dan; Zhu, Chao; Chen, Shi-Lin
2017-03-01
The quality of Danshen extract granules on the market varies widely, mainly due to the heterogeneous quality of the raw material Salvia miltiorrhiza, differing production procedures, and the lack of a good quality evaluation method. Formula granules and the "standard decoction" should be of the same quality. In this paper, a systematic method for evaluating the quality of Danshen decoction was established from the perspective of the "standard decoction", in order to explore the main factors affecting the quality uniformity of Danshen extract granules. Danshen standard decoction was prepared; a fingerprint method was then developed to determine the content of salvianolic acid B; and the main peaks in the fingerprint were identified by UPLC-QTOF/MS to clarify the chemical composition of Danshen decoction. Three indices were calculated to evaluate the stability of the whole process: the extraction ratio, the transfer rate of index components, and the pH value. The results showed that the main components of Danshen decoction were phenolic acids; the extraction ratio, the transfer rate of salvianolic acid B and the pH value were at relatively stable levels; and the similarity among fingerprints of the standard decoctions was high, indicating that the preparation procedure was stable. The level of salvianolic acid B in the standard decoction varied over a wide range, mainly due to differences in the quality of Salviae Miltiorrhizae Radix et Rhizoma. Copyright© by the Chinese Pharmaceutical Association.
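The transfer-rate index mentioned above can be sketched in its simplest form. This is an illustrative definition only; the paper's exact formula and the example amounts are assumptions.

```python
def transfer_rate(content_in_decoction_mg, content_in_herb_mg):
    """Transfer rate of an index component (e.g. salvianolic acid B) from
    the raw herb into the decoction, as a percentage of the amount
    originally present. Simplified illustrative definition."""
    return 100.0 * content_in_decoction_mg / content_in_herb_mg

# Hypothetical amounts: 36 mg recovered in decoction from 90 mg in the herb
print(transfer_rate(36.0, 90.0))  # 40.0 (%)
```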
THE COST MANAGEMENT BY APPLYING THE STANDARD COSTING METHOD IN THE FURNITURE INDUSTRY-Case study
Directory of Open Access Journals (Sweden)
Radu Mărginean
2013-06-01
Among the modern calculation methods used in managerial accounting, with wide applicability in industrial production, is the standard costing method. This approach to cost calculation has real value in managerial accounting because of its usefulness in forecasting production costs, which supports managers in the decision-making process. The standard costing method is used in many enterprises with production activity. The research objective of this paper is to study the possibility of implementing this modern cost calculation method in a company from the Romanian furniture industry, using real financial data. To achieve this aim, we drew on specialized literature in the field of managerial accounting, showing the strengths and weaknesses of the method. The case study demonstrates that the standard costing method is fully applicable in our case and, in conclusion, has real value in the cost management process for enterprises in the Romanian furniture industry.
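The core mechanics of standard costing, comparing actual cost against a preset standard and decomposing the variance, can be sketched as follows. The lumber figures are hypothetical, chosen only to make the arithmetic visible; this is the classic textbook decomposition, not the case study's data.

```python
def cost_variances(std_qty, std_price, actual_qty, actual_price):
    """Classic standard-costing decomposition for one input:
    price variance = (actual price - standard price) * actual quantity
    usage variance = (actual qty - standard qty) * standard price
    Positive values are unfavorable (actual cost above standard)."""
    price_var = (actual_price - std_price) * actual_qty
    usage_var = (actual_qty - std_qty) * std_price
    total_var = actual_qty * actual_price - std_qty * std_price
    return price_var, usage_var, total_var

# Hypothetical lumber input for one furniture batch (units and currency arbitrary):
pv, uv, tv = cost_variances(std_qty=100, std_price=8.0,
                            actual_qty=110, actual_price=8.5)
print(pv, uv, tv)  # 55.0 80.0 135.0; price + usage variance = total variance
```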
Energy Technology Data Exchange (ETDEWEB)
Reiner, Jessica L. [National Institute of Standards and Technology, Analytical Chemistry Division, Gaithersburg, MD (United States); National Institute of Standards and Technology, Analytical Chemistry Division, Hollings Marine Laboratory, Charleston, SC (United States); Phinney, Karen W. [National Institute of Standards and Technology, Analytical Chemistry Division, Gaithersburg, MD (United States); Keller, Jennifer M. [National Institute of Standards and Technology, Analytical Chemistry Division, Hollings Marine Laboratory, Charleston, SC (United States)
2011-11-15
Perfluorinated compounds (PFCs) were measured in three National Institute of Standards and Technology (NIST) Standard Reference Materials (SRMs) (SRM 1950 Metabolites in Human Plasma, SRM 1957 Organic Contaminants in Non-fortified Human Serum, and SRM 1958 Organic Contaminants in Fortified Human Serum) using two analytical approaches. The methods offer some independence, with two extraction types and two liquid chromatographic separation methods. The first extraction method investigated was acidification of the sample followed by solid-phase extraction (SPE) using a weak anion exchange cartridge. The second method used an acetonitrile extraction followed by SPE using a graphitized non-porous carbon cartridge. The extracts were separated using a reversed-phase C{sub 8} stationary phase and a pentafluorophenyl (PFP) stationary phase. Measured values from both methods for the two human serum SRMs, 1957 and 1958, agreed with reference values on the Certificates of Analysis. Perfluorooctane sulfonate (PFOS) values were obtained for the first time in human plasma SRM 1950, with good reproducibility among the methods (below 5% relative standard deviation). The nominal mass interference from taurodeoxycholic acid, which has caused overestimation of the amount of PFOS in biological samples, was separated from PFOS using the PFP stationary phase. Other PFCs were also detected in SRM 1950 and are reported. SRM 1950 can be used as a control material for human biomonitoring studies and as an aid in developing new measurement methods. (orig.)
Standard test method for measurement of oxidation-reduction potential (ORP) of soil
American Society for Testing and Materials. Philadelphia
2009-01-01
1.1 This test method covers a procedure and related test equipment for measuring oxidation-reduction potential (ORP) of soil samples removed from the ground. 1.2 The procedure in Section 9 is appropriate for field and laboratory measurements. 1.3 Accurate measurement of oxidation-reduction potential aids in the analysis of soil corrosivity and its impact on buried metallic structure corrosion rates. 1.4 The values stated in inch-pound units are to be regarded as standard. The values given in parentheses are mathematical conversions to SI units that are provided for information only and are not considered standard. 1.5 This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibility of the user of this standard to establish appropriate safety and health practices and determine the applicability of regulatory limitations prior to use.
American Society for Testing and Materials. Philadelphia
2010-01-01
1.1 This test method covers the determination of the abrasiveness of ink-impregnated fabric printer ribbons and other web materials by means of a sliding wear test. 1.2 The values stated in inch-pound units are to be regarded as standard. The values given in parentheses are mathematical conversions to SI units that are provided for information only and are not considered standard. 1.3 This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibility of the user of this standard to establish appropriate safety and health practices and determine the applicability of regulatory limitations prior to use.
Standard test method for wear testing with a pin-on-disk apparatus
American Society for Testing and Materials. Philadelphia
2005-01-01
1.1 This test method covers a laboratory procedure for determining the wear of materials during sliding using a pin-on-disk apparatus. Materials are tested in pairs under nominally non-abrasive conditions. The principal areas of experimental attention in using this type of apparatus to measure wear are described. The coefficient of friction may also be determined. 1.2 The values stated in SI units are to be regarded as standard. No other units of measurement are included in this standard. 1.3 This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibility of the user of this standard to establish appropriate safety and health practices and determine the applicability of regulatory limitations prior to use.
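Pin-on-disk wear data are commonly reduced to a specific wear rate. The sketch below shows one common reduction, converting mass loss to volume via density; note that the standard itself reports wear as volume loss, and the material values here are hypothetical.

```python
def specific_wear_rate(mass_loss_g, density_g_cm3, load_n, distance_m):
    """Specific wear rate k = V / (F * d) in mm^3/(N*m), with the wear
    volume V obtained from mass loss and density (V = dm / rho).
    A common way to reduce pin-on-disk sliding-wear data."""
    volume_mm3 = (mass_loss_g / density_g_cm3) * 1000.0  # cm^3 -> mm^3
    return volume_mm3 / (load_n * distance_m)

# Hypothetical steel pin: 15.7 mg mass loss, density 7.85 g/cm^3,
# 10 N normal load over 1000 m of sliding:
k = specific_wear_rate(mass_loss_g=0.0157, density_g_cm3=7.85,
                       load_n=10.0, distance_m=1000.0)
print(k)  # mm^3 per N*m, approximately 2.0e-4 for these inputs
```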
American Society for Testing and Materials. Philadelphia
2008-01-01
1.1 This test method covers the determination of nonvolatile residue (NVR) fallout in environmentally controlled areas used for the assembly, testing, and processing of spacecraft. 1.2 The NVR of interest is that which is deposited on sampling plate surfaces at room temperature: it is left to the user to infer the relationship between the NVR found on the sampling plate surface and that found on any other surfaces. 1.3 This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibility of the user of this standard to establish appropriate safety and health practices and determine the applicability of regulatory limitations prior to use. 1.4 The values stated in SI units are to be regarded as standard. No other units of measurement are included in this standard.
Standard test method for measurement of corrosion potentials of Aluminum alloys
American Society for Testing and Materials. Philadelphia
1997-01-01
1.1 This test method covers a procedure for measurement of the corrosion potential (see Note 1) of an aluminum alloy in an aqueous solution of sodium chloride with enough hydrogen peroxide added to provide an ample supply of cathodic reactant. Note 1—The corrosion potential is sometimes referred to as the open-circuit solution or rest potential. 1.2 The values stated in SI units are to be regarded as standard. No other units of measurement are included in this standard. 1.3 This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibility of the user of this standard to establish appropriate safety and health practices and determine the applicability of regulatory limitations prior to use.
Standard Test Method for Hail Impact Resistance of Aerospace Transparent Enclosures
American Society for Testing and Materials. Philadelphia
2010-01-01
1.1 This test method covers the determination of the impact resistance of an aerospace transparent enclosure, hereinafter called windshield, during hailstorm conditions using simulated hailstones consisting of ice balls molded under tightly controlled conditions. 1.2 The values stated in inch-pound units are to be regarded as standard. The values given in parentheses are mathematical conversions to SI units that are provided for information only and are not considered standard. 1.3 This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibility of the user of this standard to establish appropriate safety and health practices and determine the applicability of regulatory limitations prior to use. For specific hazard statements see Section 7.
Standard Test Method for Gravimetric Determination of Nonvolatile Residue From Cleanroom Wipers
American Society for Testing and Materials. Philadelphia
2006-01-01
1.1 This test method covers the determination of solvent extractable nonvolatile residue (NVR) from wipers used in assembly, cleaning, or testing of spacecraft, but not from those used for analytical surface sampling of hardware. 1.2 The values stated in SI units are to be regarded as the standard. No other units of measurement are included in this standard. 1.3 The NVR of interest is that which can be extracted from cleanroom wipers using a specified solvent that has been selected for its extractive qualities. Alternative solvents may be selected, but since their use may result in different values being generated, they must be identified in the procedure data sheet. This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibility of the user of this standard to establish appropriate safety and health practices and determine the applicability of regulatory limitations prior to use.
Standard test methods for elevated temperature tension tests of metallic materials
American Society for Testing and Materials. Philadelphia
2009-01-01
1.1 These test methods cover procedure and equipment for the determination of tensile strength, yield strength, elongation, and reduction of area of metallic materials at elevated temperatures. 1.2 Determination of modulus of elasticity and proportional limit are not included. 1.3 Tension tests under conditions of rapid heating or rapid strain rates are not included. 1.4 The values stated in inch-pound units are to be regarded as standard. The values given in parentheses are mathematical conversions to SI units that are provided for information only and are not considered standard. 1.5 This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibility of the user of this standard to establish appropriate safety and health practices and determine the applicability of regulatory limitations prior to use.
Standard test method for plastic strain ratio r for sheet metal
American Society for Testing and Materials. Philadelphia
2000-01-01
1.1 This test method covers special tension testing for the measurement of the plastic strain ratio, r, of sheet metal intended for deep-drawing applications. 1.2 The values stated in inch-pound units are to be regarded as standard. The values given in parentheses are mathematical conversions to SI units that are provided for information only and are not considered standard. 1.3 This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibility of the user of this standard to establish appropriate safety and health practices and determine the applicability of regulatory limitations prior to use.
American Society for Testing and Materials. Philadelphia
2010-01-01
1.1 This practice facilitates the interoperability of computed radiography (CR) imaging and data acquisition equipment by specifying image data transfer and archival storage methods in commonly accepted terms. This practice is intended to be used in conjunction with Practice E2339 on Digital Imaging and Communication in Nondestructive Evaluation (DICONDE). Practice E2339 defines an industrial adaptation of the NEMA Standards Publication titled Digital Imaging and Communications in Medicine (DICOM, see http://medical.nema.org), an international standard for image data acquisition, review, storage and archival storage. The goal of Practice E2339, commonly referred to as DICONDE, is to provide a standard that facilitates the display and analysis of NDE results on any system conforming to the DICONDE standard. Toward that end, Practice E2339 provides a data dictionary and a set of information modules that are applicable to all NDE modalities. This practice supplements Practice E2339 by providing information objec...
American Society for Testing and Materials. Philadelphia
1985-01-01
1.1 This test method covers a procedure for a laboratory test for performing an initial evaluation (screening) of nonmetallic seal materials by immersion in a simulated geothermal test fluid. 1.2 The values stated in SI units are to be regarded as standard. No other units of measurement are included in this standard. 1.3 This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibility of the user of this standard to establish appropriate safety and health practices and determine the applicability of regulatory limitations prior to use. For specific precautionary statements, see Section 6 and 11.7.
Standard test method for plutonium by Iron (II)/Chromium (VI) amperometric titration
American Society for Testing and Materials. Philadelphia
2002-01-01
1.1 This test method covers the determination of plutonium in unirradiated nuclear-grade plutonium dioxide, uranium-plutonium mixed oxides with uranium (U)/plutonium (Pu) ratios up to 21, plutonium metal, and plutonium nitrate solutions. Optimum quantities of plutonium to measure are 7 to 15 mg. 1.2 The values stated in SI units are to be regarded as standard. No other units of measurement are included in this standard. 1.3 This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibility of the user of this standard to establish appropriate safety and health practices and determine the applicability of regulatory limitations prior to use.
Standard test method for calibration of surface/stress measuring devices
American Society for Testing and Materials. Philadelphia
1997-01-01
1.1 This test method covers calibration or verification of calibration, or both, of surface-stress measuring devices used to measure stress in annealed and heat-strengthened or tempered glass using polariscopic or refractometry based principles. 1.2 This test method is nondestructive. 1.3 This test method uses transmitted light, and therefore, is applicable to light-transmitting glasses. 1.4 This test method is not applicable to chemically tempered glass. 1.5 Using the procedure described, surface stresses can be measured only on the “tin” side of float glass. 1.6 This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibility of the user of this standard to establish appropriate safety and health practices and determine the applicability of regulatory limitations prior to use.
Standard Test Method for Water Absorption of Core Materials for Structural Sandwich Constructions
American Society for Testing and Materials. Philadelphia
2001-01-01
1.1 This test method covers the determination of the relative amount of water absorption by various types of structural core materials when immersed or in a high relative humidity environment. This test method is intended to apply only to structural core materials: honeycomb, foam, and balsa wood. 1.2 The values stated in SI units are to be regarded as the standard. The inch-pound units given may be approximate. This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibility of the user of this standard to establish appropriate safety and health practices and determine the applicability of regulatory limitations prior to use.
INFLUENCE OF MOVING LOADS ON CURVED BRIDGES
Thamer A. Z*, Jabbbar S. A
2016-01-01
The behavior of curved slab bridge decks with uniform thickness under moving load is investigated in this study. Three radii of curvature R are used (25, 50, and 75 m), along with the straight bridge, R = ∞. The decks are simply supported or clamped along the radial edges and free at the circular edges. The AASHTO [1] standard axle load of the H20-44 truck is used and assumed to move along three track positions on the bridge. The finite element method is employed for the analysis, and the ANSYS 5...
Differential geometry and topology of curves
Animov, Yu
2001-01-01
Differential geometry is an actively developing area of modern mathematics. This volume presents a classical approach to the general topics of the geometry of curves, including the theory of curves in n-dimensional Euclidean space. The author investigates problems for special classes of curves and gives the working method used to obtain the conditions for closed polygonal curves. The proof of the Bakel-Werner theorem under conditions of boundedness for curves with periodic curvature and torsion is also presented. This volume also highlights the contributions made by great geometers, past and present, to differential geometry and the topology of curves.
The Method of Eichhorn with Non-Standard Projections for a Single Plate
Cardona, O.; Corona-Galindo, M.
1990-11-01
We develop the expressions for Eichhorn's method in astrometry for non-standard projections. The method is used to obtain the spherical coordinates of stars on astronomical plates when all the variables contain errors. Key words: ASTROMETRY
Multigrid method based on a space-time approach with standard coarsening for parabolic problems
S.R. Franco (Sebastião Romero); F.J. Gaspar Lorenz (Franscisco); M.A. Villela Pinto (Marcio Augusto); C. Rodrigo (Carmen)
2018-01-01
In this work, a space-time multigrid method which uses standard coarsening in both the temporal and spatial domains and combines the use of different smoothers is proposed for the solution of the heat equation in one and two space dimensions. In particular, an adaptive smoothing strategy,
Standard test method for measurement of 235U fraction using enrichment meter principle
American Society for Testing and Materials. Philadelphia
2008-01-01
1.1 This test method covers the quantitative determination of the fraction of 235U in uranium using measurement of the 185.7 keV gamma-ray produced during the decay of 235U. 1.2 This test method is applicable to items containing homogeneous uranium-bearing materials of known chemical composition in which the compound is considered infinitely thick with respect to 185.7 keV gamma-rays. 1.3 This test method can be used for the entire range of 235U fraction as a weight percent, from depleted (0.2 % 235U) to highly enriched (97.5 % 235U). 1.4 Measurement of items that have not reached secular equilibrium between 238U and 234Th may not produce the stated bias when low-resolution detectors are used with the computational method listed in Annex A2. 1.5 The values stated in SI units are to be regarded as standard. No other units of measurement are included in this standard. 1.6 This standard may involve hazardous materials, operations, and equipment. This standard does not purport to address all of the safety co...
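The enrichment-meter principle above reduces, for an "infinitely thick" item, to a linear relation between the net 185.7 keV count rate and the 235U weight fraction. A minimal sketch; the calibration constant is a made-up placeholder, which in practice is determined from standards of known enrichment measured in the same counting geometry:

```python
# Enrichment-meter sketch: for an infinitely thick uranium item, the
# 235U weight fraction is proportional to the net 185.7 keV count rate.
# k_cal is a made-up placeholder value, not a real calibration.
def enrichment_percent(net_count_rate_cps, k_cal=0.02):
    """Return the 235U fraction in weight percent (assumed linear model)."""
    return k_cal * net_count_rate_cps

e = enrichment_percent(235.0)  # with this placeholder k_cal: 4.7 %
```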
Non-standard perturbative methods for the effective potential in λφ4 QFT
International Nuclear Information System (INIS)
Okopinska, A.
1986-07-01
The effective potential in scalar QFT is calculated using non-standard perturbative methods and compared with the conventional loop expansion. In space-time dimensions 0 and 1 the results are compared with the 'exact' effective potential obtained numerically. In 4 dimensions we show that λφ4 theory is non-interacting. (author)
Next Generation Science Standards: A National Mixed-Methods Study on Teacher Readiness
Haag, Susan; Megowan, Colleen
2015-01-01
Next Generation Science Standards (NGSS) science and engineering practices are ways of eliciting the reasoning and applying foundational ideas in science. As research has revealed barriers to states and schools adopting the NGSS, this mixed-methods study attempts to identify characteristics of professional development (PD) that will support NGSS…
Building America Guidance for Identifying and Overcoming Code, Standard, and Rating Method Barriers
Energy Technology Data Exchange (ETDEWEB)
Cole, P. C. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Halverson, M. A. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)
2013-09-01
This guidance document was prepared using the input from the meeting summarized in the draft CSI Roadmap to provide Building America research teams and partners with specific information and approaches to identifying and overcoming potential barriers to Building America innovations arising in and/or stemming from codes, standards, and rating methods.
Progress Report on Alloy 617 Isochronous Stress-Strain Curves
Energy Technology Data Exchange (ETDEWEB)
Jill K. Wright; Richard N. Wright; Nancy J. Lybeck
2014-03-01
Isochronous stress-strain curves for Alloy 617 up to a temperature of 1000°C will be required to qualify the material for elevated temperature design in Section III, Division 1, Subsection NH of the ASME Boiler and Pressure Vessel Code. Several potential methods for developing these curves are reviewed in this report. It is shown that in general power-law creep is the rate-controlling deformation mechanism for a wide range of alloy heats, test temperatures, and stresses. Measurement of the strain rate sensitivity of Alloy 617 indicates that the material is highly strain rate sensitive in the tensile deformation range above about 750°C. This suggests that the concept of a hot tensile curve as a bounding case on the isochronous stress-strain diagrams is problematic. The impact of strain rate on the hot tensile curves is examined, and it is concluded that incorporating such a curve is only meaningful if a single tensile strain rate (typically the ASTM standard rate of 0.5%/min) is arbitrarily defined. Current experimentally determined creep data are compared to isochronous stress-strain curves proposed previously by the German programs in the 1980s and by the 1990 draft ASME Code Case. Variability in how well the experimental data are represented by the proposed design curves suggests that further analysis is necessary prior to completing a new draft Code Case.
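An isochronous stress-strain curve of the kind discussed above can be generated from a power-law (Norton) creep model by summing the elastic and creep strains at a fixed time. A minimal sketch; E, A and n are illustrative placeholders, not measured Alloy 617 properties:

```python
# Isochronous stress-strain sketch: total strain at a fixed time is the
# elastic strain plus Norton power-law creep strain.
E = 150e3   # Young's modulus, MPa (placeholder)
A = 1e-20   # Norton coefficient, 1/(h * MPa**n) (placeholder)
n = 5.0     # Norton stress exponent (placeholder)

def total_strain(stress_mpa, time_h):
    """Elastic plus power-law creep strain (dimensionless)."""
    return stress_mpa / E + A * stress_mpa**n * time_h

def isochronous_curve(time_h, stresses_mpa):
    """(strain, stress) points of one isochronous curve at a fixed time."""
    return [(total_strain(s, time_h), s) for s in stresses_mpa]

curve_1000h = isochronous_curve(1000.0, [50.0, 100.0, 150.0, 200.0])
```

Repeating the construction for several times (e.g. 10, 100, 1000 h) yields the family of isochronous curves; a hot tensile curve would bound them only at a single assumed strain rate, as the report notes.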
Directory of Open Access Journals (Sweden)
Fatemeh Dehghan
2014-11-01
Background: Detecting bacterial contamination in drinking water by the culture method is time- and cost-consuming, taking a few days depending on the degree of contamination, during which people continue to use the tap water. Molecular methods are rapid and sensitive. In this study a rapid multiplex PCR method was used for the simultaneous analysis of coliform bacteria and E. coli, and for the probable detection of VBNC bacteria, in drinking water. The experiments were performed in the bacteriological laboratory of the Water and Wastewater Corporation in Markazi province. Materials and Methods: Amplification of a fragment from each of the lacZ and uidA genes in a multiplex PCR was used for the detection of coliforms. Eighty samples (36 from wells, 41 from the water distribution network, and 3 from water storages) were taken from the Arak drinking water system and examined by amplification of the lacZ and uidA genes in a multiplex PCR. In parallel, the MPN test was applied to all samples as a standard method for comparison of results. Standard bacteria, pure bacteria isolated from positive MPN cultures, and CRM were examined by both the PCR and MPN methods. Results: The results for most samples from the water network, water storages, and wells were the same for both the MPN and PCR methods. The results for standard bacteria and for pure cultures of bacteria isolated from positive MPN cultures and CRM confirmed the PCR method. Five samples were positive by PCR but negative by MPN. The duration of the PCR was decreased by about 105 min by changing the PCR program and electrophoresis factors. Conclusion: Multiplex PCR can detect coliform bacteria and E. coli simultaneously in drinking water.
American Society for Testing and Materials. Philadelphia
2011-01-01
1.1 This test method is used to determine the percent nodularity and the nodule count per unit area (that is, number of nodules per mm2) using a light microscopical image of graphite in nodular cast iron. Images generated by other devices, such as a scanning electron microscope, are not specifically addressed, but can be utilized if the system is calibrated in both x and y directions. 1.2 Measurement of secondary or temper carbon in other types of cast iron, for example, malleable cast iron or in graphitic tool steels, is not specifically included in this standard because of the different graphite shapes and sizes inherent to such grades. 1.3 This standard deals only with the recommended test method and nothing in it should be construed as defining or establishing limits of acceptability or fitness for purpose of the material tested. 1.4 The values stated in SI units are to be regarded as standard. No other units of measurement are included in this standard. 1.5 This standard does not purport to address al...
Standard test method for guided bend test for ductility of welds
American Society for Testing and Materials. Philadelphia
2002-01-01
1.1 This test method covers a guided bend test for the determination of soundness and ductility of welds in ferrous and nonferrous products. Defects, not shown by X rays, may appear in the surface of a specimen when it is subjected to progressive localized overstressing. This guided bend test has been developed primarily for plates and is not intended to be substituted for other methods of bend testing. 1.2 The values stated in inch-pound units are to be regarded as standard. The values given in parentheses are mathematical conversions to SI units that are provided for information only and are not considered standard. Note 1—For additional information see Terminology E 6, and American Welding Society Standard D 1.1. 1.3 This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibility of the user of this standard to establish appropriate safety and health practices and determine the applicability of regulatory limitations prior to use.
Directory of Open Access Journals (Sweden)
Lihong Zhang
2013-01-01
To bridge the convergence between traditional Chinese medicine (TCM) and modern medicine originating in the West, a new method based on the area under the absorbance-wavelength curve (AUAWC), obtained by spectrophotometer scanning, was investigated and compared with an HPLC method to explore metabolomic pharmacokinetics in rats. The AUAWC and the total drug concentration were obtained after Yangxue was injected into rats. Meanwhile, the individual plasma concentrations of sodium ferulate, tetramethylpyrazine hydrochloride, tanshinol sodium, and sodium tanshinone IIA sulfonate were determined by HPLC. The metabolomic multicomponent plasma concentration-time profile from AUAWC and the individual-component profiles from HPLC were compared. The AUAWC data followed a one-compartment model with a mean area under the concentration versus time curve (AUC) of 9370.58 min·μg/mL and a mean elimination half-life (t1/2) of 12.92 min. The HPLC results demonstrated that sodium ferulate and tetramethylpyrazine hydrochloride followed a one-compartment model with AUCs of 6075.50 and 876.94 min·μg/mL and t1/2 of 10.85 and 20.57 min, respectively. Tanshinol sodium and sodium tanshinone IIA sulfonate showed a two-compartment model, with AUCs of 29.58 and 201.46 and t1/2β of 1.76 and 16.90, respectively. The profiles indicated that the AUAWC method can be used to study the pharmacokinetics of TCM with multiple components and to improve the development of its active theory and clinical application, combined with the in vivo metabolomic profile from HPLC.
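The one-compartment quantities reported above (AUC and elimination half-life) follow in closed form from a mono-exponential elimination model C(t) = C0·exp(-k·t). A minimal sketch; C0 and k below are illustrative values, not the paper's fitted parameters:

```python
import math

# One-compartment pharmacokinetics sketch: with mono-exponential
# elimination C(t) = C0 * exp(-k * t), AUC and half-life are closed-form.
def pk_one_compartment(c0, k):
    """Return (AUC from 0 to infinity, elimination half-life)."""
    auc = c0 / k                 # integral of C(t) over [0, inf)
    t_half = math.log(2.0) / k   # time for concentration to halve
    return auc, t_half

# illustrative values: c0 in ug/mL, k in 1/min
auc, t_half = pk_one_compartment(c0=500.0, k=0.0536)
```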
Fan, Zhichao; Hwang, Keh-Chih; Rogers, John A.; Huang, Yonggang; Zhang, Yihui
2018-02-01
Mechanically-guided 3D assembly based on controlled, compressive buckling represents a promising, emerging approach for forming complex 3D mesostructures in advanced materials. Due to the versatile applicability to a broad set of material types (including device-grade single-crystal silicon) over length scales from nanometers to centimeters, a wide range of novel applications have been demonstrated in soft electronic systems, interactive bio-interfaces, and tunable electromagnetic devices. Previously reported 3D designs relied mainly on finite element analyses (FEA) as a guide, but the massive numerical simulations and computational efforts necessary to obtain the assembly parameters for a targeted 3D geometry prevent rapid exploration of engineering options. A systematic understanding of the relationship between a 3D shape and the associated parameters for assembly requires the development of a general theory for the postbuckling process. In this paper, a double perturbation method is established for the postbuckling analyses of planar curved beams, of direct relevance to the assembly of ribbon-shaped 3D mesostructures. By introducing two perturbation parameters related to the initial configuration and the deformation, the highly nonlinear governing equations can be transformed into a series of solvable, linear equations that give analytic solutions to the displacements and curvatures during postbuckling. Systematic analyses of postbuckling in three representative ribbon shapes (sinusoidal, polynomial, and arc configurations) illustrate the validity of the theoretical method, through comparisons to the results of experiment and FEA. These results shed light on the relationship between the important deformation quantities (e.g., mode ratio and maximum strain) and the assembly parameters (e.g., initial configuration and the applied strain). This double perturbation method provides an attractive route to the inverse design of ribbon-shaped 3D geometries, as
Soil Water Thermodynamic to Unify Water Retention Curve by Pressure Plates and Tensiometer
Directory of Open Access Journals (Sweden)
Erik eBraudeau
2014-10-01
The pressure plate method is a standard method for measuring pF curves, also called soil water retention curves, over a large soil moisture range from saturation to a dry state corresponding to a tension pressure of nearly 1500 kPa. However, the pressure plate can only provide discrete water retention curves represented by a dozen measured points. In contrast, measurement of the soil water retention curve by tensiometer is direct and continuous, but limited to the range of the tensiometer reading: from saturation to about 70-80 kPa. The two methods stem from two very different concepts of measurement, and the compatibility of the two has never been demonstrated. The recently established thermodynamic formulation of the pedostructure water retention curve allows the compatibility of the two curves to be studied, both theoretically and experimentally; this constitutes the object of the present article. We found that the pressure plate method provides accurate measurement points of the pedostructure water retention curve h(W), conceptually the same as that accurately measured by the tensiometer. However, contrary to what is usually thought, h is not equal to the air pressure applied to the sample but rather is proportional to its logarithm, in agreement with the thermodynamic theory developed in the article. The pF curve and the soil water retention curve, as well as their methods of measurement, are thus unified in a single physical theory: the theory of the organization of the soil medium (pedostructure) and its interaction with water. We also show how the hydrostructural parameters of the theoretical curve equation can be estimated from any measured curve, whatever the method of measurement. An application example using published pF curves is given.
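The two measurement ranges quoted above can be placed on the pF scale, assuming the common definition pF = log10(h) with h the tension head expressed in cm of water (1 kPa corresponds to about 10.197 cm of water column). A small sketch of that conversion:

```python
import math

# pF sketch: pF = log10(h), with h the tension head in cm of water.
CM_WATER_PER_KPA = 10.197  # standard unit conversion

def kpa_to_pf(tension_kpa):
    return math.log10(tension_kpa * CM_WATER_PER_KPA)

pf_pressure_plate_limit = kpa_to_pf(1500.0)  # dry end of pressure-plate range
pf_tensiometer_limit = kpa_to_pf(80.0)       # upper end of tensiometer range
```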
Standardization of shape memory alloy test methods toward certification of aerospace applications
Hartl, D. J.; Mabe, J. H.; Benafan, O.; Coda, A.; Conduit, B.; Padan, R.; Van Doren, B.
2015-08-01
The response of shape memory alloy (SMA) components employed as actuators has enabled a number of adaptable aero-structural solutions. However, there are currently no industry or government-accepted standardized test methods for SMA materials when used as actuators and their transition to commercialization and production has been hindered. This brief fast track communication introduces to the community a recently initiated collaborative and pre-competitive SMA specification and standardization effort that is expected to deliver the first ever regulatory agency-accepted material specification and test standards for SMA as employed as actuators for commercial and military aviation applications. In the first phase of this effort, described herein, the team is working to review past efforts and deliver a set of agreed-upon properties to be included in future material certification specifications as well as the associated experiments needed to obtain them in a consistent manner. Essential for the success of this project is the participation and input from a number of organizations and individuals, including engineers and designers working in materials and processing development, application design, SMA component fabrication, and testing at the material, component, and system level. Going forward, strong consensus among this diverse body of participants and the SMA research community at large is needed to advance standardization concepts for universal adoption by the greater aerospace community and especially regulatory bodies. It is expected that the development and release of public standards will be done in collaboration with an established standards development organization.
Standard test method for ball punch deformation of metallic sheet material
American Society for Testing and Materials. Philadelphia
2009-01-01
1.1 This test method covers the procedure for conducting the ball punch deformation test for metallic sheet materials intended for forming applications. The test applies to specimens with thicknesses between 0.008 and 0.080 in. (0.20 and 2.00 mm). 1.2 The values stated in inch–pound units are to be regarded as the standard. Note 1—The ball punch deformation test is intended to replace the Olsen cup test by standardizing many of the test parameters that previously have been left to the discretion of the testing laboratory. Note 2—The modified Erichsen test has been standardized in Europe. The main differences between the ball punch deformation test and the Erichsen test are the diameters of the penetrator and the dies. Erichsen cup heights are given in SI units. 1.3 The values stated in inch-pound units are to be regarded as standard. The values given in parentheses are mathematical conversions to SI units that are provided for information only and are not considered standard. 1.4 This standard does...
DEFF Research Database (Denmark)
Lyon, H O; De Leenheer, A P; Horobin, R W
1994-01-01
The need for the standardization of reagents and methods used in the histology laboratory is demonstrated. After definitions of dyes, stains, and chromogenic reagents, existing standards and standards organizations are discussed. This is followed by practical instructions on how to standardize dyes...
Method and platform standardization in MRM-based quantitative plasma proteomics.
Percy, Andrew J; Chambers, Andrew G; Yang, Juncong; Jackson, Angela M; Domanski, Dominik; Burkhart, Julia; Sickmann, Albert; Borchers, Christoph H
2013-12-16
There exists a growing demand in the proteomics community to standardize experimental methods and liquid chromatography-mass spectrometry (LC/MS) platforms in order to enable the acquisition of more precise and accurate quantitative data. This necessity is heightened by the evolving trend of verifying and validating candidate disease biomarkers in complex biofluids, such as blood plasma, through targeted multiple reaction monitoring (MRM)-based approaches with stable isotope-labeled standards (SIS). Considering the lack of performance standards for quantitative plasma proteomics, we previously developed two reference kits to evaluate the MRM with SIS peptide approach using undepleted and non-enriched human plasma. The first kit tests the effectiveness of the LC/MRM-MS platform (kit #1), while the second evaluates the performance of an entire analytical workflow (kit #2). Here, these kits have been refined for practical use and then evaluated through intra- and inter-laboratory testing on 6 common LC/MS platforms. For an identical panel of 22 plasma proteins, similar concentrations were determined, regardless of the kit, instrument platform, and laboratory of analysis. These results demonstrate the value of the kit and reinforce the utility of standardized methods and protocols. The proteomics community needs standardized experimental protocols and quality control methods in order to improve the reproducibility of MS-based quantitative data. This need is heightened by the evolving trend for MRM-based validation of proposed disease biomarkers in complex biofluids such as blood plasma. We have developed two kits to assist in the inter- and intra-laboratory quality control of MRM experiments: the first kit tests the effectiveness of the LC/MRM-MS platform (kit #1), while the second evaluates the performance of an entire analytical workflow (kit #2). In this paper, we report the use of these kits in intra- and inter-laboratory testing on 6 common LC/MS platforms. This
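MRM quantitation with SIS peptides, as used in the kits above, typically infers the endogenous concentration from the light/heavy peak-area ratio and the known spiked SIS concentration. A minimal single-point sketch; the numbers are made up, and a real assay would use a calibration curve rather than the 1:1 response factor assumed here:

```python
# Stable-isotope-dilution sketch: analyte concentration from the
# light/heavy MRM peak-area ratio and the spiked SIS concentration.
# Assumes a 1:1 response factor (an idealization).
def quantify(light_area, heavy_area, sis_conc):
    """Single-point quantitation; result is in the same units as sis_conc."""
    return (light_area / heavy_area) * sis_conc

conc = quantify(light_area=2.4e6, heavy_area=1.2e6, sis_conc=50.0)
# conc -> 100.0
```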
Application of quantitative salt iodine analysis compared with the standard method.
Chongchirasiri, S; Pattanachak, S; Putrasreni, N; Suwanik, R; Pattanachak, H; Tojinda, N; Pleehachinda, R
2001-06-01
Fifty iodated salt samples (from producers, households, markets, etc.) were studied at the Research Nuclear Medicine Building, Siriraj Hospital. Two methods for the determination of iodine in salt are described. The standard method, as recommended by the Programme Against Micronutrient Malnutrition (PAMM), the Micronutrient Initiative (MI), and the International Council for Control of Iodine Deficiency Disorders (ICCIDD), is the iodometric titration method. The starch-KI quantitative salt iodine method was developed in our laboratory for validation purposes. This method is high in precision, accuracy, sensitivity, and specificity. The coefficient of variation (%CV) for intra- and inter-assay measurements was below 10. Iodine contents as low as 10 ppm could be detected. The proposed starch-KI method offers some advantages: it is not complicated, is easier to learn and to perform competently, can be applied as a spot qualitative test, and can readily be performed outside the laboratory. The results obtained by the starch-KI method correlated well with the standard method (y = 0.98x - 3.22, r = 0.99).
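The method comparison above is summarized by an ordinary least-squares line (y = 0.98x - 3.22, r = 0.99). A minimal sketch of that calculation; the paired values below are synthetic, not the study's data:

```python
# Ordinary least-squares comparison of two assays (slope, intercept, r).
def linregress(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    slope = sxy / sxx
    intercept = my - slope * mx
    r = sxy / (sxx * syy) ** 0.5   # Pearson correlation coefficient
    return slope, intercept, r

standard_ppm = [10.0, 20.0, 30.0, 40.0, 50.0]   # reference titration (synthetic)
starch_ki_ppm = [7.0, 16.5, 26.0, 36.0, 45.5]   # proposed method (synthetic)
slope, intercept, r = linregress(standard_ppm, starch_ki_ppm)
```

A slope near 1, a small intercept, and r close to 1 indicate the two assays agree, which is the form of evidence the study reports.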
American Society for Testing and Materials. Philadelphia
2008-01-01
1.1 These test methods cover the determination of the size distribution and quantity of particulate matter contamination from aerospace fluids isolated on a membrane filter. The microscopical techniques described may also be applied to other properly prepared samples of small particles. Two test methods are described for sizing particles as follows: 1.1.1 Test Method A—Particle sizes are measured as the diameter of a circle whose area is equal to the projected area of the particle. 1.1.2 Test Method B—Particle sizes are measured by their longest dimension. 1.2 The test methods are intended for application to particle contamination determination of aerospace fluids, gases, surfaces, and environments. 1.3 The values stated in SI units are to be regarded as standard. No other units of measurement are included in this standard. 1.4 These test methods do not provide for sizing particles smaller than 5 μm. Note 1—Results of these methods are subject to variables inherent in any statistical method. The...
Standard test method for radiochemical determination of plutonium in Soil by alpha spectroscopy
American Society for Testing and Materials. Philadelphia
2011-01-01
1.1 This test method covers the determination of plutonium in soils at levels of detection dependent on count time, sample size, detector, background, and tracer yield. This test method describes one acceptable approach to the determination of plutonium in soil. 1.2 This test method is designed for 10 g of soil, previously collected and treated as described in Practices C998 and C999, but sample sizes up to 50 g may be analyzed by this test method. This test method may not be able to completely dissolve all forms of plutonium in the soil matrix. 1.3 The values stated in SI units are to be regarded as standard. The values given in parentheses are for information only. 1.4 This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibility of the user of this standard to establish appropriate safety and health practices and determine the applicability of regulatory limitations prior to use. Specific hazard statements are given in Section 9.
Directory of Open Access Journals (Sweden)
Bo Li
2014-01-01
The lack of an evaluation standard for the safety coefficient based on the finite element method (FEM) limits the wide application of FEM to roller-compacted concrete dams (RCCD). In this paper, the strength reserve factor (SRF) method is adopted to simulate gradual failure and possible unstable modes of an RCCD system. Entropy theory and catastrophe theory are used to obtain the ultimate bearing resistance and failure criterion of the RCCD. The most dangerous sliding plane for RCCD failure is found using Latin hypercube sampling (LHS) and auxiliary analysis by partial least squares regression (PLSR). Finally, a method for determining the evaluation standard for the RCCD safety coefficient based on FEM is put forward using least squares support vector machines (LSSVM) and particle swarm optimization (PSO). The proposed method is applied to the safety coefficient analysis of the Longtan RCCD in China. The calculation shows that RCCD failure is closely related to RCCD interface strength and that the Longtan RCCD is safe under the design condition. Considering the RCCD failure characteristics and combining the advantages of several excellent algorithms, the proposed method determines the evaluation standard for the safety coefficient of RCCD based on FEM for the first time and can be generalized to any RCCD.
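The strength reserve factor idea above can be illustrated independently of the FEM, entropy, and machine-learning machinery: strength parameters are divided by a trial factor, and the largest factor for which the system remains stable is sought, e.g. by bisection. A toy sketch with a stand-in stability criterion, not the paper's FEM analysis:

```python
# Strength reserve factor sketch: divide strength by a trial factor F
# and bisect for the largest F that is still stable.  The stability
# check below is a toy stand-in for a full FEM run.
def is_stable(cohesion, friction_term, srf):
    demand = 40.0                       # toy load demand
    return (cohesion + friction_term) / srf > demand

def strength_reserve_factor(cohesion=30.0, friction_term=50.0, tol=1e-3):
    lo, hi = 1.0, 10.0                  # assumed stable at 1, unstable at 10
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if is_stable(cohesion, friction_term, mid):
            lo = mid                    # still stable: push the factor up
        else:
            hi = mid
    return lo

srf = strength_reserve_factor()  # converges to ~2.0 with these toy numbers
```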
Directory of Open Access Journals (Sweden)
Weiping Liu
2017-10-01
It is important to determine the soil-water characteristic curve (SWCC) for analyzing slope seepage and stability under rainfall. However, SWCCs exhibit high uncertainty because of complex influencing factors, which has not previously been considered in slope seepage and stability analysis under rainfall. This study aimed to evaluate the uncertainty of the SWCC and its effects on the seepage and stability analysis of an unsaturated soil slope under rainfall. The SWCC model parameters were treated as random variables, and an uncertainty evaluation of the parameters was conducted based on the Bayesian approach and the Markov chain Monte Carlo (MCMC) method. Observed data from granite residual soil were used to test the uncertainty of the SWCC, and different confidence intervals for the SWCC model parameters were constructed. Slope seepage and stability under rainfall were then analyzed for SWCCs at the different confidence intervals using finite element software (SEEP/W and SLOPE/W). The results demonstrated that SWCC uncertainty has significant effects on slope seepage and stability. In general, the larger the percentile value, the greater the reduction of negative pore-water pressure in the soil layer and the lower the safety factor of the slope. Uncertainty in the SWCC model parameters can lead to obvious errors in predicted pore-water pressure profiles and in the estimated safety factor of the slope under rainfall.
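The Bayesian/MCMC step can be sketched with a random-walk Metropolis sampler over SWCC parameters. This is a minimal stand-in, not the authors' implementation: it assumes a van Genuchten SWCC, fixes the residual and saturated water contents for brevity, and uses a flat prior on the valid parameter region:

```python
import math, random

def van_genuchten(psi, theta_r, theta_s, alpha, n):
    """van Genuchten SWCC: volumetric water content at suction psi (> 0)."""
    m = 1.0 - 1.0 / n
    return theta_r + (theta_s - theta_r) / (1.0 + (alpha * psi) ** n) ** m

def metropolis_swcc(psi_obs, theta_obs, n_iter=2000, sigma=0.01, seed=1):
    """Random-walk Metropolis over (alpha, n); theta_r = 0.05 and
    theta_s = 0.40 are held fixed for brevity. Returns the chain, whose
    spread approximates the posterior under a Gaussian error model."""
    rng = random.Random(seed)

    def log_like(alpha, n):
        if alpha <= 0.0 or n <= 1.0:          # outside the valid region
            return -math.inf
        sse = sum((van_genuchten(p, 0.05, 0.40, alpha, n) - t) ** 2
                  for p, t in zip(psi_obs, theta_obs))
        return -sse / (2.0 * sigma ** 2)

    state = (0.1, 2.0)
    ll = log_like(*state)
    chain = [state]
    for _ in range(n_iter):
        cand = (state[0] + rng.gauss(0.0, 0.01), state[1] + rng.gauss(0.0, 0.05))
        ll_cand = log_like(*cand)
        if math.log(rng.random()) < ll_cand - ll:   # Metropolis acceptance
            state, ll = cand, ll_cand
        chain.append(state)
    return chain
```

Percentiles of the retained chain give the confidence intervals on the SWCC that the study feeds into SEEP/W and SLOPE/W.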
International Nuclear Information System (INIS)
Yang, Xiaoli; Hofmann, Ralf; Dapp, Robin; Van de Kamp, Thomas; Rolo, Tomy dos Santos; Xiao, Xianghui; Moosmann, Julian; Kashef, Jubin; Stotzka, Rainer
2015-01-01
High-resolution, three-dimensional (3D) imaging of soft tissues requires the solution of two inverse problems: phase retrieval and the reconstruction of the 3D image from a tomographic stack of two-dimensional (2D) projections. The number of projections per stack should be small, both to accommodate fast tomography of rapid processes and to constrain the X-ray radiation dose to optimal levels, either to increase the duration of in vivo time-lapse series at a given goal for spatial resolution or to conserve structure under X-ray irradiation. Pursuing the 3D reconstruction problem in the sense of compressive sampling theory, we propose to reduce the number of projections by applying an advanced algebraic technique subject to minimisation of the total variation (TV) in the reconstructed slice. The problem is formulated in a Lagrangian multiplier fashion, with the parameter value determined by appealing to a discrete L-curve in conjunction with a conjugate gradient method. The usefulness of this reconstruction modality is demonstrated for simulated and in vivo data, the latter acquired in parallel-beam imaging experiments using synchrotron radiation.
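The core idea of the TV-penalized data fit can be shown in one dimension. This is a hedged stand-in for the paper's algebraic reconstruction, not its implementation: plain gradient descent on a data-fidelity term plus a smoothed total-variation penalty, with the multiplier lam playing the role of the Lagrangian parameter chosen via the L-curve:

```python
import math

def tv_denoise_1d(y, lam, step=0.05, iters=2000, eps=1e-2):
    """Gradient descent on 0.5 * sum (x_i - y_i)^2 + lam * sum sqrt(d_i^2 + eps),
    where d_i = x_{i+1} - x_i: a smoothed total-variation regularized fit."""
    x = list(y)
    n = len(x)
    for _ in range(iters):
        grad = [x[i] - y[i] for i in range(n)]           # data-fidelity gradient
        for i in range(n - 1):
            d = x[i + 1] - x[i]
            w = lam * d / math.sqrt(d * d + eps)         # d/dd of smoothed |d|
            grad[i] -= w
            grad[i + 1] += w
        x = [x[i] - step * grad[i] for i in range(n)]
    return x
```

Sweeping lam and plotting the residual norm against the TV of the solution traces the discrete L-curve whose corner selects the multiplier, which is the selection rule the abstract invokes.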
Stability and non-standard finite difference method of the generalized Chua's circuit
Radwan, Ahmed G.
2011-08-01
In this paper, we develop a framework to obtain approximate numerical solutions of the fractional-order Chua's circuit with a memristor using a non-standard finite difference method. Chaotic response is obtained with fractional-order elements as well as integer-order elements. Stability analysis and the condition of oscillation for the integer-order system are discussed. In addition, the stability analyses for different fractional-order cases are investigated, showing great sensitivity to small order changes and indicating the poles' locations inside the physical s-plane. The Grünwald-Letnikov method is used to approximate the fractional derivatives. Numerical results are presented graphically and reveal that the non-standard finite difference scheme is an effective and convenient method for solving fractional-order chaotic systems and validating their stability. © 2011 Elsevier Ltd. All rights reserved.
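The Grünwald-Letnikov approximation mentioned in the abstract is a short-memory sum over past samples whose binomial weights obey a simple recursion. A minimal sketch (the coefficient recursion is standard; applying it inside a non-standard finite difference scheme for Chua's circuit is beyond this illustration):

```python
def gl_fractional_derivative(f_samples, alpha, h):
    """Grunwald-Letnikov estimate of the order-alpha derivative at the
    last sample point, from equally spaced samples with step h. The
    weights follow c_0 = 1, c_k = c_{k-1} * (1 - (alpha + 1) / k)."""
    n = len(f_samples)
    coeff = 1.0
    acc = 0.0
    for k in range(n):
        acc += coeff * f_samples[n - 1 - k]   # weight applied to f(t - k*h)
        coeff *= 1.0 - (alpha + 1.0) / (k + 1)
    return acc / h ** alpha
```

For alpha = 1 the weights collapse to (1, -1, 0, 0, ...), recovering the backward difference, which is a quick sanity check on the recursion.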
Standard test method for conducting erosion tests by solid particle impingement using gas jets
American Society for Testing and Materials. Philadelphia
2007-01-01
1.1 This test method covers the determination of material loss by gas-entrained solid particle impingement erosion with jet-nozzle type erosion equipment. This test method may be used in the laboratory to measure the solid particle erosion of different materials and has been used as a screening test for ranking solid particle erosion rates of materials in simulated service environments (1, 2). Actual erosion service involves particle sizes, velocities, attack angles, environments, and so forth, that will vary over a wide range (3-5). Hence, any single laboratory test may not be sufficient to evaluate expected service performance. This test method describes one well-characterized procedure for solid particle impingement erosion measurement for which interlaboratory test results are available. This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibility of the user of this standard to establish appropriate safety and health practices and determi...
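Erosion results from jet-nozzle tests are commonly reported as specimen volume loss per unit mass of erodent; the exact reporting convention assumed here is an illustration, not quoted from the standard:

```python
def erosion_value_mm3_per_g(mass_loss_mg, specimen_density_g_cm3, erodent_mass_g):
    """Normalized erosion value: specimen volume loss (mm^3) per gram of
    erodent that has impinged on the specimen. Note the unit shortcut:
    mg divided by g/cm^3 yields mm^3 directly."""
    volume_loss_mm3 = mass_loss_mg / specimen_density_g_cm3
    return volume_loss_mm3 / erodent_mass_g
```

Normalizing by erodent mass lets materials of different densities be ranked on a common basis, which is what a screening test needs.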
Directory of Open Access Journals (Sweden)
Daniele Gallazzi
2010-01-01
The study focuses on the method and the problems of quantitative analysis in research on Lactobacillus acidophilus after its addition to commercial poultry feed, whose rough grinding is not suitable for the "IDF Standard quantitative method for lactic acid bacteria count at 37°C" employed for dairy products. Poultry feed was prepared every month. A sample was collected before and after adding Lactobacillus acidophilus, while analyses were carried out at 0, 15, and 28 days of feed storage at 4-6°C. The best outcomes (30% more recovered cells than with the standard method) resulted from samples subjected to homogenization and the addition of Skim Milk Powder.
Energy Technology Data Exchange (ETDEWEB)
Walker, Iain [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Stratton, Chris [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)
2015-08-01
This project improved the accuracy of air flow measurements used in commissioning California heating and air conditioning systems in Title 24 (Building and Appliance Efficiency Standards), thereby improving system performance and efficiency of California residences. The research team at Lawrence Berkeley National Laboratory addressed the issue that typical tools used by contractors in the field to test air flows may not be accurate enough to measure return flows used in Title 24 applications. The team developed guidance on performance of current diagnostics as well as a draft test method for use in future evaluations. The study team prepared a draft test method through ASTM International to determine the uncertainty of air flow measurements at residential heating ventilation and air conditioning returns and other terminals. This test method, when finalized, can be used by the Energy Commission and other entities to specify required accuracy of measurement devices used to show compliance with standards.
Standard test method for measurement of roll wave optical distortion in heat-treated flat glass
American Society for Testing and Materials. Philadelphia
2008-01-01
1.1 This test method is applicable to the determination of the peak-to-valley depth and peak-to-peak distances of the out-of-plane deformation referred to as roll wave which occurs in flat, heat-treated architectural glass substrates processed in a heat processing continuous or oscillating conveyance oven. 1.2 The values stated in inch-pound units are to be regarded as standard. The values given in parentheses are mathematical conversions to SI units that are provided for information only and are not considered standard. 1.3 This test method does not address other flatness issues like edge kink, ream, pocket distortion, bow, or other distortions outside of roll wave as defined in this test method. 1.4 This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibility of the user of this standard to establish appropriate safety and health practices and determine the applicability of regulatory limitations prior to use.
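The two quantities this method reports, peak-to-valley depth and peak-to-peak distance, can be computed from a sampled surface profile. A minimal sketch under stated assumptions: peaks are taken as simple local maxima of the raw samples, whereas an instrument conforming to the method would filter and fit the profile first:

```python
def roll_wave_metrics(positions_mm, depths_mm):
    """Overall peak-to-valley depth and mean peak-to-peak distance from a
    profile sampled along the roll direction. Returns (depth, spacing);
    spacing is None when fewer than two peaks are found."""
    pv_depth = max(depths_mm) - min(depths_mm)
    peaks = [positions_mm[i] for i in range(1, len(depths_mm) - 1)
             if depths_mm[i - 1] < depths_mm[i] > depths_mm[i + 1]]
    if len(peaks) < 2:
        return pv_depth, None
    spacings = [b - a for a, b in zip(peaks, peaks[1:])]
    return pv_depth, sum(spacings) / len(spacings)
```

On a clean sinusoidal roll wave the recovered spacing equals the wave period and the depth equals twice the amplitude, which is the interpretation the method's definitions imply.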
Standard test method for measuring pH of soil for use in corrosion testing
American Society for Testing and Materials. Philadelphia
1995-01-01
1.1 This test method covers a procedure for determining the pH of a soil in corrosion testing. The principal use of the test is to supplement soil resistivity measurements and thereby identify conditions under which the corrosion of metals in soil may be accentuated (see G 57 - 78 (1984)). 1.2 This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibility of the user of this standard to establish appropriate safety and health practices and determine the applicability of regulatory limitations prior to use.
Standard method for economic analyses of inertial confinement fusion power plants
International Nuclear Information System (INIS)
Meier, W.R.
1986-01-01
A standard method for calculating the total capital cost and the cost of electricity for a typical inertial confinement fusion electric power plant has been developed. A standard code of accounts at the two-digit level is given for the factors making up the total capital cost of the power plant. Equations are given for calculating the indirect capital costs, the project contingency, and the time-related costs. Expressions for calculating the fixed charge rate, which is necessary to determine the cost of electricity, are also described. Default parameters are given to define a reference case for comparative economic analyses.
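The relationship between the fixed charge rate and the cost of electricity can be sketched with the usual levelized-cost identity (the parameter values in the test below are illustrative, not the report's reference case):

```python
def cost_of_electricity(total_capital_cost, fixed_charge_rate,
                        annual_om_cost, net_power_mwe, capacity_factor):
    """Levelized cost of electricity ($/MWh): annualized capital charge
    plus annual O&M, divided by net annual generation (8760 h/yr)."""
    annual_cost = fixed_charge_rate * total_capital_cost + annual_om_cost
    annual_mwh = net_power_mwe * 8760.0 * capacity_factor
    return annual_cost / annual_mwh
```

This makes explicit why the fixed charge rate must be specified before a cost of electricity can be quoted: it converts the one-time capital cost into an annual charge.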
American Society for Testing and Materials. Philadelphia
2005-01-01
1.1 This test method covers the determination of the concentration and isotopic composition of uranium and plutonium in solutions. The purified uranium or plutonium from samples ranging from nuclear materials to environmental or bioassay matrices is loaded onto a mass spectrometric filament. The isotopic ratio is determined by thermal ionization mass spectrometry; the concentration is determined by isotope dilution. 1.2 This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibility of the user of this standard to establish appropriate safety and health practices and determine the applicability of regulatory limitations prior to use.
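The isotope-dilution step rests on a mass balance over the blended spike and sample. A hedged sketch of the basic two-isotope form (the function name and ratio convention are illustrative; the standard's full equations also carry abundance and bias corrections):

```python
def idms_sample_atoms(spike_atoms_A, r_spike, r_sample, r_mix):
    """Isotope-dilution mass balance. Isotope A is the spike (tracer)
    isotope, isotope B the major sample isotope; each r is the atom
    ratio A/B measured in the pure spike, the pure sample, and the
    blend. Returns the number of B atoms contributed by the sample."""
    atoms_B_in_spike = spike_atoms_A / r_spike   # B carried in by the spike itself
    return atoms_B_in_spike * (r_spike - r_mix) / (r_mix - r_sample)
```

Because only ratios are measured on the filament, the concentration follows from the known amount of spike, which is why isotope dilution pairs naturally with thermal ionization mass spectrometry.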
American Society for Testing and Materials. Philadelphia
2011-01-01
1.1 These test methods cover procedures for subsampling and for chemical, mass spectrometric, spectrochemical, nuclear, and radiochemical analysis of uranium hexafluoride (UF6). Most of these test methods are in routine use to determine conformance to UF6 specifications in the Enrichment and Conversion Facilities. 1.2 The analytical procedures in this document appear in the following order (Note 1: Subcommittee C26.05 will confer with C26.02 concerning the renumbered sections in Test Methods C761 to determine how concerns with renumbering these sections, as analytical methods are replaced with stand-alone analytical methods, are best addressed in subsequent publications): Subsampling of Uranium Hexafluoride, Sections 7-10; Gravimetric Determination of Uranium, Sections 11-19; Titrimetric Determination of Uranium, Section 20; Preparation of High-Purity U3O8, Section 21; Isotopic Analysis, Section 22; Isotopic Analysis by Double-Standard Mass-Spectrometer Method, Sections 23-29; Determination of Hydrocarbons, Chlorocarbons, and Partially Substitut...
Directory of Open Access Journals (Sweden)
Janusz Charatonik
1991-11-01
Results concerning contractibility of curves (equivalently, of dendroids) are collected and discussed in the paper. Interrelations between various conditions which are either sufficient or necessary for a curve to be contractible are studied.
Woodruff, David; Traynor, Anne; Cui, Zhongmin; Fang, Yu
2013-01-01
Professional standards for educational testing recommend that both the overall standard error of measurement and the conditional standard error of measurement (CSEM) be computed on the score scale used to report scores to examinees. Several methods have been developed to compute scale score CSEMs. This paper compares three methods, based on…
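One common route to a scale-score CSEM, offered here as a hedged sketch rather than any of the three methods the paper compares, starts from Lord's binomial-error CSEM on the raw score and carries it to the reporting scale with a delta-method slope of the raw-to-scale conversion:

```python
import math

def binomial_csem(raw_score, n_items):
    """Lord's binomial-error CSEM for a number-correct score on an
    n-item test: sqrt(x * (n - x) / (n - 1))."""
    return math.sqrt(raw_score * (n_items - raw_score) / (n_items - 1))

def scale_score_csem(raw_score, n_items, raw_to_scale):
    """Delta-method approximation: raw-score CSEM times the local slope
    of the raw-to-scale conversion, estimated by a centered difference."""
    hi = min(raw_score + 1, n_items)
    lo = max(raw_score - 1, 0)
    slope = (raw_to_scale(hi) - raw_to_scale(lo)) / (hi - lo)
    return binomial_csem(raw_score, n_items) * abs(slope)
```

For a linear conversion the scale-score CSEM is just the raw CSEM rescaled; nonlinear conversions make the CSEM vary across the score scale, which is the phenomenon the recommended reporting practice is meant to capture.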